[sqlalchemy] Re: Where is the exactly .db file if I use sqlite code in the example

2022-12-06 Thread Ryan Lee
thank you   I understand

2022년 12월 7일 수요일 오전 2시 56분 57초 UTC+9에 leleg...@gmail.com님이 작성:

> Ryan Lee  writes:
>
> > https://docs.sqlalchemy.org/en/14/orm/quickstart.html#create-an-engine
> >
> > ```
> > engine = create_engine("sqlite://", echo=True, future=True)
> > ```
> > If I use the code in example ,
> > I cannot find the exact file , where is it ? 
>
> Accordingly to
> https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#connect-strings,
> that syntax is equivalent to sqlite://:memory:, and thus there is no
> file involved.
>
> ciao, lele.
> -- 
> nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
> real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
> le...@metapensiero.it | -- Fortunato Depero, 1929.
>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/4977a1ff-4ff1-4812-8355-5645ce649315n%40googlegroups.com.


[sqlalchemy] Where is the exactly .db file if I use sqlite code in the example

2022-12-06 Thread Ryan Lee
https://docs.sqlalchemy.org/en/14/orm/quickstart.html#create-an-engine

```
engine = create_engine("sqlite://", echo=True, future=True)
```
If I use the code in example ,
I cannot find the exact file , where is it ? 

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/b3b3fa06-2ed3-4734-9b57-357376de2503n%40googlegroups.com.


[sqlalchemy] how can I select schema while using sqlacodegen crawling postgresql db models

2022-10-11 Thread Ryan Lee
when I prepare crawling db dto from postgresql( AWS RDS)  for my flask 
application, 
I set SQLAlchemy URI like follow 

f'{RDS_PROTOCOL}://{RDS_USERNAME}:{RDS_PASSWORD}@{RDS_HOSTNAME}:{RDS_PORT}/{RDS_DB_NAME}'

but I don't know how to define one specific schema 

pls let me knowTT

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/ae26d9d1-bd5a-403c-84c6-84679e7a6133n%40googlegroups.com.


[sqlalchemy] Re: Connection error

2021-06-16 Thread Ryan Bandler
Anyways guys I was able to fix the issue by installing a flask-specific 
version of the same package (even though I am not using flask) and I was 
able to get it to work just fine. Thanks everyone for your help.

On Wednesday, June 16, 2021 at 9:20:37 AM UTC-4 Ryan Bandler wrote:

> Yeah I connect to the DB over localhost:5432 via psql all the time and 
> also haven't had any issues connecting via psycopg2. I also have no issue 
> connecting to the database in sqlalchem, it's only an issue with 
> sqlacodegen.
> On Tuesday, June 15, 2021 at 7:53:51 PM UTC-4 jonatha...@gmail.com wrote:
>
>> Have you confirmed that you can connect to 127.0.0.1 at port 5432 using 
>> psql? On my development system, I normally use a local (UNIX domain) 
>> socket, which is libpq's default behavior. When I run "psql -h 127.0.0.1", 
>> I get the following error:
>>
>> psql: could not connect to server: Connection refused
>> Is the server running on host "127.0.0.1" and accepting
>>
>>     TCP/IP connections on port 5432?
>>
>> On Tuesday, June 15, 2021 at 1:06:23 PM UTC-4 Ryan Bandler wrote:
>>
>>> Hello everyone,
>>>
>>> I am a first-time SQLalchemy user planning on using SQLalchemy on a new 
>>> project. We already have an established postgres database (currently still 
>>> on localhost) which I do not want to handwrite the SQLalchemy model for. So 
>>> I am planning to use sqlacodegen. Unfortunately, sqlacodegen is giving me:
>>>
>>> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not 
>>> connect to server: Connection refused
>>> Is the server running on host "localhost" (127.0.0.1) and 
>>> accepting
>>> TCP/IP connections on port 5432?
>>>
>>> My systems at work are very restrictive, but I was able to gain 
>>> permissions to the postgres config files and edited them to allow all TCP 
>>> connections. I restarted the postgres service and I am still experiencing 
>>> this error. I dont understand why this is happening, because it seems to be 
>>> an error coming from psycopg2 being called in sqlalchemy, and i have ever 
>>> had any issues connecting to the DB with psycopg2 before. 
>>>
>>> If anyone has any experience with sqlacodegen, any help would be much 
>>> appreciated!!
>>>
>>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/ef5f8cda-983b-4657-9020-ad6be4201522n%40googlegroups.com.


[sqlalchemy] Re: Connection error

2021-06-16 Thread Ryan Bandler
Yeah I connect to the DB over localhost:5432 via psql all the time and also 
haven't had any issues connecting via psycopg2. I also have no issue 
connecting to the database in sqlalchem, it's only an issue with 
sqlacodegen.
On Tuesday, June 15, 2021 at 7:53:51 PM UTC-4 jonatha...@gmail.com wrote:

> Have you confirmed that you can connect to 127.0.0.1 at port 5432 using 
> psql? On my development system, I normally use a local (UNIX domain) 
> socket, which is libpq's default behavior. When I run "psql -h 127.0.0.1", 
> I get the following error:
>
> psql: could not connect to server: Connection refused
> Is the server running on host "127.0.0.1" and accepting
>
> TCP/IP connections on port 5432?
>
> On Tuesday, June 15, 2021 at 1:06:23 PM UTC-4 Ryan Bandler wrote:
>
>> Hello everyone,
>>
>> I am a first-time SQLalchemy user planning on using SQLalchemy on a new 
>> project. We already have an established postgres database (currently still 
>> on localhost) which I do not want to handwrite the SQLalchemy model for. So 
>> I am planning to use sqlacodegen. Unfortunately, sqlacodegen is giving me:
>>
>> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not 
>> connect to server: Connection refused
>> Is the server running on host "localhost" (127.0.0.1) and 
>> accepting
>> TCP/IP connections on port 5432?
>>
>> My systems at work are very restrictive, but I was able to gain 
>> permissions to the postgres config files and edited them to allow all TCP 
>> connections. I restarted the postgres service and I am still experiencing 
>> this error. I dont understand why this is happening, because it seems to be 
>> an error coming from psycopg2 being called in sqlalchemy, and i have ever 
>> had any issues connecting to the DB with psycopg2 before. 
>>
>> If anyone has any experience with sqlacodegen, any help would be much 
>> appreciated!!
>>
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/5a5cd4de-df65-4713-97e5-317910dddf79n%40googlegroups.com.


Re: [sqlalchemy] Connection error

2021-06-15 Thread Ryan Bandler
Hi Rich,

I appreciate the response. I understand how the structure of the SA model 
works, it really is quite simple. My question was about an error while 
using a third party tool called sqlacodegen, which is a tool which connects 
to a DB and automatically generates the SA model. I am using this tool 
because I have many tables with lots of columns and I dont like typing 
boilerplate code when I dont have to. My question was not about SA itself 
or how writing the model works ,but about this error I am experiencing with 
this generator tool. I appreciate the response though!

Best,
Ryan

On Tuesday, June 15, 2021 at 1:13:15 PM UTC-4 rshe...@appl-ecosys.com wrote:

> On Tue, 15 Jun 2021, Ryan Bandler wrote:
>
> > I am a first-time SQLalchemy user planning on using SQLalchemy on a new
> > project.
>
> Ryan,
>
> I started to learn SA long ago, but put off the project. Now I'm starting
> over again with much to learn.
>
> > We already have an established postgres database (currently still on
> > localhost) which I do not want to handwrite the SQLalchemy model for. So 
> I
> > am planning to use sqlacodegen. Unfortunately, sqlacodegen is giving me:
>
> My postgres databases exist and have data. I used the short version of
> declarative models; don't know if the concept and syntax still holds for SA
> version 1.4.x
>
> For one project the model.py file is:
> """
> This is the SQLAlchemy declarative mapping python classes to postgres
> tables for the business tracker.
> """
>
> from sqlalchemy import create_engine
> from sqlalchemy.ext.automap import automap_base
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import Column, Unicode, Integer, String, Date
> from sqlalchemy.orm import sessionmaker
> from sqlalchemy import ForeignKey
> from sqlalchemy.orm import relationship
> from sqlalchemy import CheckConstraint
> from sqlalchemy.orm import Session
> from sqlalchemy.dialects import postgresql
>
> """Base = declarative_base()"""
> Base = automap_base()
>
> engine = create_engine('postgresql+psycopg2:///bustrac')
>
> # reflect the tables
> Base.prepare(engine, reflect=True)
>
> State = postgresql.Enum('AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 
> 'DC', 'FL', 'GA', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 
> 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 
> 'NY', 'NC', 'ND', 'OH', 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 
> 'UT', 'VT', 'VA', 'WA', 'WV', 'WI', 'WY', 'AB', 'BC', 'MB', 'NB', 'NL', 
> 'NT', 'NS', 'NU', 'ON', 'PE', 'QC', 'SK', 'YT', name='states')
>
> Industries = Base.classes.industries
> Status = Base.classes.status
> StatusTypes = Base.classes.statusTypes
> ActivityTypes = Base.classes.activityTypes
> Organizations = Base.classes.organizations
> Locations = Base.classes.locations
> People = Base.classes.people
> Activities = Base.classes.activities
> Projects = Base.classes.projects
>
> Base.metadata.create_all(engine)
>
> Session = sessionmaker(bind=engine)
>
> HTH,
>
> Rich
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/2d5750de-c490-4757-8042-570ae72591d0n%40googlegroups.com.


[sqlalchemy] Connection error

2021-06-15 Thread Ryan Bandler
Hello everyone,

I am a first-time SQLalchemy user planning on using SQLalchemy on a new 
project. We already have an established postgres database (currently still 
on localhost) which I do not want to handwrite the SQLalchemy model for. So 
I am planning to use sqlacodegen. Unfortunately, sqlacodegen is giving me:

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not 
connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?

My systems at work are very restrictive, but I was able to gain permissions 
to the postgres config files and edited them to allow all TCP connections. 
I restarted the postgres service and I am still experiencing this error. I 
dont understand why this is happening, because it seems to be an error 
coming from psycopg2 being called in sqlalchemy, and i have ever had any 
issues connecting to the DB with psycopg2 before. 

If anyone has any experience with sqlacodegen, any help would be much 
appreciated!!

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/bd0c54b5-8057-46eb-adec-411a79c52911n%40googlegroups.com.


Re: [sqlalchemy] Aldjemy many to many modeling

2021-05-19 Thread Ryan Hiebert
And nope. It's coming when I'm running `aliased`, but it doesn't seem to be 
directly related, even though it's the same warning. I think it has to do 
with how I'm translating inherited models. Sorry for the false alarm.

On Wednesday, May 19, 2021 at 8:50:56 PM UTC-5 Ryan Hiebert wrote:

> So I am able to just add it on the many-to-many relationship, that worked 
> just fine. Unfortunately, I'm now having a similar issue come up when I use 
> `aliased` on a generated model. Do you have any suggestion for how to 
> handle that? It strikes me that either (a) aliased doesn't expect to be 
> working with models with relationships, or (b) aliased really shouldn't be 
> causing this warning, because of course it will be duplicating 
> relationships, right?

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/9fec53bf-e160-4970-a616-b1a669db7074n%40googlegroups.com.


Re: [sqlalchemy] Aldjemy many to many modeling

2021-05-19 Thread Ryan Hiebert
So I am able to just add it on the many-to-many relationship, that worked 
just fine. Unfortunately, I'm now having a similar issue come up when I use 
`aliased` on a generated model. Do you have any suggestion for how to 
handle that? It strikes me that either (a) aliased doesn't expect to be 
working with models with relationships, or (b) aliased really shouldn't be 
causing this warning, because of course it will be duplicating 
relationships, right?

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/d822f896-a044-46fd-997f-63c15b81e970n%40googlegroups.com.


Re: [sqlalchemy] Aldjemy many to many modeling

2021-05-19 Thread Ryan Hiebert


On Wednesday, May 19, 2021 at 6:50:57 PM UTC-5 Mike Bayer wrote:

>
> It mostly depends on what people need to do.  If people have been fine 
> with the overlapping relationships and not had a problem it's not 
> necessarily a bad thing, otherwise the assoc proxy is pretty common.  But, 
> if there are no significant columns in AtoB other than the A's and B's, I'd 
> probably just go with the single many-to-many relationship.
>

Thanks. With this in mind, I think that the general mode of Django is 
pretty different from SQLAlchemy in this regard. Django users expect 
different relationships could indeed have conflicts like that, so as you 
suggest, it might be best to keep the overlapping relationships, so that 
existing flows don't break.

In the general case I'm working with, I can't know whether there are any 
additional interesting fields other than the foreign keys on the secondary 
table. In some cases there are, but in other cases there are not.

One challenge is that I generate these one model at a time. I was 
originally hoping that I could specify the overlapping relationships only 
on the many-to-many relationship and solve this case, but my attempt at 
that did not work the way I'd hoped, and you mentioned needing to add it to 
all the relationships, so I suspect that this is indeed correct. I suspect 
I will need to gather all of these conflicts before I create the 
relationships, and that it may not be possible, or would be bad form, to 
adjust existing relationships with this after they are created. Does that 
sound right?

Thanks for all your help, and for the excellent docs on these warnings. 
While I still have questions, I was able to understand quite a lot of it 
before I needed to come ask for assistance.

Ryan

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/3a4964d5-9bc5-4536-9b14-37452eaeefa6n%40googlegroups.com.


[sqlalchemy] Aldjemy many to many modeling

2021-05-19 Thread Ryan Hiebert
I maintain Aldjemy, a library that converts Django models into SQLAlchemy 
models. I'm having some trouble figuring out how I should resolve an issue 
I'm working on, and I could use your help. Here's the link to the issue I'm 
working on, though hopefully the context I give below will be sufficient.

https://github.com/aldjemy/aldjemy/issues/159

Django models many to many relationships with a ManyToManyField on one side 
of the relationship, and an automatically added reverse relationship on the 
other side. Depending on how you configure it, it will either automatically 
create the intermediate table, or you can manually specify it. Either way, 
the "through" model, representing the intermediate table, is a 
fully-functional model, with the appropriate foreign key relationships.

Aldjemy models this currently by creating a relationship for each of the 
foreign key relationships on the secondary (through) table, as well as a 
relationship (notably missing the backref, though fixing that isn't my 
priority right now) that includes the `secondary` argument to make it model 
the many-to-many relationship.

Starting with SQLAlchemy 1.4, this gives warnings that these relationships 
conflict, and suggesting that these conflicting relationships either need 
to have one be read-only, or use the "overlaps" parameter to specify that 
we know about this, and it's what we intend. However, I think this is a 
strong indication that this is *not* how we should be modeling this.

Mark over in IRC suggested that he'd model it either by dropping the 
relationship with the secondary table, or by replacing that relationship 
with an association proxy. If the either is present, it should give access 
to model instances, so I wasn't immediately sure if an association proxy 
would be able to do that, but it does seem like it's possible. It seems 
from some experiments he did that an association proxy would not be able to 
be used to do join.

How would you recommend I model this in Aldjemy? I figure whatever I choose 
is technically going to be a breaking change, so I'd like to choose the 
wisest option, that most cleanly fits into the patterns that SQLAlchemy 
users will most readily understand.

Thank you,

Ryan




-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/cdcecf3a-3e11-441f-bd96-3216d76ffc42n%40googlegroups.com.


Re: [sqlalchemy] FetchedValue() updated by UPDATE to a different table

2020-06-19 Thread Ryan Kelly
Ahh, didn't find the after_flush event. This is perfect! Thank you!

On Thu, Jun 18, 2020 at 7:30 PM Mike Bayer  wrote:

>
> the Session doesnt scan every object on every flush, only those that have
> been marked as dirty.
>
> you should use an event handler such as after_flush() to go through the
> list of parent objects that have changed, traverse through the child
> objects you care about and call session.expire() on the attributes in
> question.
>
> On Thu, Jun 18, 2020, at 7:47 PM, Ryan Kelly wrote:
>
> Hi,
>
> We have some functionality where some identifiers are pushed down into
> child tables for the purposes of enforcing a unique constraint that would
> otherwise not be as simple to enforce. What happens is that during an
> update to a parent table, a trigger runs that pushes the update down to
> it's child, which may cause another trigger to invoke and push this update
> further down, so on and so on. It seems, however, that after a flush,
> FetchedValue() does not cause this value to be fetched (and I believe that
> this is because FetchedValue() only applies to the object on which it is
> set). I'm wondering what approach I might take here to get these linked
> tables to expire in a manner that is centralized and doesn't require users
> to explicitly require certain objects (or all objects).
>
> -Ryan
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sqlalchemy/CAHUie2463tG0QP0NhgvT7PW1sa0%2Byigjg1OGCtfxJimC4g9Hpg%40mail.gmail.com
> <https://groups.google.com/d/msgid/sqlalchemy/CAHUie2463tG0QP0NhgvT7PW1sa0%2Byigjg1OGCtfxJimC4g9Hpg%40mail.gmail.com?utm_medium=email_source=footer>
> .
>
>
> --
> SQLAlchemy -
> The Python SQL Toolkit and Object Relational Mapper
>
> http://www.sqlalchemy.org/
>
> To post example code, please provide an MCVE: Minimal, Complete, and
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
> description.
> ---
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/sqlalchemy/95170303-271c-4ce6-9a95-568ecd91b916%40www.fastmail.com
> <https://groups.google.com/d/msgid/sqlalchemy/95170303-271c-4ce6-9a95-568ecd91b916%40www.fastmail.com?utm_medium=email_source=footer>
> .
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CAHUie27LxOJruQfMB62BKao7_DvwutG-ytCB98AvX3svogZp%3DA%40mail.gmail.com.


[sqlalchemy] FetchedValue() updated by UPDATE to a different table

2020-06-18 Thread Ryan Kelly
Hi,

We have some functionality where some identifiers are pushed down into
child tables for the purposes of enforcing a unique constraint that would
otherwise not be as simple to enforce. What happens is that during an
update to a parent table, a trigger runs that pushes the update down to
it's child, which may cause another trigger to invoke and push this update
further down, so on and so on. It seems, however, that after a flush,
FetchedValue() does not cause this value to be fetched (and I believe that
this is because FetchedValue() only applies to the object on which it is
set). I'm wondering what approach I might take here to get these linked
tables to expire in a manner that is centralized and doesn't require users
to explicitly require certain objects (or all objects).

-Ryan

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CAHUie2463tG0QP0NhgvT7PW1sa0%2Byigjg1OGCtfxJimC4g9Hpg%40mail.gmail.com.


Re: [sqlalchemy] AWS RDS generate-db-auth-token and Redshift get-cluster-credentials

2020-05-23 Thread Ryan Kelly
Excellent, I’ll test with the below. Thanks for the advice!

On Fri, May 22, 2020 at 8:18 PM Mike Bayer  wrote:

> engine strategies are gone in 1.4 so you're going to want to make use of
> event and plugin hooks such as:
>
>
> https://docs.sqlalchemy.org/en/13/core/connections.html?highlight=plugin#sqlalchemy.engine.CreateEnginePlugin
>
>
> https://docs.sqlalchemy.org/en/13/core/events.html?highlight=do_connect#sqlalchemy.events.DialectEvents.do_connect
>
> these two hooks are both available in all 1.x versions and if they are not
> sufficient for what you need, if you can let me know that would be great,
> strategies are already removed from master as these were not really the
> "public" hook.
>
> On Fri, May 22, 2020, at 4:31 PM, Elmer de Looff wrote:
>
> For reference, we've used this engine strategy for a while, which seems to
> get the job done. We're strictly on Postgres so the code could do with some
> alterations to make it compatible with multiple backends, that's left as an
> exercise to the reader :-)
>
> The main work is done in _rds_engine_creator() which gets the
> necessary (short-lived) credentials for the connection just before it's
> actually created. There's a couple of ways to do this, this is simply one
> that got us a nice hands-off result where all we needed was to provide a
> different engine strategy in the config. Adjust for your particular use
> case.
>
> # Register this engine strategy somewhere in your imported models
> class RdsEngineStrategy(PlainEngineStrategy):
> name = 'rds'
>
> def create(self, name_or_url, **kwargs):
> """Adds an RDS-specific 'creator' for the engine connection."""
> engine_url = make_url(name_or_url)
> kwargs['creator'] = self._rds_engine_creator(engine_url)
> return super().create(engine_url, **kwargs)
>
> def _rds_engine_creator(self, engine_url):
> instance_id, region = engine_url.host.split('.')
> connector = engine_url.get_dialect().dbapi().connect
> rds = boto3.client('rds', region_name=region)
> if self._rds_first_instance_by_name(rds, instance_id) is None:
> raise ValueError('No RDS instances for the given instance ID')
>
> def engine_func():
> instance = self._rds_first_instance_by_name(rds, instance_id)
> password = rds.generate_db_auth_token(
> DBHostname=instance['Endpoint']['Address'],
> DBUsername=engine_url.username,
> Port=instance['Endpoint']['Port'])
> return connector(
> host=instance['Endpoint']['Address'],
> port=instance['Endpoint']['Port'],
> database=engine_url.database,
> user=engine_url.username,
> password=password,
> sslmode='require')
> return engine_func
>
> def _rds_first_instance_by_name(self, client, name):
> response = client.describe_db_instances(DBInstanceIdentifier=name)
> return next(iter(response['DBInstances']), None)
>
>
> # Make sure to actually register it
> RdsEngineStrategy()
>
> # Caller code
> engine = sqlalchemy.create_engine("postgres://user@instance-name.region
> /dbname", strategy="rds")
>
>
> On Fri, May 22, 2020 at 9:54 PM Mike Bayer 
> wrote:
>
>
> You can modify how the engine makes connections using the do_connect event
> hook:
>
>
> https://docs.sqlalchemy.org/en/13/core/events.html?highlight=do_connect#sqlalchemy.events.DialectEvents.do_connect
>
> each time the engine/ pool go to make a new connection, you can affect all
> the arguments here, or return an actual DBAPI connection.
>
>
>
>
> On Fri, May 22, 2020, at 1:39 PM, Ryan Kelly wrote:
>
> Hi,
>
> I am looking to use credentials provided by the above functionality from
> AWS. Basically, using either of these methods, you can obtain temporary
> credentials (for RDS, just password, and Redshift both username and
> password) that can be used to access the database. However, for long
> running processes, connection failures and subsequent reconnections as well
> as new connections initiated by the connection pool (or even just waiting a
> long time between generating the credentials and making your first
> connection) the credentials configured on a URL as passed to create_engine
> will eventually begin to fail.
>
> At first I thought I'd simply subclass URL and make username/password
> properties that could be refreshed as needed, but digging into
> create_connection it seems like those properties are read out of the URL
> object and into cargs/cwargs and provided to pool as such.
>
> I took then a roundabout approach or creating a proxy object that is
> capable of refreshing the value and using this object as the
> username/password, which only works because psycogp2 is helpfully calling
> str() on them as it constructs the connstring/dsn. Which... I

[sqlalchemy] AWS RDS generate-db-auth-token and Redshift get-cluster-credentials

2020-05-22 Thread Ryan Kelly
Hi,

I am looking to use credentials provided by the above functionality from
AWS. Basically, using either of these methods, you can obtain temporary
credentials (for RDS, just password, and Redshift both username and
password) that can be used to access the database. However, for long
running processes, connection failures and subsequent reconnections as well
as new connections initiated by the connection pool (or even just waiting a
long time between generating the credentials and making your first
connection) the credentials configured on a URL as passed to create_engine
will eventually begin to fail.

At first I thought I'd simply subclass URL and make username/password
properties that could be refreshed as needed, but digging into
create_connection it seems like those properties are read out of the URL
object and into cargs/cwargs and provided to pool as such.

I took then a roundabout approach or creating a proxy object that is
capable of refreshing the value and using this object as the
username/password, which only works because psycogp2 is helpfully calling
str() on them as it constructs the connstring/dsn. Which... I mean, is an
interesting, but also unsustainable, solution.

What I am asking, I suppose, is 1) am I missing something obvious that
would make this achievable? and 2) if not, what kind of best-approach pull
request could I produce that could make this happen?

Thanks,
-Ryan

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CAHUie25g0G5OPuyDHaNn8oWkTzizwQxGY0tnkaJvOewLMQR4DQ%40mail.gmail.com.


[sqlalchemy] How to speed up Pandas read_sql (with SQL Alchemy as underlying engine) from Oracle DB?

2019-12-02 Thread Ryan Wolniak


I'd like to optimize querying and converting a list of Oracle tables into 
pandas dataframes.


The eventual goal is to convert to Parquet, write to disk, then upload to 
S3, but for now I just want to focus on the pandas / sqlalchemy / 
parallelism part. My code sort of works, but it's very slow and seems to 
hang after completing 10 tables.


Any advice for speeding things up or alternative suggestions?


import sqlalchemyfrom sqlalchemy.orm import sessionmaker, scoped_sessionfrom 
multiprocessing.dummy import Pool as ThreadPool from multiprocessing import 
Poolimport pyarrow as paimport pyarrow.parquet as pq
def process_chunk(chunk, table_name, index):
table = pa.Table.from_pandas(chunk)
local_file_name = "./" + table_name + "-" + str(index) + ".parquet"
pq.write_table(table, local_file_name)
def process_table(table):
db_session = DBSession()
# helper function that creates the SQL query (select col1, col2, col3, ..., 
colX from table)
query = setup_query(table)
i=0
# is this the right way to use the db_session?
for chunk in pd.read_sql(query, db_session.bind, chunksize=30):
process_chunk(chunk, table, i)
i+=1

oracle_connect_str = #string_here#
oracle_engine = sqlalchemy.create_engine(
oracle_connect_str,
arraysize=1)
# set up session object to be used by threadsDBSession = scoped_session(
sessionmaker(
autoflush=True,
autocommit=False,
bind=oracle_engine
))

pool = ThreadPool(4)

table_list = ['tbl1','tbl2','tbl3','tbl4','tbl5',...,'tbl20']
# use pool.map instead of creating boiler-plate threading class
pool.map(process_table, table_list)
# are these in the right spots?
pool.close()
pool.join()



Thanks!


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/0e7bf9a8-0335-4a07-af90-6b217daeb2ea%40googlegroups.com.


[sqlalchemy] server_default=text('1970-01-01 00:00:01') changed when run in a different timezone

2019-10-09 Thread Thomas Ryan
I have the following fields defined in a table.

  `updated` datetime NOT NULL DEFAULT '1970-01-01 00:00:01',
  `deleted` datetime NOT NULL DEFAULT '1970-01-01 00:00:01',

When someone runs SQLACODEGEN it changes the definition of the 
server_default time based on the timezone the developer is in when they run 
it.  Someone in the eastern timezone gets 

updated = Column(TIMESTAMP, nullable=False, index=True, 
server_default=text("'1970-01-01 00:00:01'"))
deleted = Column(TIMESTAMP, nullable=False, index=True, 
server_default=text("'1970-01-01 00:00:01'"))

However, someone in the central timezone gets 

updated = Column(TIMESTAMP, nullable=False, index=True, 
server_default=text("'1969-12-31 23:00:01'"))
deleted = Column(TIMESTAMP, nullable=False, index=True, 
server_default=text("'1969-12-31 23:00:01'"))

Is there a parameter that can be provided to not change the model.py and 
use the 1970 date?

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/0b86a03a-016a-40bb-8a0e-91b4ee62ce3a%40googlegroups.com.


Re: [sqlalchemy] SQLlite: enforce a unique constraint on a relation?

2018-03-12 Thread Ryan Holmes
Thanks Mike,

While I would rather this be a hard constraint on the database, not 
enforced in python, I understand that SQLite is pretty limited in this 
regard, so I'm down to try anything really. I also had the idea of using a 
custom collection simply because the project already uses them heavily. I 
will look into possibly creating triggers for the database to check before 
insert (if that's a thing in SQLite), and also SQLAlchemy validators.

Thanks for the resources! Would be interested in hearing other solutions 
that people might come up with :)

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] SQLlite: enforce a unique constraint on a relation?

2018-03-11 Thread Ryan Holmes
(cross posted from https://stackoverflow.com/questions/49225846/)

Lets say I have 3 tables: parent, child, and a linker table for the two for 
a many-to-many relationship, `parentChildren`. Each parent can have 
multiple children, and each child can have multiple parents. If parent 1 
already has child 1, another link between these two cannot be inserted 
(this is easily done by making both parentID and childID as part of the 
primary key)

However, I would like to enforce another constraint: each parent can only 
have children with a unique `col1`. so, lets say that 4 children exists, 
each with a `col1` different than the other, except the last one, which has 
the same `col1` as the first child.

If I add child 1 then try to add child 2, that should be fine since they do 
not share the same `col1`. However, if I add child 1 and child 4, I want to 
get a constraint error.

Here's the code: (using Classic mappings, not Declarative. This question 
relates to an older project that hasn't been updated to use declarative 
syntax yet)

from sqlalchemy import create_engine

from sqlalchemy.orm import relation, mapper
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey


engine = create_engine('sqlite:///:memory:', echo=True)

metadata = MetaData()
parents = Table('parent', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
)

children = Table('child', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
Column('col1', String),
)

parent_children = Table('parentChildren', metadata,
Column('parentID', ForeignKey("parent.id"), primary_key=True),
Column('childID', ForeignKey("child.id"), primary_key=True),
)

metadata.create_all(engine)

class Parent(object):
def __init__(self, name):
self.name = name
self.children = []

class Child(object):
def __init__(self, name, col1):
self.name = name
self.col1 = col1

mapper(Child, children)
mapper(Parent, parents,
properties={
"children": relation(
Child,
cascade='all,delete-orphan',
backref='parent',
single_parent=True,
primaryjoin=parent_children.c.parentID == parents.c.id,
secondaryjoin=parent_children.c.childID == Child.id,
secondary=parent_children
),
}
)

parent1 = Parent("Parent 1")

child1 = Child("Child1", "test1")
child2 = Child("Child2", "test2")
child3 = Child("Child3", "test3")

child4 = Child("Child4", "test1")

parent1.children.append(child1)
parent1.children.append(child2)
parent1.children.append(child3)

# up to here we should have 3 children
print(parent1.children)

#adding this next one shoudl result in a constraint error, because it has the 
same col1 value as another in the collection (child1)
parent1.children.append(child4)

# removing child1 should allow child 4 to be added
print(parent1.children)

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: Only return single entity from query

2017-06-20 Thread Ryan Weinstein
Thanks for the reply Mike. SQLAlchemy is a fantastic ORM and you've done an 
amazing job with it.

You're suggestion hasn't fixed my issue though, perhaps there's something 
fundamental I'm misunderstanding?

Here is the query I'm testing:

r_product_category_history_list = 
db.Model.metadata.tables['r_product_category_history_list'] query = 
db.session.query(rProduct, r_product_category_history_list.c.time_updated)\ 
.join(rProductCategoryHistory)\ 
.order_by(desc(rProductCategoryHistory.time_updated))


It should be returning two rProduct's but it only returns one

On Tuesday, June 20, 2017 at 10:20:51 AM UTC-7, Mike Bayer wrote:
>
>
>
> On 06/20/2017 01:04 PM, Ryan Weinstein wrote: 
> > 
> > 
> > On Tuesday, June 20, 2017 at 9:53:10 AM UTC-7, Jonathan Vanasco wrote: 
> > 
> > you're almost there. 
> > 
> > flip a few lines around to rewrite the query to select from 
> > product...  e.g. something like 
> > 
> > SELECT r_product_list.* FROM r_product_list 
> > JOIN r_product_category_history_list on 
> > r_product_list.r_id=r_product_category_history_list.r_id 
> > JOIN r_product_reviews_historical_details_list on 
> > r_product_list.most_recent_historical_reviews_id=
> r_product_reviews_historical_details_list.id 
> > <http://r_product_reviews_historical_details_list.id> 
> > 
> > then it should be clear how to convert to the sqlalchemy syntax. 
> > 
> > 
> > The problem is when I do that it doesn't return the correct number of 
> > instances. I need one instance of rProduct per entry into 
> > r_product_category_history_list. 
>
> the ORM deduplicates full entities (e.g. mapped objects) when returned 
> from a Query object.  This is so that eager loading schemes don't return 
> dupes.  If you need each object associated with something distinct, add 
> that to the query: 
>
> session.query(Product, product_category_list.some_column) 
>
>
>
> > 
> > -- 
> > SQLAlchemy - 
> > The Python SQL Toolkit and Object Relational Mapper 
> > 
> > http://www.sqlalchemy.org/ 
> > 
> > To post example code, please provide an MCVE: Minimal, Complete, and 
> > Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> > description. 
> > --- 
> > You received this message because you are subscribed to the Google 
> > Groups "sqlalchemy" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to sqlalchemy+...@googlegroups.com  
> > <mailto:sqlalchemy+unsubscr...@googlegroups.com >. 
> > To post to this group, send email to sqlal...@googlegroups.com 
>  
> > <mailto:sqlal...@googlegroups.com >. 
> > Visit this group at https://groups.google.com/group/sqlalchemy. 
> > For more options, visit https://groups.google.com/d/optout. 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Only return single entity from query

2017-06-20 Thread Ryan Weinstein


On Tuesday, June 20, 2017 at 9:53:10 AM UTC-7, Jonathan Vanasco wrote:
>
> you're almost there.
>
> flip a few lines around to rewrite the query to select from product... 
>  e.g. something like
>
> SELECT r_product_list.* FROM r_product_list
> JOIN r_product_category_history_list on 
> r_product_list.r_id=r_product_category_history_list.r_id
> JOIN r_product_reviews_historical_details_list on 
> r_product_list.most_recent_historical_reviews_id=
> r_product_reviews_historical_details_list.id 
>
> then it should be clear how to convert to the sqlalchemy syntax.
>


Here are my models btw:v


class rProduct(Product):
__tablename__ =  'r_product_list'
id = db.Column(ForeignKey(u'product_list.id'))
r_id = db.Column(String(64), nullable=False, index=True, 
primary_key=True)

category_history = relationship("rProductCategoryHistory")
image = db.Column(String(512))
category_history_entry = relationship("rProductCategoryHistory", 
uselist=False)
history = relationship("rProductHistoricalDetails", 
back_populates='r_product', foreign_keys='rProductHistoricalDetails.r_id')

most_recent_historical_details_id = db.Column(Integer,
ForeignKey('r_product_historical_details_list.id'))
most_recent_historical_details_entry = 
relationship("rProductHistoricalDetails", uselist=False, 
foreign_keys=[most_recent_historical_details_id])

most_recent_historical_reviews_id = db.Column(Integer,
ForeignKey('r_product_reviews_historical_details_list.id'))
most_recent_historical_reviews_entry = 
relationship("rProductReviewsHistoricalDetails", uselist=False, 
foreign_keys=[most_recent_historical_reviews_id])

__mapper_args__ = {
'polymorphic_identity':'r_product'
}


class rProductCategoryHistory(db.Model):
__tablename__ =  'r_product_category_history_list'
id = db.Column(Integer, primary_key=True)
r_id = db.Column(String(64), ForeignKey('r_product_list.r_id'), 
nullable=False)
product = relationship("rProduct", uselist=False)

in_new = db.Column(db.Boolean(1), default=False, server_default="0")
in_trending = db.Column(db.Boolean(1), default=False, 
server_default="0")
in_top_finds = db.Column(db.Boolean(1), default=False, 
server_default="0")
in_deals = db.Column(db.Boolean(1), default=False, server_default="0")
is_tsv = db.Column(db.Boolean(1), default=False, server_default="0")
is_big_deal = db.Column(db.Boolean(1), default=False, 
server_default="0")

availability = db.Column(String(64))

time_updated = db.Column(TIMESTAMP, default=func.now(),
onupdate=func.now(), server_default=text("CURRENT_TIMESTAMP"))



class rProductHistoricalDetails(db.Model):
__tablename__ = 'r_product_historical_details_list'
id = db.Column(Integer, primary_key=True)

r_id = db.Column(String(64), ForeignKey('r_product_list.r_id'), 
nullable=False)
r_product = relationship("rProduct", uselist=False, foreign_keys=[r_id])
# product = relationship("rProduct", uselist=False, 
back_populates=[r_id])

r_price = db.Column( Float ) 
sale_price = db.Column( Float ) 
video = db.Column(String(512))

time_updated = db.Column(TIMESTAMP, default=func.now(),
onupdate=func.now(), server_default=text("CURRENT_TIMESTAMP"))

class rProductReviewsHistoricalDetails(db.Model):
__tablename__ = 'r_product_reviews_historical_details_list'
id = db.Column(Integer, primary_key=True)
r_id = db.Column(String(64), ForeignKey('r_product_list.r_id'), 
nullable=False)
r_product = relationship("rProduct", uselist=False, foreign_keys=[r_id])
total_reviews = db.Column(db.Integer())
average_rating = db.Column(db.Float())

time_updated = db.Column(TIMESTAMP, default=func.now(),
onupdate=func.now(), server_default=text("CURRENT_TIMESTAMP"))

 

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Only return single entity from query

2017-06-20 Thread Ryan Weinstein


On Tuesday, June 20, 2017 at 9:53:10 AM UTC-7, Jonathan Vanasco wrote:
>
> you're almost there.
>
> flip a few lines around to rewrite the query to select from product... 
>  e.g. something like
>
> SELECT r_product_list.* FROM r_product_list
> JOIN r_product_category_history_list on 
> r_product_list.r_id=r_product_category_history_list.r_id
> JOIN r_product_reviews_historical_details_list on 
> r_product_list.most_recent_historical_reviews_id=
> r_product_reviews_historical_details_list.id 
>
> then it should be clear how to convert to the sqlalchemy syntax.
>

The problem is when I do that it doesn't return the correct number of 
instances. I need one instance of rProduct per entry into 
r_product_category_history_list. 

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Only return single entity from query

2017-06-20 Thread Ryan Weinstein


On Monday, June 19, 2017 at 6:40:04 PM UTC-7, Jonathan Vanasco wrote:
>
> You're not joining anything onto the `rProductCategoryHistory` table, so 
> depending on your DB you may be getting populated rows for rProduct that 
> don't match anything.
>
> you probably want something like this...
>
> query = db.session.query(rProduct)\
> .join(rProductHistoricalDetails, 
> rProduct.most_recent_historical_details_id==rProductHistoricalDetails.id)\
> .join(rProductReviewsHistoricalDetails, 
> rProduct.most_recent_historical_reviews_entry==rProductReviewsHistoricalDetails.id)\
> .order_by(rProductHistoricalDetails.time_updated.desc())
>
> unless you missed joining rProductCategoryHistory.
>
> you should try writing a raw sql query that gets the right data.
>


This gets the right data:

SELECT * FROM r_product_category_history_list JOIN r_product_list on 
r_product_list.r_id=r_product_category_history_list.r_id JOIN 
r_product_reviews_historical_details_list ON 
r_product_list.most_recent_historical_reviews_id=r_product_reviews_historical_details_list.id
 
\G


But the problem is I want that data in my rProduct model 

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Only return single entity from query

2017-06-19 Thread Ryan Weinstein
  query = db.session.query(rProductCategoryHistory,rProduct)\ 
.join(rProduct,rProduct.r_id==rProductCategoryHistory.r_id)\ 
.join(rProductHistoricalDetails, 
rProductHistoricalDetails.id==rProduct.most_recent_historical_details_id)\ 
.join(rProductReviewsHistoricalDetails,rProductReviewsHistoricalDetails.id==rProduct.most_recent_historical_reviews_entry)\
 
.order_by(desc(rProductHistoricalDetails.time_updated))\

I've got this query that only works if I put rProductCategoryHistory and 
rProduct 
into it, but I only care about the rProduct instances returned. Whats the 
best way to have this same query while only returning rProduct instances?

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Feedback on "Basic Relationship Patterns" docs

2016-04-11 Thread Ryan Govostes
Hi,

I'm a beginner reading the "Basic Relationship Patterns" documentation:

http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html

The examples are confusing to me because all of the examples use "Parent" 
and "Child," which in reality have a Many-to-Many relationship. It was 
unclear to me whether the "One To Many" example illustrates "one parent to 
many children" or "one child to many parents." Understanding from the code 
samples (which are all very similar) presupposes a certain level of 
familiarity with how relationships are constructed in SQLAlchemy.

I think I've understood the first two examples to be:

One To Many:
- one Parent to many Children
- Child has foreign key referencing Parent
- Parent has relationship referencing a collection of Children

Many To One:
- one Child to many Parents
- Parent has foreign key referencing its Child
- Parent has relationship referencing a single Child

The latter example is confusing because so many aspects are being swapped 
around. It would be clearer to instead use the same "one Parent to many 
Children" scenario: 

Many To One:
- one Parent to many Children
- Child has foreign key referencing its Parent
- Child has relationship referencing a single Parent

This makes it clearer that only the third aspect---which side the 
relationship was declared on---has changed. 

The documentation would be improved by switching to more intuitive 
examples. How about

One To Many, Many To One: Countries and Cities
One To One: Capitals and Countries
Many To Many: Organizations and Members

It would help to explicitly state the scenario that is being modeled in 
each case, even if it appears obvious. The descriptions of the current 
examples all immediately jump into technical details of foreign keys, 
scalars, and association tables.

Ryan

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Filter by first in relationship

2015-09-02 Thread Ryan Eberhardt
Hello,

I have User and Session classes like this:

class User(Base):
__tablename__ = 'users'

   id = Column(Integer, primary_key=True)
   sessions = relationship('Session')

   @property
def last_login_time(self):
return sorted(self.sessions, reverse=True, key=lambda x: x.time)[0]
...

class Session(Base):
__tablename__ = 'sessions'

id = Column(Integer, primary_key=True)
user_id = Column(Integer, ForeignKey('users.id'))
time = Column(DateTime)
...

The last_login_time property sorts the user's sessions by date and returns 
the most recent one. Now I want to query users using this property. I know 
I can use hybrid properties to do this, but I have no idea how to write the 
sql expression with sqlalchemy (i.e. I don't know how to get a user's most 
recent session using sqlalchemy expressions). Any ideas on how to do this?

Thank you!

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Accessing association-specific data from a self-referential many-to-many relationship using association_proxy?

2015-07-06 Thread Ryan Holmes
I have tried this, but still cannot get it figured out. I keep getting 
exceptions such as  Could not determine join condition between 
parent/child tables on relationship..., etc. There are two things that 
seems to be hindering me:

- I'm using classical mappings. Many resources online assume declarative 
style, and I'm having trouble converting it. The use of classical mapping 
is due to how the project was started and is the convention used. To move 
to declarative style would be a rather large undertaking, though I guess 
it's possible.
- I'm using a self-referential mapping, which also complicates things, 
since most resources I find have two distinct objects along with the 
association.



On Monday, July 6, 2015 at 1:14:29 PM UTC-4, Michael Bayer wrote:

  

 On 7/6/15 12:06 PM, Ryan Holmes wrote:
  
  I have an interesting problem that I haven't been able to solve for 
 quite some time. I keep finding information about association proxies and 
 the like, but nothing that really helps (or maybe I'm implementing it 
 incorrectly).

  Let's start with this:

  class HandledProjectedItemList(list):
 def append(self, proj):
 proj.projected = True   
 list.append(self, proj)
 
 
 item_table = Table(items, saveddata_meta,
 Column(ID, Integer, primary_key = True),
 Column(itemID, Integer, nullable = False, index = True),
 Column(name, String, nullable = False),
 Column(timestamp, Integer, nullable = False),
 )
 
 projectedItem_table = Table(projectedItem, saveddata_meta,
 Column(sourceID, ForeignKey(item.ID), primary_key = True),
 Column(destID, ForeignKey(item.ID), primary_key = True),
 Column(enabled, Integer))
 
 mapper(Item, item_table,
 properties = {projectedItem : relation(Item,
 primaryjoin = projectedItem_table.c.destID == 
 item_table.c.ID,
 secondaryjoin = item_table.c.ID == 
 projectedItem_table.c.sourceID,
 secondary = projectedItem_table,
 collection_class = HandledProjectedItemList)
   })
  
  
  I have two tables: a `item` table, and a `projectedItem` table. The 
 `item` table is the main one, and contains information on items. The 
 projected items table is a self-referential many-to-many relationship to 
 the items table, where each item may have a collection of other items 
 attached to it. We use a custom collection class to load this relationship 
 so that we can modify a special attribute in the Item objects (if they are 
 loaded via this relationship, they have their `projected` attribute set to 
 `True`).

  This works great. But I also want read / write access to that extra 
 `enabled` column in the relationship table, while maintaining the current 
 functionality of loading the projected items into this custom collection 
 class. I haven't found any information on that that helps, or like I said 
 before, maybe I'm just not using it correctly.
  
 you would make a new class ProjectedItem and map it to the 
 projectedItem_table.  This is the association object pattern (get that to 
 work first, without the proxy part), illustrated at 
 http://docs.sqlalchemy.org/en/rel_1_0/orm/basic_relationships.html#association-object.
 
 Once that works, and all that's left is the inconvenience of navigating 
 from Item-projecteditems-item, then you can apply the association proxy 
 pattern to convert those two hops into just one (
 http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/associationproxy.html
 ).


  
  -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+...@googlegroups.com javascript:.
 To post to this group, send email to sqlal...@googlegroups.com 
 javascript:.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.


  

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Accessing association-specific data from a self-referential many-to-many relationship using association_proxy?

2015-07-06 Thread Ryan Holmes
I have an interesting problem that I haven't been able to solve for quite 
some time. I keep finding information about association proxies and the 
like, but nothing that really helps (or maybe I'm implementing it 
incorrectly).

Let's start with this:

class HandledProjectedItemList(list):
def append(self, proj):
proj.projected = True   
list.append(self, proj)


item_table = Table(items, saveddata_meta,
Column(ID, Integer, primary_key = True),
Column(itemID, Integer, nullable = False, index = True),
Column(name, String, nullable = False),
Column(timestamp, Integer, nullable = False),
)

projectedItem_table = Table(projectedItem, saveddata_meta,
Column(sourceID, ForeignKey(item.ID), primary_key = True),
Column(destID, ForeignKey(item.ID), primary_key = True),
Column(enabled, Integer))

mapper(Item, item_table,
properties = {projectedItem : relation(Item,
primaryjoin = projectedItem_table.c.destID == 
item_table.c.ID,
secondaryjoin = item_table.c.ID == 
projectedItem_table.c.sourceID,
secondary = projectedItem_table,
collection_class = HandledProjectedItemList)
  })


I have two tables: a `item` table, and a `projectedItem` table. The `item` 
table is the main one, and contains information on items. The projected 
items table is a self-referential many-to-many relationship to the items 
table, where each item may have a collection of other items attached to it. 
We use a custom collection class to load this relationship so that we can 
modify a special attribute in the Item objects (if they are loaded via this 
relationship, they have their `projected` attribute set to `True`).

This works great. But I also want read / write access to that extra 
`enabled` column in the relationship table, while maintaining the current 
functionality of loading the projected items into this custom collection 
class. I haven't found any information on that that helps, or like I said 
before, maybe I'm just not using it correctly.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Preventing duplicate entries for an association proxy

2015-05-22 Thread Lie Ryan
I am currently stuck on creating association proxy.

The composite association_proxy (
http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html#composite-association-proxies)
 
pattern pointed to using UniqueObject recipe (
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/UniqueObject) to 
prevent creator from creating duplicate Keyword objects.

How do I obtain the session from inside the creator function when I manage 
sessions explicitly (i.e. not using scoped_session)?

Here is what I got to so far:

from sqlalchemy import Column, Integer, String, ForeignKey, Table, 
create_engine, UniqueConstraint
from sqlalchemy.orm import relationship, sessionmaker, backref
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()
DBSession = sessionmaker()

# taken verbatim from 
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/UniqueObject
def _unique(session, cls, hashfunc, queryfunc, constructor, arg, kw):
cache = getattr(session, '_unique_cache', None)
if cache is None:
session._unique_cache = cache = {}

key = (cls, hashfunc(*arg, **kw))
if key in cache:
return cache[key]
else:
with session.no_autoflush:
q = session.query(cls)
q = queryfunc(q, *arg, **kw)
obj = q.first()
if not obj:
obj = constructor(*arg, **kw)
session.add(obj)
cache[key] = obj
return obj


def creator(key, value):
#return Tag(key=key, value=value) # this creates a Unique Constraint 
error on Tag table
return _unique(
session, # NameError: global name 'session' is not defined
Tag,
lambda key, value: (key, value),
lambda query, key, value: query.filter_by(key=key, value=value),
Tag,
arg=[],
kw={'key': key, 'value': value},
)


class Object(Base):
__tablename__ = 'obj'
id = Column(Integer, primary_key=True)
name = Column(String(64))

tags_obj = relationship(
'Tag',
secondary='obj_tag',
backref='objs',
collection_class=attribute_mapped_collection('key'),
)
tags = association_proxy(
'tags_obj',
'value',
creator=creator,
)

def __init__(self, name):
self.name = name


class ObjectTag(Base):
__tablename__ = 'obj_tag'
obj_id = Column(Integer, ForeignKey('obj.id'), primary_key=True)
tag_id = Column(Integer, ForeignKey('tag.id'), primary_key=True)


class Tag(Base):
__tablename__ = 'tag'
id = Column(Integer, primary_key=True)
key = Column('key', String(64))
value = Column('value', String(64))

__table_args__ = (
UniqueConstraint('key', 'value', name=tag_uq'),
)


def main():
engine = create_engine('sqlite://')
session = DBSession(bind=engine)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

obj1 = Object('foo')
obj2 = Object('bar')
obj1.tags['cheese'] = 'inspector'
obj2.tags['cheese'] = 'inspector'
session.add(obj1)
session.add(obj2)
assert session.query(Tag).count() == 1 # (IntegrityError) UNIQUE 
constraint failed: tag.key, tag.value
session.commit()

main()

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Difference between this SQL query and this sqlalchemy one?

2014-08-02 Thread Ryan Kelly
On Sat, Aug 2, 2014 at 9:11 PM, Elliot offonoffoffon...@gmail.com wrote:
 I am using sqlalchemy on a database of Google Transit Feed data.  The models
 are from pygtfs:
 https://github.com/jarondl/pygtfs/blob/master/pygtfs/gtfs_entities.py

 Basically, StopTime is related both to Stop and Trip.  I want all of the
 stoptimes of all of the trips which have one StopTime that has a particular
 stop.

 I believe this SQL does that:

 stoptimes1 = session.execute('''
 SELECT * FROM stop_times
 JOIN trips ON stop_times.trip_id=trips.trip_id
 JOIN stop_times AS st2 ON st2.trip_id=trips.trip_id
 WHERE stop_times.stop_id=635 ORDER BY st2.stop_sequence
 ''').fetchall()

 And I thought that this would be a translation of that into sqlalchemy:

 from sqlalchemy.orm import aliased
 st2 = aliased(StopTime)
 stoptimes2 =
 session.query(StopTime).join(StopTime.trip).filter(StopTime.stop_id ==
 u'635').join(st2,Trip).order_by(st2.stop_sequence)

 But it is not:

 len(stoptimes1)
   2848

 len(stoptimes2)
   109

print(stoptimes2) should reveal your issue. See also:
https://stackoverflow.com/questions/4617291/how-do-i-get-a-raw-compiled-sql-query-from-a-sqlalchemy-expression

 Where 109 is how many trips have a StopTime with stop_id == 635, and 2848 is
 a much more reasonable number because each of those trips has many of stop
 times.

 I would really appreciate getting some insight into how to do in sqlalchemy
 what was done with the SQL.

 Thanks,
   Elliot

-Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: FAQ for CREATE TABLE output incomplete

2014-06-18 Thread Ryan Kelly
On Wed, Jun 18, 2014 at 11:15 AM, Mike Bayer mike...@zzzcomputing.com wrote:

 On 6/18/14, 2:06 AM, rpkelly wrote:
 It seems like passing literal_binds=True to the call to
 sql_compiler.process in get_column_default_string will work, so long
 as SQLAlchemy can convert the values to literal binds. Which, in the
 example given, isn't the case.

 the long standing practice for passing literals into server_default and
 other places is to use literal_column():

 server_default=func.foo(literal_column(bar))),
The issue with this is that in my actual code, the values are read
from somewhere else, so
I was trying to find a safe way to use them without having to deal
with quoting/escaping
issues so my code is Jimmy-proof.

 for the array, you need to use postgresql.array(), not func.array().
 It will work like this:
When I created the example, I changed the name of the function from
make_array to array.
So it actually is a function call to make_array, so it seems I need to
put postgresql.array()
inside of func.make_array() (or use the variadic form and unpack the list).


 tbl = Table(derp, metadata,
 Column(arr, ARRAY(Text),
 server_default=array([literal_column('foo'),
 literal_column('bar'),
 literal_column('baz')])),
 )

 the docs suck.
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3086/server_default-poorly-documented
 is added (referring to
 http://docs.sqlalchemy.org/en/rel_0_9/core/defaults.html#server-side-defaults).

 then for literal_binds.  We've been slowly adding the use of this new
 parameter with a fair degree of caution, both because it can still fail
 on any non-trivial kind of datatype and also because a feature that
 bypasses the bound parameter logic is just something we've avoided for
 years, due to the great security hole it represents.We added it for
 index expressions in #2742.
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3087/literal_binds-in-server_default
 will add it for server_default.   it's 1.0 for now but can be
 potentially backported to 0.9.5.
Right, but I also don't think it's safe to issue DDL with arbitrary
input as it currently stands,
even values which are correctly escaped/formatted/etc. might result in
name collisions or
shadowing, or other undesirable behavior. I'm not sure if the
documentation makes a
statement about issuing DDL using information from untrusted sources,
but it probably should.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] FAQ for CREATE TABLE output incomplete

2014-06-17 Thread Ryan Kelly
Hi:

It seems that the fact describes a procedure to get a string to create
a table: 
http://docs.sqlalchemy.org/en/latest/faq.html#how-can-i-get-the-create-table-drop-table-output-as-a-string

However, this does not work correctly when a table has a
server_default which is a function that takes arguments:

tbl = Table(derp, metadata,
Column(arr, ARRAY(Text), server_default=func.array([foo,
bar, baz])),
)

When using the method described in the docs:

stmt = str(schema.CreateTable(tbl).compile(dialect=session.bind.dialect))

We get:

CREATE TABLE derp (
arr TEXT[] DEFAULT array(%(array_1)s)
)

The solution (for psycopg2, anyway):

stmt = schema.CreateTable(tbl).compile(dialect=session.bind.dialect)
dialect = session.bind.dialect
enc = dialect.encoding
params = {}
for k,v in stmt.sql_compiler.params.iteritems():
if isinstance(v, unicode):
v = v.encode(enc)
params[k] = sqlescape(v)
stmt = (str(stmt).encode(enc) % params).decode(enc)

Which results in the expected:

CREATE TABLE derp (
arr TEXT[] DEFAULT array(%(array_1)s)
)

Of course, there isn't a generic solution to this problem, but I think
the docs should describe the limitations of the given solution, at a
minimum.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Adding where clause to existing SELECT without wrapping SELECT

2014-05-19 Thread Ryan Kelly
Is there some issue with doing: sel2 = sel.where(product_table.c.name
== water) ?

If you really have no reference to the product_table, I think you can
hack it with sel._raw_columns to try and find the column you want to
filter to.

-Ryan Kelly

On Mon, May 19, 2014 at 9:12 PM, gbr doubl...@directbox.com wrote:
 I'm trying to modify a select statement which is fairly complex and which is
 created in a function that I don't want to modify. It is returned from this
 function and I'd like to add more WHERE clauses by just referencing the
 returned select statement. How can I do this without causing SQLA wrapping
 the returned select statement in another select?

 See code below for demonstration. `sel` is the complex select statement (in
 my application returned from the function) and by `sel2 = sel.where()` I try
 to add another clause.

 from sqlalchemy import *
 metadata = MetaData()
 product_table = Table('product', metadata, Column('id', Integer),
 Column('name', String(32)))
 sel = select(columns=[product_table.c.id.label('product_id'),
 product_table.c.name.label('product_name'),
 order_table.c.id.label('order_id'), 
 order_table.c.name.label('order_name')],
 from_obj=product_table.join(order_table,
 order_table.c.product_id==product_table.c.id))

 # Fine
 print sel
 SELECT product.id AS product_id, product.name AS product_name, order.id AS
 order_id, order.name AS order_name
 FROM product JOIN order ON order.product_id = product.id

 # Trying to add a where condition to sel
 sel2 = sel.where(sel.c.product_name=='water')
 # Which unfortunately wraps the select in another select. Any way of
 adding the WHERE to `sel` post construction of `select()`?
 print sel2
 SELECT product.id AS product_id, product.name AS product_name, order.id AS
 order_id, order.name AS order_name
 FROM (SELECT product.id AS product_id, product.name AS product_name,
 order.id AS order_id, order.name AS order_name
 FROM product JOIN order ON order.product_id = product.id), product JOIN
 order ON order.product_id = product.id
 WHERE product_name = :product_name_1

 # I would have expected:
 SELECT product.id AS product_id, product.name AS product_name,
 order.id AS order_id, order.name AS order_name FROM product JOIN 
 order
 ON order.product_id = product.id WHERE product_name = :product_name_1

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Argument to LIMIT must be a number?

2014-03-21 Thread Ryan Kelly
I see your compatibility concerns.

Let me see if .offset() without .limit() does the right thing.

-Ryan Kelly

On Thu, Mar 20, 2014 at 10:39 PM, Michael Bayer
mike...@zzzcomputing.com wrote:
 the int() catch here on limit/offset is something we added due to user report 
 which raised the issue that it's a security hazard, back when we used to 
 render the given value directly in the SQL without using a bound parameter.   
 We fixed both that it allowed non-int values as well as that it didn't use a 
 bound parameter; technically only one or the other is needed at most.  But it 
 quickly got picked up by about a hundred security advisory bots blasting 
 for months about it as a SQLAlchemy security advisory, so I haven't gone near 
 liberalizing it.  At this point the int check could probably switch on 
 either int or a SQL expression (just not straight text). Looking at the 
 way limit_clause() is written we'd have to change how _limit/_offset are 
 interpreted in order to allow other expressions in there.  There may be 
 existing recipes that expect _limit/_offset to be plain integers though.

 but without even getting into changing anything, looking at the source for 
 the postgresql compiler it would appear ALL is already emitted if OFFSET is 
 present and LIMIT is not:

 from sqlalchemy.sql import select
 from sqlalchemy.dialects import postgresql
 print select(['foo']).offset(5).compile(dialect=postgresql.dialect())

 output:

 SELECT foo
  LIMIT ALL OFFSET %(param_1)s

 Seems like it's been this way for years, it already emits ALL.  So there's no 
 issue?




 On Mar 20, 2014, at 5:49 PM, Ryan Kelly rpkell...@gmail.com wrote:

 Redshift needs LIMIT ALL to avoid attempting an optimization which
 causes it to crash.

 Note that OFFSET 0 is the same as omitting OFFSET, but it acts as an
 optimization fence. This, I suppose, is a similar thing.

 -Ryan Kelly

 On Thu, Mar 20, 2014 at 5:38 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:

 LIMIT NULL and LIMIT ALL per the PG docs at 
 http://www.postgresql.org/docs/9.0/static/queries-limit.html are the same 
 as omitting the number.These would appear to be syntactical helpers 
 that you wouldn't really need when working with a select() construct 
 (unless you're trying to get at some PG optimizer quirk).

 so why exactly do you need to emit these otherwise unnecessary keywords ?



 On Mar 20, 2014, at 5:29 PM, Ryan Kelly rpkell...@gmail.com wrote:

 Hi:

 It seems that the argument to query.limit must be a number. However,
 NULL (which I imagine could be passed by the null() construct or as a
 string) and ALL (which, I suppose could be text(ALL) or
 literal(ALL) or just the string ALL) are perfectly acceptable
 values on PostgreSQL.

 Is there some way to convince SQLAlchemy to render these values?

 -Ryan Kelly

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Argument to LIMIT must be a number?

2014-03-20 Thread Ryan Kelly
Hi:

It seems that the argument to query.limit must be a number. However,
NULL (which I imagine could be passed by the null() construct or as a
string) and ALL (which, I suppose could be text(ALL) or
literal(ALL) or just the string ALL) are perfectly acceptable
values on PostgreSQL.

Is there some way to convince SQLAlchemy to render these values?

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Argument to LIMIT must be a number?

2014-03-20 Thread Ryan Kelly
Redshift needs LIMIT ALL to avoid attempting an optimization which
causes it to crash.

Note that OFFSET 0 is the same as omitting OFFSET, but it acts as an
optimization fence. This, I suppose, is a similar thing.

-Ryan Kelly

On Thu, Mar 20, 2014 at 5:38 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 LIMIT NULL and LIMIT ALL per the PG docs at 
 http://www.postgresql.org/docs/9.0/static/queries-limit.html are the same as 
 omitting the number.These would appear to be syntactical helpers that you 
 wouldn't really need when working with a select() construct (unless you're 
 trying to get at some PG optimizer quirk).

 so why exactly do you need to emit these otherwise unnecessary keywords ?



 On Mar 20, 2014, at 5:29 PM, Ryan Kelly rpkell...@gmail.com wrote:

 Hi:

 It seems that the argument to query.limit must be a number. However,
 NULL (which I imagine could be passed by the null() construct or as a
 string) and ALL (which, I suppose could be text(ALL) or
 literal(ALL) or just the string ALL) are perfectly acceptable
 values on PostgreSQL.

 Is there some way to convince SQLAlchemy to render these values?

 -Ryan Kelly

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] outerjoin where first selected column is a literal dies with confusing error

2014-03-17 Thread Ryan Kelly
Hi:

When I run the attached example, I get the following error:

sqlalchemy.exc.NoInspectionAvailable: No inspection system is
available for object of type type 'NoneType'

Which is a result of literal(1) appearing first in the select list.
I don't particularly care than I can't order my columns that way, but
the error message cost me about 30 minutes just trying to figure out
why.

Not sure if there is a good way to fix this or what the correct approach is.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
#!/usr/bin/env python

from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql.expression import literal


engine = create_engine(sqlite:///:memory:, echo=True)
session = sessionmaker(bind=engine)()

metadata = MetaData()

some_table = Table(some_table, metadata,
Column(foo, Integer),
)

other_table = Table(other_table, metadata,
Column(foo, Integer),
)

results = (
session.query(
literal(1),
some_table.c.foo,
other_table.c.foo)
.outerjoin(other_table, some_table.c.foo == some_table.c.foo)
).all()



Re: [sqlalchemy] TypeError: Range objects cannot be ordered in flush

2013-12-24 Thread Ryan Kelly
On Tue, Dec 12/24/13, 2013 at 11:52:29AM -0500, Michael Bayer wrote:
 
 On Dec 24, 2013, at 10:48 AM, Chris Withers ch...@simplistix.co.uk wrote:
 
  Hi All,
  
  I feel like I've asked this before but apologies, I cannot find the 
  previous thread.
  
  So, when using the support for psycopg2's range types I added, I sometimes 
  see the following during a flush:
  
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/session.py,
   line 1818, in flush
 self._flush(objects)
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/session.py,
   line 1936, in _flush
 transaction.rollback(_capture_exception=True)
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/util/langhelpers.py,
   line 58, in __exit__
 compat.reraise(exc_type, exc_value, exc_tb)
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/session.py,
   line 1900, in _flush
 flush_context.execute()
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/unitofwork.py,
   line 372, in execute
 rec.execute(self)
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/unitofwork.py,
   line 525, in execute
 uow
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/persistence.py,
   line 45, in save_obj
 uowtransaction)
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/persistence.py,
   line 140, in _organize_states_for_save
 states):
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/persistence.py,
   line 767, in _connections_for_states
 for state in _sort_states(states):
   File 
  /Users/chris/buildout-eggs/SQLAlchemy-0.8.2-py2.7-macosx-10.5-x86_64.egg/sqlalchemy/orm/persistence.py,
   line 792, in _sort_states
 sorted(persistent, key=lambda q: q.key[1])
   File 
  /Users/chris/buildout-eggs/psycopg2-2.5.1-py2.7-macosx-10.5-x86_64.egg/psycopg2/_range.py,
   line 138, in __lt__
 'Range objects cannot be ordered; please refer to the PostgreSQL'
  
  What's going on here? Is it SQLAlchemy, psycopg2 or my code that's at fault?
 
 well hard to say “bug” or “feature needed”, SQLAlchemy’s UOW wants to emit 
 UPDATE statements in primary key order so that the chance of deadlocks 
 against other transactions is minimized.  But it appears you’re using a range 
 type as a primary key and that psycopg2 is not very happy about SQLAlchemy’s 
 assumption that primary keys can be sorted.
 
 Workarounds include not using ranges as primary keys, overriding PG’s range 
 type with some decorated type that is sortable.Potential SQLAlchemy 
 feature would be, “don’t sort by primary key” flag?   Guess so.
 
 

IMO psycopg2's implementation should be patched, since they basically
just didn't implement ordering. PostgreSQL itself has no problem
ordering range types (though the ordering is somewhat arbitrary):
http://www.postgresql.org/docs/9.2/static/rangetypes.html#RANGETYPES-GIST

-Ryan


signature.asc
Description: Digital signature


[sqlalchemy] NoSuchColumnError and _key_fallback

2013-10-10 Thread Ryan Kelly
Hi:

One of our applications is generating the following error:

NoSuchColumnError: Could not locate column in row for column 
'client.claims.client_id'

Which is rather strange, because that column is aliased with .label() to
AS Client_ID, so of course that row cannot be located.

The exception is raised from within ResultMetaData._key_fallback, which
has the following comment:

# fallback for targeting a ColumnElement to a textual expression
# this is a rare use case which only occurs when matching text()
# or colummn('name') constructs to ColumnElements, or after a
# pickle/unpickle roundtrip

But this isn't true for us. It's a fairly standard query generated like:

query = (
session.query(tbl.c.client_id.label(Client_ID))
.filter(tbl.c.group_id.in_(group_ids))
)

Digging deeper into the problem, I set a breakpoint inside _key_fallback
to poke around in the ResultMetaData object. A few things seemed
somewhat odd to me. The first is that self._keymap contained client_id
(and not Client_ID). But we're using postgres, and
dialect.case_sensitive is True. Of course, I then looked into how
self._keymap was being populated, and managed to get into the cursor's
underlying cursor.description attribute. And this contained all
lowercase names, which I suppose is how SQLAlchemy got lowercase names.
Also, self._keymap was basically this:

{0: (None, None, 0), client_id: (None, None, 0)}

I'm not sure if this correct or not.

I also managed to get a hold of the underlying result row before it was
used to generate a KeyedTuple (actually, the generation of the first
KeyedTuple is where the error occurred). row.keys() produced a list of
all lowercase aliases, whereas the original aliases (as mentioned above)
were mixed-case.

The query itself (at this point) was also an AliasedSelect instance, I'm
not sure if that has any bearing.

To temporarily work around the problem, we set all of the alias names to
lowercase.

We recently upgraded from SQLAlchemy 0.7.10 to SQLAlchemy 0.8.2. We are
using psycopg2 2.4.4 (dt dec mx pq3 ext).

We have thousands of SQLAlchemy queries (including many queries that use
mixed-case aliases) in our code base and this seems to be the only query
that has any problem.

I'm not sure whether or not this is a SQLAlchemy issue or a psycopg2
issue or what, but I figured I'd start here because the error is
originating from SQLAlchemy.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [sqlalchemy] NoSuchColumnError and _key_fallback

2013-10-10 Thread Ryan Kelly
On Thu, Oct 10/10/13, 2013 at 12:59:31PM -0400, Michael Bayer wrote:
 
 On Oct 10, 2013, at 8:34 AM, Ryan Kelly rpkell...@gmail.com wrote:
 
  Hi:
  
  One of our applications is generating the following error:
  
  NoSuchColumnError: Could not locate column in row for column 
  'client.claims.client_id'
  
  Which is rather strange, because that column is aliased with .label() to
  AS Client_ID, so of course that row cannot be located.
 
 when the SQL is actually rendered, is the Client_ID name rendered with 
 quotes?  it should be, as that is a case-sensitive name.
Yes, the SQL is correctly quoted.

 0.8 made a change regarding upper/lower case names which is that we no
 longer call lower() on identifier names when producing lookup
 dictionaries for result rows - see
 http://docs.sqlalchemy.org/en/rel_0_8/changelog/migration_08.html#case-insensitive-result-row-names-will-be-disabled-in-most-cases
  

 or the failure here might have something to do with a conflicting
 client_id name elsewhere in the query. 
It's the only one.

 you might want to see is sending case_sensitive=False to
 create_engine() restores the working behavior you saw in 0.7.
 it does sound like something originating within your query and how
 SQLAlchemy handles it, psycopg2 is unlikely to be playing a role here.
I guess I will poke around some more.

-Ryan Kelly


signature.asc
Description: Digital signature


[sqlalchemy] Why does Inspector.from_engine cause ROLLBACK to be issued?

2013-06-27 Thread Ryan Kelly
Attempting to use Inspector.from_engine to get a list a table names from
the database. Running the test suite of a large program I found that my
data would disappear. After digging around in my own code for a few
hours trying to find the problem, I tracked it down to the call to
Inspector.from_engine, and inside I see this:

if self.engine is bind:
# if engine, ensure initialized
bind.connect().close()

It seems to be the cause of the ROLLBACK.

I'm not sure why this is necessary. Thoughts?

-Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Why does Inspector.from_engine cause ROLLBACK to be issued?

2013-06-27 Thread Ryan Kelly

On Thu, Jun 06/27/13, 2013 at 05:14:30PM -0400, Michael Bayer wrote:
 what kind of pooling are you using?   

StaticPool. I'm creating temporary tables over the connection and
without StaticPool I end up with a new connection and my temporary
tables are nowhere to be found.

 normally connect() should return a new connection independent of any other.   
 Only the StaticPool and the SingletonThreadPool, both very special use pools, 
 have any notion of sharing the same connection for multiple connect() calls 
 (or if you're using the ThreadLocal engine, which is highly unusual).
 These pools are never used by default unless you use a SQLite :memory: engine 
 in which case it uses SingletonThreadPool.
 
 The rationale for the call is so that elements which are initialized the 
 first time the Engine connects, such as default_schema_name, are available. 
   The call could be conditionalized.
 
 On Jun 27, 2013, at 3:05 PM, Ryan Kelly rpkell...@gmail.com wrote:
 
  Attempting to use Inspector.from_engine to get a list a table names from
  the database. Running the test suite of a large program I found that my
  data would disappear. After digging around in my own code for a few
  hours trying to find the problem, I tracked it down to the call to
  Inspector.from_engine, and inside I see this:
  
  if self.engine is bind:
 # if engine, ensure initialized
 bind.connect().close()
  
  It seems to be the cause of the ROLLBACK.
  
  I'm not sure why this is necessary. Thoughts?
  
  -Ryan

-Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Why does Inspector.from_engine cause ROLLBACK to be issued?

2013-06-27 Thread Ryan Kelly
On Thu, Jun 06/27/13, 2013 at 05:27:36PM -0400, Michael Bayer wrote:
 Ok how about you connect() at the start of things, then use that Connection 
 as your bind ?  rather than using odd pools.
I'm not exactly sure how to approach that and still use the session at
the same time, but I'll play around with it and see what happens.

 On Jun 27, 2013, at 5:26 PM, Ryan Kelly rpkell...@gmail.com wrote:
 
  
  On Thu, Jun 06/27/13, 2013 at 05:14:30PM -0400, Michael Bayer wrote:
  what kind of pooling are you using?   
  
  StaticPool. I'm creating temporary tables over the connection and
  without StaticPool I end up with a new connection and my temporary
  tables are nowhere to be found.
  
  normally connect() should return a new connection independent of any 
  other.   Only the StaticPool and the SingletonThreadPool, both very 
  special use pools, have any notion of sharing the same connection for 
  multiple connect() calls (or if you're using the ThreadLocal engine, which 
  is highly unusual).These pools are never used by default unless you 
  use a SQLite :memory: engine in which case it uses SingletonThreadPool.
  
  The rationale for the call is so that elements which are initialized the 
  first time the Engine connects, such as default_schema_name, are 
  available.   The call could be conditionalized.
  
  On Jun 27, 2013, at 3:05 PM, Ryan Kelly rpkell...@gmail.com wrote:
  
  Attempting to use Inspector.from_engine to get a list a table names from
  the database. Running the test suite of a large program I found that my
  data would disappear. After digging around in my own code for a few
  hours trying to find the problem, I tracked it down to the call to
  Inspector.from_engine, and inside I see this:
  
  if self.engine is bind:
# if engine, ensure initialized
bind.connect().close()
  
  It seems to be the cause of the ROLLBACK.
  
  I'm not sure why this is necessary. Thoughts?
  
  -Ryan
  
  -Ryan

-Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] HSTORE serialize/de-serialize incorrectly handles backslashes

2013-06-26 Thread Ryan Kelly
On Tue, Jun 06/25/13, 2013 at 02:47:18PM -0400, Michael Bayer wrote:
 
 On Jun 25, 2013, at 2:13 PM, Ryan Kelly rpkell...@gmail.com wrote:

  There are also some other parsing problems that I consider to be corner
  cases and broken as implemented in PostgreSQL, such as:
  
  (postgres@[local]:5432 14:05:43) [dev] select 'a=,b='::hstore;
  (postgres@[local]:5432 14:05:47) [dev] select 'a=, b='::hstore;
  (postgres@[local]:5432 14:06:45) [dev] select 'a= , b='::hstore;
  (postgres@[local]:5432 14:06:48) [dev] select 'a= ,b='::hstore;
  (postgres@[local]:5432 14:06:50) [dev] select 'a=,'::hstore;
  (postgres@[local]:5432 14:10:12) [dev] select ',=,'::hstore;
  
  None of which are parsed by SQLAlchemy but some of which are parsed by
  PostgreSQL.
 
 Posgresql or psycopg2 ?   isn't the serialization here normally done 
 natively if you're on a more recent psycopg2?

Well these are really de-serialization. But yes, serialization and
de-serialization are handled natively by psycopg2. In some circumstances
we actually need to parse/write the values directly ourselves (usually
when reading/writing files in the COPY format). Unfortunately,
psycopg2's parser is much stricter than the documented format as it
requires quoted keys (which in some ways make sense, as the server will
never return anything unquoted) so none of the above examples work.

-Ryan P. Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] HSTORE serialize/de-serialize incorrectly handles backslashes

2013-06-25 Thread Ryan Kelly
As we're trying to convert from our own homegrown version of the HSTORE
type, it seems that our tests have been broken by SQLAlchemy's handling
of serialization/de-serialization for hstores containing backslashes.

The current serialization behavior of SQLAlchemy will do this:
{'\\a': '\\1'} = 'a=1'

Trying to de-serialize the result yields:
ValueError: After 'a=', could not parse residual at position 12: 
'1'

This is using the _serialize_hstore and _parse_hstore functions.

The correct behavior, I believe, should be this:
{'\\a': '\\1'} = '\\a=\\1'

Trying to de-serialize the result yields:
'\\a=\\1' = {'\\a': '\\1'}

Which is what we're looking for. Attached is a patch and tests.

There are also some other parsing problems that I consider to be corner
cases and broken as implemented in PostgreSQL, such as:

(postgres@[local]:5432 14:05:43) [dev] select 'a=,b='::hstore;
(postgres@[local]:5432 14:05:47) [dev] select 'a=, b='::hstore;
(postgres@[local]:5432 14:06:45) [dev] select 'a= , b='::hstore;
(postgres@[local]:5432 14:06:48) [dev] select 'a= ,b='::hstore;
(postgres@[local]:5432 14:06:50) [dev] select 'a=,'::hstore;
(postgres@[local]:5432 14:10:12) [dev] select ',=,'::hstore;

None of which are parsed by SQLAlchemy but some of which are parsed by
PostgreSQL. You can see the bug report I filed about some of them here:
http://www.postgresql.org/message-id/20120426190513.gb31...@llserver.lakeliving.com

-Ryan P. Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


diff --git a/lib/sqlalchemy/dialects/postgresql/hstore.py b/lib/sqlalchemy/dialects/postgresql/hstore.py
index d7368ff..c645e25 100644
--- a/lib/sqlalchemy/dialects/postgresql/hstore.py
+++ b/lib/sqlalchemy/dialects/postgresql/hstore.py
@@ -68,11 +68,11 @@ def _parse_hstore(hstore_str):
 pair_match = HSTORE_PAIR_RE.match(hstore_str)
 
 while pair_match is not None:
-key = pair_match.group('key')
+key = pair_match.group('key').replace(r'\', '').replace(, \\)
 if pair_match.group('value_null'):
 value = None
 else:
-value = pair_match.group('value').replace(r'\', '')
+value = pair_match.group('value').replace(r'\', '').replace(, \\)
 result[key] = value
 
 pos += pair_match.end()
@@ -98,7 +98,7 @@ def _serialize_hstore(val):
 if position == 'value' and s is None:
 return 'NULL'
 elif isinstance(s, util.string_types):
-return '%s' % s.replace('', r'\')
+return '%s' % s.replace(\\, ).replace('', r'\')
 else:
 raise ValueError(%r in %s position is not a string. %
  (s, position))
diff --git a/test/dialect/test_postgresql.py b/test/dialect/test_postgresql.py
index 46a7b31..d277e82 100644
--- a/test/dialect/test_postgresql.py
+++ b/test/dialect/test_postgresql.py
@@ -2948,6 +2948,16 @@ class HStoreTest(fixtures.TestBase):
 'key1=value1, key2=value2'
 )
 
+def test_bind_serialize_with_slashes_and_quotes(self):
+from sqlalchemy.engine import default
+
+dialect = default.DefaultDialect()
+proc = self.test_table.c.hash.type._cached_bind_processor(dialect)
+eq_(
+proc({'\\a': '\\1'}),
+'\\a=\\1'
+)
+
 def test_parse_error(self):
 from sqlalchemy.engine import default
 
@@ -2974,6 +2984,17 @@ class HStoreTest(fixtures.TestBase):
 {key1: value1, key2: value2}
 )
 
+def test_result_deserialize_with_slashes_and_quotes(self):
+from sqlalchemy.engine import default
+
+dialect = default.DefaultDialect()
+proc = self.test_table.c.hash.type._cached_result_processor(
+dialect, None)
+eq_(
+proc('\\a=\\1'),
+{'\\a': '\\1'}
+)
+
 def test_bind_serialize_psycopg2(self):
 from sqlalchemy.dialects.postgresql import psycopg2
 


[sqlalchemy] Forcing a connection recycle/refresh

2013-06-21 Thread Ryan Kelly
I'd like to manually trigger a connection recycle/recreate even, but I'm
not sure how to do so. The recreate method of the Pool class returns a
new pool instance, but I'd be surprised if it was safe to assign it back
to, e.g., session.connection().connection._pool.

What is the correct way to trigger a connection recycle given a session?

-Ryan P. Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] [BUG][PATCH] Function names not quoted when necessary

2013-06-06 Thread Ryan Kelly
Function names in SQL can contain pretty much anything, e.g.:

=# create function A Bug?(integer) returns integer as $$ select $1; $$ 
language sql;
CREATE FUNCTION

But when attempting to use the function from SQLAlchemy:

from sqlalchemy.sql.expression import func
bug = getattr(func, A Bug?)(1)
session.query(bug).all()

ProgrammingError: (ProgrammingError) syntax error at or near ?
LINE 1: SELECT A Bug?(1) AS A Bug?_1
 'SELECT A Bug?(%(A Bug?_2)s) AS A Bug?_1' {'A Bug?_2': 1}

-Ryan P. Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py
index dd2a6e0..ada56c6 100644
--- a/lib/sqlalchemy/sql/compiler.py
+++ b/lib/sqlalchemy/sql/compiler.py
@@ -603,7 +603,10 @@ class SQLCompiler(engine.Compiled):
 if disp:
 return disp(func, **kwargs)
 else:
-name = FUNCTIONS.get(func.__class__, func.name + %(expr)s)
+name = FUNCTIONS.get(
+func.__class__,
+self.preparer.quote(func.name, None) + %(expr)s
+)
 return ..join(list(func.packagenames) + [name]) % \
 {'expr': self.function_argspec(func, **kwargs)}
 
diff --git a/test/sql/test_compiler.py b/test/sql/test_compiler.py
index 473a422..6ea4d2a 100644
--- a/test/sql/test_compiler.py
+++ b/test/sql/test_compiler.py
@@ -2481,6 +2481,49 @@ class SelectTest(fixtures.TestBase, AssertsCompiledSQL):
 and_, (a,), (b,)
 )
 
+def test_func(self):
+f1 = func.somefunc(1)
+self.assert_compile(
+select([f1]),
+SELECT somefunc(:somefunc_2) AS somefunc_1,
+)
+self.assert_compile(
+select([f1.label(f1)]),
+SELECT somefunc(:somefunc_1) AS f1,
+)
+
+f2 = func.somefunc(table1.c.name)
+self.assert_compile(
+select([f2]),
+SELECT somefunc(mytable.name) AS somefunc_1 FROM mytable,
+)
+self.assert_compile(
+select([f2.label(f2)]),
+SELECT somefunc(mytable.name) AS f2 FROM mytable,
+)
+
+f3 = getattr(func, Needs Quotes?)(table1.c.myid)
+self.assert_compile(
+select([f3]),
+'SELECT Needs Quotes?(mytable.myid) AS Needs Quotes?_1 FROM '
+'mytable'
+)
+self.assert_compile(
+select([f3.label(f3)]),
+'SELECT Needs Quotes?(mytable.myid) AS f3 FROM mytable',
+)
+
+f4 = getattr(func, query from pg_stat_activity; --)()
+self.assert_compile(
+select([f4]),
+'SELECT query from pg_stat_activity; --() AS query from '
+'pg_stat_activity; --_1',
+)
+self.assert_compile(
+select([f4.label(f4)]),
+'SELECT query from pg_stat_activity; --(mytable.myid) AS f4'
+)
+
 
 class KwargPropagationTest(fixtures.TestBase):
 


Re: [sqlalchemy] custom __init__ methods not being invoked

2013-02-13 Thread Ryan McKillen
Thanks for the details. Makes sense.

Still not consistent with what I'm experiencing. Although consistent with what 
I'm seeing when I put a simple example/test together. I'll keep digging...

— RM

On Feb 12, 2013, at 4:51 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 its called in all SQL loading scenarios including that of relationships.
 
 A relationship load might not actually result in the object being loaded from 
 the DB in these scenarios:
 
 1. the relationship is a simple many-to-one, and the object could be located 
 by primary key from the identity map without emitting a SQL load.
 
 2. the relationship emitted the SQL, but as it loaded the rows, the objects 
 matching those rows were already in the identity map, so they weren't 
 reconstructed.
 
 In both scenarios above, the objects were still guaranteed to be present in 
 the identity map in only three possible ways: 
 
 1. they were loaded at some point earlier, in which case your reconstructor 
 was called
 
 2. they moved from pending to persistent , meaning you added them with 
 add(), then they got inserted, so you'd want to make sure
 whatever regular __init__ does is appropriate here
 
 3. the objects were detached, and were add()ed back into the session, but 
 this still implies that #1 or #2 were true for a previous Session.
 
 
 
 
 
 
 On Feb 12, 2013, at 5:29 PM, Ryan McKillen ryan.mckil...@gmail.com wrote:
 
 It doesn't appear that the method decorated by @orm.reconstructor is called 
 on objects retrieved/loaded as relationships.
 
 Not my desired behavior, but I guess it is consistent with the docs:
 When instances are loaded during a Query operation as in 
 query(MyMappedClass).one(), init_on_load is called.
 
 So if I need it to be executed in a relationship-loading situation, what's 
 the best way to go about it? Thanks.
 
 — RM
 
 
 On Mon, Jan 7, 2013 at 3:36 AM, Ryan McKillen ryan.mckil...@gmail.com 
 wrote:
 Worked like a charm. Thanks.
 
 — RM
 
 
 On Mon, Jan 7, 2013 at 6:26 PM, Michael van Tellingen 
 michaelvantellin...@gmail.com wrote:
 See 
 http://docs.sqlalchemy.org/en/latest/orm/mapper_config.html#constructors-and-object-initialization
 
 
 
 On Mon, Jan 7, 2013 at 4:47 AM, RM ryan.mckil...@gmail.com wrote:
  I have a class which inherits from Base. My class has a metaclass which
  inherits from DeclarativeMeta. Among other things, the metaclass adds an
  __init__ method to the class dictionary. When I instantiate an instance 
  of
  my class directly, my __init__ method is invoked, but if I use the ORM to
  retrieve an instance, my __init__ method is not invoked.
 
  A metaclass serves better than a mixin for what I am trying to 
  accomplish.
  However, I did experiment with a mixin and saw the same behavior as
  described above.
 
  Any ideas? Many thanks.
 
  --
  You received this message because you are subscribed to the Google Groups
  sqlalchemy group.
  To view this discussion on the web visit
  https://groups.google.com/d/msg/sqlalchemy/-/oDj_bHNvP7EJ.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
  http://groups.google.com/group/sqlalchemy?hl=en.
 
 --
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
 For more options, visit https://groups.google.com/groups/opt_out.
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
 For more options, visit https://groups.google.com/groups/opt_out.
  
  

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] custom __init__ methods not being invoked

2013-02-12 Thread Ryan McKillen
It doesn't appear that the method decorated by @orm.reconstructor is called
on objects retrieved/loaded as relationships.

Not my desired behavior, but I guess it is consistent with the docs:
When instances are loaded during a Query operation as in
query(MyMappedClass).one(), init_on_load is called.

So if I need it to be executed in a relationship-loading situation, what's
the best way to go about it? Thanks.

— RM


On Mon, Jan 7, 2013 at 3:36 AM, Ryan McKillen ryan.mckil...@gmail.comwrote:

 Worked like a charm. Thanks.

 — RM


 On Mon, Jan 7, 2013 at 6:26 PM, Michael van Tellingen 
 michaelvantellin...@gmail.com wrote:

 See
 http://docs.sqlalchemy.org/en/latest/orm/mapper_config.html#constructors-and-object-initialization



 On Mon, Jan 7, 2013 at 4:47 AM, RM ryan.mckil...@gmail.com wrote:
  I have a class which inherits from Base. My class has a metaclass which
  inherits from DeclarativeMeta. Among other things, the metaclass adds an
  __init__ method to the class dictionary. When I instantiate an instance
 of
  my class directly, my __init__ method is invoked, but if I use the ORM
 to
  retrieve an instance, my __init__ method is not invoked.
 
  A metaclass serves better than a mixin for what I am trying to
 accomplish.
  However, I did experiment with a mixin and saw the same behavior as
  described above.
 
  Any ideas? Many thanks.
 
  --
  You received this message because you are subscribed to the Google
 Groups
  sqlalchemy group.
  To view this discussion on the web visit
  https://groups.google.com/d/msg/sqlalchemy/-/oDj_bHNvP7EJ.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
  http://groups.google.com/group/sqlalchemy?hl=en.

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] [BUG] [PATCH] Calling yield_per followed by execution_options results in AttributeError

2013-02-07 Thread Ryan Kelly
Calling yield_per on a query followed by execution_options results in
the following error:

Traceback (most recent call last):
  File /tmp/execution_options.py, line 18, in module
query = query.execution_options(stream_results=True)
  File string, line 1, in lambda
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
50, in generate
fn(self, *args[1:], **kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
1040, in execution_options
self._execution_options = self._execution_options.union(kwargs)
AttributeError: 'dict' object has no attribute 'union'

Attached is a patch with a test case.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


diff -r 53b53ad288ad lib/sqlalchemy/orm/query.py
--- a/lib/sqlalchemy/orm/query.py	Thu Feb 07 20:29:47 2013 -0500
+++ b/lib/sqlalchemy/orm/query.py	Thu Feb 07 23:05:45 2013 -0500
@@ -712,8 +712,7 @@
 
 
 self._yield_per = count
-self._execution_options = self._execution_options.copy()
-self._execution_options['stream_results'] = True
+self.execution_options(stream_results=True)
 
 def get(self, ident):
 Return an instance based on the given primary key identifier,
diff -r 53b53ad288ad test/orm/test_query.py
--- a/test/orm/test_query.py	Thu Feb 07 20:29:47 2013 -0500
+++ b/test/orm/test_query.py	Thu Feb 07 23:05:45 2013 -0500
@@ -1784,6 +1784,13 @@
 except StopIteration:
 pass
 
+def test_yield_per_and_execution_options(self):
+User = self.classes.User
+
+sess = create_session()
+q = sess.query(User).yield_per(1)
+q = q.execution_options(stream_results=True)
+
 class HintsTest(QueryTest, AssertsCompiledSQL):
 def test_hints(self):
 User = self.classes.User


[sqlalchemy] SOME/ANY/ALL in postgres?

2013-01-11 Thread Ryan Kelly
I'm trying to figure out the correct way to use these array comparisons
features specific to postgres, e.g.:

select * from foo where 1 = any(bar);

So I tried this:

from sqlalchemy.sql.expression import func
session.query(foo).filter(1 == func.any(foo.c.bar))

But that didn't work, as I got this (essentially):

select * from foo where any(bar) = 1;

Well, then I tried this:

from sqlalchemy.sql.expression import func, literal
session.query(foo).filter(literal(1) == func.any(foo.c.bar))

And that was better:

select * from foo where 1 = any(bar);

Unfortunately I really wanted something like this:

select * from foo where not 1 = any(bar);

So I tried this:

from sqlalchemy.sql.expression import func, literal, not_
session.query(foo).filter(not_(literal(1) == func.any(foo.c.bar)))

Which gave me this:

select * from foo where 1 != any(bar);

Which is not correct. Of course I could do this:

from sqlalchemy.sql.expression import func, literal, not_
session.query(foo).filter(not_(literal(1) == func.all(foo.c.bar)))

But I should really ask here for help because I've loaded the foot-gun
completely full.

So, what's the correct way to do this?

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] custom __init__ methods not being invoked

2013-01-07 Thread Ryan McKillen
Worked like a charm. Thanks.

— RM


On Mon, Jan 7, 2013 at 6:26 PM, Michael van Tellingen 
michaelvantellin...@gmail.com wrote:

 See
 http://docs.sqlalchemy.org/en/latest/orm/mapper_config.html#constructors-and-object-initialization



 On Mon, Jan 7, 2013 at 4:47 AM, RM ryan.mckil...@gmail.com wrote:
  I have a class which inherits from Base. My class has a metaclass which
  inherits from DeclarativeMeta. Among other things, the metaclass adds an
  __init__ method to the class dictionary. When I instantiate an instance
 of
  my class directly, my __init__ method is invoked, but if I use the ORM to
  retrieve an instance, my __init__ method is not invoked.
 
  A metaclass serves better than a mixin for what I am trying to
 accomplish.
  However, I did experiment with a mixin and saw the same behavior as
  described above.
 
  Any ideas? Many thanks.
 
  --
  You received this message because you are subscribed to the Google Groups
  sqlalchemy group.
  To view this discussion on the web visit
  https://groups.google.com/d/msg/sqlalchemy/-/oDj_bHNvP7EJ.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
  http://groups.google.com/group/sqlalchemy?hl=en.

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Custom SQL construct for postgres multirow insert

2012-11-27 Thread Ryan Kelly
On Tue, Nov 27, 2012 at 11:00:25AM -0500, Michael Bayer wrote:
 
 On Nov 27, 2012, at 8:10 AM, Idan Kamara wrote:
 
  Hello,
  
  I'd like to create a construct for postgres's multirow insert. I asked this 
  on IRC
  and got some great help from supplicant, however I have a few things 
  missing:
  
  - proper conversion of Python types to SQL (None to null).
  - handle binding of values like the rest of SQLAlchemy
  
  This is the code supplicant came up with:
  
  class MultirowInsert(Executable, ClauseElement):
  def __init__(self, table, values):
  self.table = table
  self.values = values
  
  @compiles(MultirowInsert)
  def visit_multirow_insert(element, compiler, **kw):
  preparer = compiler.preparer
  
  columns = None
  values_clauses = []
  for value in element.values:
  if columns is None:
  columns = value.keys() # each value must be a dict
  
  local_values = []
  for col in columns:
  local_values.append(str(sqlescape(value[col])))
  local_values_clause = (%s) % , .join(local_values)
  
  values_clauses.append(local_values_clause)
  
  values_clause = (VALUES %s) % , .join(values_clauses)
  
  columns_clause = 
  columns = [preparer.quote(c, '') for c in columns]
  columns_clause = ( + ,.join(columns) + )
  
  return INSERT INTO %s %s %s % (
  compiler.process(element.table, asfrom=True),
  columns_clause,
  values_clause,
  )
  
   ins = MultirowInsert(table, [{'c' : '1', 'c' : '2'}])
   str(ins)
  'INSERT INTO mytable (c) (VALUES (\'1\'), (\'2\'))'
  
  But trying to pass None as one of the values produces something strange 
  (use something other than sqlescape to convert values?):
  
   str(MultirowInsert(table, [{'c' : None}])
  'INSERT INTO mytable (c) (VALUES (psycopg2.extensions.NoneAdapter object 
  at 0x25d0cd0))'
 
 uh ok he is wading into odd territory there, you don't have any imports above 
 but I'm assuming sqlescape is postgresql's escaping function which is why 
 you're getting it's adapter objects stuck into your query.
Well like I said I wouldn't really do it this way. This is the complete
code I provided: http://fpaste.org/AOCr/

Which is basically just a hack around me mushing this together:
http://fpaste.org/RoBJ/

But the correct way to get the quoted value is with ``.getquoted()``,
like this:
sqlescape(value[col]).getquoted()

 
 an insert with many values() I'd not be bypassing psycopg2's usual bound 
 parameter mechanisms and I'd be using a naming scheme, such as:
 
 INSERT INTO table (a, b, c) VALUES (%(a1)s, %(b1)s, %(c1)s), (%(a2)s, %(b2)s, 
 %(c2)s), ...
psycopg2 is just quoting them internally anyway, but this is probably
more performant.

 you'd then need to perform the necessary naming on the values passed to the 
 construct, which also, I'm assuming that code example you have is not real 
 because {'c':'1', 'c':'2'} clearly blows away one of the values:
 
 myinsert(table).values({'a':1, 'b':2, 'c':3}, {'a':4, 'b':5, 'c':6}, ...)
 
 one reason we don't yet have these fancier INSERT constructs yet is because 
 the mechanics of INSERT are very hard.  I'd recommend walking through the 
 source in sqlalchemy/sql/compiler.py visit_insert() and _get_colparams() 
 (which is the real control center for INSERT and UPDATE and is a little bit 
 of a monster).   I will accept well-considered patches to compiler.py that 
 allow for insert() to support multiple values() directly.
 
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Computed Columns

2012-10-16 Thread Ryan Kelly
On Tue, Oct 16, 2012 at 07:06:00AM -0700, Michael Wilson wrote:
 Hi,
 
 I've like to include the distance of a particular row (object) from a given 
 point in a SQLAlchemy query using the haversine formula, and SORT on the 
 distance, similar to this example:
 
 http://stackoverflow.com/questions/574691/mysql-great-circle-distance-haversine-formula
 
 But can't figure out how to do it. It seems like it's a combination of a 
 .label('distance') and .text(…) but I can't find a good example.
 
 My incoming rows have a longitude and latitude column, and I'd like to be 
 able to include the origin (37, -122 in the example) as parameters at runtime.

Completely untested, but hopefully sends you in the right direction:

from sqlalchemy import func, literal, Column, Integer, Numeric, MetaData

metadata = MetaData()
markers = Table(markers, metadata,
Column(id, Integer, primary_key=True),
Column(lat, Numeric),
Column(lng, Numeric),
)   

origin_lat = 37
origin_lng = -122

distance = (
3959 
* func.acos(func.cos(func.radians(literal(origin_lat 
* func.cos(func.radians(markers.lat)) 
* func.cos(func.radians(markers.lng) - func.radians(literal(origin_lng)))
+ func.sin(func.radians(literal(origin_lat)))
* func.sin(func.radians(markers.lat))
)

query = (
session.query(markers.lat, markers.lng, distance.label(distance))
.having(distance  literal(25))
.order_by(distance)
.limit(20)
) 

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] yield_per but with Session.execute or select()?

2012-09-19 Thread Ryan Kelly
Is there a way to do the same thing that yield_per does for queries
generated with Session.query but for queries performed using
Session.execute (e.g. built with text() or select())? Is fetchmany the
right function? I don't see it fetching forward on the cursor as I would
expect.

I'm using postgresql 9.1 and psycopg2, FWIW.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Aliasing a constant within a recursive CTE

2012-07-09 Thread Ryan Kelly
On Mon, Jul 09, 2012 at 03:12:16PM -0700, Russ wrote:
 I'm trying to use the new CTE support in SQLAlchemy in a way that will
 allow me to reference the recursion level as a field in the query
 result.  This is easy in a straight SQL CTE by aliasing a constant in
 the non-recursive part, and then referencing the alias in the
 recursive part.  The limited example below (tested in PostgreSQL)
 demonstrates this with a single field, and yields 0-10 inclusive:
 
 WITH RECURSIVE cte AS (
 SELECT 0 as x
   UNION ALL
 SELECT cte.x + 1 FROM cte WHERE cte.x  10
 )
 SELECT * from cte;
 
 I can't figure out how to replicate this in SQLAlchemy, though.
 Specifically, I'm stumped on how to represent the 0 as x part in
 SQLAlchemy.  How do you do it?  I've tried variations of this:
 
 select(0).alias(name=x)
select(literal(0).alias(x)) should do it, see the documentation at
http://docs.sqlalchemy.org/en/rel_0_7/core/expression_api.html#sqlalchemy.sql.expression.literal

 
 as column specs, but with no luck so far.
 

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] [PATCHES] _compiler_dispatch in visit_delete and FROM ONLY support for postgres

2012-06-10 Thread Ryan Kelly
On Sat, Jun 09, 2012 at 11:56:45PM -0400, Michael Bayer wrote:
 for _compiler_dispatch, it looks mostly OK - i think isselect isn't needed? 
   Or if it's meant to accommodate a SELECT that's inside of an UPDATE or 
 DELETE, I think you'd need to use a stack based approach so that when you 
 exit visit_select() the isselect flag is reset.  The 
 isdelete/isupdate/isinsert flags are meant to be mutually exclusive.   I'd 
 just peek inside of self.stack, perhaps via the is_subquery() method, or 
 something more specific, if you truly need to detect SELECT inside of 
 UPDATE/DELETE for that.  Looks OK otherwise. 
Yes. is_subquery() is just what I needed.

 
 only() remains more troubling.   I can see the difficulty, it's somewhere 
 in between a hint, and a directive that actually produces a different 
 result, which is unlike a hint. But using a new selectable construct adds 
 a large amount of complexity that I still see as unwarranted.  
 
 Creating a wrapper around a selectable requires that all the Column objects 
 be proxied into new ones.  Column has a .table attribute that points to 
 the parent table - so you need to do more than just copy over table.columns 
 to self._columns - this breaks the Column which you can see if you run a 
 regular select against the table after the only():
 
 from sqlalchemy import *
 
 m = MetaData()
 mytable = Table('mytable', m, Column('id', Integer, primary_key=True), 
 Column('data', String))
 
 from sqlalchemy.dialects.postgresql import base as postgresql
 mytable_only = postgresql.only(mytable)
 print select([mytable_only.c.id, 
 mytable_only.c.data]).compile(dialect=postgresql.dialect())
 print select([mytable.c.id, 
 mytable.c.data]).compile(dialect=postgresql.dialect())
 
 output:
 
 SELECT mytable.id, mytable.data 
 FROM ONLY mytable
 
 SELECT mytable.id, mytable.data 
 FROM ONLY mytable
 
 So at the very least, a construct like only() would usually subclass Alias 
 so that the column proxying is handled.   
 
 Also let's not forget that this feature needs to work with the ORM too.   In 
 that case, I guess we'd need to use aliased(), something like:
 
   only_of_class = aliased(MyClass, alias=only(MyClass.__table__))
 
 Through all of this, only() as a separate construct still seems unwarranted 
 because as far as I can see, only() doesn't represent a new lexical identity 
 (which is the main job of a FROM element).  That is, you wouldn't have 
 something like this:
 
 SELECT * FROM ONLY mytable, mytable WHERE ...?
 
 Assuming I'm understanding ONLY correctly, if you need the same table twice 
 in the statement, you still need to use alias(), so not much changes there 
 from how things work now (including when using with_hint(), which supports 
 aliases).  When you say ONLY mytable, that isn't creating a new lexical 
 identity within the statement.   You're modifying an existing lexical 
 identity, the identity denoted by the Table or Alias.   So even though not 
 quite a hint, still acts a much more like a hint than a new lexical identity.
 
 The attached patch illustrates some small changes to compiler that lets ONLY 
 come right through as a table hint, without the need to change much else.  
 Usage is like:
 
 select([mytable.c.id, mytable.c.data]).with_hint(mytable, ONLY, 
 dialect_name=postgresql)
 
 mytable_alias = mytable.alias()
 select([mytable_alias.c.id, mytable_alias.c.data]).with_hint(mytable_alias, 
 ONLY, dialect_name=postgresql)
 
 mytable.delete().with_hint(ONLY, dialect_name=postgresql)
 
 This approach, while stretching the definition of hint a bit, makes usage 
 of existing functionality and uses an API that is already familiar in other 
 use cases, with no dialect import required and no complex table proxying 
 going on.   We end up making with_hint() a more capable system rather than 
 adding an entirely new and narrowly-scoped system elsewhere.
 
 If you approve, we can complete and clean up this approach, write some tests, 
 fix up the isselect thing, and all three patches can find their way in.
Well, since you've already done most of the hard work, the hints
approach seems to work perfectly fine :P

I'm attaching two new patches, one for the _compiler_dispatch in
visit_delete/visit_update, and the other for implementing FROM ONLY
using hints. Pretty similar to your patch but with some fixes to make
it work with UPDATE and DELETE as well as proper tests and
documentation.

-Ryan Kelly

 
 
 

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 On Jun 9, 2012, at 6:51 PM, Ryan Kelly wrote:
 
  On Sat, May 05, 2012 at 08:00:20PM -0400, Michael Bayer wrote:
  
  On May 5, 2012, at 7:33 PM, Ryan Kelly wrote:
  
  List:
  
  I am currently trying to add support for FROM ONLY to the postgresql
  dialect. FROM ONLY is used when you wish to query a table which has other
  tables that inherit from it, but you ONLY want results from the parent
  table you are referencing. More information can be found here:
  http://www.postgresql.org/docs/current/static

Re: [sqlalchemy] [PATCH] MATCH clause implementation for foreign keys

2012-06-10 Thread Ryan Kelly
On Sun, Jun 10, 2012 at 12:43:31PM -0400, A.M. wrote:
 
 On Jun 9, 2012, at 10:41 PM, Michael Bayer wrote:
 
  it looks great.
  
  This is in the queue as http://www.sqlalchemy.org/trac/ticket/2502.
 
 
 1765  text +=  MATCH %s % constraint.match 
 
 SQL injection? Shouldn't the argument be one of three constants?
ON UPDATE, ON DELETE, DEFERRABLE, and INITIALLY all work this way. If
this is broken, then we should fix those, too. And there are other
places, like in the dialect-specific index parameters, that do this as
well. I don't agree that it's a problem, however, because if we start
saying what can appear there, we're necessarily limiting ourselves to
the lowest common denominator. PostgreSQL, for instance, supports SET
DEFAULT in ON UPDATE/DELETE, but MySQL does not. How do we handle that
case? It seems like a lot of cruft would accumulate if we start
specifying which values can go in these places.

 
 I suspect there needs to be some specific per-database-driver logic to handle 
 unimplemented cases. PostgreSQL, for example, doesn't support MATCH PARTIAL ( 
 http://www.postgresql.org/docs/9.1/static/sql-createtable.html )
This is correct. I do not believe Oracle does either. But PostgreSQL will
courteously die with an error:
ERROR:  MATCH PARTIAL not yet implemented

 and MySQL, naturally, completely ignores the syntax and triggers other 
 clauses to be ignored:
 
 For users familiar with the ANSI/ISO SQL Standard, please note that no 
 storage engine, including InnoDB, recognizes or enforces the MATCH clause 
 used in referential integrity constraint definitions. Use of an explicit 
 MATCH clause will not have the specified effect, and also causes ON DELETE 
 and ON UPDATE clauses to be ignored. For these reasons, specifying MATCH 
 should be avoided.
 http://dev.mysql.com/doc/refman/5.5/en/create-table.html
I really hope someone has filed a bug report against MySQL about this.

The MySQL dialect can be patched to die with a CompileError (maybe) if
MATCH is used, which seems like the sanest option, given that it causes
unexpected behavior. I don't think the other dialects should be given
the same treatment, however, because those will presumably error out if
even given MATCH.

Can someone test this on other databases to confirm?

 
 Cheers,
 M
 
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] [PATCHES] _compiler_dispatch in visit_delete and FROM ONLY support for postgres

2012-06-09 Thread Ryan Kelly
On Sat, May 05, 2012 at 08:00:20PM -0400, Michael Bayer wrote:
 
 On May 5, 2012, at 7:33 PM, Ryan Kelly wrote:
 
  List:
  
  I am currently trying to add support for FROM ONLY to the postgresql
  dialect. FROM ONLY is used when you wish to query a table which has other
  tables that inherit from it, but you ONLY want results from the parent
  table you are referencing. More information can be found here:
  http://www.postgresql.org/docs/current/static/sql-select.html#SQL-FROM
 
 OK, this looks like a nice patch, though I'm not sure it's consistent with
 how we've approached this kind of SQL feature in other cases.   ONLY looks
 a lot like an optimization hint added to the FROM clause.  We already have an
 API for this, called with_hint().with_hint() is intended to be supported
 by all of INSERT, UPDATE, DELETE in addition to SELECT.  I'm not sure at
 the moment what it does with PG right now but it might be a more appropriate
 approach.Take a look and let me know if you have thoughts on that.
I had seen that when I started implementing this, but I felt that 'logically'
speaking, ONLY was more like an alias than a hint. I tried just now to
implement it using hints, but a lot of the code makes assumptions about
the location of hints with regards to the table name, i.e., the hint
always comes after the table name. ONLY always appears before. And I'm
not entirely sure how it would work if the same table is used twice in a
statement. ONLY essentially is a different kind of table object. Maybe I'm
missing something here.

  
  During the course of implementing this, I found that the visit_delete
  method did not call _compiler_dispatch on the table targeted by the delete,
  preventing table aliases in a delete, and preventing my implementation of
  ONLY for delete. I changed this, but the mssql dialect has some strange
  aliasing rules, which needed some TLC to function correctly in the presence
  of _compiler_dispatch.
 
 Also it seems a little awkward that DELETE now defers to generic compilation
 to format the table, but still not INSERT or UPDATE which still hardcode to
 preparer.format_table().   
For update this should probably be changed, and I've attached a patch to
do so. I'm not sure how much sense this makes for insert. I don't think you
can use anything but the name of the table (schema qualified) in an insert in
any DB. I do not believe that hints/aliases/ONLY could ever make sense in
this context. Maybe I'm missing something, besides the symmetry aspects of
it?

  
  Of course, table aliasing in a delete isn't supported by all dialects, and
  my current implementation doesn't do anything to protect Jimmy[1].
 
 is aliasing of a table also a different feature need here ?Which DBs
 support this ?
As far as I know, it works on PostgreSQL and Firebird. It does not work
on SQLite, MSSQL, DB2, and Oracle. I have not tried on Sybase.

  
  So there are two patches attached to this email, the first being for
  _compiler_dispatch in visit_delete (compiler_dispatch_deletes.patch) and
  the other for FROM ONLY in postgres (from_only.patch). The second one could
  probably be considering more of a work-in-progress and I'm actually
  interested in feedback on whether or not I'm headed in the right direction
  with it. It also depends on the first patch.
 
  
  Also, is this the right list?
 
 
 sure, there's also sqlalchemy-devel, though for features/patches you can also
 use trac tickets or bitbucket forks/pull requests
 
 thanks for the patches and interest !
Welcome. Sorry it took so long to follow up on this, I've been busy.

-Ryan Kelly

 
 
 
 
 
  
  -Ryan Kelly
  
  [1] http://www.globalnerdy.com/2010/05/09/new-programming-jargon/
  
  -- You received this message because you are subscribed to the Google
  Groups sqlalchemy group.  To post to this group, send email to
  sqlalchemy@googlegroups.com.  To unsubscribe from this group, send email to
  sqlalchemy+unsubscr...@googlegroups.com.  For more options, visit this
  group at http://groups.google.com/group/sqlalchemy?hl=en.
  
  compiler_dispatch_deletes.patchfrom_only.patch
 
 -- You received this message because you are subscribed to the Google Groups
 sqlalchemy group.  To post to this group, send email to
 sqlalchemy@googlegroups.com.  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.  For more options, visit this group
 at http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

diff -r ea4bd6b54789 lib/sqlalchemy/dialects/mssql/base.py
--- a/lib/sqlalchemy/dialects/mssql/base.py	Fri Jun 08 15:56:58 2012 -0400
+++ b/lib/sqlalchemy/dialects/mssql/base.py	Sat

[sqlalchemy] [PATCH] MATCH clause implementation for foreign keys

2012-06-09 Thread Ryan Kelly
All:

The attached patch implements the MATCH keyword for foreign keys. This
is part of the SQL92 standard, and is supported by PostgreSQL and
Oracle.

Feedback welcome.

-Ryan Kelly

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

diff -r ea4bd6b54789 lib/sqlalchemy/schema.py
--- a/lib/sqlalchemy/schema.py	Fri Jun 08 15:56:58 2012 -0400
+++ b/lib/sqlalchemy/schema.py	Sat Jun 09 19:02:42 2012 -0400
@@ -1191,7 +1191,7 @@
 def __init__(self, column, _constraint=None, use_alter=False, name=None,
 onupdate=None, ondelete=None, deferrable=None,
 schema=None,
-initially=None, link_to_name=False):
+initially=None, link_to_name=False, match=None):
 
 Construct a column-level FOREIGN KEY.
 
@@ -1236,6 +1236,10 @@
 generated/dropped externally from the CREATE TABLE/ DROP TABLE
 statement. See that classes' constructor for details.
 
+:param match: Optional string. If set, emit MATCH value when issuing
+DDL for this constraint. Typical values include SIMPLE, PARTIAL
+and FULL.
+
 
 
 self._colspec = column
@@ -1255,6 +1259,7 @@
 self.deferrable = deferrable
 self.initially = initially
 self.link_to_name = link_to_name
+self.match = match
 
 def __repr__(self):
 return ForeignKey(%r) % self._get_colspec()
@@ -1283,7 +1288,8 @@
 ondelete=self.ondelete,
 deferrable=self.deferrable,
 initially=self.initially,
-link_to_name=self.link_to_name
+link_to_name=self.link_to_name,
+match=self.match
 )
 fk.dispatch._update(self.dispatch)
 return fk
@@ -1445,6 +1451,7 @@
 [], [], use_alter=self.use_alter, name=self.name,
 onupdate=self.onupdate, ondelete=self.ondelete,
 deferrable=self.deferrable, initially=self.initially,
+match=self.match,
 )
 self.constraint._elements[self.parent] = self
 self.constraint._set_parent_with_dispatch(table)
@@ -2031,7 +2038,7 @@
 
 def __init__(self, columns, refcolumns, name=None, onupdate=None,
 ondelete=None, deferrable=None, initially=None, use_alter=False,
-link_to_name=False, table=None):
+link_to_name=False, match=None, table=None):
 Construct a composite-capable FOREIGN KEY.
 
 :param columns: A sequence of local column names. The named columns
@@ -2072,6 +2079,10 @@
   This is normally used to generate/drop constraints on objects that
   are mutually dependent on each other.
 
+:param match: Optional string. If set, emit MATCH value when issuing
+DDL for this constraint. Typical values include SIMPLE, PARTIAL
+and FULL.
+
 
 super(ForeignKeyConstraint, self).\
 __init__(name, deferrable, initially)
@@ -2082,6 +2093,7 @@
 if self.name is None and use_alter:
 raise exc.ArgumentError(Alterable Constraint requires a name)
 self.use_alter = use_alter
+self.match = match
 
 self._elements = util.OrderedDict()
 
@@ -2097,7 +2109,8 @@
 onupdate=self.onupdate, 
 ondelete=self.ondelete, 
 use_alter=self.use_alter, 
-link_to_name=self.link_to_name
+link_to_name=self.link_to_name,
+match=self.match
 )
 
 if table is not None:
@@ -2153,7 +2166,8 @@
 use_alter=self.use_alter,
 deferrable=self.deferrable,
 initially=self.initially,
-link_to_name=self.link_to_name
+link_to_name=self.link_to_name,
+match=self.match
 )
 fkc.dispatch._update(self.dispatch)
 return fkc
diff -r ea4bd6b54789 lib/sqlalchemy/sql/compiler.py
--- a/lib/sqlalchemy/sql/compiler.py	Fri Jun 08 15:56:58 2012 -0400
+++ b/lib/sqlalchemy/sql/compiler.py	Sat Jun 09 19:02:42 2012 -0400
@@ -1719,6 +1719,7 @@
 ', '.join(preparer.quote(f.column.name, f.column.quote)
   for f in constraint._elements.values())
 )
+text += self.define_constraint_match(constraint)
 text += self.define_constraint_cascades(constraint)
 text += self.define_constraint_deferrability(constraint)
 return text
@@ -1758,6 +1759,12 @@
 text +=  INITIALLY %s % constraint.initially

[sqlalchemy] Postgresql 8.4 DB Permission and FOR SHARE OF locking problem when inserting into table A that has foreign key to table B while user accessing AB has only select rights to B

2012-06-07 Thread Ryan D
 with raw sql and w/o
update permission on the states table, OR any combination of these
things?



All the best,

Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Removing duplicates

2012-05-08 Thread Ryan Kelly
On Tue, May 08, 2012 at 05:20:03AM -0700, Eduardo wrote:
 Hi,
 Is there any function in sqlalchemy that filters out duplicates?
 For example the following rows satisfy a query:
 1. (john, 23 ,  lpgh )
 2.(steve , 35 , dbr )
 3. (john ,76, qwe)
 4. (mark, 35, epz)
 I would like that my query results contain only one row with john (either 1 
 or 3 which one is not important) or with 35 (either 2 or 4).
 I tried with distinct(col) but it could not do the work.
You could try DISTINCT ON in combination with a UNION... maybe. DISTINCT
ON is documented here:
http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.distinct
and I'm not sure which dialects support this, apart from PostgreSQL.

 Thanks
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To view this discussion on the web visit 
 https://groups.google.com/d/msg/sqlalchemy/-/xnt0Zo5tSPUJ.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-Ryan

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] [PATCHES] _compiler_dispatch in visit_delete and FROM ONLY support for postgres

2012-05-05 Thread Ryan Kelly
List:

I am currently trying to add support for FROM ONLY to the postgresql
dialect. FROM ONLY is used when you wish to query a table which has
other tables that inherit from it, but you ONLY want results from the
parent table you are referencing. More information can be found here:
http://www.postgresql.org/docs/current/static/sql-select.html#SQL-FROM

During the course of implementing this, I found that the visit_delete
method did not call _compiler_dispatch on the table targeted by the
delete, preventing table aliases in a delete, and preventing my
implementation of ONLY for delete. I changed this, but the mssql dialect
has some strange aliasing rules, which needed some TLC to function
correctly in the presence of _compiler_dispatch.

Of course, table aliasing in a delete isn't supported by all dialects,
and my current implementation doesn't do anything to protect Jimmy[1].

So there are two patches attached to this email, the first being for
_compiler_dispatch in visit_delete (compiler_dispatch_deletes.patch)
and the other for FROM ONLY in postgres (from_only.patch). The second
one could probably be considering more of a work-in-progress and I'm
actually interested in feedback on whether or not I'm headed in the
right direction with it. It also depends on the first patch.

Also, is this the right list?

-Ryan Kelly

[1] http://www.globalnerdy.com/2010/05/09/new-programming-jargon/

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

diff -r 408388e5faf4 lib/sqlalchemy/dialects/mssql/base.py
--- a/lib/sqlalchemy/dialects/mssql/base.py	Fri May 04 23:18:52 2012 -0400
+++ b/lib/sqlalchemy/dialects/mssql/base.py	Sat May 05 19:23:12 2012 -0400
@@ -830,6 +830,10 @@
 return super(MSSQLCompiler, self).visit_table(table, **kwargs)
 
 def visit_alias(self, alias, **kwargs):
+if (self.isupdate and not kwargs.get('mssql_update_from', False)
+or self.isdelete) and not self.isselect:
+return self.preparer.format_table(alias.original)
+
 # translate for schema-qualified table aliases
 kwargs['mssql_aliased'] = alias.original
 return super(MSSQLCompiler, self).visit_alias(alias, **kwargs)
@@ -951,6 +955,7 @@
 well. Otherwise, it is optional. Here, we add it regardless.
 
 
+kw['mssql_update_from'] = True
 return FROM  + ', '.join(
 t._compiler_dispatch(self, asfrom=True,
 fromhints=from_hints, **kw)
diff -r 408388e5faf4 lib/sqlalchemy/sql/compiler.py
--- a/lib/sqlalchemy/sql/compiler.py	Fri May 04 23:18:52 2012 -0400
+++ b/lib/sqlalchemy/sql/compiler.py	Sat May 05 19:23:12 2012 -0400
@@ -183,10 +183,10 @@
 
 compound_keywords = COMPOUND_KEYWORDS
 
-isdelete = isinsert = isupdate = False
+isselect = isdelete = isinsert = isupdate = False
 class-level defaults which can be set at the instance
 level to define if this Compiled instance represents
-INSERT/UPDATE/DELETE
+SELECT/INSERT/UPDATE/DELETE
 
 
 returning = None
@@ -871,6 +871,7 @@
 def visit_select(self, select, asfrom=False, parens=True, 
 iswrapper=False, fromhints=None, 
 compound_index=1, **kwargs):
+self.isselect = True
 
 entry = self.stack and self.stack[-1] or {}
 
@@ -1423,7 +1424,8 @@
 self.stack.append({'from': set([delete_stmt.table])})
 self.isdelete = True
 
-text = DELETE FROM  + self.preparer.format_table(delete_stmt.table)
+text = DELETE FROM 
+text += delete_stmt.table._compiler_dispatch(self, asfrom=True)
 
 if delete_stmt._hints:
 dialect_hints = dict([
@@ -1447,7 +1449,8 @@
 delete_stmt, delete_stmt._returning)
 
 if delete_stmt._whereclause is not None:
-text +=  WHERE  + self.process(delete_stmt._whereclause)
+text +=  WHERE 
+text += delete_stmt._whereclause._compiler_dispatch(self)
 
 if self.returning and not self.returning_precedes_values:
 text +=   + self.returning_clause(
diff -r 408388e5faf4 test/sql/test_compiler.py
--- a/test/sql/test_compiler.py	Fri May 04 23:18:52 2012 -0400
+++ b/test/sql/test_compiler.py	Sat May 05 19:23:12 2012 -0400
@@ -2948,6 +2948,12 @@
 DELETE FROM mytable WHERE mytable.myid = :myid_1 
 AND mytable.name = :name_1)
 
+def test_aliased_delete(self):
+talias1 = table1.alias('t1')
+self.assert_compile(
+delete(talias1).where(talias1.c.myid == 7),
+DELETE FROM mytable AS t1 WHERE t1

[sqlalchemy] Getting ENUM-like behavior from a MySQL VARCHAR

2011-11-09 Thread Ryan
I have a MySQL VARCHAR column, but I'd like to get ENUM-like behavior at 
the ORM level. I'm using the declarative style. Here's what I've got so far:

language = Column(Enum('en', 'fr', native_enum=False), CheckConstraint(), 
default='en')

Docs say that when native_enum is set to False, uses VARCHAR + check 
constraint for all backends.

What args do I need to pass to CheckConstraint in order to restrict the 
list of languages?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/9upUrF4h5-QJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Is Session.execute SQL injection safe?

2011-02-28 Thread Ryan
Can't find anything in the docs as to whether query strings passed into 
Session.execute are escaped/safe from SQL injection. Any insights? Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Is Session.execute SQL injection safe?

2011-02-28 Thread Ryan McKillen
Great. Thank you.



On Mon, Feb 28, 2011 at 9:37 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Feb 28, 2011, at 5:35 PM, Ryan wrote:

  Can't find anything in the docs as to whether query strings passed into
 Session.execute are escaped/safe from SQL injection. Any insights? Thanks.

 A literal query string is only safe against injection if you ensure that
 the string contains no portions of user-entered text inside of it.Bind
 parameters should always be used for literal values.   Docs on the text()
 construct which Session.execute() uses, and the accepted bind parameter
 format, are here:
 http://www.sqlalchemy.org/docs/core/expression_api.html#sqlalchemy.sql.expression.text


 
  --
  You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] self-referential relationship w/ declarative style

2011-02-22 Thread Ryan McKillen
I added that in because without it I get:
TypeError: Incompatible collection type: User is not list-like


On Tue, Feb 22, 2011 at 6:47 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Feb 22, 2011, at 9:03 PM, Ryan McKillen wrote:

 Mike, thanks a lot. Big help. I'm almost there.

 This seems to do the trick:

 usersid = Column(Integer, primary_key=True, key='id')
 inviter_id = Column(Integer, ForeignKey('users.id'))

 inviter = relationship('User',
 uselist = False,
 backref = backref('invitee', remote_side=usersid, uselist=True),
 )

 When there are two users, one being the inviter (parent) and the other
 being the invitee (child), it works like a charm:

 self.assertEqual(invitee1.inviter.id, inviter.id)
 self.assertEqual(inviter.invitee[0].id, invitee1.id)

 But add a third user, one being the inviter and two being the invitees,
 invitee1.inviter is None.


 probably because of that uselist=False, which makes it into a one-to-one.
   Adjacency list is a standard single foreign key relationship - one-to-many
 on one side, many-to-one on the other.

 There's an illustration of exactly how the data resides in the table:


 http://www.sqlalchemy.org/docs/orm/relationships.html#adjacency-list-relationships




 Any ideas for me?


 On Tue, Feb 22, 2011 at 7:54 AM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Feb 20, 2011, at 10:12 PM, Ryan wrote:

 I'm attempting a self-referential mapping on a Client object that includes
 these two columns:

 id = Column(Integer, primary_key=True)
 inviter_id = Column(Integer, ForeignKey('users.id'), nullable=True)


 Started here with no luck:

 inviter = relationship('Client', primaryjoin='Client.id ==
 Client.inviter_id', uselist=False)


 Then read about self-referential mapping in the docs and tried with no
 luck:

 inviter = relationship('Client', remote_side='Client.id', uselist=False)


 And this with an error:

 relationship('Client', remote_side=[Client.id], uselist=False)


 Would be a great help to see how this is done in a declarative style.
 Thanks!



 the last example in the section
 http://www.sqlalchemy.org/docs/orm/relationships.html#adjacency-list-relationshipsillustrates
  a declarative self-referential relationship.  Note that the id
 Column object can be referenced directly when you're inside the class
 declaration itself.




 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] self-referential relationship w/ declarative style

2011-02-22 Thread Ryan McKillen
Got it. Many thanks!



On Tue, Feb 22, 2011 at 7:02 PM, Michael Bayer mike...@zzzcomputing.comwrote:


 one side scalar, one side collection.   the collection side you use
 .append().   You decide which end is the non-collection by setting
 remote_side, in your code below its invitee.


 On Feb 22, 2011, at 9:59 PM, Ryan McKillen wrote:

 I added that in because without it I get:
 TypeError: Incompatible collection type: User is not list-like


 On Tue, Feb 22, 2011 at 6:47 PM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Feb 22, 2011, at 9:03 PM, Ryan McKillen wrote:

 Mike, thanks a lot. Big help. I'm almost there.

 This seems to do the trick:

 usersid = Column(Integer, primary_key=True, key='id')
 inviter_id = Column(Integer, ForeignKey('users.id'))

 inviter = relationship('User',
 uselist = False,
 backref = backref('invitee', remote_side=usersid, uselist=True),
 )

 When there are two users, one being the inviter (parent) and the other
 being the invitee (child), it works like a charm:

 self.assertEqual(invitee1.inviter.id, inviter.id)
 self.assertEqual(inviter.invitee[0].id, invitee1.id)

 But add a third user, one being the inviter and two being the invitees,
 invitee1.inviter is None.


 probably because of that uselist=False, which makes it into a one-to-one.
 Adjacency list is a standard single foreign key relationship -
 one-to-many on one side, many-to-one on the other.

 There's an illustration of exactly how the data resides in the table:


 http://www.sqlalchemy.org/docs/orm/relationships.html#adjacency-list-relationships




 Any ideas for me?


 On Tue, Feb 22, 2011 at 7:54 AM, Michael Bayer 
 mike...@zzzcomputing.comwrote:


 On Feb 20, 2011, at 10:12 PM, Ryan wrote:

 I'm attempting a self-referential mapping on a Client object that
 includes these two columns:

 id = Column(Integer, primary_key=True)
 inviter_id = Column(Integer, ForeignKey('users.id'), nullable=True)


 Started here with no luck:

 inviter = relationship('Client', primaryjoin='Client.id ==
 Client.inviter_id', uselist=False)


 Then read about self-referential mapping in the docs and tried with no
 luck:

 inviter = relationship('Client', remote_side='Client.id', uselist=False)


 And this with an error:

 relationship('Client', remote_side=[Client.id], uselist=False)


 Would be a great help to see how this is done in a declarative style.
 Thanks!



 the last example in the section
 http://www.sqlalchemy.org/docs/orm/relationships.html#adjacency-list-relationshipsillustrates
  a declarative self-referential relationship.  Note that the id
 Column object can be referenced directly when you're inside the class
 declaration itself.




 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group

[sqlalchemy] self-referential relationship w/ declarative style

2011-02-20 Thread Ryan
I'm attempting a self-referential mapping on a Client object that includes 
these two columns:

id = Column(Integer, primary_key=True)
inviter_id = Column(Integer, ForeignKey('users.id'), nullable=True)


Started here with no luck:

inviter = relationship('Client', primaryjoin='Client.id == 
Client.inviter_id', uselist=False)


Then read about self-referential mapping in the docs and tried with no luck:

inviter = relationship('Client', remote_side='Client.id', uselist=False)


And this with an error:

relationship('Client', remote_side=[Client.id], uselist=False)


Would be a great help to see how this is done in a declarative style. 
Thanks! 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] MapperExtension#after_insert behavior

2011-02-03 Thread Ryan
I'm using MapperExtension#after_insert and realizing that this callback 
fires when a record has been inserted into a transaction, but before the 
session is actually committed.

I'd like an after_insert callback that fires after commit/once the record 
physically resides in the database. Any insight into the code to get this 
behavior?

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: mappers failed to initialize/class name is not defined

2011-01-25 Thread Ryan
Problem solved. Had to to with my setup, unrelated to SQLAlchemy. Sorry!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] mappers failed to initialize/class name is not defined

2011-01-24 Thread Ryan
I'm using the declarative style and unable to to get a mapper to initialize.

Here's a simplified version of the class from client_transaction.py:

class ClientTransaction(Base):

__tablename__ = 'client_transactions'

id = Column(Integer, primary_key=True)
client_promotion_id = Column(Integer, 
ForeignKey('clients_promotions.id'))

client_promotion = relationship('ClientPromotion', 
primaryjoin='ClientTransction.client_promotion_id == ClientPromotion.id')


And here's the error:

InvalidRequestError: One or more mappers failed to initialize - can't 
proceed with initialization of other mappers.  Original exception was: When 
initializing mapper Mapper|ClientTransaction|client_transactions, expression 
'ClientPromotion' failed to locate a name (name 'ClientPromotion' is not 
defined). If this is a class name, consider adding this relationship() to 
the class 'uber.model.client_transaction.ClientTransaction' class after 
both dependent classes have been defined.


Which seems to be in conflict with this statement from the docs:

In addition to the main argument for relationship(), other arguments which 
depend upon the columns present on an as-yet undefined class may also be 
specified as strings.


For the sake of reference, here's the other class in client_promotion.py as 
well:

class ClientPromotion(SmartModel):

__tablename__ = 'clients_promotions'

 

id = Column(Integer, primary_key=True)

client_transaction_id = Column(Integer, 
ForeignKey('client_transactions.id'))

client_transaction = relationship('ClientTransaction', 
primaryjoin='ClientPromotion.client_transaction_id == ClientTransaction.id', 
uselist=False)


Any help on getting a 'client_promotion' attribute on the 
'ClientTransaction' class would be greatly appreciated. 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Can SQLAlchemy's ORM do this?

2010-06-21 Thread Ryan I.
On Jun 21, 10:02 pm, Michael Bayer mike...@zzzcomputing.com wrote:

 yes please see the polymorphic association example included in the 
 distribution .

Thanks! It seems there's nothing SQLAlchemy can't do :).

Btw, since the examples are a bit complicated, here is just for
reference in case
someone comes by this thread later, a simpler example:

TAGS = Table('tags', meta,
Column('id', Integer, primary_key=True),
Column('type', String),
Column('object_id', Integer),
Column('name', String))
DOGS = Table('dogs', meta,
Column('id', Integer, primary_key=True),
Column('name', String))
CATS = Table('cats', meta,
Column('id', Integer, primary_key=True),
Column('name', String))

class Tag(object):
def __init__(self, type, name):
self.type, self.name = type, name
def __repr__(self):
return u'Tag {self.type}: {self.name}'.format(self=self)

class Dog(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return u'Dog {self.id}:{self.name}'.format(self=self)

class Cat(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return u'Cat {self.id}:{self.name}'.format(self=self)

tags_mapper = mapper(Tag, TAGS)
dogs_mapper = mapper(Dog, DOGS)
cats_mapper = mapper(Cat, CATS)

# Magic happens below
pj = and_(DOGS.c.id == TAGS.c.object_id, TAGS.c.type == 'dog')
fk = [TAGS.c.object_id]
dogs_mapper.add_property('tags',
 relationship(Tag,
  primaryjoin=pj,
  uselist=True,
  foreign_keys=fk))
pj = and_(CATS.c.id == TAGS.c.object_id, TAGS.c.type == 'cat')
cats_mapper.add_property('tags',
 relationship(Tag,
  primaryjoin=pj,
  uselist=True,
  foreign_keys=fk))

From now on it's simple:

dog = Dog('Goofy')
tag1, tag2 = Tag(type='dog', name='stray'), Tag(type='dog',
name='vaccinated')
dog.tags = [tag1, tag2]

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Circular Dependancy Hell

2009-09-28 Thread Ryan Michael

I cannot thank you enough - this has been driving me crazy.

Your solution works great.  I also worked out that you can get
create_all() to work by adding use_alter=True to the foreignkey
definition

thumbnail_id=Column('thumbnail_id',Integer, ForeignKey('image.id',
use_alter=True, name='thumbnail_id'))

Thanks again, you saved my sanity.
-Ryan

On Sep 26, 3:17 pm, Mike Conley mconl...@gmail.com wrote:
 You need to use argument            post_update=True
 on your thumbnails relation

 http://www.sqlalchemy.org/docs/05/mappers.html#rows-that-point-to-the...

 http://www.sqlalchemy.org/docs/05/mappers.html#rows-that-point-to-the...Here
 is a sample I used that seems to work. Interesting is that you cannot create
 the tables with meta.create_all() because of te circular dependency. I
 created the table in 2 separate calls.

 class Image(Base):
     __tablename__='image'
     id = Column(Integer, primary_key=True)
     project_id = Column(Integer, ForeignKey('project.id'))
     def __repr__(self):
         return I id:%s % self.id

 class Project(Base):
     __tablename__='project'
     id = Column(Integer, primary_key=True)
     thumbnail_id = Column(Integer, ForeignKey('image.id') )
     images = relation('Image', backref=backref('project'),
         primaryjoin=Project.id==Image.project_id,
         foreign_keys=[Image.project_id]
         )
     thumbnail = relation(Image,
         primaryjoin=Project.thumbnail_id==Image.id,
         foreign_keys=[thumbnail_id],
         uselist=False, post_update=True)
     def __repr__(self):
         return P id:%s thumb:%s % (self.id, self.thumbnail_id)

 Image.__table__.create()
 Project.__table__.create()

 P1 = Project()
 I1 = Image()
 I2 = Image()
 I3 = Image()
 P1.images.extend([I1,I2,I3])
 P1.thumbnail=I2
 session.add(P1)
 session.flush()
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Circular Dependancy Hell

2009-09-25 Thread Ryan Michael

I'm trying to do something that seems (to me at least) to be very
simple:  I have two classes, 'Project' and 'Image'.  I want to
associate a set of 'Image's with a project, like a slide show or
whatever.  I also want to use *one* of those images as a thumbnail for
the project.  Here's my relevant code:

class Project(DeclarativeBase):
__tablename__='project'

images = relation( Image, backref=backref('project'),
primaryjoin=id==Image.project_id, order_by=Image.sequencePosition )

thumbnail_id = Column(Integer, ForeignKey('image.id') )
thumbnail = relation(Image, primaryjoin=thumbnail_id==Image.id,
uselist=False)

class Image(DeclarativeBase):
__tablename__='image'

project_id = Column(Integer, ForeignKey('project.id'))


I've tried about 20 different ways to define these two types of
relationships, and every one of them ends up giving me circular
dependancy errors.

Can someone help me understand what's going wrong?
Thanks
-Ryan

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: persistent result set

2008-11-07 Thread Adam Ryan

OK, thank you.

But one last thought:  Is storing the query rather than the result
really the way?

I mean, after a couple dozen complex, expensive change operations, the
user could end up with only 4 records.  It would be more efficient to
just store the indexes rather than redo all the queries over and over
again.

On the other hand, with only a few simple queries and thousands of
resulting records, storing the indexes is obviously a drain.

Something about eating cake comes to mind.

Thanks again,

Adam
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: persistent result set

2008-11-06 Thread Adam Ryan


Wow, this is great stuff.  I'll have to spend some time trying it out.

My big question, though, is how to interact with this stored query
using and_ and or_?

I went off on a different track where I would store a list of filter
objects with their respective method (and_, or_), and put them back
together in a nested filter beast like so:

# filter_list is a list of tuples: ( filter_object, filter_method)

nested = None
for filter, method  in filter_list:
if not nested:
nested = filter
else:
nested = method( filter, nested )

query = query.filter( nested )
res = query.all()

If instead I stored the query object, how would I then and_ and or_
it?

Thanks a lot Michael.  This is extremely helpful and gracious.

- Adam


On Nov 6, 3:15 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 6, 2008, at 12:28 AM, Adam Ryan wrote:



  I can't store the query object in the beaker session because it can't
  be pickled, right?.  So is there a way to dump a string representation
  of a query object, then recreate it and associate it with a new SQLA
  Session?

  Or am I missing the obvious?  Thanks again for the help.

 OK, after some cobbling I have a great solution for this, I built a  
 custom pickler/unpickler which stores key objects sparsely on the  
 pickle side, can rebind back to any metadata/session/engine on the  
 unpickle side.   So you can pickle a whole Query object, or whatever,  
 with no problem.  The size of the string is still a few thousand  
 characters but not nearly as huge as it would be if it was pickling  
 the whole map of Table and mapper objects associated.

 Check out the svn trunk, and usage is as follows (this is from the  
 docstring):

      from sqlalchemy.ext.serializer import loads, dumps
      metadata = MetaData(bind=some_engine)
      Session = scoped_session(sessionmaker())

      # ... define mappers

      query =  
 Session
 .query
 (MyClass).filter(MyClass.somedata=='foo').order_by(MyClass.sortkey)

      # pickle the query
      serialized = dumps(query)

      # unpickle.  Pass in metadata + scoped_session, both are optional  
 depending on what you're deserializing
      query2 = loads(serialized, metadata, Session)

      print query2.all()

 this is all brand new code and several structures have special  
 serialization procedures.  If you're playing with it and get errors,  
 we probably need to add more structures to the list of special-needs  
 objects.   The unit tests are fleshing out a good deal of scenarios so  
 far but I can think of more which haven't been tested yet.

 For usage with Beaker, you're going to want to do this serialization  
 procedure by hand and then place the resulting string in the session,  
 then use the specialized loads() on the way out.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: correlated update across logical databases

2008-07-21 Thread Ryan Tracey

Hi Michael

2008/7/18 Michael Bayer [EMAIL PROTECTED]:


 On Jul 18, 2008, at 9:37 AM, Ryan Tracey wrote:

 sqlalchemy.exc.NoSuchTableError: manufacturer

 Just to sum up:

 dbengine = create_engine('mysql://u:[EMAIL PROTECTED]/schema1')
 meta = MetaData()
 meta.bind = dbengine

 Table('tableX', meta, autoload=True)
 ...

 Table('tableA', meta, autoload=True, schema='schema2')
 Table('tableB', meta, autoload=True, schema='schema2')


 tableX is okay, and tableA loads okay too. However tableB has a FK
 referencing tableA. Bang! I get a NoSuchTableError for tableA.

 I'll try get some ipython output. Or figure out how pdb works.


 its likely mysql reflection doesn't yet know how to reflect cross-
 schema foreign keys.  put up a trac ticket with a short test case.

Will do.

(Sorry for the late reply. Just moved house and still waiting for
phone/adsl line to be installed.)

Cheers,
Ryan

-- 
Ryan Tracey
Citizen: The World

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: correlated update across logical databases

2008-07-18 Thread Ryan Tracey

Hi Michael

2008/7/17 Michael Bayer [EMAIL PROTECTED]:


 On Jul 17, 2008, at 7:12 AM, Ryan Tracey wrote:


 Hi

 I would like to do a correlated update involving tables located in two
 logical databases on the same MySQL server.

 The commented out code below would work except that the mysql ends up
 looking for the one table in the wrong database. customer is defined
 as Table('customer', ps_final_meta, autoload=True) and sdf_customer is
 defined as Table('sdf_customer, ps_staging_meta, autoload=True).

 How can I tell sqlalchemy to include the database names in the sql it
 generates? For the moment I am just using SQL directly in an
 execute().

file_ids_str = makeSQLList(tuple(file_ids))
# sqlalchemy correlated update
# TODO: figure out to do correlated updates across databases
#
#s = select([customer.c.MP_Code],
#
 and_(customer.c.CustomerAccNo1==sdf_customer.c.CustomerAccNo1,
#
 customer.c.WholesalerID==sdf_customer.c.WholesalerID
#   )).limit(1)
#rc = sdf_customer.update(
#   and_(sdf_customer.c.StatusID.in_([8, 12]),
#sdf_customer.c.FileID.in_(file_ids)
#   ),
#   values={sdf_customer.c.MP_Code:s}).execute().rowcount
sql = 
update
sdf_customer
set
sdf_customer.MP_Code = (
select
fc.MP_Code
from
ps_final.customer fc
where
sdf_customer.CustomerAccNo1=fc.CustomerAccNo1
and
sdf_customer.WholesalerID=fc.WholesalerID)
where
sdf_customer.StatusID in (8, 12)
and
sdf_customer.FileID in %s % (file_ids_str,)
rc = dbengine.execute(sql).rowcount


 OK, you have two MetaData objects which makes me think each one has
 its own engine pointing to an environment with a particular default
 schema.  If you'd like one SQL statement to be generated, referencing
 tables in both schemas and executeable within a single environment,
 all the Table objects need to be defined in terms of one default
 schema.  Those which are in a different schema should include the
 Table keyword argument schema='somename'.   You should get the whole
 thing working using just one Engine and one MetaData object which
 contains all tables.

Ahhh!  Works like a charm. It seems I missed the obvious.  Thank you
kindly for speedy assistance!

Just a note on something I picked up concerning stored procedures and
MySQL. There's a thread (which I seem unable to locate now) which
ended with the suggestion that to get SPs working with SA and MySQL
one should edit databases/mysql.py and add the CALL keyword to the
regex which handles SELECTs, etc.

 SQLAlchemy-0.5.0beta1-py2.5.egg/sqlalchemy/databases/mysql.py
 Diff:
224c224
 r'\s*(?:SELECT|SHOW|DESCRIBE|XA RECOVER)',
---
 r'\s*(?:SELECT|SHOW|DESCRIBE|XA RECOVER|CALL)',

The suggested patch works (for the way that I am using SPs) but has
not been applied to the main SA branch -- I have had to apply the
patch to a number of SA versions which have come out since the patch
was originally suggested.  Is this an oversight or is there a reason
why the patch has not been applied. Just don't want to be shooting
myself in the foot with a patch that solves one problem and introduces
another.

Thanks again for the help and much gratitude for the totally awesome
SQLAlchemy. Even without using the orm component it makes working with
databases almost a pleasure ;-)

Cheers,
Ryan

-- 
Ryan Tracey
Citizen: The World

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: correlated update across logical databases

2008-07-18 Thread Ryan Tracey

Hi Michael

2008/7/17 Michael Bayer [EMAIL PROTECTED]:


 On Jul 17, 2008, at 7:12 AM, Ryan Tracey wrote:


 Hi

 I would like to do a correlated update involving tables located in two
 logical databases on the same MySQL server.

 The commented out code below would work except that the mysql ends up
 looking for the one table in the wrong database. customer is defined
 as Table('customer', ps_final_meta, autoload=True) and sdf_customer is
 defined as Table('sdf_customer, ps_staging_meta, autoload=True).

 How can I tell sqlalchemy to include the database names in the sql it
 generates? For the moment I am just using SQL directly in an
 execute().

file_ids_str = makeSQLList(tuple(file_ids))
# sqlalchemy correlated update
# TODO: figure out to do correlated updates across databases
#
#s = select([customer.c.MP_Code],
#
 and_(customer.c.CustomerAccNo1==sdf_customer.c.CustomerAccNo1,
#
 customer.c.WholesalerID==sdf_customer.c.WholesalerID
#   )).limit(1)
#rc = sdf_customer.update(
#   and_(sdf_customer.c.StatusID.in_([8, 12]),
#sdf_customer.c.FileID.in_(file_ids)
#   ),
#   values={sdf_customer.c.MP_Code:s}).execute().rowcount
sql = 
update
sdf_customer
set
sdf_customer.MP_Code = (
select
fc.MP_Code
from
ps_final.customer fc
where
sdf_customer.CustomerAccNo1=fc.CustomerAccNo1
and
sdf_customer.WholesalerID=fc.WholesalerID)
where
sdf_customer.StatusID in (8, 12)
and
sdf_customer.FileID in %s % (file_ids_str,)
rc = dbengine.execute(sql).rowcount


 OK, you have two MetaData objects which makes me think each one has
 its own engine pointing to an environment with a particular default
 schema.  If you'd like one SQL statement to be generated, referencing
 tables in both schemas and executeable within a single environment,
 all lthe Table objects need to be defined in terms of one default
 schema.  Those which are in a different schema should include the
 Table keyword argument schema='somename'.   You should get the whole
 thing working using just one Engine and one MetaData object which
 contains all tables.

Urgh. Looks like I spoke too soon. In my previous email I said your
suggestion worked. That was only partially correct. Tables in the
second schema (the one not specified in the engine definition)
autoload fine and the correlated update across the two schemas works.
However, if a table in the second schema has a constraint on another
table in that same schema then autoloading fails. Looking at the (pdb)
traceback in ipython, somewhere schema='schema2' becomes schema=None
where FK constraints are being handled.

I am using SA 0.5.0beta2. But the same problem occurs with 0.4.6.
Here's the standard python traceback:


Traceback (most recent call last):
  File /usr/lib/python2.5/runpy.py, line 95, in run_module
filename, loader, alter_sys)
  File /usr/lib/python2.5/runpy.py, line 52, in _run_module_code
mod_name, mod_fname, mod_loader)
  File /usr/lib/python2.5/runpy.py, line 32, in _run_code
exec code in run_globals
  File /var/tmp/dbconnect.py, line 61, in module
autoload=True, schema='ps_final')
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/schema.py,
line 113, in __call__
return type.__call__(self, name, metadata, *args, **kwargs)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/schema.py,
line 242, in __init__
_bind_or_error(metadata).reflecttable(self, include_columns=include_columns)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/engine/base.py,
line 1282, in reflecttable
self.dialect.reflecttable(conn, table, include_columns)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/databases/mysql.py,
line 1655, in reflecttable
only=include_columns)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/databases/mysql.py,
line 2135, in reflect
self._set_constraints(table, constraints, connection, only)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/databases/mysql.py,
line 2295, in _set_constraints
autoload=True, autoload_with=connection)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/schema.py,
line 113, in __call__
return type.__call__(self, name, metadata, *args, **kwargs)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5.egg/sqlalchemy/schema.py,
line 240, in __init__
autoload_with.reflecttable(self, include_columns=include_columns)
  File 
/home/ryant/lib/python2.5/site-packages/SQLAlchemy-0.5.0beta2-py2.5

[sqlalchemy] correlated update across logical databases

2008-07-17 Thread Ryan Tracey

Hi

I would like to do a correlated update involving tables located in two
logical databases on the same MySQL server.

The commented out code below would work except that the mysql ends up
looking for the one table in the wrong database. customer is defined
as Table('customer', ps_final_meta, autoload=True) and sdf_customer is
defined as Table('sdf_customer, ps_staging_meta, autoload=True).

How can I tell sqlalchemy to include the database names in the sql it
generates? For the moment I am just using SQL directly in an
execute().

file_ids_str = makeSQLList(tuple(file_ids))
# sqlalchemy correlated update
# TODO: figure out to do correlated updates across databases
#
#s = select([customer.c.MP_Code],
#   and_(customer.c.CustomerAccNo1==sdf_customer.c.CustomerAccNo1,
#customer.c.WholesalerID==sdf_customer.c.WholesalerID
#   )).limit(1)
#rc = sdf_customer.update(
#   and_(sdf_customer.c.StatusID.in_([8, 12]),
#sdf_customer.c.FileID.in_(file_ids)
#   ),
#   values={sdf_customer.c.MP_Code:s}).execute().rowcount
sql = 
update
sdf_customer
set
sdf_customer.MP_Code = (
select
fc.MP_Code
from
ps_final.customer fc
where
sdf_customer.CustomerAccNo1=fc.CustomerAccNo1
and
sdf_customer.WholesalerID=fc.WholesalerID)
where
sdf_customer.StatusID in (8, 12)
and
sdf_customer.FileID in %s % (file_ids_str,)
rc = dbengine.execute(sql).rowcount

Any help would be much appreciated.

Regards,
Ryan

-- 
Ryan Tracey
Citizen: The World

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Can't reconnect until invalid transaction is rolled back after application sits idle overnight.

2008-07-11 Thread Ryan Parrish

Using SQLAlchemy 0.4.4

I am having a problem with SQLAlchemy where after the application that
uses SQLAlchemy has been sitting overnight and a user makes the first
queries of the day thru the app that uses SA, SQLAlchemy throws an
error saying 'MySQL server has gone away', which I understand the
reason to be that my mysql server has cut the idle connections, but I
have pool_recycle = 3600 set and I thought that was the solution to
keeping connections alive?

After that error message I get two more exceptions when the user tries
the request again of Can't reconnect until invalid transaction is
rolled back, which I don't understand at all because the application
only queries the DB and never deals with transactions.  After those
three attempts, the forth request will go through just fine.  :-\  Any
pointers on what I should look for or do?

Here is the full traceback of the exceptions...

Module vendormiddleware.model:47 in by_company_code
  count = codes.count()
Module sqlalchemy.orm.query:1087 in count
  return q._count()
Module sqlalchemy.orm.query:1108 in _count
  return self.session.scalar(s, params=self._params, mapper=self.mapper)
Module sqlalchemy.orm.session:612 in scalar
  return self.__connection(engine, close_with_result=True).scalar(clause, 
 params or {}, **kwargs)
Module sqlalchemy.engine.base:836 in scalar
  return self.execute(object, *multiparams, **params).scalar()
Module sqlalchemy.engine.base:846 in execute
  return Connection.executors[c](self, object, multiparams, params)
Module sqlalchemy.engine.base:897 in execute_clauseelement
  return self._execute_compiled(elem.compile(dialect=self.dialect, 
 column_keys=keys, inline=len(params)  1), distilled_params=params)
Module sqlalchemy.engine.base:909 in _execute_compiled
  self.__execute_raw(context)
Module sqlalchemy.engine.base:918 in __execute_raw
  self._cursor_execute(context.cursor, context.statement, 
 context.parameters[0], context=context)
Module sqlalchemy.engine.base:962 in _cursor_execute
  self._handle_dbapi_exception(e, statement, parameters, cursor)
Module sqlalchemy.engine.base:944 in _handle_dbapi_exception
  raise exceptions.DBAPIError.instance(statement, parameters, e, 
 connection_invalidated=is_disconnect)
OperationalError: (OperationalError) (2006, 'MySQL server has gone
away') u'SELECT count(company.company_id) AS count_1 \nFROM company
\nWHERE company.company_code = %s' [u'CUSTOMER1']




Module vendormiddleware.model:47 in by_company_code
  count = codes.count()
Module sqlalchemy.orm.query:1087 in count
  return q._count()
Module sqlalchemy.orm.query:1108 in _count
  return self.session.scalar(s, params=self._params, mapper=self.mapper)
Module sqlalchemy.orm.session:612 in scalar
  return self.__connection(engine, close_with_result=True).scalar(clause, 
 params or {}, **kwargs)
Module sqlalchemy.engine.base:836 in scalar
  return self.execute(object, *multiparams, **params).scalar()
Module sqlalchemy.engine.base:846 in execute
  return Connection.executors[c](self, object, multiparams, params)
Module sqlalchemy.engine.base:897 in execute_clauseelement
  return self._execute_compiled(elem.compile(dialect=self.dialect, 
 column_keys=keys, inline=len(params)  1), distilled_params=params)
Module sqlalchemy.engine.base:906 in _execute_compiled
  context = self.__create_execution_context(compiled=compiled, 
 parameters=distilled_params)
Module sqlalchemy.engine.base:950 in __create_execution_context
  return self.engine.dialect.create_execution_context(connection=self, 
 **kwargs)
Module sqlalchemy.databases.mysql:1464 in create_execution_context
  return MySQLExecutionContext(self, connection, **kwargs)
Module sqlalchemy.engine.default:178 in __init__
  self.cursor = self.create_cursor()
Module sqlalchemy.engine.default:275 in create_cursor
  return self._connection.connection.cursor()
Module sqlalchemy.engine.base:583 in connection
  raise exceptions.InvalidRequestError(Can't reconnect until invalid 
 transaction is rolled back)
InvalidRequestError: Can't reconnect until invalid transaction is
rolled back
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Can't reconnect until invalid transaction is rolled back after application sits idle overnight.

2008-07-11 Thread Ryan Parrish

On Jul 11, 10:03 am, Michael Bayer [EMAIL PROTECTED] wrote:.

 The reason you're getting the disconnect exception in the first place  
 is because the pool_recycle feature only works upon checkout from the  
 pool.  So the solution is the same, ensure all connections are  
 returned to the pool after operations are complete.

So if i just setup sessionmaker(autoflush=True, transactional=False,
bind=engine) rather than transactional=True which it is now; I would
never have these implicit begin()'s started which are pointless since
I'm only doing selects on the DB?  Thus my problem would be solved?
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Using custom functions and threading (v. 0.4.6)

2008-07-02 Thread Ryan Ginstrom

 [mailto:[EMAIL PROTECTED] On Behalf Of Ryan Ginstrom
 If I start my method calls with begin() would I be able to 
 create functions in the connection object, then use my 
 session object to run queries?

Founds this one out for myself: no, it doesn't work g

Regards,
Ryan Ginstrom


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Using custom functions and threading (v. 0.4.6)

2008-07-01 Thread Ryan Ginstrom

I'm a newbie to SQLAlchemy, so please excuse me if this is common knowledge.

Using the sqlite backend, I'm creating custom functions and then running
queries with them. I started by using the connection's create_function
method, and then running the query with the session.execute method.

self.session = self.SessionClass()
search_string = SELECT * FROM records
WHERE memory_id=:id AND get_concordance(source)
concordance_func = make_concordance_func(query)
conn = self.session.bind.connect()
conn.connection.create_function(get_concordance,
1,
concordance_func)

rows = [dict(x) for x in self.session.execute(search_string,
dict(id=id))]

return cPickle.dumps(rows)

This worked fine in unit testing, but when I tried it from a cherrypy
instance, it failed semi-randomly. The failure was 
(OperationalError) no such function: get_concordance
It would work for half the queries, and fail for the other half, not always
the same ones.

I assumed that this was due to some threading issue, and changed the execute
method to the connection object. This works:

# ... same up to here
rows = [dict(x) for x in conn.execute(search_string, dict(id=id))]

return cPickle.dumps(rows)

What I'd like to find out is:
  (1) What is going on to cause this?
  (2) Is there something horribly wrong with my approach? 
Thanks in advance for any enlightenment.

Regards,
Ryan Ginstrom


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Using custom functions and threading (v. 0.4.6)

2008-07-01 Thread Ryan Ginstrom

 [mailto:[EMAIL PROTECTED] On Behalf Of Michael Bayer
 Theres some things you're doing that would specifically cause 
 you to get back a different connection than the one you 
 created the function on, even within a single thread; except 
 that the SQLite dialect uses the SingletonThreadPool pool by 
 default which does in fact nail a single connection to each 
 thread.  Seems like you've either changed that setting or you 
...
 Calling session.bind.connect() in the general case gives you 
 a connection unrelated to the one used when you call 
 Session.execute().  
 If you'd like the connection that the Session is actually 
 using, the Session must be within a transaction (via 
 transactional=True or via  
 begin()), then call Session.connection().   If the Session is 
 not in a  
 transaction, Session.connection() again calls an arbitrary 
 connection from the pool.

Thanks a lot for the information. I'm creating my session class like so:
SessionClass = scoped_session(sessionmaker(bind=engine, autoflush=True,
transactional=True))

I'm closing out each exposed method call with a self.session.close()

If I start my method calls with begin() would I be able to create
functions in the connection object, then use my session object to run
queries?

Regards,
Ryan Ginstrom


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Why is explicit 'and_' required for filter but not filter_by?

2007-09-15 Thread Ryan

And whatever happened to the nice select_by Query method where I could
mix filter-type criterion arguments and filter_by-type keyword
arguments in one call.  *That* was readable to me, and very usable in
a pythonic way.  What's wrong with having one filter method that does
everything?  Python dictates that keyword args must follow sequential
args, but other than that, it's completely flexible.


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Why is explicit 'and_' required for filter but not filter_by?

2007-09-13 Thread Ryan Fugger

In 0.4: Is there any reason that I (as far as i can tell) need an
explicit 'and_' to use multiple parameters with Query.filter, as
opposed to filter_by, where I don't need an explicit 'and_'?  I think
the latter is cleaner.

Ryan

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Selecting objects with null foreign keys

2007-04-09 Thread Ryan

Suppose I have a table/object called Node that maps to Host by
relation field 'host' on foreign key host_id that may be null.  Why
can't I do the following:

Query(Node).select_by(host=None)

When it seems obvious that I mean this:

Query(Node).select_by(Node.c.host_id==None)

I think the former would be cleaner, since I could just pass a 'host'
variable that may be None without having to check it manually first
before getting its id (however simple that might be).

Thanks.
Ryan


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---