Use events with expression api

2020-09-30 Thread brian...@blue-newt.com
Is there a way to use events with the expression API?

I see how they're used with the ORM and Core, but not with expressions.

Thanks,
Brian
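
(A minimal sketch of one way to hook this up, assuming SQLAlchemy 1.3-era APIs: expression constructs don't carry events themselves, so you attach Core-level events to the Engine or Connection that executes them, and those hooks fire for every expression-API statement. Names here are illustrative.)

```
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, event, select

engine = create_engine("sqlite://")  # illustrative engine

@event.listens_for(engine, "before_execute")
def before_execute(conn, clauseelement, multiparams, params):
    # fires for every statement executed on this engine,
    # including ones built with the expression language
    print("executing:", clauseelement)

metadata = MetaData()
items = Table("items", metadata, Column("id", Integer))
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(select([items.c.id]))  # 1.x-style select()
```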



[sqlalchemy] Re: join on array column

2020-05-08 Thread Brian Rayburn
Ooh, actually I think I found my solution. I was specifically looking for a way 
to do this in an ORM relationship, and I think I found it in the "Custom 
operators based on SQL functions" section of 
https://docs.sqlalchemy.org/en/13/orm/join_conditions.html
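
(For the Core side of the same question, a hedged sketch; this assumes a newer SQLAlchemy, 1.4+, which added FunctionElement.table_valued() and render_derived(). The table and column names are illustrative.)

```
from sqlalchemy import Column, Integer, MetaData, Table, func, select
from sqlalchemy.dialects.postgresql import array

metadata = MetaData()
items = Table("items", metadata, Column("id", Integer, primary_key=True))

# unnest(ARRAY[1,2,3,2,3,5]) AS anon(item_id) -- a table-valued function
item_ids = (
    func.unnest(array([1, 2, 3, 2, 3, 5]))
    .table_valued("item_id")
    .render_derived()
)

# SELECT items.* FROM unnest(...) LEFT JOIN items ON items.id = item_id
stmt = select(items).select_from(
    item_ids.outerjoin(items, items.c.id == item_ids.c.item_id)
)
```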

On Thursday, May 7, 2020 at 4:12:40 PM UTC-4, Brian Rayburn wrote:
>
> Hey all,
>
> In postgresql you can do this:
> ```
> SELECT t.*
> FROM unnest(ARRAY[1,2,3,2,3,5]) item_id
> LEFT JOIN items t ON t.id = item_id
> ```
>
> Is there support for this sort of join on an array of IDs in SQLAlchemy?
>
> Best wishes,
> Brian
>
>



[sqlalchemy] join on array column

2020-05-07 Thread Brian Rayburn


Hey all,

In postgresql you can do this:
```
SELECT t.*
FROM unnest(ARRAY[1,2,3,2,3,5]) item_id
LEFT JOIN items t ON t.id = item_id
```

Is there support for this sort of join on an array of IDs in SQLAlchemy?

Best wishes,
Brian



Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Brian Hill
That works!

Thank you.

Brian
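
(For reference, a consolidated sketch of the final recipe from this thread, as an env.py fragment: `connectable`, `context`, and `metadata` are as in the full env.py that appears further down in this digest, and DB_SCHEMA is illustrative. Mike notes below that an on-connect event on the engine could do the same setup.)

```
DB_SCHEMA = "my_foo_schema"

with connectable.connect() as connection:
    connection.execute(f"create schema if not exists {DB_SCHEMA}")
    connection.execute(f"set search_path to {DB_SCHEMA}")

    # make reflection treat DB_SCHEMA as the default (blank) schema
    connection.dialect.default_schema_name = DB_SCHEMA

    context.configure(
        connection=connection,
        target_metadata=metadata,
        version_table_schema=DB_SCHEMA,
        # note: no include_schemas=True -- run "schemaless", one schema at a time
    )

    with context.begin_transaction():
        context.run_migrations()
```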

On Friday, March 20, 2020 at 12:11:21 PM UTC-4, Mike Bayer wrote:
>
> OK one more addition to the recipe, please do this:
>
> DB_SCHEMA = "my_foo_schema"
>
> with connectable.connect() as connection:
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
>
>     connection.dialect.default_schema_name = DB_SCHEMA
>
>
> will work completely
>
> alternatively you can use an on connect event on the engine to do the same 
> thing.
>
>
> On Fri, Mar 20, 2020, at 10:48 AM, Brian Hill wrote:
>
> this is what i tried, but it generated the whole schema:
>
> with connectable.connect() as connection:
>
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
>
>     context.configure(
>         connection=connection,
>         target_metadata=metadata,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
> it works when my schema is 'public', i.e. when it matches the metadata. my 
> solution will be to set my schema to 'public' and use this simple env.py.
>
> thanks,
>
> brian
>
> On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>
> I just realized that you really don't need to even use schema_translate_map 
> at all here.  If you use a completely straight env.py, and simply set 
> search_path=SCHEMA, you should be able to run Alembic in entirely 
> "schemaless" mode; don't include schemas anywhere nor would you need to set 
> it for the alembic version table.  Postgresql will also CREATE/ALTER in 
> that schema as well.  As long as your environment runs completely in just 
> one schema at a time, this can be set up entirely at the connection level.
>
>
> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
> yes
>  
>
>
> 2. you want to run autogenerate only once
>
>
> yes
>  
>
> 3. you want your migrations generated with None for schema
>
>
> yes
>  
>
> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>
>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>
> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>
>
>
> no, don't use "include_schemas".  you want to run alembic in a mode where 
> it has no idea there are other schemas to use.   i think 
> version_table_schema is OK here but don't use include_schemas.  Also turn 
> on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
> tables it is reflecting.

Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Brian Hill
this is what i tried, but it generated the whole schema:

with connectable.connect() as connection:

    connection.execute(f'create schema if not exists {DB_SCHEMA}')
    connection.execute(f'set search_path to {DB_SCHEMA}')

    context.configure(
        connection=connection,
        target_metadata=metadata,
    )

    with context.begin_transaction():
        context.run_migrations()

it works when my schema is 'public', i.e. when it matches the metadata. my 
solution will be to set my schema to 'public' and use this simple env.py.

thanks,

brian

On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>
> I just realized that you really don't need to even use schema_translate_map 
> at all here.  If you use a completely straight env.py, and simply set 
> search_path=SCHEMA, you should be able to run Alembic in entirely 
> "schemaless" mode; don't include schemas anywhere nor would you need to set 
> it for the alembic version table.  Postgresql will also CREATE/ALTER in 
> that schema as well.  As long as your environment runs completely in just 
> one schema at a time, this can be set up entirely at the connection level.
>
>
> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
> yes
>  
>
>
> 2. you want to run autogenerate only once
>
>
> yes
>  
>
> 3. you want your migrations generated with None for schema
>
>
> yes
>  
>
> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>
>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>
> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>
>
>
> no, don't use "include_schemas".  you want to run alembic in a mode where 
> it has no idea there are other schemas to use.   i think 
> version_table_schema is OK here but don't use include_schemas.  Also turn 
> on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
> tables it is reflecting.
>
>
> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
> tomorrow to get a better idea of what's going on, but using public as my 
> dev schema to autogenerate against works. i just have to exclude the 
> alembic tables.
>
> again thanks for your help in understanding what i was trying to do.
>
> brian
>  
>
>
>
>
>
>
>
>  
>
>
>
>
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?

Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Brian Hill


On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
> yes
>  
>
>
> 2. you want to run autogenerate only once
>
>
> yes
>  
>
> 3. you want your migrations generated with None for schema
>
>
> yes
>  
>
> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>
>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>
> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>
>
>
> no, don't use "include_schemas".  you want to run alembic in a mode where 
> it has no idea there are other schemas to use.   i think 
> version_table_schema is OK here but don't use include_schemas.  Also turn 
> on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
> tables it is reflecting.
>

same behavior, entire schema generated. i'll look at using INFO and DEBUG 
tomorrow to get a better idea of what's going on, but using public as my 
dev schema to autogenerate against works. i just have to exclude the 
alembic tables.

again thanks for your help in understanding what i was trying to do.

brian
 

>
>
>
>
>
>  
>
>
>
>
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?
>
>
> it's complicated and not one-size-fits-all, if you consider that to be an 
> issue
>
>
>
> My metadata doesn't have a schema and my 
> schema_translate_map={None:'my_schema'}.
>
> This works for migrations but when I use autogenerate the generated 
> revision file is the full schema and not the diff.
>
> It's detecting the alembic version table in the tenant schema. If I run 
> the autogenerate a second time with the new revision file it fails saying 
> Target database is not up to date.
>
>
> are you setting include_schemas=True in your environment?  I'd probably 
> not do this.   Can't help much more without a complete and concise working 
> example of your env.py, please remove all extraneous details.
>
>
>
> Thanks,
>
> Brian
>
>

Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Brian Hill
i think i just figured out that what i'm trying to do is unreasonable and 
unnecessary. alembic shouldn't have to know how to reverse map the schemas 
and i don't need to do it that way. i can create my dev schema in public 
and autogenerate against that, which maintains the same schema in metadata 
and the db.

thanks,

brian

On Thursday, March 19, 2020 at 7:41:54 PM UTC-4, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>>
>> so let me get this straight:
>>
>> 1. you have many schemas
>>
>>
> yes
>  
>
>> 2. you want to run autogenerate only once
>>
>
> yes
>  
>
>> 3. you want your migrations generated with None for schema
>>
>
> yes
>  
>
>> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
>> arbitrary?) schema to use as the target, is that right?
>>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>>
>> As it stands, your autogenerate is not going to look in any schema except 
>> "public", assuming default PG search path, because you did not set 
>> "include_schemas=True".
>>
>>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>> If you truly want autogenerate to look at one schema and only that schema, 
>> and generate everything as non-schema qualified, then I would simply set 
>> the default schema in PG to that schema name using search_path:
>>
>> connection.execute("SET search_path TO my_schema")
>>
>> that way everything Alembic reflects will be from "my_schema" and it will 
>> see the schema as blank, and it should generate as such.  you might need to 
>> disable the schema translate map when autogenerate runs but try it without 
>> doing that first.
>>
>>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>  
>
>>
>>
>> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>>
>> Here's my env.py. Thanks for the help.
>> Brian
>>
>> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>>
>>
>>
>> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>>
>> Are there known issues with using autogenerate with multi-tenant 
>> (schema_translate_map)?
>>
>>
>> it's complicated and not one-size-fits-all, if you consider that to be an 
>> issue
>>
>>
>>
>> My metadata doesn't have a schema and my 
>> schema_translate_map={None:'my_schema'}.
>>
>> This works for migrations but when I use autogenerate the generated 
>> revision file is the full schema and not the diff.
>>
>> It's detecting the alembic version table in the tenant schema. If I run 
>> the autogenerate a second time with the new revision file it fails saying 
>> Target database is not up to date.
>>
>>
>> are you setting include_schemas=True in your environment?  I'd probably 
>> not do this.   Can't help much more without a complete and concise working 
>> example of your env.py, please remove all extraneous details.
>>
>>
>>
>> Thanks,
>>
>> Brian
>>
>>

Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Brian Hill


On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
yes
 

> 2. you want to run autogenerate only once
>

yes
 

> 3. you want your migrations generated with None for schema
>

yes
 

> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>

yes one, but it won't be arbitrary, it will be a development schema 
generated from the latest version (prior to the change we want to 
autogenerate).
 

>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
i had it set to True and it behaves the same. i just took it out based on 
your previous suggestion, but again, it behaves the same.

> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
it's still generating the entire schema in the revision file.

here's the updated section and it behaves the same with/without schema 
translate map:

with connectable.connect() as connection:

    # create DB_SCHEMA if it doesn't exist
    connection.execute(f'create schema if not exists {DB_SCHEMA}')

    # map metadata schema (None) to DB_SCHEMA
    # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})

    connection.execute(f"set search_path to {DB_SCHEMA}")

    # set alembic version table location in DB_SCHEMA
    context.configure(
        connection=connection,
        include_schemas=True,
        target_metadata=metadata,
        version_table_schema=DB_SCHEMA,
    )

    with context.begin_transaction():
        context.run_migrations()

 

>
>
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?
>
>
> it's complicated and not one-size-fits-all, if you consider that to be an 
> issue
>
>
>
> My metadata doesn't have a schema and my 
> schema_translate_map={None:'my_schema'}.
>
> This works for migrations but when I use autogenerate the generated 
> revision file is the full schema and not the diff.
>
> It's detecting the alembic version table in the tenant schema. If I run 
> the autogenerate a second time with the new revision file it fails saying 
> Target database is not up to date.
>
>
> are you setting include_schemas=True in your environment?  I'd probably 
> not do this.   Can't help much more without a complete and concise working 
> example of your env.py, please remove all extraneous details.
>
>
>
> Thanks,
>
> Brian
>
>
> *Attachments:*
>
>- env.py
>
>
>



Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Brian Hill
Here's my env.py. Thanks for the help.
Brian

On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?
>
>
> it's complicated and not one-size-fits-all, if you consider that to be an 
> issue
>
>
>
> My metadata doesn't have a schema and my 
> schema_translate_map={None:'my_schema'}.
>
> This works for migrations but when I use autogenerate the generated 
> revision file is the full schema and not the diff.
>
> It's detecting the alembic version table in the tenant schema. If I run 
> the autogenerate a second time with the new revision file it fails saying 
> Target database is not up to date.
>
>
> are you setting include_schemas=True in your environment?  I'd probably 
> not do this.   Can't help much more without a complete and concise working 
> example of your env.py, please remove all extraneous details.
>
>
>
> Thanks,
>
> Brian
>
>

env.py (attachment):

import os
from alembic import context
from sqlalchemy import engine_from_config, pool

# add parent/parent to path so we can import metadata
dir_path = os.path.dirname(os.path.realpath(__file__))
repo_path = os.path.dirname(dir_path)
os.sys.path.append(repo_path)

# required import when autogenerating revisions at the command line:
from yms_db.db.db_schema import metadata

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# get db envs and set config sqlalchemy.url
DB_SCHEMA = os.environ['DB_SCHEMA']
PG_URL = 'postgresql://{DB_USER}:{DB_PASS}@{DB_HOST}/{DB_NAME}'.format(**os.environ)
config.set_main_option('sqlalchemy.url', PG_URL)


def run_migrations_offline():
    """Run migrations in 'offline' mode."""
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=metadata, literal_binds=True)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode."""

    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:

        # create DB_SCHEMA if it doesn't exist
        connection.execute(f'create schema if not exists {DB_SCHEMA}')

        # map metadata schema (None) to DB_SCHEMA
        connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})

        # set alembic version table location in DB_SCHEMA
        context.configure(
            connection=connection,
            target_metadata=metadata,
            version_table_schema=DB_SCHEMA,
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()


Autogenerate with Multi-Tenant

2020-03-19 Thread Brian Hill
Are there known issues with using autogenerate with multi-tenant 
(schema_translate_map)?

My metadata doesn't have a schema and my 
schema_translate_map={None:'my_schema'}.

This works for migrations but when I use autogenerate the generated 
revision file is the full schema and not the diff.

It's detecting the alembic version table in the tenant schema. If I run the 
autogenerate a second time with the new revision file it fails saying Target 
database is not up to date.

Thanks,

Brian



Re: Migrate alembic_version table from Public to tenant schema

2020-02-18 Thread Brian Hill
That works. I check whether the version table exists in the schema and set 
version_table_schema if it does. Then, after run_migrations, if the version 
table didn't exist in the schema, I move it there.

Thanks!

Brian
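
(A hedged sketch of that check-and-move approach, as an env.py fragment. It assumes PostgreSQL, a SQLAlchemy version where inspect() accepts a Connection, and the `connection`/`context`/`metadata`/`DB_SCHEMA` setup from the rest of this thread.)

```
from sqlalchemy import inspect

inspector = inspect(connection)
version_in_schema = "alembic_version" in inspector.get_table_names(schema=DB_SCHEMA)

context.configure(
    connection=connection,
    target_metadata=metadata,
    # read the current version from wherever the table lives today
    version_table_schema=DB_SCHEMA if version_in_schema else "public",
)

with context.begin_transaction():
    context.run_migrations()

# relocate the version table once, after the migrations have run
if not version_in_schema:
    connection.execute(f"alter table public.alembic_version set schema {DB_SCHEMA}")
```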

On Tuesday, February 18, 2020 at 1:30:54 PM UTC-5, Mike Bayer wrote:
>
>
>
> On Tue, Feb 18, 2020, at 1:17 PM, Brian Hill wrote:
>
> Is there a way to move the alembic_version table in the Public schema 
> (postgres) to a specific schema (multi-tenant) as part of a migration?
>
>
> I highly doubt this is possible in the general case without the raw SQL in 
> your env.py,  because as the migration runs, the environment needs to 
> update the table.  if it moves schemas, then it won't be updating the 
> correct table anymore.You would have to manipulate the internal state 
> of the MigrationContext within the migration script so that this doesnt 
> happen, however, the migration is also occurring inside of a transaction so 
> it's very likely that you wouldn't actually be able to fully drop the old 
> table in every case and there could be other side effects of this as well,  
> and I cant guarantee this internal manipulation will always work for new 
> Alembic releases.
>
>
>
> I want alembic to read the initial version from Public.alembic_version and 
> then write the new upgraded version to the specific schema.alembic_version.
>
> The only way I can get it to work is executing raw sql after the 
> migration, and this won't work for future migrations.
>
> with context.begin_transaction():
>     context.run_migrations()
>     context.execute(f'alter table Public.alembic_version set schema {DB_SCHEMA}')
>
>
>
> I think what you can do here is check for alembic_version in the old 
> schema and in the new one and then move the table after the migrations but 
> only if it needs to be moved.   Basically have the migrations always 
> run with the schema wherever it is to start with, then move it after the 
> fact.
>
> you'd have to detect which schema to use before you configure the context, 
> however, since you need to pass it the schema for the alembic_version table.
>
>
>
>
> Do I have to flatten/rebase my versions after this?
>
> Thanks,
>
> Brian
>
>



Migrate alembic_version table from Public to tenant schema

2020-02-18 Thread Brian Hill
Is there a way to move the alembic_version table in the Public schema 
(postgres) to a specific schema (multi-tenant) as part of a migration?

I want alembic to read the initial version from Public.alembic_version and 
then write the new upgraded version to the specific schema.alembic_version.

The only way I can get it to work is executing raw sql after the migration, 
and this won't work for future migrations.

with context.begin_transaction():
    context.run_migrations()
    context.execute(f'alter table Public.alembic_version set schema {DB_SCHEMA}')


Do I have to flatten/rebase my versions after this?

Thanks,

Brian



Re: Using enums with schema_translate_map

2020-02-17 Thread Brian Hill
Thanks for figuring this out.
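
(For anyone on a SQLAlchemy release predating the 1.3.14 fix referenced below, a hedged sketch of the interim workaround Mike describes: fill in the schema explicitly in the migration ops instead of relying on schema_translate_map. 'foo' stands in for the tenant schema.)

```
import sqlalchemy as sa
from alembic import op


def upgrade():
    op.create_table(
        'table1',
        sa.Column('id', sa.Integer(), nullable=False),
        # naming the enum's schema explicitly makes CREATE TYPE target the tenant
        sa.Column('type1', sa.Enum('One', 'Two', name='enum1', schema='foo'), nullable=True),
        sa.PrimaryKeyConstraint('id'),
        schema='foo',
    )
```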

On Monday, February 17, 2020 at 3:05:30 PM UTC-5, Mike Bayer wrote:
>
> Here is your SQL output, enum is created correctly:
>
> CREATE TYPE test_schema.enum1 AS ENUM ('One', 'Two')
>
>
> however table does not refer to the correct enum:
>
> CREATE TABLE test_schema.table1 (
> id SERIAL NOT NULL,
> type1 enum1,
> PRIMARY KEY (id)
> )
>
>
> this is SQLAlchemy bug 
> https://github.com/sqlalchemy/sqlalchemy/issues/5158, which will be fixed in 
> 1.3.14
>
>
> On Mon, Feb 17, 2020, at 2:51 PM, Mike Bayer wrote:
>
> mm nope we have plenty of tests for this here:
>
>
> https://github.com/sqlalchemy/sqlalchemy/blob/master/test/dialect/postgresql/test_compiler.py#L223
>
>
>
> On Mon, Feb 17, 2020, at 2:49 PM, Mike Bayer wrote:
>
> it's possible also that PG Enum CREATE TYPE doesn't work with 
> schema_translate_map, would need to evaluate that on the SQLAlchemy side.
>
> On Mon, Feb 17, 2020, at 2:45 PM, Mike Bayer wrote:
>
> schema_translate_map is not yet supported with all Alembic ops:
>
> https://github.com/sqlalchemy/alembic/issues/555
>
> you will need to fill in the "schema" parameter explicitly when you call 
> upon op.create_table()
>
>
>
> On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
>
> I'm having trouble using enums in conjunction with schema_translate_map 
> for postgres migrations.
>
> My model, single table, single enum.
>
> import enum
> from sqlalchemy import MetaData, Enum, Column, Integer
> from sqlalchemy.ext.declarative import declarative_base
>
> metadata = MetaData()
> Base = declarative_base(metadata=metadata)
>
> class Enum1(enum.Enum):
>     One = 1
>     Two = 2
>
> class Table1(Base):
>     __tablename__ = 'table1'
>     id = Column(Integer, primary_key=True)
>     type1 = Column(Enum(Enum1))
>
> Version file.
>
>
> """initial revision
>
>
> Revision ID: 844dd0269c1b
> Revises: 
> Create Date: 2020-02-17 12:23:31.125308
>
>
> """
> from alembic import op
> import sqlalchemy as sa
>
>
>
>
> # revision identifiers, used by Alembic.
> revision = '844dd0269c1b'
> down_revision = None
> branch_labels = None
> depends_on = None
>
>
>
>
> def upgrade():
>     # ### commands auto generated by Alembic - please adjust! ###
>     op.create_table('table1',
>         sa.Column('id', sa.Integer(), nullable=False),
>         sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
>         sa.PrimaryKeyConstraint('id')
>     )
>     # ### end Alembic commands ###
>
> def downgrade():
>     # ### commands auto generated by Alembic - please adjust! ###
>     op.drop_table('table1')
>     # ### end Alembic commands ###
>
>
> Run migration function from env.py (creating the foo schema):
>
>
>
> def run_migrations_online():
>     connectable = engine_from_config(
>         config.get_section(config.config_ini_section),
>         prefix="sqlalchemy.",
>         poolclass=pool.NullPool,
>     )
>
>     with connectable.connect() as connection:
>
>         # use different schema
>         connection.execute(f'create schema if not exists foo')
>         connection = connection.execution_options(
>             schema_translate_map={None: 'foo'}
>         )
>
>         context.configure(
>             connection=connection,
>             target_metadata=target_metadata,
>             include_schema=True,
>         )
>
>         with context.begin_transaction():
>             context.run_migrations()
>
>
> I get the following error when I run *alembic upgrade head*:
>
>
> sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type 
> "enum1" does not exist
> LINE 4:  type1 enum1, 
>^
>
>
> [SQL: 
> CREATE TABLE foo.table1 (
> id SERIAL NOT NULL, 
> type1 enum1, 
> PRIMARY KEY (id)
> )
>
>
> ]
> (Background on this error at: http://sqlalche.me/e/f405)
>
> It works when I comment out schema_translate_map.
>
> I feel like I'm missing something fundamental about using 
> schema_translate_map for multi-tenant/schema.
>
>

Using enums with schema_translate_map

2020-02-17 Thread Brian Hill
I'm having trouble using enums in conjunction with schema_translate_map for 
postgres migrations.

My model, single table, single enum.

import enum
from sqlalchemy import MetaData, Enum, Column, Integer
from sqlalchemy.ext.declarative import declarative_base


metadata = MetaData()
Base = declarative_base(metadata=metadata)




class Enum1(enum.Enum):
    One = 1
    Two = 2


class Table1(Base):
    __tablename__ = 'table1'
    id = Column(Integer, primary_key=True)
    type1 = Column(Enum(Enum1))



Version file.


"""initial revision


Revision ID: 844dd0269c1b
Revises: 
Create Date: 2020-02-17 12:23:31.125308


"""
from alembic import op
import sqlalchemy as sa




# revision identifiers, used by Alembic.
revision = '844dd0269c1b'
down_revision = None
branch_labels = None
depends_on = None




def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('table1',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('table1')
    # ### end Alembic commands ###




Run migration function from env.py (creating the foo schema):



def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:

        # use different schema
        connection.execute(f'create schema if not exists foo')
        connection = connection.execution_options(
            schema_translate_map={None: 'foo'}
        )

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schema=True,
        )

        with context.begin_transaction():
            context.run_migrations()



I get the following error when I run *alembic upgrade head*:


sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type 
"enum1" does not exist
LINE 4:  type1 enum1, 
   ^


[SQL: 
CREATE TABLE foo.table1 (
id SERIAL NOT NULL, 
type1 enum1, 
PRIMARY KEY (id)
)


]
(Background on this error at: http://sqlalche.me/e/f405)



It works when I comment out schema_translate_map.

I feel like I'm missing something fundamental about using 
schema_translate_map for multi-tenant/schema.
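
(For readers landing here, a minimal sketch of what schema_translate_map does at execution time, independent of Alembic; the connection URL and names are illustrative.)

```
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select

engine = create_engine("postgresql://user:pass@localhost/db")  # illustrative URL
metadata = MetaData()
table1 = Table("table1", metadata, Column("id", Integer))  # schema=None

with engine.connect() as conn:
    conn = conn.execution_options(schema_translate_map={None: "foo"})
    # every schema-None table in this statement now compiles as "foo.table1"
    conn.execute(select([table1]))
```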



Re: [sqlalchemy] Hang when >= 15 MS SQL Server requests have been made

2020-02-02 Thread Brian Paterni
On Monday, December 30, 2019 at 9:07:45 AM UTC-6, Mike Bayer wrote:
>
>
> On Sun, Dec 29, 2019, at 11:54 PM, Brian Paterni wrote:
>
> On Sunday, December 29, 2019 at 1:17:24 AM UTC-6, Mike Bayer wrote:
>
>
> I can't run the test app however 15 seems like your connection pool is set 
> up at its default size of 5 connections + 10 overflow, all connections are 
> being checked out, and none are being returned.
>
>
> Hm, is the issue with running the app possibly something I could help with?
>
>
> sure, if you can turn it into a single file, runnable MCVE with zero 
> depedendencies other than SQLAlchemy, a single MSSQL Python driver (please 
> note that MS's ODBC driver, while necessary, is not a Python driver by 
> itself), and in this case eventlet, I can run that.  However I think 
> you likely should be able to reproduce your issue not using SQLAlchemy at 
> all and simply using pyodbc directly assuming that's the driver you are 
> using.
>
>
> http://stackoverflow.com/help/mcve
>
>
I've created: https://github.com/bpaterni/simple-blocking-eventlet

Which should be a stripped down version of the flask-app I'd posted before.

You are correct in that the problem persists when using only pyodbc, and as 
a result I've gone ahead and created an issue with that project in order to 
try and get at the source of this problem: 
https://github.com/mkleehammer/pyodbc/issues/694
 

>
> Apparently it *does* (or should) support async, as it is mentioned several 
> times in the RELEASE_NOTES shipped with the driver. I'm not sure if it 
> does so implicitly or explicitly though.
>
>
> unfortunately things are not that simple.   PostgreSQL for example 
> supports a non-blocking API.  However, you can't just use psycopg2 out of 
> the box and expect it to work, psycopg2 offers an explicit API for this 
> that has to be adapted, which you can see here: 
> http://initd.org/psycopg/docs/advanced.html#green-support   in order for 
> that API to work with eventlet, you need to use a special eventlet 
> adaptation form here:  https://pypi.org/project/psycogreen/
>
>
I believe psycogreen (or its intended behavior) has already been 
integrated into eventlet: 
https://github.com/eventlet/eventlet/blob/master/eventlet/support/psycopg2_patcher.py

which is probably the reason postgresql has been implicitly working as 
expected this whole time.

I agree that some additional hoops may need to be jumped in order for MSSQL 
to work as expected, but this hang on >= 15 busy connections is strange. 
Hopefully it is something that can be bandaid'ed in pyodbc until some kind 
of genuine async interface can be added to the project...



Re: [sqlalchemy] Hang when >= 15 MS SQL Server requests have been made

2019-12-29 Thread Brian Paterni
On Sunday, December 29, 2019 at 1:17:24 AM UTC-6, Mike Bayer wrote:
>
>
> I can't run the test app however 15 seems like your connection pool is set 
> up at its default size of 5 connections + 10 overflow, all connections are 
> being checked out, and none are being returned.
>

Hm, is the issue with running the app possibly something I could help with? 
I agree the magic 15 figure seems to be related to connection pool 
exhaustion. The funny thing is that the app still hangs on the 15th 
connection even if pool_size+overflow are expanded > 15, but not if the 
pool_size+overflow < 15 (?!)
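
(As an aside, a minimal sketch of where the default 5 + 10 = 15 figure quoted above comes from; the connection URL is illustrative.)

```
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:pass@my_dsn",  # illustrative URL
    pool_size=5,       # QueuePool default: persistent connections
    max_overflow=10,   # QueuePool default: burst connections
)                      # => at most 15 concurrent checkouts before blocking
```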
 

>
> while I strongly recommend against using eventlet with Python DBAPI 
> drivers or SQLAlchemy,  when using eventlet or gevent with SQLAlchemy you 
> need to ensure that a full monkeypatch of "threading" / "socket" and 
> everything is performed before anything else is imported.   SQLAlchemy's 
> pool makes use of a port of the Queue class which makes use of threading 
> mutexes all of which will wreck an eventlet application that did not 
> correctly monkeypatch these.
>
> I'm also not familiar with any driver for MSSQL that supports implicit or 
> explicit async.   SQLAlchemy only works with PyODBC or pymssql neither of 
> which have async support that I'm aware of, what driver are you using ?
>

The test app is careful to initiate eventlet monkey patching before any 
additional logic/imports (except for the `import os` required to check if 
an envvar is set). The problem persists even if the envvar comparison is 
taken out and eventlet monkeypatching becomes the absolute first action of 
the test app.

https://github.com/bpaterni/flask-app-simple-blocking-eventlet/blob/d279ef9b6210122ebd9c7bd17fc7e8f4346ca74d/app.py#L3
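
(A minimal sketch of the patch-before-import ordering being described; eventlet.monkey_patch() is the documented entry point, and the later import is illustrative.)

```
import eventlet
eventlet.monkey_patch()  # must run before any socket/threading users are imported

# only import libraries that use sockets or threads after patching
import sqlalchemy  # noqa: E402
```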

The driver I'm using to connect to SQL Server is the official ODBC driver 
from Microsoft (mssql+pyodbc):

https://docs.microsoft.com/en-us/sql/connect/odbc/microsoft-odbc-driver-for-sql-server?view=sql-server-ver15

Apparently it *does* (or should) support async, as it is mentioned several 
times in the RELEASE_NOTES shipped with the driver. I'm not sure if it 
does so implicitly or explicitly though.



[sqlalchemy] Re: Hang when >= 15 MS SQL Server requests have been made

2019-12-28 Thread Brian Paterni
Plus, if it's any help: the hang *does* seem to resolve itself after ~2 
hours. That, or it can be side-stepped by sending a SIGINT signal (ctrl-c) 
to the flask app when it is hung. The SIGINT seems to kill the 15th (hung) 
request and allows the app to continue processing other requests 
successfully.

On Saturday, December 28, 2019 at 10:12:13 PM UTC-6, Brian Paterni wrote:
>
> Hi,
>
> I seemingly have a problem with flask/socketio/eventlet/sqlalchemy + MSSQL 
> when >= 15 parallel requests have been made. I've built a test app:
>
> https://github.com/bpaterni/flask-app-simple-blocking-eventlet
>
> that can be used to reproduce the problem. It will hang if >= 15 parallel 
> requests have been made to the '/api/busy/mssql' endpoint.
>
> I'm not sure if the root cause of the problem is based in the SQL Server 
> ODBC Driver, sqlalchemy, or eventlet, but I've already paid the microsoft 
> support tax only to be told that there's insufficient evidence to indicate 
> the ODBC driver is at fault. So I thought I would post the issue here to 
> see if anybody would be able to help in pinpointing the code that is at 
> fault with this problem.
>
> Once the test app above is running and has a valid SQL server to query, 
> you should be able to reproduce the hang with
>
> seq 15 | parallel -j0 "curl -s localhost:5000/api/busy/mssql && echo {}"
>
> The hang seems to occur consistently on the 15th request. This happens 
> even when connection pool_size/max_overflow are adjusted away from their 
> respective default values which leads me to believe that exhausting the 
> connection pool is not the cause of the problem. Though there may be some 
> other reason behind the scenes for the hang occurring at the 15th 
> connection(?)
>
> Thanks very much for any help that can be provided in resolving this issue!
> :)
>



[sqlalchemy] Hang when >= 15 MS SQL Server requests have been made

2019-12-28 Thread Brian Paterni
Hi,

I seemingly have a problem with flask/socketio/eventlet/sqlalchemy + MSSQL 
when >= 15 parallel requests have been made. I've built a test app:

https://github.com/bpaterni/flask-app-simple-blocking-eventlet

that can be used to reproduce the problem. It will hang if >= 15 parallel 
requests have been made to the '/api/busy/mssql' endpoint.

I'm not sure if the root cause of the problem is based in the SQL Server 
ODBC Driver, sqlalchemy, or eventlet, but I've already paid the microsoft 
support tax only to be told that there's insufficient evidence to indicate 
the ODBC driver is at fault. So I thought I would post the issue here to 
see if anybody would be able to help in pinpointing the code that is at 
fault with this problem.

Once the test app above is running and has a valid SQL server to query, you 
should be able to reproduce the hang with

seq 15 | parallel -j0 "curl -s localhost:5000/api/busy/mssql && echo {}"

The hang seems to occur consistently on the 15th request. This happens even 
when connection pool_size/max_overflow are adjusted away from their 
respective default values which leads me to believe that exhausting the 
connection pool is not the cause of the problem. Though there may be some 
other reason behind the scenes for the hang occurring at the 15th 
connection(?)

Thanks very much for any help that can be provided in resolving this issue!
:)



Re: [sqlalchemy] Listing all the tables that a query uses

2019-06-26 Thread Brian Maissy
You and sqlalchemy never cease to impress. Thank you very much!
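
(A self-contained sketch of the visitor approach from the reply quoted below, plus the find_tables() utility it points at; assumes SQLAlchemy 1.3-era select([...]) syntax, with illustrative table names.)

```
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.sql import visitors
from sqlalchemy.sql.util import find_tables

metadata = MetaData()
a = Table("a", metadata, Column("id", Integer))
b = Table("b", metadata, Column("id", Integer), Column("a_id", Integer))

# table "b" appears only inside a subquery in the WHERE clause
stmt = select([a]).where(a.c.id.in_(select([b.c.a_id])))

# generic traversal finds every Table, subqueries included
tables = {obj for obj in visitors.iterate(stmt, {}) if isinstance(obj, Table)}
print(sorted(t.name for t in tables))  # ['a', 'b']

# the library utility referenced in the thread does similar work
print(sorted(t.name for t in find_tables(stmt, check_columns=True)))
```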

On Wed, Jun 26, 2019, 22:14 Mike Bayer  wrote:

>
>
> On Wed, Jun 26, 2019, at 1:51 PM, Brian Maissy wrote:
>
> Background: I have a bunch of materialized views (postgres) that are
> dependent on each other. When I want to change one of them, I drop cascade
> it (postgres does not provide a way to modify the query of an existing
> materialized view) and recreate it and all of its dependents (which are
> dropped from the cascade).
>
> I have corresponding sqlalchemy core tables and queries which represent
> the views, and a script which generates the necessary DDL to drop and
> recreate the views, but I got stuck trying to solve the common case of the
> problem of automatically detecting the dependencies between the views.
>
> So my question is this: given an sqlalchemy core query (a Select object),
> is there a simple way to list all of the tables that it uses?
>
>
> yes, use the visitors package:
>
> from sqlalchemy.sql import visitors
>
> tables = set()
> for obj in visitors.iterate(my_stmt, {}):
>     if isinstance(obj, Table):
>         tables.add(obj)
>
> there's some other ways to use this API as well as some utility functions
> that find tables such as
> https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/sql/util.py#L231
> which you can take a look at for examples.
>
>
>
>
>
> By simple I mean without traversing the entire object tree of the query
> and dealing with each type of element individually. I started out by
> implementing something along those lines only for the FROM clauses of the
> query (which was pretty simple, just need to expect tables, aliases, joins,
> compound selects, and maybe another case or two). But when I realized that
> the query might refer to a table in a subquery anywhere in the SELECT or
> WHERE clauses, the number of possible elements I would have to deal with
> jumped dramatically.
>
> I should note that the obvious answer of "define the dependencies
> manually" is sufficient in practice for my use case, but for interest and
> the challenge I wanted to see if there was a better way.
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/CAHhXk4htt9kpH9hvCQtLiWgBWQeJ6oDOg6PX0uepfQOBgc7%2B3g%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Listing all the tables that a query uses

2019-06-26 Thread Brian Maissy
Background: I have a bunch of materialized views (postgres) that are dependent 
on each other. When I want to change one of them, I drop cascade it (postgres 
does not provide a way to modify the query of an existing materialized view) 
and recreate it and all of its dependents (which are dropped from the cascade).

I have corresponding sqlalchemy core tables and queries which represent the 
views, and a script which generates the necessary DDL to drop and recreate the 
views, but I got stuck trying to solve the common case of the problem of 
automatically detecting the dependencies between the views.

So my question is this: given an sqlalchemy core query (a Select object), is 
there a simple way to list all of the tables that it uses?

By simple I mean without traversing the entire object tree of the query and 
dealing with each type of element individually. I started out by implementing 
something along those lines only for the FROM clauses of the query (which was 
pretty simple, just need to expect tables, aliases, joins, compound selects, 
and maybe another case or two). But when I realized that the query might refer 
to a table in a subquery anywhere in the SELECT or WHERE clauses, the number of 
possible elements I would have to deal with jumped dramatically.

I should note that the obvious answer of "define the dependencies manually" is 
sufficient in practice for my use case, but for interest and the challenge I 
wanted to see if there was a better way.



Re: [sqlalchemy] MySQL default driver for python3

2019-03-14 Thread 'Van Klaveren, Brian N.' via sqlalchemy
Hi Tomek,

You actually want mysqlclient, which is the maintained fork of mysqldb:

https://pypi.org/project/mysqlclient/
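For illustration, a minimal sketch of pinning the driver explicitly in the URL (credentials are placeholders), which sidesteps the default entirely:

from sqlalchemy import create_engine

# mysqlclient registers under the "mysqldb" driver name
engine = create_engine("mysql+mysqldb://user:password@localhost/mydb")

# or stay on the pure-Python driver regardless of the default:
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")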

Brian


On Mar 14, 2019, at 7:41 AM, Tomek Rożen <tomek.ro...@gmail.com> wrote:

Hi,

'mysqldb' is the default driver, however it does not support python3. Any 
chance to update the default driver for python3? Otherwise I have to always 
remember to use "mysql+pymysql://..." in DB URLs.

Alternatively, is there a way to override that default from application code?

Thanks,
Tomek





Re: [sqlalchemy] Re: Security Implications of Directly Executing Queries Compiled to Strings

2019-03-08 Thread 'Van Klaveren, Brian N.' via sqlalchemy
As has been said, if you are generating the SQL, you will be fine so long as 
you use parameters and no blind string interpolation.

This isn't really any different from any other API in that regard - obviously 
you don't want to allow a non-substituted first-name field of the form `'; DROP 
TABLE USERS;`.

If you are worried about security in terms of accepting SQL (which sounds like 
it's a different concern than yours), you will really want a full-fledged 
parser.

In astronomy, we have a long history of actually letting users execute SQL 
directly via a REST API (there's even a specification called the "Table Access 
Protocol"). There's some SQL parsers out there in Python, but not much that's 
easily reusable or full features. I had started porting the Presto parser to 
Python (github.com/slaclab/lacquer<http://github.com/slaclab/lacquer>)  but I 
ended up with something kind of buggy (someday I'll try to get back to it... 
Antlr4 has good Python support now) and we had switched that service to Java 
because we needed better threading and I switched to directly using the Presto 
SQL parser. If you want good SQL support, you could easily write a query 
validator with the Presto framework, for example as a CLI or a REST API, to 
verify there's only one statement and that the statements and operations are 
okay. This is probably out of scope for your needs, but it may be something to 
bring up to your team if you are worried about security more generally for this 
service.
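
For the simpler case of generating the SQL yourself, a minimal sketch of compiling a statement to a dialect-specific string with the parameters inlined (the table and values are illustrative):

from sqlalchemy import column, select, table
from sqlalchemy.dialects import postgresql

users = table("users", column("id"), column("name"))
stmt = select([users]).where(users.c.id == 5)

# literal_binds renders bound parameters inline; it covers simple
# types and should only be fed trusted values
compiled = stmt.compile(
    dialect=postgresql.dialect(),
    compile_kwargs={"literal_binds": True},
)
print(compiled)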

Brian


On Mar 8, 2019, at 9:56 AM, Walt <waltas...@gmail.com> wrote:



On Friday, March 8, 2019 at 11:32:01 AM UTC-6, Jonathan Vanasco wrote:
Do you control the HTTP API or is this someone else's system?

It's someone else's. I'm living in a world where folks have generated their SQL 
with regular old string processing & interpolation in Python because there's no 
database connection to work with. I'm hoping to replace this with sqlalchemy + 
.compile()-ing.

Does the API just execute the raw sql it is provided, and you're looking to 
generate that?  What you want to do is pipe a SqlAlchemy query into a function 
that can compile it into the right statement for your database.  Below is an 
example of PostgreSQL that worked on SqlAlchemy 1 (probably 1.3. too but I 
haven't tested):

Yep, just executes it raw. I've got the basic examples working, I just wanted 
to understand the limitations of using SQLAlchemy vs. DBAPI literal binding, 
particularly any security implications, before I start advocating for more 
people in my organization to start using this workflow.

The security concerns you brought up deal with how/what SqlAlchemy treats as 
trusted user input or not.  Most functions in SqlAlchemy will escape the values 
by default; very few will not, and those are documented with a dragon in the documentation.
 If you are using values for those items in Sql you need to filter them 
yourself.

Thanks, so the limitations the documentation is bringing up are more that 
SQLAlchemy doesn't know how to bind the same variety of types as does the DBAPI?

For 99.9% of use cases though, you can just compile your sql to the database's 
dialect and just send it without worry.

Hooray!

Thank you for your response!



Re: [sqlalchemy] set a query timeout on a per query basis

2018-11-12 Thread Brian Cherinka
Hmm. Ok.  I'm using a postgres database with the psycopg2 driver.  I'm 
aware of the `statement_timeout` option in `postgres` which I can pass into 
the psycopg2 `connect` method.  As far as I can tell there's not a way to 
pass that keyword in through SQLAlchemy after the db engine has been 
created.  Is that correct?  Does the `query.execution_options`, or 
something in session, accept that keyword?
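
For what it's worth, a sketch of two PostgreSQL-side options (the URL and the timeout value are placeholders, and `session` is the existing session from above):

from sqlalchemy import create_engine, text

# option 1: at engine creation, applied to every pooled connection
engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/db",
    connect_args={"options": "-c statement_timeout=60000"},
)

# option 2: per transaction, e.g. only on the web-app code path;
# SET LOCAL reverts automatically when the transaction ends
session.execute(text("SET LOCAL statement_timeout = 60000"))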

On Monday, November 12, 2018 at 3:15:23 PM UTC-5, Mike Bayer wrote:
>
> On Mon, Nov 12, 2018 at 2:08 PM Brian Cherinka  > wrote: 
> > 
> > What's the best way to set a timeout for specific queries?  I have a 
> custom query tool that uses SQLAlchemy to build and submit queries.  This 
> tool can be used in a local python session with a database.  I'm also using 
> it to allow queries in a Flask web-app. In general, I do not want to apply 
> a time limit to all queries, but I only want to apply a time limit of 1 
> minute to queries submitted through the web app.  Given a built SQLAlchemy 
> query and a db session instance, is there any way to set a timeout through 
> the query or session objects without having to recreate a database 
> connection? 
>
> There's no set way to do that at the SQLAlchemy level, this depends 
> very much on the database and the Python driver you are using and may 
> not be possible for every driver. 
>


[sqlalchemy] set a query timeout on a per query basis

2018-11-12 Thread Brian Cherinka
What's the best way to set a timeout for specific queries?  I have a custom 
query tool that uses SQLAlchemy to build and submit queries.  This tool can 
be used in a local python session with a database.  I'm also using it to 
allow queries in a Flask web-app. In general, I do not want to apply a time 
limit to all queries, but I only want to apply a time limit of 1 minute to 
queries submitted through the web app.  Given a built SQLAlchemy query and 
a db session instance, is there any way to set a timeout through the query 
or session objects without having to recreate a database connection?  



[sqlalchemy] Proper way to handle new 128 character identifier limit in Oracle >= 12.2

2018-11-08 Thread 'Van Klaveren, Brian N.' via sqlalchemy
Hi,

Oracle 12.2 now allows 128 character length identifiers:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/newft/new-features.html#GUID-64283AD6-0939-47B0-856E-5E9255D7246B

It'd be great if sqlalchemy knew about this, but what's the proper way of 
handling this? Just use the existing dialect and monkey patch 
max_identifier_length or create a new dialect?
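
As an untested sketch of the monkey-patch route (the URL is a placeholder); more recent SQLAlchemy releases also accept a max_identifier_length argument to create_engine(), which avoids the patch:

from sqlalchemy import create_engine

engine = create_engine("oracle+cx_oracle://user:pass@host/?service_name=svc")
# the dialect consults this limit when truncating generated identifiers
engine.dialect.max_identifier_length = 128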

Thanks,
Brian




[sqlalchemy] Custom type compilers interplay with with_variant

2018-11-06 Thread 'Van Klaveren, Brian N.' via sqlalchemy
Hi,

I want to create a custom type for TINYINT and DOUBLE.

I've defined them as custom types. 

I want to use with_variant for them, so that in sqlite they print out as 
TINYINT and DOUBLE. 

But I also want them to use the variants defined for other databases, like 
Oracle and Postgres.

The first part works fine, the second part, not so much.

What's really the best way to do this?

Code is attached below.

Brian


from sqlalchemy.ext.compiler import compiles
from sqlalchemy import SmallInteger, Float
from sqlalchemy import types
from sqlalchemy.dialects import mysql, oracle, postgresql
from sqlalchemy import create_engine, MetaData, Column, Boolean
from sqlalchemy.schema import Table

MYSQL = "mysql"
ORACLE = "oracle"
POSTGRES = "postgresql"
SQLITE = "sqlite"

class TINYINT(SmallInteger):
    """The non-standard TINYINT type."""
    __visit_name__ = 'TINYINT'


class DOUBLE(Float):
    """The non-standard DOUBLE type."""
    __visit_name__ = 'DOUBLE'

@compiles(TINYINT)
def compile_tinyint(type_, compiler, **kw):
    return "TINYINT"


@compiles(DOUBLE)
def compile_double(type_, compiler, **kw):
    return "DOUBLE"

byte_map = {
    MYSQL: mysql.TINYINT(),
    ORACLE: oracle.NUMBER(3),
    POSTGRES: postgresql.SMALLINT(),
}

double_map = {
    MYSQL: mysql.DOUBLE(),
    ORACLE: oracle.BINARY_DOUBLE(),
    POSTGRES: postgresql.DOUBLE_PRECISION(),
}

def byte(**kwargs):
    return _vary(TINYINT(), byte_map, kwargs)


def double(**kwargs):
    return _vary(DOUBLE(), double_map, kwargs)


def _vary(type_, variant_map, overrides):
    for dialect, variant in overrides.items():
        variant_map[dialect] = variant
    for dialect, variant in variant_map.items():
        # with_variant() returns a new Variant object rather than mutating
        # in place, so the result must be reassigned for it to take effect
        type_ = type_.with_variant(variant, dialect)
    return type_


metadata = MetaData()

t = Table("MyTable", metadata,
  Column("my_byte", byte()),
  Column("my_double", double())
  )


def metadata_dump(sql, *multiparams, **params):
    # print or write to log or file etc
    print(sql.compile(dialect=engine.dialect))

engine = create_engine("sqlite://", strategy='mock', executor=metadata_dump)
metadata.create_all(engine)


engine = create_engine("oracle://", strategy='mock', executor=metadata_dump)
metadata.create_all(engine)




[sqlalchemy] SQLAlchemy Foreign Key Issues with Db2

2018-10-12 Thread Brian T
I'm running SQLAlchemy 1.2.12.  When trying to autoload a DB2 table, it 
gives me a "no such table" error for a table referenced in a foreign key, 
even though that table exists.

import sqlalchemy

cnxstr = 'ibm_db_sa://xyzzy'
db2 = sqlalchemy.create_engine(cnxstr)
meta = sqlalchemy.MetaData(db2)
tbl = sqlalchemy.Table('child_table', meta, schema='plugh', autoload_with=db2)

NoSuchTableError: parent_table

Note that this code works fine -- SQLAlchemy is able to find the parent 
table in other contexts.

tbl = sqlalchemy.Table('parent_table', meta, schema='plugh', autoload_with=db2)

I'm also able to construct the tables by hand using the inspector.  It's 
only when I use Autoload that I run into this problem.  Is there a way to 
fix the autoload?
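
One workaround worth trying, as an untested sketch: reflect the foreign-key target before the child, so the parent is already present in the MetaData when the foreign key is resolved.

import sqlalchemy

db2 = sqlalchemy.create_engine('ibm_db_sa://xyzzy')
meta = sqlalchemy.MetaData(db2)
# reflect the FK target first...
parent = sqlalchemy.Table('parent_table', meta, schema='plugh', autoload_with=db2)
# ...then the table whose foreign key points at it
child = sqlalchemy.Table('child_table', meta, schema='plugh', autoload_with=db2)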



Re: [sqlalchemy] sqlalchemy session in same transaction as existing psycopg2 connection

2018-08-29 Thread 'Brian DeRocher' via sqlalchemy
Beautiful.  Skipping the psycopg2 initialization prevents that rollback and 
allows SQLAlchemy to use the same transaction.

FWIW, I don't think pool_reset_on_return=None is needed, at least for my 
purposes.

Thanks for the help and thanks for the advice about raw_connection().  I'll 
get that into place, at least for the testing suite.

Brian



Re: [sqlalchemy] sqlalchemy session in same transaction as existing psycopg2 connection

2018-08-28 Thread 'Brian DeRocher' via sqlalchemy


On Tuesday, August 28, 2018 at 2:51:47 PM UTC-4, Mike Bayer wrote:
>
> On Tue, Aug 28, 2018 at 11:32 AM, 'Brian DeRocher' via sqlalchemy 
> > wrote: 
> > Hey all, 
> > 
> > I'm writing some automated tests for some legacy python code using a 
> > psycopg2 connection.  I'd like to check data in the database by using 
> > SQLAlchemy.  I rollback the database transaction after each test in 
> > tearDown(). 
> > 
> > The problem is my SQLAlchemy connection doesn't see the database 
> updates. 
> > At first I thought they weren't using the same connection, but they are. 
> > I'm using create_engine(..., creator=get_conn). 
> > 
> > The problem appears to be that DefaultDialect is rolling back the 
> > transaction.  See sqlalchemy/engine/default.py line 167.  I had to 
> review PG 
> > logs to spot this as it's not logged. 
> > 
> > self.do_rollback(connection.connection) 
> > 
> > Is this line really needed? 
>
> yup (in general) 
>

I ask because it doesn't seem natural to me that the job of the 
DefaultDialect is to manage a transaction. 

>
> > What would it be rolling back? 
>
> all of the snapshots and/or locks that are accumulated by the 
> Postgresql database as commands on the connection proceed.   This 
> means literally versions of the database that don't exist outside of 
> the transaction, as well as simple things like table/ row locks.   See 
> https://www.postgresql.org/docs/9.1/static/mvcc.html .   As far as why 
> there is a transaction in the first place, the pep-249 DBAPI drivers 
> like psycopg2 are required for a connection to be in a transaction by 
> default, and "autocommit" systems are not standard per spec. 
>
> > Can it be 
> > avoided? 
>
> the rollback you are seeing is likely the connection-pool level 
> "reset", which is not strictly necessary if the code that makes use of 
> the pool is absolutely disciplined about ensuring transactions are 
> cleaned up before returning a connection to the pool, or as in your 
> case you are linking it to some larger state and don't actually want 
> the connection pool to be managing connection lifecycle.  You can set 
> this with pool_reset_on_return=None, see 
>
> http://docs.sqlalchemy.org/en/latest/core/engines.html?highlight=reset_on_return#sqlalchemy.create_engine.params.pool_reset_on_return,
>  
>
> however that docstring seems a little typo-ish and the link is broken 
> so the actual list of values you can see at 
>
> http://docs.sqlalchemy.org/en/latest/core/pooling.html?highlight=reset_on_return#sqlalchemy.pool.Pool.params.reset_on_return
>  
>
> let me fix the formatting on that docstring 
>

Unfortunately setting pool_reset_on_return=None did not help.  There's 
still a code path down to this do_rollback().  I inserted a 
traceback.format_stack() call near the rollback, and this is the stack at 
that point in time (with pool_reset_on_return=None):

DEBUG:root:  File "iso.py", line 72, in 
test_transaction()
  File "iso.py", line 66, in test_transaction
user = session.query(User).get(uid)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 864, in get
return self._get_impl(ident, loading.load_on_ident)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 897, in _get_impl
return fallback_fn(self, key)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py",
 
line 223, in load_on_ident
    return q.one()
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2814, in one
ret = self.one_or_none()
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2784, in one_or_none
ret = list(self)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2855, in __iter__
return self._execute_and_instances(context)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2876, in _execute_and_instances
close_with_result=True)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2885, in _get_bind_args
**kw
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 
line 2867, in _connection_from_session
conn = self.session.connection(**kw)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 
line 1013, in connection
execution_options=execution_options)
  File 
"/home/brian/fr/venv2/local/lib/python2.7/site-packages/sqlal

[sqlalchemy] sqlalchemy session in same transaction as existing psycopg2 connection

2018-08-28 Thread 'Brian DeRocher' via sqlalchemy
Hey all,

I'm writing some automated tests for some legacy python code using a 
psycopg2 connection.  I'd like to check data in the database by using 
SQLAlchemy.  I rollback the database transaction after each test in 
tearDown().

The problem is my SQLAlchemy connection doesn't see the database updates.  
At first I thought they weren't using the same connection, but they are.  
I'm using create_engine(..., creator=get_conn).

The problem appears to be that DefaultDialect is rolling back the 
transaction.  See sqlalchemy/engine/default.py line 167.  I had to review 
PG logs to spot this as it's not logged.

self.do_rollback(connection.connection)

Is this line really needed?  What would it be rolling back?  Can it be 
avoided?  When I disable this line of code, the transaction continues and 
sqlalchemy can see the updates from psyopg2.

I've attached a demo file.

Thanks,
Brian

import os
import logging
from dotenv import load_dotenv
import psycopg2
from psycopg2.extras import LoggingConnection
from sqlalchemy import Column, create_engine, Integer, MetaData, Text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base


dotenv_path = os.path.join(os.path.dirname(__file__), '.env')
load_dotenv(dotenv_path)

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

db_conn = None

def get_conn():
    global db_conn
    logging.debug('get_conn()')
    if db_conn is None:
        logging.debug('  theres no db_conn, must create one')
        conn_string = "dbname='{0}' host='localhost' port=5432 user='{1}' password='{2}'".format(
            os.getenv("DB_NAME"),
            os.getenv("DB_USER"),
            os.getenv("DB_PASSWORD"))
        db_conn = psycopg2.connect(conn_string, connection_factory=LoggingConnection)
        db_conn.initialize(logger)
    else:
        logging.debug('  using exising db_conn')
    return db_conn


BaseCi = declarative_base(metadata=MetaData(schema='config'))

class User(BaseCi):
    __tablename__ = 'user'
    user_id = Column(Integer, primary_key=True)
    user_status = Column(Text)


def legacy_psycopg_code(uid, status):
    conn = get_conn()
    cur = conn.cursor()
    query = 'UPDATE config.user SET user_status = %s WHERE user_id = %s '
    cur.execute(query, (status, uid,))

    # Confirm it's there.
    query = 'select user_status from config.user where user_id = %s'
    cur.execute(query, (uid,))
    uc = cur.fetchone()[0]
    logging.debug("user_status is %s" % uc)


def test_transaction():
    uid = 78
    status = 'ex-parrot'
    legacy_psycopg_code(uid, status)

    # Use existing connection so we can see inside the transaction.
    engine = create_engine('postgresql+psycopg2://', creator=get_conn)
    Session = sessionmaker(bind=engine)
    session = Session()

    user = session.query(User).get(uid)
    logging.debug(user.user_status)
    # assertEquals(user.user_status, status)


if __name__ == "__main__":
    test_transaction()
DEBUG:root:get_conn()
DEBUG:root:  theres no db_conn, must create one
DEBUG:__main__:UPDATE config.user SET user_status = 'ex-parrot' WHERE user_id = 
78 
DEBUG:__main__:select user_status from config.user where user_id = 78
DEBUG:root:user_status is ex-parrot
DEBUG:root:get_conn()
DEBUG:root:  using exising db_conn
DEBUG:__main__:select version()
DEBUG:__main__:select current_schema()
DEBUG:__main__:show transaction isolation level
DEBUG:__main__:SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
DEBUG:__main__:SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
DEBUG:__main__:SELECT 'x' AS some_label
DEBUG:root:DefaultDialect::initialize() calls do_rollback() here.
<-
DEBUG:__main__:show standard_conforming_strings
DEBUG:__main__:SELECT config."user".user_id AS config_user_user_id, 
config."user".user_status AS config_user_user_status 
FROM config."user" 
WHERE config."user".user_id = 78
DEBUG:root:Active


[sqlalchemy] Re: accessing functions in the postgres public schema using sqlalchemy.func

2018-08-01 Thread Brian Cherinka

Yeah everything is working in psql just fine.   

select current_schema();
 current_schema

 public
(1 row)


Time: 1.226 ms
show search_path;
   search_path
-
 "$user", public
(1 row)

select add(2,4);
 add
-
   6
(1 row)
 

But you jogged my memory and think I found the problem.  I dug back through 
my code and found this buried in it

def clearSearchPathCallback(dbapi_con, connection_record):
    '''
    When creating relationships across schema, SQLAlchemy
    has problems when you explicitly declare the schema in
    ModelClasses and it is found in search_path.

    The solution is to set the search_path to "$user" for
    the life of any connection to the database. Since there
    is no (or shouldn't be!) schema with the same name
    as the user, this effectively makes it blank.

    This callback function is called for every database connection.

    For the full details of this issue, see:
    http://groups.google.com/group/sqlalchemy/browse_thread/thread/88b5cc5c12246220

    dbapi_con - type: psycopg2._psycopg.connection
    connection_record - type: sqlalchemy.pool._ConnectionRecord
    '''
    cursor = dbapi_con.cursor()
    cursor.execute('SET search_path TO "$user",functions')
    dbapi_con.commit()


listen(sqlalchemy.pool.Pool, 'connect', clearSearchPathCallback)

which shows that I set the search_path to exclude public.  Changing that 
line to 'SET search_path TO "$user",functions, public' fixes everything in 
SQLAlchemy.  It's been a while since I've looked at my code that sets up the 
DatabaseConnection.  I must have had an original reason to exclude public 
but it doesn't seem relevant anymore.  Thanks for your help.
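
One more option for reference: func supports dotted prefixes, so the schema can be written explicitly and the search_path left alone; a sketch using the add() function from the post (session as above):

from sqlalchemy import func

session.query(func.public.add(2, 4)).all()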


On Wednesday, August 1, 2018 at 2:18:55 PM UTC-4, Brian Cherinka wrote:
>
> Hi,
>
> What's the best way to access functions that live in schema='public' in 
> the postgres databases?  Some postgresql extensions install functions in 
> the public schema that I would like accessible via *sqlachemy.func,* however 
> I get an error when attempting to call them.   Am I missing something when 
> setting up my Base Classes?  Maybe regarding the `search_path`?  I'm 
> defining my models and tables with a Declarative Base.  My 
> default_schema_name is set to 'public'.   I've read through this page, 
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path, but 
> it's not clear where I should be setting my search_path to ensure it 
> includes the public schema.  
>
> As an example, I have a test "add" function in the 'public' schema , which 
> crashes presumably because it cannot find the function definition
>
> session.query(func.add(2,4)).all()
>
> ProgrammingError: (psycopg2.ProgrammingError) function add(integer, 
> integer) does not exist
> LINE 1: SELECT add(2, 4) AS add_1
>^
> HINT:  No function matches the given name and argument types. You might 
> need to add explicit type casts.
>  [SQL: 'SELECT add(%(add_2)s, %(add_3)s) AS add_1'] [parameters: {'add_3': 
> 4, 'add_2': 2}]
>
>
>  and a test "newadd" function defined in an explicit schema called 
> "functions".  This seems to automatically get reflected and mapped.  And 
> works perfectly.  
>
> session.query(func.newadd(2,4)).all()
>
> [(6)]
>
> One solution is to install the postgres extension into the functions 
> schema, but this kind of breaks the usage within postgres itself.  I have 
> to always explicity set search_path='functions' in order to use them.  So 
> it's not ideal.  Ideally, I'd like *sqlachemy.func* to understand 
> functions that live either in the "functions" or "public" schema.  Any 
> ideas on how to fix this? 
>
> Cheers, Brian
>



[sqlalchemy] accessing functions in the postgres public schema using sqlalchemy.func

2018-08-01 Thread Brian Cherinka
Hi,

What's the best way to access functions that live in schema='public' in 
the postgres databases?  Some postgresql extensions install functions in 
the public schema that I would like accessible via *sqlalchemy.func*, however 
I get an error when attempting to call them.   Am I missing something when 
setting up my Base Classes?  Maybe regarding the `search_path`?  I'm 
defining my models and tables with a Declarative Base.  My 
default_schema_name is set to 'public'.   I've read through this page, 
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path, but 
it's not clear where I should be setting my search_path to ensure it 
includes the public schema.  

As an example, I have a test "add" function in the 'public' schema, which 
crashes presumably because it cannot find the function definition

session.query(func.add(2,4)).all()

ProgrammingError: (psycopg2.ProgrammingError) function add(integer, integer) 
does not exist
LINE 1: SELECT add(2, 4) AS add_1
   ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.
 [SQL: 'SELECT add(%(add_2)s, %(add_3)s) AS add_1'] [parameters: {'add_3': 4
, 'add_2': 2}]


 and a test "newadd" function defined in an explicit schema called 
"functions".  This seems to automatically get reflected and mapped.  And 
works perfectly.  

session.query(func.newadd(2,4)).all()

[(6)]

One solution is to install the postgres extension into the functions 
schema, but this kind of breaks the usage within postgres itself.  I have 
to always explicitly set search_path='functions' in order to use them.  So 
it's not ideal.  Ideally, I'd like *sqlalchemy.func* to understand functions 
that live either in the "functions" or "public" schema.  Any ideas on how 
to fix this? 

Cheers, Brian



[sqlalchemy] Is FLOAT(53) the best way to guarantee a double-precision generic column type

2018-05-02 Thread Van Klaveren, Brian N.
I'm trying to work on a universal table generator based on in-memory table 
objects. The code is based on the work from the pandas to_sql.

I'll be targeting Oracle, Postgres, MySQL, and SQLite for sure.

It seems like making sure to use Float(53) is the best way to guarantee that a 
column will be generated with a double-precision floating-point type in all of 
these database backends without introspecting the engine at runtime.

Pandas does that here:
https://github.com/pandas-dev/pandas/blob/master/pandas/io/sql.py#L879

Is this generally true?

Also, should I use the generic Float type, or the SQL FLOAT type? It doesn't 
seem like there's a huge functional difference.
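
A quick way to eyeball what Float(53) renders to on each backend, as a sketch:

from sqlalchemy import Float
from sqlalchemy.dialects import mysql, oracle, postgresql, sqlite

double_col = Float(53)
for dialect in (postgresql.dialect(), mysql.dialect(),
                oracle.dialect(), sqlite.dialect()):
    # prints the DDL type string each backend would receive
    print(dialect.name, double_col.compile(dialect=dialect))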

Brian



[sqlalchemy] Is it safe to re-use a closed session?

2018-04-05 Thread 'Brian Candler' via sqlalchemy
The documentation <http://docs.sqlalchemy.org/en/latest/orm/session.html> 
doesn't seem to be 100% clear on this point: if you close() a session, can 
you continue to use it as if it were a fresh session?  (Unlike say a closed 
filehandle)

In other words, in a non-threaded application, is this a valid pattern?

Session = sessionmaker(bind=engine)
session = Session()

def my_request():
    # Each request uses the same session object but cleans it up at the end
    try:
        ...
    finally:
        session.close()

As opposed to:

def my_request():
    # Each request uses a separate session object
    session = Session()
    try:
        ...
    finally:
        session.close()

The reason for asking is that there are times where it would be more 
convenient to create a single session up-front and use it where required, 
closing at the end of each unit of work, than have to pass around a new 
session through multiple levels of function calls.

In particular, flask-sqlalchemy wants to create a single session (albeit a 
scoped session).  Hence if I want to use the same models in non-Flask code, 
I need to reference that session and just close it at the end of each unit 
of work, rather than create a fresh session each time.

Thanks,

Brian.



feedback request on support for sequences with --autogenerate

2018-01-08 Thread Brian DeRocher
[23:13]  I just started using Alembic/SA in a project that had a well 
established database.
[23:13]  I'm so glad there's support for relationships.  Thanks!
[23:13]  But there was no support for sequences, when it came to 
--autogen.
[23:14]  So I took a stab at adding that and I got the happy path 
working.
[23:14]  I'd like someone to take a look at my patches and give me some 
feedback.
[23:15]  So what I have now are forks of SA and Alembic.
[23:16]  I'm not too familiar with Bitbucket.  What's the next step?

# Concerns

So some of the issues I have are the following:

* There's a lot of support in the codebase for schemas.  I just want to make 
sure I got it right with regard to metadata.
* I want to make sure that these sequences can be used independent of any 
column.
* I want to make sure that column server_default works correctly (see the sketch below).
* I need to make sure downgrade() / drop sequence is working.  Not to mention 
ALTER SEQUENCE.
* I need to make sure dropping a sequence will remove it as a default from any 
column that uses it.

Surely I'm forgetting many things, so please fill me in.
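
For the standalone-sequence and server_default concerns above, a minimal sketch of the shape SQLAlchemy already supports (the names are illustrative):

from sqlalchemy import Column, Integer, MetaData, Sequence, Table

metadata = MetaData()
# a standalone sequence, independent of any one column
id_seq = Sequence('user_id_seq', metadata=metadata)

users = Table(
    'users', metadata,
    Column('id', Integer, id_seq,
           server_default=id_seq.next_value(), primary_key=True),
)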

# Transactions in PG

One thing I've noticed is that calls to sequences in PostgreSQL don't roll back 
in a transaction.  I'm not sure if that causes an issue for alembic.

# Testing

I'm new to Python.  I've run automated tests using python setup.py test.  Is 
there any way to run a subset of tests, or label tests with @WIP to just run 
those?  And is it me, or are there a lot of failures?  Is there any way someone 
can create the tests, so I know I got this working correctly?  Also, since 
sqlite doesn't support sequences, how do I get automated testing to use PG?

A pull request is coming soon.

Thanks,
Brian

-- 
Brian DeRocher

Noblis | noblis.org | ☎ 703.610.1589 | ✉ brian.deroc...@noblis.org




Re: [sqlalchemy] create_engine returns a function, not a proper object

2017-08-19 Thread Brian Van Klaveren
Your code is buggy. You assigned the create_engine function object itself, and 
the parenthesized parameters on the following line form a tuple that is 
evaluated and immediately discarded.
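
A corrected sketch of the snippet quoted below, using the identifiers from the original post - the call parentheses must attach directly to create_engine:

self.__dbEngine = sqlalchemy.create_engine("mysql://%s:%s@%s:%s/%s" %
    (self.__dbConnectParams[DB_SERVER_TAG][USER_TAG],
     self.__dbConnectParams[DB_SERVER_TAG][PASSWORD_TAG],
     self.__dbConnectParams[DB_SERVER_TAG][HOST_ADDRESS_TAG],
     self.__dbConnectParams[DB_SERVER_TAG][PORT_TAG],
     self.__dbConnectParams[DB_SERVER_TAG][DB_TAG]))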

> On Aug 19, 2017, at 10:35 PM, Boris Epstein  wrote:
> 
> Hello all,
> 
> My call to create_engine which used to work at one point now return a 
> function object.
> 
> The code:
> 
> if self.__dbConnectParams[DB_SERVER_TAG] is not None:
> self.__dbEngine = sqlalchemy.create_engine
> ("mysql://%s:%s@%s:%s/%s" %
>  (self.__dbConnectParams[DB_SERVER_TAG][USER_TAG],
>   self.__dbConnectParams[DB_SERVER_TAG][PASSWORD_TAG],
>   self.__dbConnectParams[DB_SERVER_TAG][HOST_ADDRESS_TAG],
>   self.__dbConnectParams[DB_SERVER_TAG][PORT_TAG],
>   self.__dbConnectParams[DB_SERVER_TAG][DB_TAG]))
> 
> print "DEBUG: pt 2: " + str(self.__dbEngine)
> 
> the resulting variable:
> 
> DEBUG: pt 2: <function create_engine at 0x...>
> 
> Can anyone explain this? How does it happen? How do I fix that?
> 
> Thanks.
> 
> Boris.


[sqlalchemy] Adding a relationship for specific rows based on foreign value

2017-06-19 Thread Van Klaveren, Brian N.
Hi,

I want to represent a One to Many relationship with an additional default value 
that depends on a value in the foreign table.

I think what I want is something like the following:


class UserDatabase(Base):
    db_id = Column(Integer, primary_key=True)
    repo_id = Column(Integer, ForeignKey("UserDatabase.db_id"), nullable=True)
    name = Column(String(128))
    description = Column(Text)
    conn_host = Column(String(128))
    conn_port = Column(Integer)
    schemas = relationship("UserDatabaseSchema", lazy="dynamic")
    # Where is_default_schema == True
    default_schema = relationship("UserDatabaseSchema", primaryjoin=???,
                                  lazy="dynamic")

class UserDatabaseSchema(Base):
    schema_id = Column(Integer, primary_key=True)
    db_id = Column(Integer, ForeignKey("UserDatabase.db_id"))
    name = Column(String(128))
    description = Column(Text)
    is_default_schema = Column(Boolean)


but I'm not sure if primaryjoin is the proper argument for relationship and, if 
it is, what the expression should be. Or is this something that's best handled 
a different way?
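
As an untested sketch, one way to fill in that primaryjoin (assuming the table names behind the ForeignKey strings above) is a string expression plus viewonly:

default_schema = relationship(
    "UserDatabaseSchema",
    primaryjoin="and_(UserDatabase.db_id == UserDatabaseSchema.db_id, "
                "UserDatabaseSchema.is_default_schema == True)",
    uselist=False,
    viewonly=True,
)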


Thanks,
Brian



[sqlalchemy] When is session.close required?

2017-05-23 Thread 'Brian Candler' via sqlalchemy
Hello,

I have a question about when "close" should or should not be called on a 
session.

At http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html it 
says:

"When the transactional state is completed after a rollback or commit, the 
Session releases all Transaction and Connection resources, and goes back to 
the “begin” state, which will again invoke new Connection and Transaction 
objects as new requests to emit SQL statements are received."

>From this description, given that the session releases its resources, it 
seems that there's no need to call "close" explicitly on the session, 
whether or not you want to re-use the session object or not.  There is also 
some example code, which doesn't invoke session.close().  

session = Session()
try:
    ...
    # commit.  The pending changes above
    # are flushed via flush(), the Transaction
    # is committed, the Connection object closed
    # and discarded, the underlying DBAPI connection
    # returned to the connection pool.
    session.commit()
except:
    # on rollback, the same closure of state
    # as that of commit proceeds.
    session.rollback()
    raise


However, under 
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
 
there's a different example, this time explicitly calling close() on the 
session:

def run_my_program():
    session = Session()
    try:
        ThingOne().go(session)
        ThingTwo().go(session)

        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()


So my question is, what does session.close() do that commit/rollback does 
not?

It's also not entirely clear to me if a session object can be reused after 
it has been closed. 
 At 
http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.close
 
it says:

"If this session were created with autocommit=False, a new transaction is 
immediately begun. Note that this new transaction does not use any 
connection resources until they are first needed."

So it sounds to me like a closed session *can* be re-used.  Is that correct?

Related to this is calling scoped_session.remove. 
 At 
http://docs.sqlalchemy.org/en/latest/orm/contextual.html#unitofwork-contextual 
it says:

web request ends  -> # the registry is instructed to
                     # remove the Session
                     Session.remove()


As I understand it, this calls "close" on the underlying session *and* 
removes it from the registry, so you get a fresh Session next time. 
 However if the framework already does a commit/rollback, why not just 
allow the registry to retain the same session object?

Thanks,

Brian.



[sqlalchemy] Additional error handling using MySQLdb.connection extended API

2017-05-18 Thread Van Klaveren, Brian N.
Hi,

I need access to MySQLdb.connection.error() and MySQLdb.connection.errno(), as 
we have a database which is throwing custom error codes that are higher than 
CR_MAX_ERROR (for reference: 
https://github.com/PyMySQL/mysqlclient-python/blob/master/_mysql.c#L124)

My actual code looks like something along these lines:


from sqlalchemy import create_engine, text
engine = create_engine(url)
sql = "SELECT... "

try:
    conn = engine.connect()
    rows = conn.execute(text(sql))
except Exception as e:
    if hasattr(conn.connection, "error"):
        print(conn.connection.errno())
        print(conn.connection.error())

I've also tried:

err_conn = conn._branch()
try:
    err_conn = engine.connect()
    rows = err_conn.execute(text(sql))
except Exception as e:
    if hasattr(err_conn.connection, "error"):
        print(err_conn.connection.errno())
        print(err_conn.connection.error())


with no luck.

I'm not sure how to write a minimal test that just uses vanilla MySQL, because 
I'm not sure how to trigger an error with a errno > CR_MAX_ERROR other than in 
our system.

I'm losing the information somehow, maybe via a connection being closed or 
reopened? I'm not really sure. What might be a way to handle this without 
just using raw_connection or writing a new dialect?
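
For reference, the raw_connection() fallback mentioned above might look like this sketch (untested against a server that emits codes above CR_MAX_ERROR; `engine` and `sql` as above):

raw = engine.raw_connection()
try:
    cursor = raw.cursor()
    cursor.execute(sql)
except Exception:
    # errno()/error() live on the underlying MySQLdb connection
    print(raw.connection.errno())
    print(raw.connection.error())
finally:
    raw.close()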

Thanks,
Brian



Re: [sqlalchemy] Issue with Sqlalchemy and inserting array of jsonb

2017-01-05 Thread Brian Clark
Great thanks!!

On Thursday, January 5, 2017 at 7:06:26 AM UTC-8, Mike Bayer wrote:
>
> I will add a new section for this but we need to use a workaround 
> similar to 
>
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#using-enum-with-array
>  
> (but simpler).  Just the CAST is needed, see below. 
>
> from sqlalchemy import * 
> from sqlalchemy.orm import * 
> from sqlalchemy.ext.declarative import declarative_base 
> from sqlalchemy.dialects.postgresql import ARRAY, JSONB 
>
> Base = declarative_base() 
>
>
> class CastingArray(ARRAY):
>
>     def bind_expression(self, bindvalue):
>         return cast(bindvalue, self)
>
>
> class A(Base):
>     __tablename__ = 'a'
>     id = Column(Integer, primary_key=True)
>     data = Column(CastingArray(JSONB))
>
> e = create_engine("postgresql://scott:tiger@localhost/test", echo='debug') 
> Base.metadata.drop_all(e) 
> Base.metadata.create_all(e) 
>
> s = Session(e) 
>
> s.add(A(data=[{"foo": "bar"}, {"bat": "hoho"}])) 
>
> s.commit() 
>
> a1 = s.query(A).first() 
> print a1.data 
>
>
>
> On 01/05/2017 02:00 AM, Brian Clark wrote: 
> > So i'm trying to insert an array of jsonb values into my database but I 
> > can't seem to format it right, here's my code: 
> > 
> > updated_old_passwords = [] 
> > updated_old_passwords.append({"index": 1, "password": 
> hashed_password}) 
> > user.old_passwords = updated_old_passwords 
> > user.last_password_reset = datetime.datetime.utcnow() 
> > db.session.commit() 
> > 
> > And here's the error: 
> > 
> > ProgrammingError: (psycopg2.ProgrammingError) column "old_passwords" 
> > is of type jsonb[] but expression is of type text[] 
> > LINE 1: ...-01-05T06:18:24.992968'::timestamp, 
> > old_passwords=ARRAY['"\"... 
> >  ^ 
> > HINT:  You will need to rewrite or cast the expression. 
> >  [SQL: 'UPDATE users SET password=%(password)s, 
> > last_password_reset=%(last_password_reset)s, 
> > old_passwords=%(old_passwords)s WHERE users.id = %(users_id)s'] 
> > [parameters: {'users_id': 1, 'password': 
> > 
> '$6$rounds=656000$b.LVoVb7T0WNbT.n$l9uUb1a1qk2Z5ugfpI7B.3D02sUVqhES5VhM1TvwUnMd/iZZL3gn4/zExB47/ZQYPcTMRxO1iaL4/yjXda2.P1',
>  
>
> > 'last_password_reset': datetime.datetime(2017, 1, 5, 6, 18, 24, 992968), 
> > 'old_passwords': ['"\\"{\\"index\\": 1, \\"password\\": 
> > hashed_password}\\""']}] 
> > 
> > Any idea how I format my insert for this to work? 
> > 
> > Here's my db table 
> > 
> > from sqlalchemy.dialects.postgresql import JSONB, ARRAY 
> > 
> > class User(db.Model): 
> > __tablename__ = 'users' 
> > id = db.Column(db.Integer, primary_key = True) 
> > email = db.Column(db.String(255), index = True) 
> > password = db.Column(db.String(255)) 
> > last_password_reset = db.Column(db.DateTime()) 
> > old_passwords = db.Column(ARRAY(JSONB)) 
> > 
> > 
> > I also tried this: 
> > 
> > updated_old_passwords.append(cast('{"index": 1, "password": 
> > hashed_password}', JSONB)) 
> > 
> > but got the error 
> > 
> > StatementError: (exceptions.TypeError) <sqlalchemy.sql.elements.Cast object at 0x10f3ed150> is not JSON serializable [SQL: u'UPDATE users SET 
> > password=%(password)s, last_password_reset=%(last_password_reset)s, 
> > old_passwords=%(old_passwords)s WHERE users.id = %(users_id)s'] 
> > [parameters: [{'users_id': 1, 'password': 
> > 
> '$6$rounds=656000$WYOiWMAYDSag9QIX$YSDtZle6Bd7Kz.cy7ejWq1NqgME.xUPiDHfV31FKobGu2umxoX34.ZP2MrUDxyym0X4fyzZNEIO//yS6UTPoC.',
>  
>
> > 'last_password_reset': datetime.datetime(2017, 1, 5, 6, 26, 35, 610703), 
> > 'old_passwords': [ 0x10f3ed150>]}]] 
> > 
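
For reference, the key piece in the reply above is bind_expression, which 
wraps the bound parameter in a CAST at statement-compile time. A minimal 
sketch of how to see the effect, assuming the A mapping from the reply:

from sqlalchemy.dialects import postgresql

# compiling the INSERT against the postgresql dialect shows the CAST that
# bind_expression adds; the statement should render roughly as
#   INSERT INTO a (data) VALUES (CAST(%(data)s AS JSONB[]))
stmt = A.__table__.insert().values(data=[{"foo": "bar"}])
print(stmt.compile(dialect=postgresql.dialect()))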

[sqlalchemy] Issue with Sqlalchemy and inserting array of jsonb

2017-01-04 Thread Brian Clark
So I'm trying to insert an array of jsonb values into my database, but I 
can't seem to format it right. Here's my code:

updated_old_passwords = []
updated_old_passwords.append({"index": 1, "password": hashed_password})
user.old_passwords = updated_old_passwords
user.last_password_reset = datetime.datetime.utcnow()
db.session.commit()

And here's the error:

ProgrammingError: (psycopg2.ProgrammingError) column "old_passwords" is 
of type jsonb[] but expression is of type text[]
LINE 1: ...-01-05T06:18:24.992968'::timestamp, 
old_passwords=ARRAY['"\"...
 ^
HINT:  You will need to rewrite or cast the expression.
 [SQL: 'UPDATE users SET password=%(password)s, 
last_password_reset=%(last_password_reset)s, 
old_passwords=%(old_passwords)s WHERE users.id = %(users_id)s'] 
[parameters: {'users_id': 1, 'password': 
'$6$rounds=656000$b.LVoVb7T0WNbT.n$l9uUb1a1qk2Z5ugfpI7B.3D02sUVqhES5VhM1TvwUnMd/iZZL3gn4/zExB47/ZQYPcTMRxO1iaL4/yjXda2.P1',
 
'last_password_reset': datetime.datetime(2017, 1, 5, 6, 18, 24, 992968), 
'old_passwords': ['"\\"{\\"index\\": 1, \\"password\\": 
hashed_password}\\""']}]

Any idea how I format my insert for this to work?

Here's my db table

from sqlalchemy.dialects.postgresql import JSONB, ARRAY

class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key = True)
email = db.Column(db.String(255), index = True)
password = db.Column(db.String(255))
last_password_reset = db.Column(db.DateTime())
old_passwords = db.Column(ARRAY(JSONB))


I also tried this:

updated_old_passwords.append(cast('{"index": 1, "password": 
hashed_password}', JSONB))

but got the error

StatementError: (exceptions.TypeError) <sqlalchemy.sql.elements.Cast object 
at 0x10f3ed150> is not JSON serializable [SQL: u'UPDATE users SET 
password=%(password)s, last_password_reset=%(last_password_reset)s, 
old_passwords=%(old_passwords)s WHERE users.id = %(users_id)s'] 
[parameters: [{'users_id': 1, 'password': 
'$6$rounds=656000$WYOiWMAYDSag9QIX$YSDtZle6Bd7Kz.cy7ejWq1NqgME.xUPiDHfV31FKobGu2umxoX34.ZP2MrUDxyym0X4fyzZNEIO//yS6UTPoC.',
 
'last_password_reset': datetime.datetime(2017, 1, 5, 6, 26, 35, 610703), 
'old_passwords': [<sqlalchemy.sql.elements.Cast object at 0x10f3ed150>]}]]



[sqlalchemy] Re: Possible to bulk update with different values?

2016-12-23 Thread Brian Clark
Got it, thanks!

On Friday, December 23, 2016 at 12:11:12 PM UTC-8, Brian Clark wrote:
>
> Is there an update equivalent of this insert statement?
>
> inserts = [{"name": "jon", "age": 20"}, {"name": "ashley", "age": 22"}]
> session.execute(
> People.__table__.insert().values(
> inserts
> )
> )
>
> I have this right now but it's still slower than I'd like because it's 
> using executemany, hoping to have it be one big query
>
> updated_people = [{"b_id": 1, "b_name": "jon", "b_age": 21"}, {"b_id": 2, 
> "b_name": "ashley", "b_age": 25"}]
> stmt = People.__table__.update().\
>   where(People.id == bindparam('b_id')).\
>   values(name=bindparam('b_name'), age=bindparam('b_age'))
> session.execute(stmt, updated_people)
>
> Thanks!
>



[sqlalchemy] Re: Possible to bulk update with different values?

2016-12-23 Thread Brian Clark
Seems doable in raw SQL (using postgresql btw)

http://stackoverflow.com/questions/18797608/update-multiple-rows-in-same-query-using-postgresql
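
For reference, a rough sketch of what that single-statement approach looks 
like from SQLAlchemy, written as a textual statement with bound parameters 
(the people table and its columns are illustrative, not from this thread):

from sqlalchemy import text

# one UPDATE joined against an inline VALUES list; depending on the
# column types, Postgres may need explicit casts such as (:age1)::int
stmt = text("""
    UPDATE people AS p
    SET name = v.name, age = v.age
    FROM (VALUES (:id1, :name1, :age1),
                 (:id2, :name2, :age2)) AS v (id, name, age)
    WHERE p.id = v.id
""")
session.execute(stmt, {"id1": 1, "name1": "jon", "age1": 21,
                       "id2": 2, "name2": "ashley", "age2": 25})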

On Friday, December 23, 2016 at 12:11:12 PM UTC-8, Brian Clark wrote:
>
> Is there an update equivalent of this insert statement?
>
> inserts = [{"name": "jon", "age": 20"}, {"name": "ashley", "age": 22"}]
> session.execute(
> People.__table__.insert().values(
> inserts
> )
> )
>
> I have this right now but it's still slower than I'd like because it's 
> using executemany, hoping to have it be one big query
>
> updated_people = [{"b_id": 1, "b_name": "jon", "b_age": 21"}, {"b_id": 2, 
> "b_name": "ashley", "b_age": 25"}]
> stmt = People.__table__.update().\
>   where(People.id == bindparam('b_id')).\
>   values(name=bindparam('b_name'), age=bindparam('b_age'))
> session.execute(stmt, updated_people)
>
> Thanks!
>



[sqlalchemy] Possible to bulk update with different values?

2016-12-23 Thread Brian Clark
Is there an update equivalent of this insert statement?

inserts = [{"name": "jon", "age": 20"}, {"name": "ashley", "age": 22"}]
session.execute(
People.__table__.insert().values(
inserts
)
)

I have this right now but it's still slower than I'd like because it's 
using executemany, hoping to have it be one big query

updated_people = [{"b_id": 1, "b_name": "jon", "b_age": 21"}, {"b_id": 2, 
"b_name": "ashley", "b_age": 25"}]
stmt = People.__table__.update().\
  where(People.id == bindparam('b_id')).\
  values(name=bindparam('b_name'), age=bindparam('b_age'))
session.execute(stmt, updated_people)

Thanks!



[sqlalchemy] Re: Speed issue with bulk inserts

2016-12-23 Thread Brian Clark
And here's the objects

for key, unique_new_location in unique_locations_hash.iteritems():
    trial_location_inserts.append(
        TrialLocations(trial_id=current_trial.id,
                       location_id=unique_new_location['location_id'],
                       active=True,
                       status=unique_new_location['status']))


On Friday, December 23, 2016 at 12:25:40 AM UTC-8, Brian Clark wrote:
>
> So I'm having an issue with a very slow insert, I'm inserting 223 items 
> and it takes 20+ seconds to execute. Any advice on what I'm doing wrong and 
> why it would be so slow? Using Postgresql 9.4.8
>
> The line of code
>
> LOG_OUTPUT('==PRE BULK==', True)
> db_session.bulk_save_objects(trial_location_inserts)
> LOG_OUTPUT('==POST BULK==', True)
>
> And here's the log from sqlalchemy echo
>
> 2016-12-23 07:37:52.570: ==PRE BULK==
> 2016-12-22 23:37:52,572 INFO sqlalchemy.engine.base.Engine INSERT INTO 
> trial_locations (status, trial_id, location_id, active) VALUES (%(status)s, 
> %(trial_id)s, %(location_id)s, %(active)s)
> 2016-12-22 23:37:52,572 INFO sqlalchemy.engine.base.Engine ({'status': 
> u'Completed', 'active': True, 'location_id': 733, 'trial_id': 126625}, 
> {'status': u'Completed', 'active': True, 'location_id': 716, 'trial_id': 
> 126625}, {'status': u'Completed', 'active': True, 'location_id': 1033, 
> 'trial_id': 126625}, {'status': u'Completed', 'active': True, 
> 'location_id': 1548, 'trial_id': 126625}, {'status': u'Completed', 
> 'active': True, 'location_id': 1283, 'trial_id': 126625}, {'status': 
> u'Completed', 'active': True, 'location_id': 1556, 'trial_id': 126625}, 
> {'status': u'Completed', 'active': True, 'location_id': 4271, 'trial_id': 
> 126625}, {'status': u'Completed', 'active': True, 'location_id': 1567, 
> 'trial_id': 126625}  ... displaying 10 of 223 total bound parameter sets 
> ...  {'status': u'Completed', 'active': True, 'location_id': 1528, 
> 'trial_id': 126625}, {'status': u'Completed', 'active': True, 
> 'location_id': 1529, 'trial_id': 126625})
> 2016-12-23 07:38:14.270: ==POST BULK==
>
>
> Also for comparison I rewrote it in Sqlalchemy core 
>   
>
> LOG_OUTPUT('==PRE BULK==', True)
> engine.execute(
>   TrialLocations.__table__.insert().values(
> trial_location_core_inserts
>   )
> )
> # db_session.bulk_save_objects(trial_location_inserts)
> LOG_OUTPUT('==POST BULK==', True)
>
> and it ran in 0.028 seconds
>
> 2016-12-23 08:11:26.097: ==PRE BULK==
> ...
> 2016-12-23 08:11:27.025: ==POST BULK==
>
> I'd like to keep it in session for the sake of this being the end part of 
> a transaction and if the whole thing fails I want it all to rollback. Am I 
> doing something wrong? I don't need the data afterwards it just needs to be 
> saved
>
> Help appreciated!
>
>



[sqlalchemy] Speed issue with bulk inserts

2016-12-23 Thread Brian Clark
So I'm having an issue with a very slow insert: inserting 223 items takes 
20+ seconds to execute. Any advice on what I'm doing wrong and why it 
would be so slow? Using PostgreSQL 9.4.8.

The line of code

LOG_OUTPUT('==PRE BULK==', True)
db_session.bulk_save_objects(trial_location_inserts)
LOG_OUTPUT('==POST BULK==', True)

And here's the log from sqlalchemy echo

2016-12-23 07:37:52.570: ==PRE BULK==
2016-12-22 23:37:52,572 INFO sqlalchemy.engine.base.Engine INSERT INTO 
trial_locations (status, trial_id, location_id, active) VALUES (%(status)s, 
%(trial_id)s, %(location_id)s, %(active)s)
2016-12-22 23:37:52,572 INFO sqlalchemy.engine.base.Engine ({'status': 
u'Completed', 'active': True, 'location_id': 733, 'trial_id': 126625}, 
{'status': u'Completed', 'active': True, 'location_id': 716, 'trial_id': 
126625}, {'status': u'Completed', 'active': True, 'location_id': 1033, 
'trial_id': 126625}, {'status': u'Completed', 'active': True, 
'location_id': 1548, 'trial_id': 126625}, {'status': u'Completed', 
'active': True, 'location_id': 1283, 'trial_id': 126625}, {'status': 
u'Completed', 'active': True, 'location_id': 1556, 'trial_id': 126625}, 
{'status': u'Completed', 'active': True, 'location_id': 4271, 'trial_id': 
126625}, {'status': u'Completed', 'active': True, 'location_id': 1567, 
'trial_id': 126625}  ... displaying 10 of 223 total bound parameter sets 
...  {'status': u'Completed', 'active': True, 'location_id': 1528, 
'trial_id': 126625}, {'status': u'Completed', 'active': True, 
'location_id': 1529, 'trial_id': 126625})
2016-12-23 07:38:14.270: ==POST BULK==


Also for comparison I rewrote it in Sqlalchemy core 
  

LOG_OUTPUT('==PRE BULK==', True)
engine.execute(
  TrialLocations.__table__.insert().values(
trial_location_core_inserts
  )
)
# db_session.bulk_save_objects(trial_location_inserts)
LOG_OUTPUT('==POST BULK==', True)

and it ran in 0.028 seconds

2016-12-23 08:11:26.097: ==PRE BULK==
...
2016-12-23 08:11:27.025: ==POST BULK==

I'd like to keep it in the session, since this is the end part of a 
transaction, and if the whole thing fails I want it all to roll back. Am I 
doing something wrong? I don't need the data afterwards; it just needs to be 
saved.

Help appreciated!
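
One note for later readers: the core insert can join the same transaction 
by executing it through the session instead of the engine; a minimal 
sketch, using the names from this post:

# runs on the session's connection, so it participates in the surrounding
# transaction and rolls back with everything else
db_session.execute(
    TrialLocations.__table__.insert().values(trial_location_core_inserts)
)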



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-04 Thread Brian Cherinka

>
>
>
> I've yet to see an unambiguous statement of what "the raw SQL" is.   If 
> it is this: 
>
> select c.pk,c.mangaid,c.manga_target_pk, n.z, 
> (select (array_agg(unwave.restw))[0:5] as restwave from (select 
> (unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as 
> w) as unwave) 
> from mangadatadb.cube as c 
>  join mangasampledb.manga_target as m on m.pk=c.manga_target_pk 
>  join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk 
>  join mangasampledb.nsa as n on n.pk=t.nsa_pk 
>  join mangadatadb.wavelength as w on w.pk=c.wavelength_pk; 

 

> then that does not, and cannot, correspond to the Python code you are 
> sending.  the JOINs would not be inside of the @expression, you'd have 
> to write them out on the outside. 

 
Yeah, that was the SQL that would be typed into psql.  I know that the 
Python implementation is incorrect.  I've been trying to sort that out.   


I've provided a complete example of how to render SQL extremely similar 
> to what you want, featuring techniques such as correlate() as well as 
> how to nest the queries appropriately.  If at this point you aren't able 
> to manipulate the code to get what you want, then I'd advise not using a 
> hybrid for this query at all. It is extremely complicated in this 
> context, and unless you are deeply familiar with SQLAlchemy APIs, you 
> would just have a bunch of code that you can't effectively maintain. 
>

Ok..thanks for your help.  I appreciate it.  
 



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-04 Thread Brian Cherinka

Yeah, sorry about that.  Like I said, I don't normally build my classes 
this way, defining all the columns in the Base Class.  I define all my 
columns, primary and foreign keys, etc manually first, so my SQLA classes 
are minimally defined.  And I cobbled together pieces from my real code and 
your example to build that test code I sent you.  So I forgot some things. 
 I did say it may not be completely right, and you might have to hack it 
some.  I apologize.  Here is an attempt at cleaning it up. 

When I try to implement your @expression into my real code, it doesn't 
work.  I am getting an error about multiple results returned within the 
subquery. 

@restwave.expression
def restwave(cls):
    s = db.Session()
    restw = (func.unnest(Wavelength.wavelength) / (1 + sampledb.NSA.z)).label("restw")
    unwave = s.query(restw).select_from(Wavelength).correlate(sampledb.NSA).subquery("unwave")
    agg_unwave = s.query(func.array_agg(unwave.c.restw)).label("restwarr")
    joined = s.query(agg_unwave).select_from(
        Cube
    ).join(sampledb.MangaTarget, sampledb.MangaTargetToNSA,
           sampledb.NSA, Wavelength)
    return joined.as_scalar()

session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520').all()

ProgrammingError: (psycopg2.ProgrammingError) more than one row returned by 
a subquery used as an expression
 [SQL: 'SELECT (SELECT (SELECT array_agg(unwave.restw) AS array_agg_1 
\nFROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + 
mangasampledb.nsa.z) AS restw \nFROM mangadatadb.wavelength) AS unwave) AS 
restwarr \nFROM mangadatadb.cube JOIN mangasampledb.manga_target ON 
mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk JOIN 
mangasampledb.manga_target_to_nsa ON mangasampledb.manga_target.pk = 
mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangasampledb.nsa ON 
mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN 
mangadatadb.wavelength ON mangadatadb.wavelength.pk = 
mangadatadb.cube.wavelength_pk) AS anon_1 \nFROM mangadatadb.cube \nWHERE 
mangadatadb.cube.mangaid = %(mangaid_1)s'] [parameters: {'z_1': 1, 
'mangaid_1': '1-113520'}]


 I don't know if this is because I have multiple versions of the same 
object or not.  I tried adding in a selection on version, but the joins did 
not work out properly.  This NSA table does not need to be joined with the 
versions.  What I expect the above to return is exactly what the raw SQL 
returns: a list of the restwave array for each version of the object with 
id = '1-113520'.  I thought SQLA was designed precisely to let users do 
what I'm trying, i.e. construct complex columns involving multiple selects, 
without having to physically add a new column into the db, or write a new 
SQL function in the db to call?  

session.query(datadb.Cube.restwave).join(datadb.Cube, datadb.PipelineInfo, datadb.PipelineVersion).\
    filter(datadb.PipelineVersion == 'v1_5_1', datadb.Cube.mangaid == '1-113520').all()

InvalidRequestError: Could not find a FROM clause to join from.  Tried 
joining to , but got: 
Can't find any foreign key relationships between 'nsa' and 'cube'.
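
(A guess, not from the thread: when the only thing selected is the hybrid's 
scalar subquery, the Query has no Cube entity in its FROM list to join 
from. Spelling out the starting entity with select_from() is the usual 
workaround; a hypothetical sketch:)

session.query(datadb.Cube.restwave).\
    select_from(datadb.Cube).\
    join(datadb.PipelineInfo, datadb.PipelineVersion).\
    filter(datadb.Cube.mangaid == '1-113520').all()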


On Wednesday, August 3, 2016 at 2:22:43 PM UTC-4, Mike Bayer wrote:
>
> There is still much ambiguity here and inaccuracy (JOINs on the outside 
> or JOINs on the inside, the mappings have mistakes like foreign key to 
> "pk" but no "pk", mappings without primary keys, "autoload" makes no 
> sense as I don't have your tables, etc.), so I can only guess but 
> perhaps give you enough clues.   It is highly unusual to have a string 
> of four JOINs inside of a column-based subquery, but when you say 
> s.query(Cube.restwave), that's what that means here. 
>
>

import sqlalchemy
from sqlalchemy import ForeignKey, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, deferred, sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.dialects.postgresql import *
from sqlalchemy.types import Float, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy import create_engine

# import sqlite3
# conn = sqlite3.connect('/Users/Brian/Work/python/manga/test_sqla.db')
# c = conn.cursor()
# c.execute('''create table cube (pk integer primary key, mangaid text, wavelength_pk integer, target_pk integer);''')
# c.execute('''create table nsa (pk integer primary key, z real);''')
# c.execute('''create table target (pk i

Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Ok.  Yeah, I have been trying many different ways of getting results.  The 
raw SQL that I'm trying to recreate in SQLA is this (for the restwave 
column only), which works in postgresql.  The limit was only there to 
filter the results.  You can ignore that limit.

manga=# select c.pk,c.mangaid,c.manga_target_pk, n.z, (select 
(array_agg(unwave.restw))[0:5] as restwave from (select 
(unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as w) 
as unwave)
from mangadatadb.cube as c
join mangasampledb.manga_target as m on m.pk=c.manga_target_pk
join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk
join mangasampledb.nsa as n on n.pk=t.nsa_pk
join mangadatadb.wavelength as w on w.pk=c.wavelength_pk;
  pk   | mangaid | manga_target_pk |         z          | restwave
-------+---------+-----------------+--------------------+-----------------------------------------------------------------------------------------
 11211 | 1-22286 | 30678 | 0.099954180419445  | {3292.49709827422,3293.25529747001,3294.01371862107,3294.7723617274,3295.53100483373}
  7729 | 1-22286 | 30678 | 0.099954180419445  | {3292.49709827422,3293.25529747001,3294.01371862107,3294.7723617274,3295.53100483373}
 11209 | 1-22298 | 15026 | 0.0614774264395237 | {3411.84452637247,3412.63020900144,3413.41612163118,3414.2022642617,3414.98840689221}
  7727 | 1-22298 | 15026 | 0.0614774264395237 | {3411.84452637247,3412.63020900144,3413.41612163118,3414.2022642617,3414.98840689221}
 11219 | 1-22301 | 35    | 0.105152934789658  | {3277.00884941768,3277.76348196558,3278.51833542465,3279.27340979488,3280.02848416512}
(5 rows)

In SQLA,  this code returns something but it is the wrong array for the 
given Cube specified in the filter condition

In [20]: rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520')

In [21]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s

In [22]: rwave = rwquery.first()

In [23]: rwave[0][0:10]
Out[23]:
[3292.49709827422,
 3293.25529747001,
 3294.01371862107,
 3294.7723617274,
 3295.53100483373,
 3296.28986989532,
 3297.04895691218,
 3297.80826588431,
 3298.56779681171,
 3299.32732773911]


I do rwquery.first() here instead of .one() or .all() because in my 
database, I actually have different versions of the same object, 6 
versions, which I don't care about.  But that's fine, I can add additional 
filters later.  In the code setup I sent up, there is only one version of 
each cube object.   The above results gets produced with the @expression

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

    return restwave

I don't need any limits and I think I need 4 selects, the one in the query 
I run explicitly s.query(Cube.restwave)  and three inside the @expression. 
  When I grab the actual cube instance object and access restwave on the 
instance side, it works just fine. 



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Ok. Here is my test file.  I tried to set it up as much as I could, but I 
don't normally set up my db and sessions this way, so you may have to hack 
a bit here and there to finish some setup.  My original setup has classes 
from two different schema.  I don't know if that makes any difference. 
 I've also added some lines to add objects into the tables that I think you 
can use to recreate my problem.   Let me know if you need anything more in 
this file. 

import sqlalchemy
from sqlalchemy import ForeignKey, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, deferred, sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.dialects.postgresql import *
from sqlalchemy.types import Float, Integer, String
from sqlalchemy.ext.hybrid import hybrid_property


Base = declarative_base()
Session = sessionmaker()


class Target(Base):
    __tablename__ = 'target'
    __table_args__ = {'autoload': True, 'schema': 'sampledb'}

    id = Column(Integer, primary_key=True)
    mangaid = Column(String)


class Wavelength(Base):
    __tablename__ = 'wavelength'
    __table_args__ = {'autoload': True, 'schema': 'datadb'}

    pk = Column(Integer, primary_key=True)
    wavelength = deferred(Column(ARRAY(Float, zero_indexes=True)))

    def __repr__(self):
        return '<Wavelength (pk={0})>'.format(self.pk)


class NSA(Base):
    __tablename__ = 'nsa'
    __table_args__ = {'autoload': True, 'schema': 'sampledb'}

    z = Column(Float)

    def __repr__(self):
        return '<NSA (pk={0}, nsaid={1})>'.format(self.pk, self.nsaid)


class TargetToNSA(Base):
    __tablename__ = 'target_to_nsa'
    __table_args__ = {'autoload': True, 'schema': 'sampledb'}

    target_pk = Column(Integer, ForeignKey('target.pk'))
    nsa_pk = Column(Integer, ForeignKey('nsa.pk'))

    def __repr__(self):
        return '<TargetToNSA (pk={0})>'.format(self.pk)

# Relationships
NSA.targets = relationship(Target, backref='NSA_objects', secondary=TargetToNSA.__table__)


class Cube(Base):
    __tablename__ = 'cube'
    __table_args__ = {'autoload': True, 'schema': 'datadb'}

    pk = Column(Integer, primary_key=True)
    mangaid = Column(String)

    target_pk = Column(Integer, ForeignKey('target.pk'))
    target = relationship(Target, backref='cubes')

    wavelength_pk = Column(Integer, ForeignKey('wavelength.pk'))
    wavelength = relationship(Wavelength, backref='cubes')

    @hybrid_property
    def restwave(self):
        if self.target:
            redshift = self.target.NSA_objects[0].z
            wave = np.array(self.wavelength.wavelength)
            restwave = wave / (1 + redshift)
            return restwave
        else:
            return None

    @restwave.expression
    def restwave(cls):
        session = Session.object_session(self)
        nsaz = session.query(NSA.z.label('z')).\
            join(TargetToNSA, Target, Cube).\
            filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
        unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
        restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

        return restwave


session = Session()

# Add objects
session.begin()
wave = Wavelength(wavelength=[3950.00, 4000.00, 5100.00, 5200.00, 6212.00, 8348.00])
session.add(wave)
nsa = [NSA(z=0.0543), NSA(z=1.234567), NSA(z=0.0167534563)]
session.add_all(nsa)
targets = [Target(mangaid='1-234567', NSA_objects=nsa[0]),
           Target(mangaid='1-113520', NSA_objects=nsa[2]),
           Target(mangaid='1-367842', NSA_objects=nsa[1])]
session.add_all(targets)
session.flush()
cubes = [Cube(wavelength=wave, mangaid='1-987345'),
         Cube(wavelength=wave, mangaid='1-234567', target=targets[0]),
         Cube(wavelength=wave, mangaid='1-367842', target=targets[2]),
         Cube(wavelength=wave, mangaid='1-113520', target=targets[1])]
session.commit()


q = session.query(Cube.restwave).filter(Cube.mangaid == '1-113520')

print q





Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-03 Thread Brian Cherinka
Awesome. Thanks.  Ok. I'll work on it again, and get back to you as soon as 
I can.  

On Tuesday, August 2, 2016 at 3:57:10 PM UTC-4, Mike Bayer wrote:
>
> What I need is a complete .py file that sets up a *minimal* version of 
> *every* class required, then the Query object, then prints it.   I'll 
> mangle it to do the right thing. 
>
> Like this: 
>
> from sqlalchemy import * 
> from sqlalchemy.orm import * 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
>
> class A(Base): 
>  __tablename__ = 'a' 
>  id = Column(Integer, primary_key=True) 
>  bs = relationship("B") 
>
> class B(Base): 
>  __tablename__ = 'b' 
>  id = Column(Integer, primary_key=True) 
>  a_id = Column(ForeignKey('a.id')) 
>
>
> s = Session() 
>
> q = s.query(A).join(B) 
>
> print q 
>
>
>
>
>
>
>
> On 08/02/2016 02:45 PM, Brian Cherinka wrote: 
> > So I managed to get something to return using this definition of the 
> > @expression, however, I'm not quite there yet. 
> > 
> > @hybrid_property 
> > def restwave(self): 
> >     if self.target: 
> >         redshift = self.target.NSA_objects[0].z 
> >         wave = np.array(self.wavelength.wavelength) 
> >         restwave = wave/(1+redshift) 
> >         return restwave 
> >     else: 
> >         return None 
> > 
> > @restwave.expression 
> > def restwave(cls): 
> >     session = db.Session() 
> >     nsaz = session.query(sampledb.NSA.z.label('z')).\ 
> >         join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\ 
> >         filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True) 
> >     unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True) 
> >     restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar() 
> > 
> >     return restwave 
> > 
> > Using the example query, 
> > 
> > rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520') 
> > rw = rwquery.first() 
> > 
> > I am getting a modified wavelength array, but it's the wrong one, using 
> > the wrong ID.  For the ID 1-113520 I should be modifying the wavelength 
> > array by (1+0.016765) and instead it's just grabbing the very first 
> > value in the NSA.z column, which corresponds to (1+0.099954).  I think 
> > this is because my filter condition is not getting passed into the nsaz 
> > subquery, where it needs to go.   Do you know how I can pass filter 
> > condition parameters down into any subqueries I may have in my 
> > expression select statements?  Is that what .correlate() does? 
> > 
> > My query looks like 
> > 
> > In [24]: print rwquery 
> > SELECT (SELECT array_agg(unwave.restw) AS array_agg_1 
> > FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw 
> > FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z 
> > FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk 
> > WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1 
> > FROM mangadatadb.cube 
> > WHERE mangadatadb.cube.mangaid = %(mangaid_1)s 
> > 
> > 
> > 
>
>



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-02 Thread Brian Cherinka
So I managed to get something to return using this definition of the 
@expression, however, I'm not quite there yet.

@hybrid_property
def restwave(self):
    if self.target:
        redshift = self.target.NSA_objects[0].z
        wave = np.array(self.wavelength.wavelength)
        restwave = wave/(1+redshift)
        return restwave
    else:
        return None

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

    return restwave

Using the example query, 

rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520')
rw = rwquery.first()

I am getting a modified wavelength array, but it's the wrong one, using the 
wrong ID.  For the ID 1-113520 I should be modifying the wavelength array 
by (1+0.016765) and instead it's just grabbing the very first value in the 
NSA.z column, which corresponds to (1+0.099954).  I think this is because 
my filter condition is not getting passed into the nsaz subquery, where it 
needs to go.   Do you know how I can pass filter condition parameters down 
into any subqueries I may have in me expression select statements?  Is that 
what .correlate() does?

My query looks like

In [24]: print rwquery
SELECT (SELECT array_agg(unwave.restw) AS array_agg_1
FROM (SELECT unnest(mangadatadb.wavelength.wavelength) / (%(z_1)s + nsaz.z) AS restw
FROM mangadatadb.wavelength, (SELECT mangasampledb.nsa.z AS z
FROM mangasampledb.nsa JOIN mangasampledb.manga_target_to_nsa ON mangasampledb.nsa.pk = mangasampledb.manga_target_to_nsa.nsa_pk JOIN mangasampledb.manga_target ON mangasampledb.manga_target.pk = mangasampledb.manga_target_to_nsa.manga_target_pk JOIN mangadatadb.cube ON mangasampledb.manga_target.pk = mangadatadb.cube.manga_target_pk
WHERE mangadatadb.cube.mangaid = mangadatadb.cube.mangaid) AS nsaz) AS unwave) AS anon_1
FROM mangadatadb.cube
WHERE mangadatadb.cube.mangaid = %(mangaid_1)s
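
On the correlate() question above: by default SQLAlchemy auto-correlates 
tables that also appear in the enclosing statement; correlate() makes that 
explicit, telling an inner SELECT to reference the outer table rather than 
adding it to its own FROM list. A generic illustration with hypothetical 
User/Address mappings, not this thread's schema:

from sqlalchemy import select, func

# the subquery references User.id from the enclosing query's row, so the
# count varies per outer row instead of being computed once
address_count = select([func.count(Address.id)]).\
    where(Address.user_id == User.id).\
    correlate(User).\
    as_scalar()

session.query(User.name, address_count).all()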






Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-08-02 Thread Brian Cherinka
So I managed to get something to return using this definition of the 
@expression, however, I'm not quite there yet.

@hybrid_property
def restwave(self):
    if self.target:
        redshift = self.target.NSA_objects[0].z
        wave = np.array(self.wavelength.wavelength)
        restwave = wave/(1+redshift)
        return restwave
    else:
        return None

@restwave.expression
def restwave(cls):
    session = db.Session()
    nsaz = session.query(sampledb.NSA.z.label('z')).\
        join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
        filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
    unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
    restwave = session.query(func.array_agg(unwave.c.restw)).as_scalar()

    return restwave

Using the example query, 

rwquery = session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520')




Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka
The @expression as column thing is a bit confusing since in the correlated 
subquery example in the hybrid attribute section, it looks like you are 
returning a select?  Does the .label() effectively turn it into a column?

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)

    accounts = relationship("SavingsAccount", backref="owner")

    @hybrid_property
    def balance(self):
        return sum(acc.balance for acc in self.accounts)

    @balance.expression
    def balance(cls):
        return select([func.sum(SavingsAccount.balance)]).\
            where(SavingsAccount.user_id == cls.id).\
            label('total_balance')
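
As a reading of that doc example: yes, label() is what does it. It wraps 
the SELECT as a named scalar subquery, which is a ColumnElement and can sit 
anywhere a column can. A quick check, assuming the mapping above:

# the class-level hybrid evaluates to a Label, i.e. a column expression
print(type(User.balance))   # roughly: sqlalchemy.sql.elements.Label
session.query(User).filter(User.balance > 1000)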




On Friday, July 29, 2016 at 5:29:45 PM UTC-4, Brian Cherinka wrote:
>
>
> Oh interesting.  I didn't know that about the @expression.  I'll play 
> around with the as_scalar() as well, and see if I can get something to 
> work.  
>
> class Wavelength(Base):
> __tablename__ = 'wavelength'
> __table_args__ = {'autoload': True, 'schema': 'mangadatadb', 
> 'extend_existing': True}
>
>
> wavelength = deferred(Column(ARRAY_D(Float, zero_indexes=True)))
>
>
> The wavelength table has a single row and single column, which is an 
> array.  
>
> The other table of interest would look something like 
>
> class NSA(Base):
> __tablename__ = 'nsa'
> __table_args__ = ({'autoload': True, 'schema': 'mangasampledb'})
>
>
> z = Column(Float)
>
>
> This table basically has a float column that corresponds to objects in the 
> main cube (object) table. Each float value is used to modify the array in 
> wavelength to a unique array for that object. 
>
> The Cube class joins to NSA via two tables that are just intermediate 
> linking tables for this purpose  Cube -> Table A -> Table AToB - > Table B 
> (NSA)
>
> class MangaTarget(Base):
> __tablename__ = 'manga_target'
> __table_args__ = {'autoload': True, 'schema': 'mangasampledb'}
>
>
> class MangaTargetToNSA(Base):
> __tablename__ = 'manga_target_to_nsa'
> __table_args__ = (
> ForeignKeyConstraint(['manga_target_pk'],
>  ['mangasampledb.manga_target.pk']),
> ForeignKeyConstraint(['nsa_pk'], ['mangasampledb.nsa.pk']),
> {'autoload': True, 'schema': 'mangasampledb'})
>
>
>  The rest can probably be hacked together.   Let me know if you need 
> anything else.  
>



Re: [sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka

Oh interesting.  I didn't know that about the @expression.  I'll play 
around with the as_scalar() as well, and see if I can get something to 
work.  
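
(As a minimal sketch of as_scalar(), using names from this thread but 
otherwise illustrative: it turns a one-column query into a scalar 
expression that can be placed in a column list.)

# a one-column query rendered as a parenthesized scalar subquery
max_z = session.query(func.max(NSA.z)).as_scalar()
session.query(Cube.mangaid, max_z).all()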

class Wavelength(Base):
    __tablename__ = 'wavelength'
    __table_args__ = {'autoload': True, 'schema': 'mangadatadb',
                      'extend_existing': True}

    wavelength = deferred(Column(ARRAY_D(Float, zero_indexes=True)))

The wavelength table has a single row and single column, which is an array.

The other table of interest would look something like 

class NSA(Base):
    __tablename__ = 'nsa'
    __table_args__ = ({'autoload': True, 'schema': 'mangasampledb'})

    z = Column(Float)

This table basically has a float column that corresponds to objects in the 
main cube (object) table. Each float value is used to modify the array in 
wavelength to a unique array for that object. 

The Cube class joins to NSA via two tables that are just intermediate 
linking tables for this purpose:  Cube -> Table A -> Table AToB -> Table B 
(NSA)

class MangaTarget(Base):
    __tablename__ = 'manga_target'
    __table_args__ = {'autoload': True, 'schema': 'mangasampledb'}

class MangaTargetToNSA(Base):
    __tablename__ = 'manga_target_to_nsa'
    __table_args__ = (
        ForeignKeyConstraint(['manga_target_pk'],
                             ['mangasampledb.manga_target.pk']),
        ForeignKeyConstraint(['nsa_pk'], ['mangasampledb.nsa.pk']),
        {'autoload': True, 'schema': 'mangasampledb'})

The rest can probably be hacked together.  Let me know if you need 
anything else.  



[sqlalchemy] nested subqueries in a hybrid expression?

2016-07-29 Thread Brian Cherinka

I'm trying to build a hybrid property / expression in one of my SQLA base 
classes, and the expression side requires several subqueries and I cannot 
for the life of me figure out the correct syntax.  I've tried many 
different versions / syntaxes for the expression, using the SQL alchemy 
expression language, the ORM language but I can't get any of it work.  What 
am I missing here?  Here is my latest attempt. 

The setup is this.  I have a class called Cube, which is my base object. 
 For each object in that cube table, I need to grab an attribute from a 
table A and use that attribute to modify an array column from a second 
table B.  And I want to make this new array queryable, thus the 
hybridization.   

The equivalent raw SQL is 

select (select (array_agg(unwave.restw)) as restwarr from (select 
(unnest(w.wavelength)/(1+n.z)) as restw from mangadatadb.wavelength as w) as unwave) 
from mangadatadb.cube as c 
join mangasampledb.manga_target as m on m.pk=c.manga_target_pk 
join mangasampledb.manga_target_to_nsa as t on t.manga_target_pk=m.pk 
join mangasampledb.nsa as n on n.pk=t.nsa_pk 
join mangadatadb.wavelength as w on w.pk=c.wavelength_pk limit 5;


The instance side is quite easy.  The class side is proving difficult.   
What is the proper way to write this?  I've read through the 
documentation countless times now and it hasn't helped much.  

class Cube(Base):
    @hybrid_property
    def restwave(self):
        if self.target:
            redshift = self.target.NSA_objects[0].z
            wave = np.array(self.wavelength.wavelength)
            restwave = wave / (1 + redshift)
            return restwave
        else:
            return None

    @restwave.expression
    def restwave(cls):
        session = db.Session()
        nsaz = session.query(sampledb.NSA.z.label('z')).\
            join(sampledb.MangaTargetToNSA, sampledb.MangaTarget, Cube).\
            filter(Cube.mangaid == cls.mangaid).subquery('nsaz', with_labels=True)
        unwave = session.query((func.unnest(Wavelength.wavelength)/(1+nsaz.c.z)).label('restw')).subquery('unwave', with_labels=True)
        restwave = session.query(func.array_agg(unwave.c.restw))

        return restwave


Trying 
session.query(datadb.Cube.restwave).filter(datadb.Cube.mangaid=='1-113520').all()

fails with 
AttributeError: 'Query' object has no attribute 'is_clause_element'


I've also tried this. 

@restwave.expression
def restwave(cls):
    unwave = select([(func.unnest(Wavelength.wavelength)/(1+sampledb.NSA.z)).label('restw')])
    restwave = select([func.array_agg(unwave.c.restw).label('restwarr')]).select_from(
        Cube.__table__.join(Wavelength.wavelength).
        join(sampledb.MangaTarget).
        join(sampledb.MangaTargetToNSA).
        join(sampledb.NSA)
    )
    return restwave


and this fails

In [6]: datadb.Cube.restwave
---
ArgumentError Traceback (most recent call last)
<ipython-input-6> in <module>()
----> 1 datadb.Cube.restwave


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/ext/hybrid.pyc 
in __get__(self, instance, owner)
738 def __get__(self, instance, owner):
739 if instance is None:
--> 740 return self.expr(owner)
741 else:
742 return self.fget(instance)


/Users/Brian/Work/sdss/repo/sdss/sdss_python_module/marvin/python/sdss/
internal/database/utah/mangadb/DataModelClasses.pyc in restwave(cls)
454 # unwave = 
select([(func.unnest(Wavelength.wavelength)/(1+sampledb.NSA.z)).label('restw')])
455 # restwave = 
select([func.array_agg(unwave.c.restw).label('restwarr')]).select_from(
--> 456 # Cube.__table__.join(Wavelength.wavelength).
457 # join(sampledb.MangaTarget).
458 # join(sampledb.MangaTargetToNSA).


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable
.pyc in join(self, right, onclause, isouter)
350 """
351
--> 352 return Join(self, right, onclause, isouter)
353
354 def outerjoin(self, right, onclause=None):


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable.pyc
 
in __init__(self, left, right, onclause, isouter)
658 """
659 self.left = _interpret_as_from(left)
--> 660 self.right = _interpret_as_from(right).self_group()
661
    662 if onclause is None:


/Users/Brian/anaconda2/lib/python2.7/site-packages/sqlalchemy/sql/selectable
.pyc in _interpret_as_from(element)
 47 return insp.selectable
 48 except AttributeError:
---> 49 raise exc.ArgumentError("FROM expression expected")
 50
 51

ArgumentError: FROM expression expected
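
One reading of that last traceback, beyond what the message says: 
Table.join() expects a FROM element such as another Table, but 
Wavelength.wavelength is a column attribute. A sketch of a join that 
compiles, assuming the wavelength_pk foreign key from this schema:

j = Cube.__table__.join(
    Wavelength.__table__,
    Cube.__table__.c.wavelength_pk == Wavelength.__table__.c.pk)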




Re: [sqlalchemy] zero_indexes broken for 2d-arrays?

2016-05-23 Thread Brian Cherinka
Thanks Mike.  That fixed it!

On Monday, May 23, 2016 at 10:18:15 AM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/23/2016 10:12 AM, Brian Cherinka wrote: 
> > 
> > Hi, 
> > 
> > It seems like the ARRAY option zero_indexes=True is broken for 
> > 2-dimensional arrays.   Is this a bug that is fixed in 1.1?  I'm 
> > actually using the subclass ARRAY_D as a fix for the __getitem__ 
> > indexing.  It works for 1-D arrays. 
> > 
>
>
> if you're passing zero_indexes=True then that needs to be propagated to 
> the new ARRAY type being created inside of __getitem__.  The recipe 
> appears to miss this. 
>
> class ARRAY_D(ARRAY): 
>     class Comparator(ARRAY.Comparator): 
>         def __getitem__(self, index): 
>             super_ = super(ARRAY_D.Comparator, self).__getitem__(index) 
>             if not isinstance(index, slice) and self.type.dimensions > 1: 
>                 super_ = type_coerce( 
>                     super_, 
>                     ARRAY_D( 
>                         self.type.item_type, 
>                         dimensions=self.type.dimensions - 1, 
>                         zero_indexes=self.type.zero_indexes) 
>                 ) 
>             return super_ 
>     comparator_factory = Comparator 
>
> > 
> > *1-D array* 
> > wave = Column(ARRAY_D(Float, zero_indexes=True)) 
> > SQL 
> > *1-D array* 
> > wave = Column(ARRAY_D(Float, zero_indexes=True)) 
> > SQL 
> > select w.wavelength[17] from datadb.wavelength as w; 
> >  wavelength 
> > ------------ 
> >     3634.96 
> > (1 row) 
> > 
> > ORM - instance and class side 
> > wave = session.query(datadb.Wavelength).first() 
> > wave.wavelength[16] 
> > 3634.96 
> > 
> > session.query(datadb.Wavelength.wavelength[16]).one() 
> > (3634.96) 
> > 
> > 
> > *2-D array* 
> > value = Column(ARRAY_D(Float, dimensions=2, zero_indexes=True)) 
> > SQL 
> > select e.value[17][18] from dapdb.emline as e limit 1; 
> > 
> >        value 
> > ------------------- 
> >  4.962736845652115 
> > 
> > ORM - instance and class side 
> > # correct value on instance side 
> > emline = session.query(dapdb.EmLine).first() 
> > emline.value[16][17] 
> > 4.962736845652115 
> > 
> > # expected correct indexing - wrong value 
> > session.query(dapdb.EmLine.value[16][17]).first() 
> > (4.8138361075679565) 
> > 
> > # both "1-indexed" - wrong value 
> > session.query(dapdb.EmLine.value[17][18]).first() 
> > (5.380134788537585) 
> > 
> > # first index is correct, but second is incremented by 1 - correct value 
> > session.query(dapdb.EmLine.value[16][18]).first() 
> > (4.962736845652115) 
> > 
> > Cheers, Brian 
> > 
> > -- 
>
>



[sqlalchemy] zero_indexes broken for 2d-arrays?

2016-05-23 Thread Brian Cherinka

Hi, 

It seems like the ARRAY option zero_indexes=True is broken for 
2-dimensional arrays.   Is this a bug that is fixed in 1.1?  I'm actually 
using the subclass ARRAY_D as a fix for the __getitem__ indexing.  It works 
for 1-D arrays.


*1-D array*
wave = Column(ARRAY_D(Float, zero_indexes=True))
SQL
select w.wavelength[17] from datadb.wavelength as w;
 wavelength
------------
    3634.96
(1 row)

ORM - instance and class side
wave = session.query(datadb.Wavelength).first()
wave.wavelength[16]
3634.96

session.query(datadb.Wavelength.wavelength[16]).one()
(3634.96)


*2-D array*
value = Column(ARRAY_D(Float, dimensions=2, zero_indexes=True))
SQL
select e.value[17][18] from dapdb.emline as e limit 1;

   value
---
 4.962736845652115

ORM - instance and class side
# correct value on instance side
emline=session.query(dapdb.EmLine).first()
emline.value[16][17]
4.962736845652115

# expected correct indexing - wrong value
session.query(dapdb.EmLine.value[16][17]).first()
(4.8138361075679565)

# both "1-indexed" - wrong value
session.query(dapdb.EmLine.value[17][18]).first()
(5.380134788537585)

# first index is correct, but second is incremented by 1 - correct value
session.query(dapdb.EmLine.value[16][18]).first()
(4.962736845652115)

Cheers, Brian



Re: [sqlalchemy] return value of array at a specific index

2016-05-22 Thread Brian Cherinka
Thanks Mike.  That ARRAY_D class did the trick.  Thanks for pointing it 
out. 
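
For completeness, the one-off type_coerce workaround from the reply, 
applied to this mapping (a sketch, assuming the same zero-indexed columns):

from sqlalchemy import type_coerce
from sqlalchemy.types import Float
from sqlalchemy.dialects.postgresql import ARRAY

# coerce the first indexing back to a one-dimensional ARRAY so the second
# [] lookup is supported without the ARRAY_D subclass
session.query(
    type_coerce(dapdb.EmLine.value[16],
                ARRAY(Float, zero_indexes=True))[17]
).first()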

On Sunday, May 22, 2016 at 11:52:11 PM UTC-4, Mike Bayer wrote:
>
>
>
> On 05/22/2016 07:12 PM, Brian Cherinka wrote: 
> > 
> > What's the proper way to return in an ORM query the value of a Postgres 
> > array attribute at a given specific index within the array? 
> > 
> > I have a db table with a column called value, which is a 2d array, 
> > defined as REAL[][]. 
> > 
> > My ModelClass is defined as 
> > 
> > class EmLine(Base): 
> >     __tablename__ = 'emline' 
> >     __table_args__ = {'autoload': True, 'schema': 'dapdb'} 
> > 
> >     def __repr__(self): 
> >         return '<EmLine (pk={0})>'.format(self.pk) 
> > 
> >     value = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     ivar = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     mask = Column(ARRAY(Integer, dimensions=2, zero_indexes=True)) 
> > 
> > Pure SQL indexing an array works just fine 
> > 
> > select e.value[16][17] from dapdb.emline as e; 
> > 
> > But SQLalchemy does not 
> > 
> > session.query(dapdb.EmLine.value[16][17]).first() 
> > 
> > returns the error 
> > 
> > NotImplementedError: Operator 'getitem' is not supported on this expression 
>
> this is a bug that's been fixed for 1.1.   It's detailed here: 
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#correct-sql-types-are-established-from-indexed-access-of-array-json-hstore
>  
>
> For multi-dimensional access, this can be worked around for a one-off 
> using type_coerce: 
>
>  >>> from sqlalchemy import type_coerce 
>  >>> type_coerce(c[4], ARRAY(Integer))[5] 
>
> There is also a generalized workaround created for the bug that you can 
> see at 
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3487#comment-20200804 . 
> It involves creation of an ARRAY subclass that does the right thing 
> within __getitem__.   That subclass can be a drop-in replacement for 
> regular ARRAY. 
>
>
>
>
>
>
>
>
> > 
> > I've tried defining a hybrid method/expression in my ModelClass, and 
> > running 
> > 
> > session.query(dapdb.EmLine.singleat('value', 16, 17)).first() 
> > 
> > but I'm getting the same "getitem" error 
> > 
> > class EmLine(Base): 
> >     __tablename__ = 'emline' 
> >     __table_args__ = {'autoload': True, 'schema': 'mangadapdb'} 
> > 
> >     def __repr__(self): 
> >         return '<EmLine (pk={0})>'.format(self.pk) 
> > 
> >     value = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     ivar = Column(ARRAY(Float, dimensions=2, zero_indexes=True)) 
> >     mask = Column(ARRAY(Integer, dimensions=2, zero_indexes=True)) 
> > 
> >     @hybrid_method 
> >     def singleat(self, name, x, y): 
> >         param = self.__getattribute__(name) 
> >         return param[x][y] 
> > 
> >     @singleat.expression 
> >     def singleat(cls, name, x, y): 
> >         param = cls.__getattribute__(cls, name) 
> >         print(param, x, y) 
> >         return func.ARRAY(param)[x][y] 
> > 
> > In my singleat expression, I've tried a variety of returns.  return 
> > func.ARRAY(param)[x][y] ;  return param[x][y].  What's the proper syntax 
> > to match the actual SQL array indexing? 
> > 
> > 
> >
>



[sqlalchemy] return value of array at a specific index

2016-05-22 Thread Brian Cherinka

What's the proper way to return in an ORM query the value of a Postgres 
array attribute at a given specific index within the array?

I have a db table with a column called value, which is a 2d array, defined 
as REAL[][]. 

My ModelClass is defined as 

class EmLine(Base):
    __tablename__ = 'emline'
    __table_args__ = {'autoload': True, 'schema': 'dapdb'}

    def __repr__(self):
        return '<EmLine (pk={0})>'.format(self.pk)

    value = Column(ARRAY(Float, dimensions=2, zero_indexes=True))
    ivar = Column(ARRAY(Float, dimensions=2, zero_indexes=True))
    mask = Column(ARRAY(Integer, dimensions=2, zero_indexes=True))

Pure SQL indexing an array works just fine

select e.value[16][17] from dapdb.emline as e;

But SQLalchemy does not

session.query(dapdb.EmLine.value[16][17]).first()

returns the error

NotImplementedError: Operator 'getitem' is not supported on this expression


Re: [sqlalchemy] accessing base class on hybrid expressions from the class side

2016-05-10 Thread Brian Cherinka
Ok, thanks for the response.  What's the timeframe for the 1.1 release?  In 
the meantime, I will have a look into adding my own class_ attribute, or 
using the Comparator.  

I tried something like

setattr(datadb.Cube.plateifu, "class_", datadb.Cube.id.class_)

but it didn't seem to work.  But I'll dig a bit deeper.  If I can't get 
something working with a 1.0X release, I'll try the 1.1 in bitbucket. 
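
For the archive, a rough sketch of one interim approach on 1.0.x, assuming 
a known list of mapped classes: each mapper's all_orm_descriptors includes 
hybrids as well as plain InstrumentedAttributes, so it can be asked which 
class owns a given name.

from sqlalchemy import inspect

def find_owning_class(name, classes):
    # all_orm_descriptors covers hybrid properties in addition to
    # regular mapped columns
    for cls in classes:
        if name in inspect(cls).all_orm_descriptors:
            return cls

find_owning_class('plateifu', [datadb.Cube, datadb.Wavelength])
# hypothetically returns datadb.Cube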


On Tuesday, May 10, 2016 at 2:32:22 PM UTC-4, Mike Bayer wrote:
>
> in 1.1 these hybrids will have the class_ attribute like other attributes. 
>
> Until then you can probably add your own class_ attribute to the object 
> which you are returning.   Also, using a custom Comparator class (see 
> the example in the hybrid docs) will also return an instrumented 
> attribute that should have a class_ attribute. 
>
> Or you could try using the 1.1 hybrid_property class yourself, it should 
> be compatible with 1.0.   The commits are illustrated in 
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3653 but you can probably 
> just use the hybrid.py straight from the git repository with 1.0. 
>
>
>
>
> On 05/10/2016 02:01 PM, Brian Cherinka wrote: 
> > 
> > I'm trying to build a query system where given a filter parameter name, 
> > I can figure out which DeclarativeBase class it is attached to.  I need 
> > to do this for a mix of standard InstrumentedAttributes and Hybrid 
> > Properties/Expressions. I have several Declarative Base classes with 
> > hybrid properties / expressions defined, in addition to the standard 
> > InstrumentedAttributes from the actual table. 
> >   mydb.dataModelClasses.Cube for example. 
> > 
> > For a standard attribute, I can access the class using the class_ 
> variable. 
> > 
> > Standard Attribute on the DeclarativeBase class Cube 
> > type(datadb.Cube.id)
> > sqlalchemy.orm.attributes.InstrumentedAttribute
> >
> > print datadb.Cube.id.class_
> > mydb.DataModelClasses.Cube
> > 
> > What's the best way to retrieve this same information for a hybrid 
> > expression?  My expressions are other types, thus don't have the class_ 
> > attribute.  One example of my hybrid property defined in the Cube class 
> > 
> > @hybrid_property
> > def plateifu(self):
> >     return '{0}-{1}'.format(self.plate, self.ifu.name)
> >
> > @plateifu.expression
> > def plateifu(cls):
> >     return func.concat(Cube.plate, '-', IFUDesign.name)
> >
> > type(datadb.Cube.plateifu)
> > sqlalchemy.sql.functions.concat
> > 
> > Since this property is now a function concat, what's the best way to 
> > retrieve the name of the class that this property is attached to, namely 
> > 'mydb.DataModelClasses.Cube'?  It doesn't seem to have a .class_ or 
> > .parent attribute.  Is there a way to add a new attribute onto my hybrid 
> > columns that let me access the parent class? 
> > 
> > I need to do this for a variety of hybrid properties/expressions, that 
> > are all constructed in unique ways.  This particular example is a 
> > function concat, however I have others that are of type 
> > sqlalchemy.sql.elements.BinaryExpression. 
> > 
> > Is there a way to generically do this no matter the type of hybrid 
> > expression I define? 
> > 
> > Thanks. 
> > 



[sqlalchemy] accessing base class on hybrid expressions from the class side

2016-05-10 Thread Brian Cherinka

I'm trying to build a query system where given a filter parameter name, I 
can figure out which DeclarativeBase class it is attached to.  I need to do 
this for a mix of standard InstrumentedAttributes and Hybrid 
Properties/Expressions. I have several Declarative Base classes with 
hybrid properties / expressions defined, in addition to the standard 
InstrumentedAttributes from the actual table.  mydb.dataModelClasses.Cube 
for example.

For a standard attribute, I can access the class using the class_ variable. 
 

Standard Attribute on the DeclarativeBase class Cube
type(datadb.Cube.id)
sqlalchemy.orm.attributes.InstrumentedAttribute

print datadb.Cube.id.class_
mydb.DataModelClasses.Cube

What's the best way to retrieve this same information for a hybrid 
expression?  My expressions are other types, thus don't have the class_ 
attribute.  One example of my hybrid property defined in the Cube class

@hybrid_property
def plateifu(self):
return '{0}-{1}'.format(self.plate, self.ifu.name)

@plateifu.expression
def plateifu(cls):
return func.concat(Cube.plate, '-', IFUDesign.name)

type(datadb.Cube.plateifu)
sqlalchemy.sql.functions.concat

Since this property is now a function concat, what's the best way to 
retrieve the name of the class that this property is attached to, namely '
mydb.DataModelClasses.Cube'?  It doesn't seem to have a .class_ or .parent 
attribute.  Is there a way to add a new attribute onto my hybrid columns 
that let me access the parent class?

I need to do this for a variety of hybrid properties/expressions that are 
all constructed in unique ways.  This particular example is a function 
concat; however, I have others that are of type 
sqlalchemy.sql.elements.BinaryExpression. 

Is there a way to generically do this no matter the type of hybrid 
expression I define?

Thanks. 
  



[sqlalchemy] Optimally access a model parameter n number of relationships away

2016-04-06 Thread Brian Leach
Hi everyone!

I have asked this question on StackOverflow; please see it for full detail 
of how the models are laid out: 
http://stackoverflow.com/questions/36463623/optimally-access-an-sqlalchemy-model-paramater-n-number-of-relationships-away

I have some "test" models that all have a relationship back to a "location" 
model, each a varying number of relationships away. When I want to get the 
name of the "location" associated with a "test", there is a ton of SQL 
emitted. Is it possible to build in some sort of shortcut to the location 
relationship?

I have tried setting the 'lazy' parameter on the relationship to 'select', 
'joined', and 'subquery' although these do not seem to make a significant 
difference when it comes to tying a test model to its location.

The way I have things defined also makes it difficult for me to filter a 
group of tests based on their location, since the relationship is so 
different for each one. I do not even know how to join these tables using raw 
SQL or how to begin to approach trying to emulate what the ORM is 
constructing for me.

I would imagine that this has been nailed before and am hoping that one of 
you out there can fill me in on the missing concept.


 - Brian



Re: [sqlalchemy] Web API Wrapper for Informix Database

2016-04-03 Thread Brian Van Klaveren
I think what you actually would need to do is implement your own version of the 
DBAPI (effectively a proxying DBAPI). Then you would just configure a 
SQLAlchemy engine to use that DBAPI.

In this scenario, a requests session would be started on the acquisition of a 
cursor (or maybe on connect()). Execute could lazily set the query up and fetch 
would execute the SOAP request, download all rows (unless the SOAP service 
supports pagination) and cache them locally.
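
A bare-bones sketch of that shape (the endpoint and payload format here are invented, and a real PEP 249 module also needs the standard exception hierarchy, type objects, etc.):

import requests

apilevel = '2.0'
threadsafety = 1
paramstyle = 'pyformat'

class Cursor(object):
    def __init__(self, http):
        self._http = http
        self._rows = []
        self.description = None
        self.rowcount = -1

    def execute(self, statement, parameters=None):
        # ship the statement to the (hypothetical) web service and cache
        # all returned rows locally
        resp = self._http.post('https://example.com/sql',
                               json={'sql': statement,
                                     'params': parameters or {}})
        payload = resp.json()
        self.description = payload.get('columns')
        self._rows = [tuple(r) for r in payload.get('rows', [])]
        self.rowcount = len(self._rows)

    def fetchone(self):
        return self._rows.pop(0) if self._rows else None

    def fetchall(self):
        rows, self._rows = self._rows, []
        return rows

    def close(self):
        pass

class Connection(object):
    def __init__(self):
        self._http = requests.Session()  # one HTTP session per "connection"

    def cursor(self):
        return Cursor(self._http)

    def commit(self):
        pass

    def rollback(self):
        pass

    def close(self):
        self._http.close()

def connect(*args, **kwargs):
    return Connection()

SQLAlchemy can then be pointed at a module like this via create_engine(..., module=...) on whichever dialect best matches the server's SQL.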

Brian

> On Apr 3, 2016, at 1:27 PM, Nathan Nelson <nrnel...@gmail.com> wrote:
> 
> hood.



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-03 Thread Brian Cherinka
Yeah, that might ultimately be the best way to go if things get too 
complicated.  I think people might not want to re-run several lines of 
code just to change some parameters, but that could be something I make 
them live with.

On Thursday, March 3, 2016 at 3:06:57 AM UTC-5, Ladislav Lenart wrote:
>
> Hello. 
>
> I think it would be (much) easier to simply rebuild the query from scratch 
> before each run. IMHO the time to build the query is not that big a factor 
> to 
> justify the added source code complexity. 
>
> HTH, 
>
> Ladislav Lenart 
>
>
> On 3.3.2016 05:47, Brian Cherinka wrote: 
> > 
> > 
> > well you need a list of names so from a mapped class you can get: 
> > 
> > for name in inspect(MyClass).column_attrs.keys(): 
> > if name in : 
> > q = q.filter_by(name = bindparam(name)) 
> > 
> > though I'd think if you're dynamically building the query you'd have 
> the 
> > values already, not sure how it's working out that you need 
> bindparam() 
> > at that stage... 
> > 
> >   
> > Ok.  I'll try this out. This looks like it could work.  I think I need 
> it for 
> > the cases where a user specifies a query with condition e.g. X < 10, 
> runs it, 
> > gets results.  Then they want to change the condition to X < 5 and rerun 
> the 
> > query.  As far as I know, if condition 2 gets added into the filter, you 
> would 
> > have both X < 10 and X < 5 in your filter expression.  Rather than a 
> single 
> > updated X < 5. 
> > 
> > What would be even more awesome is if there was a way to also update the 
> > operator in place as well.  So changing X < 10 to X > 10.   
> > 
> > 
> > 
>
>



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

Also, I've noticed that when you update the bindparams

q = q.params(x='1234')

and then try to print the whereclause, the parameters are not updated.  Yet 
in the statement, they are updated. 

print q.query.whereclause.compile(dialect=postgresql.dialect(), compile_kwargs={'literal_binds': True})

shows the old x condition, while

print q.query.statement.compile(dialect=postgresql.dialect(), compile_kwargs={'literal_binds': True})

shows the updated x='1234'.

Is this a bug or does the whereclause need to be updated separately?  



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

>
>
>
> well you need a list of names so from a mapped class you can get: 
>
> for name in inspect(MyClass).column_attrs.keys(): 
> if name in : 
> q = q.filter_by(name = bindparam(name)) 
>
> though I'd think if you're dynamically building the query you'd have the 
> values already, not sure how it's working out that you need bindparam() 
> at that stage... 
>
>  
Ok.  I'll try this out.  This looks like it could work.  I think I need it 
for the cases where a user specifies a query with a condition, e.g. X < 10, 
runs it, and gets results.  Then they want to change the condition to X < 5 
and rerun the query.  As far as I know, if condition 2 gets added into the 
filter, you would have both X < 10 and X < 5 in your filter expression, 
rather than a single updated X < 5.

What would be even more awesome is if there were a way to also update the 
operator in place as well, so changing X < 10 to X > 10.
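
Sketching what I mean (this assumes a mapped Sample class with a column x): mapping operator strings to Python's operator functions would let me rebuild just the one clause when the user flips it:

import operator

OPS = {'<': operator.lt, '<=': operator.le, '>': operator.gt,
       '>=': operator.ge, '==': operator.eq}

def make_clause(column, op_string, value):
    # operator.lt(Sample.x, 10) is just Sample.x < 10
    return OPS[op_string](column, value)

q = session.query(Sample).filter(make_clause(Sample.x, '<', 10))
q = session.query(Sample).filter(make_clause(Sample.x, '>', 10))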






Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
I'm essentially building a tool for a group that doesn't know SQLAlchemy, 
and doesn't want to know or care about it, or even SQL in general.  They 
need to be able to query a dataset using minimal input, namely only caring 
about the input filter conditions (e.g. X > 10).  Presumably, they will 
need to be able to update and change these parameters without having to 
reset or rebuild the query.  The parameters are also spread across 
multiple tables.

My query builder is a mix of SQLAlchemy + WTForms-Alchemy, but the 
pseudocode is something like

q = Query()
q.createBaseQuery()  # sets the default:  query = session.query(ModelClass)
q.set_params(params=params)  # define and set the input parameters to search on, as a dictionary of {'parameter_name': 'operand value'}, e.g. {'X': '< 10'}
q.add_conditions()  # builds the parameters into a single filter and adds it to the query object, e.g.

myfilt = None
for param in parameters:
    # if the parameter's table is not yet in the join, add it
    if parameter_table not in joined_tables:
        query = query.join(parameter_table)

    # parse the input parameter into an operand and a value, then
    # construct a new BinaryExpression from them
    newfilter = build_expression(param)

    if myfilt is None:
        myfilt = newfilter
    else:
        myfilt = and_(myfilt, newfilter)

# if any filter exists, add it to the query
if myfilt is not None:
    query = query.filter(myfilt)

results = q.run()  # runs the SQL query and returns the results with query.all(), query.one(), query.count(), etc.

So if they want to change the conditions they query with, as far as my 
limited understanding goes, they would either have to rebuild the query 
from scratch and reapply the filters, or modify the values inside the 
SQLAlchemy query object.  And bindparam seems like a nice way to allow 
for flexible attribute changes without resetting the query.

Cheers, Brian

On Wednesday, March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to 
> be able to change the values afterwards? 
>
> Simon 
>
> > On 2 Mar 2016, at 21:59, Brian Cherinka <havo...@gmail.com > 
> wrote: 
> > 
> > Thanks, Mike.  This is excellent.  That did the trick.  That's much 
> easier than what I was trying to do.   Do you know if there is a way to 
> auto bindparam every parameter I have in my Declarative Bases, if and when 
> they get added into a filter?  Basically, I need to allow the user to be 
> able to modify any parameter they set after the fact, but I have a crazy 
> amount of parameters to explicitly do this for. 
> > 
> > Cheers, Brian 
> > 
> > On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote: 
> > 
> > 
> > 
> > 



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka

I'm essentially building a tool for a large group of people who don't know 
SQLAlchemy, and will never care enough, or want, to know about SQLAlchemy or 
SQL in general.  They need to be able to build queries on the dataset 
based on really minimal information.  All they would input are filter 
conditions of "parameter operand value", and the system builds the rest of 
the query.  They should be able to update these values without having to 
reset and rebuild the query.

The actual query build is a strange generic combination of SQLAlchemy + 
WTForms-Alchemy, but the pseudocode is something like 

q = Query()
q.set_params(params=params)  =  the input parameters they are searching on 
as {'X': '< 10'} 
q.add_conditions()  = builds and adds the filter clauses, something like 

for parameter in parameters:
   if parameter table not in the join: 









On Wednesday, March 2, 2016 at 5:31:09 PM UTC-5, Simon King wrote:
>
> Out of interest, how are you building your query, and why do you need to 
> be able to change the values afterwards? 
>
> Simon 
>
> > On 2 Mar 2016, at 21:59, Brian Cherinka <havo...@gmail.com > 
> wrote: 
> > 
> > Thanks, Mike.  This is excellent.  That did the trick.  That's much 
> easier than what I was trying to do.   Do you know if there is a way to 
> auto bindparam every parameter I have in my Declarative Bases, if and when 
> they get added into a filter?  Basically, I need to allow the user to be 
> able to modify any parameter they set after the fact, but I have a crazy 
> amount of parameters to explicitly do this for. 
> > 
> > Cheers, Brian 
> > 
> > On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote: 
> > 
> > 
> > 
> > 



Re: [sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
Thanks, Mike.  This is excellent.  That did the trick.  That's much easier 
than what I was trying to do.   Do you know if there is a way to auto 
bindparam every parameter I have in my Declarative Bases, if and when they 
get added into a filter?  Basically, I need to allow the user to be able to 
modify any parameter they set after the fact, but I have a crazy amount of 
parameters to explicitly do this for.
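
For reference, the pattern that did the trick looks roughly like this (a sketch using the Cube/Version models from my original post):

from sqlalchemy import bindparam

q = session.query(Cube).join(Version).filter(
    Version.version == bindparam('version', 'v1_5_1'))
q.all()                          # runs with v1_5_1
q = q.params(version='v1_3_3')   # swap the value without rebuilding
q.all()                          # same query, new value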

Cheers, Brian

On Wednesday, March 2, 2016 at 4:28:46 PM UTC-5, Mike Bayer wrote:
>
>
>
>
>



[sqlalchemy] replacing a filter attribute, or iterating over a booleanclauselist?

2016-03-02 Thread Brian Cherinka
Hi, 

After a query has been constructed with some filter conditions applied, but 
before the query has been run, what's the best way to replace the attribute 
in the filter clause?

Let's say I have a query like this

q = session.query(Cube).join(Version).filter(Version.version == 'v1_5_1')

and I can print the filters with and without the values bound

print q.whereclause
version.version = :version_1

q.whereclause.compile(compile_kwargs={'literal_binds':True})
version.version = 'v1_5_1'

What's the best way to replace the attribute with a new parameter, like 
version = 'v1_3_3'?

For a single condition, the type of whereclause is a BinaryExpression, and 
I figured out that left, operator, and right get me the left-hand side, 
the right-hand side, and the operator of the clause.  And I can modify the 
value with 

q.whereclause.right.value='v1_3_3'

print q.whereclause.compile(compile_kwargs={'literal_binds':True})
version.version = 'v1_3_3'

Is this the best way?  

My bigger problem is I have a list of clauses in my query

t = session.query(Cube).join(Version, Sample).filter(Version.version == 'v1_5_1', Sample.x < 10)

Now t.whereclause is a BooleanClauseList, and I can't figure out how to 
iterate over this list so that I can do the above and modify the version 
value in place.  What's the best way to do this?  I can't find the place 
in the documentation that describes a BooleanClauseList; searching for it 
returns 0 results.
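
For what it's worth, I gather a BooleanClauseList keeps its members in a .clauses attribute, so a sketch of the kind of in-place edit I'm after (assuming each member is a simple column-versus-bind BinaryExpression) would be:

for clause in t.whereclause.clauses:
    # for a simple comparison, .left is the Column and .right the BindParameter
    if clause.left.name == 'version':
        clause.right.value = 'v1_3_3'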

Thanks. 



[sqlalchemy] relationship between declarative base and automap base

2016-02-19 Thread Brian Cherinka
Hi.  

I have two database schemas, with a table from each that I would like to 
join.  The classes for one schema have been created as explicit declarative 
Bases, while the classes for the other were all created via automap Base. 
I have a foreign key joining the two tables.  SQLAlchemy sees the foreign 
key, yet does not recognize or use it.  Upon import of my classes, I'm 
getting this error:

NoForeignKeysError: Could not determine join condition between parent/child 
tables on relationship Cube.target - there are no foreign keys linking 
these tables.  Ensure that referencing columns are associated with a 
ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.

datadb.Cube is my declarative base and sampledb.MangaTarget is an automap 
base.  Here are the foreign keys on datadb.Cube
  
In [4]: datadb.Cube.__table__.foreign_keys
Out[4]:
{ForeignKey(u'mangadatadb.ifudesign.pk'),
 ForeignKey(u'mangasampledb.manga_target.pk'),
 ForeignKey(u'mangadatadb.pipeline_info.pk'),
 ForeignKey(u'mangadatadb.wavelength.pk')}

I've created the relationship via 

Cube.target = relationship(sampledb.MangaTarget, backref='cubes')


I've also tried explicitly adding a foreign_keys=[Cube.manga_target_pk] 
here, but that didn't work either.  
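
A sketch of the fully spelled-out form that the error message asks for (assuming the reflected manga_target_pk column is available on Cube) would be:

from sqlalchemy.orm import relationship

Cube.target = relationship(
    sampledb.MangaTarget,
    primaryjoin=Cube.manga_target_pk == sampledb.MangaTarget.pk,
    foreign_keys=[Cube.manga_target_pk],
    backref='cubes',
)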

I initially created the constraint in my schema table with 

ALTER TABLE ONLY mangadatadb.cube
ADD CONSTRAINT manga_target_fk
FOREIGN KEY (manga_target_pk) REFERENCES mangasampledb.manga_target(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

Can a relationship be created on a foreign key between a declarative base 
class and an automap base class?  

I tested out the relationships with an explicit declarative Base class for 
MangaTarget and everything works perfectly.  However, explicitly declaring 
all the tables in my sampledb schema is not really an option, 
unfortunately. I'm at a loss here.

If it should be possible, is there a procedure somewhere documented on how 
to get that working?

Thanks for any help.

Cheers, Brian



[sqlalchemy] Re: proper attribute names for many-to-many relationships using automap

2016-02-16 Thread Brian Cherinka
Yeah I realized I have somewhat nuanced relationships for automap to 
handle, and that in all the time I spent trying to figure out how to get 
automap working for me, I could have written my classes in explicit 
declarative base.  So that's what I ended up doing.  I was hoping to bypass 
that a bit since I have a large number of tables to declare.  Thanks for 
your explanations and help though.  I appreciate it.  

On Sunday, February 14, 2016 at 5:01:19 PM UTC-5, Brian Cherinka wrote:
>
> What is the proper way to get pluralized shortened names for many-to-many 
> tables when using automap?  I currently have it set to generate pluralized 
> lowercase names for collections instead of the default "_collection".  This 
> is what I want for one-to-many or many-to-one relationships, but not 
> many-to-many.  For example, I have two tables, hdu, and extcol, joined 
> together through a many-to-many table, hdu_to_extcol
>
> create table hdu (pk serial primary key not null, extname_pk integer, 
> exttype_pk integer, extno integer, file_pk integer);
> create table hdu_to_extcol (pk serial primary key not null, hdu_pk integer
> , extcol_pk integer);
> create table extcol (pk serial primary key not null, name text);
>
> ALTER TABLE ONLY mangadapdb.hdu_to_extcol
> ADD CONSTRAINT hdu_fk
> FOREIGN KEY (hdu_pk) REFERENCES mangadapdb.hdu(pk)
> ON UPDATE CASCADE ON DELETE CASCADE;
>
> ALTER TABLE ONLY mangadapdb.hdu_to_extcol
> ADD CONSTRAINT extcol_fk
> FOREIGN KEY (extcol_pk) REFERENCES mangadapdb.extcol(pk)
> ON UPDATE CASCADE ON DELETE CASCADE;
>
> When I use SQLalchemy to automap the Base classes, the relationship this 
> generates on the Hdu and Extcol classes are *Hdu.hdu_to_extcol, and 
> Extcol.hdu_to_extcols*, using the below pluralize, and relationship, 
> code.  However, ideally what I'd like the names to be are *Hdu.extcols, 
> and Extcol.hdus*, respectively.  What's the best to generate this for 
> these many-to-many tables?   I'm not sure if automap is recognizing these 
> as many-to-many tables.  The direction indicated when I print during the 
> relationship stage don't indicate as such.
>
> symbol('ONETOMANY')  extcol
> symbol('MANYTOONE')  hdu_to_extcol
> symbol('ONETOMANY')  hdu
> symbol('MANYTOONE')  hdu_to_extcol
>
> Here is my Base class generation code. 
>
> def _gen_relationship(base, direction, return_fn, attrname, local_cls, 
> referred_cls, **kw):
> if local_cls.__table__.name in onetoones:
> kw['uselist'] = False
> # make use of the built-in function to actually return the result.
>
> return generate_relationship(base, direction, return_fn, attrname, 
> local_cls, referred_cls, **kw)
>
> _pluralizer = inflect.engine()
> def pluralize_collection(base, local_cls, referred_cls, constraint):
> referred_name = referred_cls.__name__
> uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
> referred_name)[1:]
> pluralized = _pluralizer.plural(uncamelized)
> return pluralized
>
> # Grabs engine
> db = DatabaseConnection()
> engine = db.engine
>
> # Selects schema and automaps it.
> metadata = MetaData(schema='mangadapdb')
> Base = automap_base(bind=engine, metadata=metadata)
> Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName, 
> name_for_collection_relationship=pluralize_collection, 
> generate_relationship=_gen_relationship)
>
> # Explicitly declare classes
> for cl in Base.classes.keys():
> exec('{0} = Base.classes.{0}'.format(cl))
>
>
>
>
>



[sqlalchemy] proper attribute names for many-to-many relationships using automap

2016-02-14 Thread Brian Cherinka
What is the proper way to get pluralized shortened names for many-to-many 
tables when using automap?  I currently have it set to generate pluralized 
lowercase names for collections instead of the default "_collection".  This 
is what I want for one-to-many or many-to-one relationships, but not 
many-to-many.  For example, I have two tables, hdu, and extcol, joined 
together through a many-to-many table, hdu_to_extcol

create table hdu (pk serial primary key not null, extname_pk integer, exttype_pk integer, extno integer, file_pk integer);
create table hdu_to_extcol (pk serial primary key not null, hdu_pk integer, extcol_pk integer);
create table extcol (pk serial primary key not null, name text);

ALTER TABLE ONLY mangadapdb.hdu_to_extcol
ADD CONSTRAINT hdu_fk
FOREIGN KEY (hdu_pk) REFERENCES mangadapdb.hdu(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

ALTER TABLE ONLY mangadapdb.hdu_to_extcol
ADD CONSTRAINT extcol_fk
FOREIGN KEY (extcol_pk) REFERENCES mangadapdb.extcol(pk)
ON UPDATE CASCADE ON DELETE CASCADE;

When I use SQLAlchemy to automap the Base classes, the relationships this 
generates on the Hdu and Extcol classes are *Hdu.hdu_to_extcol* and 
*Extcol.hdu_to_extcols*, using the pluralize and relationship code below. 
However, ideally what I'd like the names to be are *Hdu.extcols* and 
*Extcol.hdus*, respectively.  What's the best way to generate this for 
these many-to-many tables?  I'm not sure automap is recognizing these as 
many-to-many tables; the directions printed during the relationship stage 
don't indicate as such.

symbol('ONETOMANY')  extcol
symbol('MANYTOONE')  hdu_to_extcol
symbol('ONETOMANY')  hdu
symbol('MANYTOONE')  hdu_to_extcol
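
Reading the automap docs, it seems a table is only treated as a many-to-many "secondary" when it contains exactly two foreign key constraints and every column is a member of one of them, so the surrogate pk serial column would defeat the detection, and each side then comes out as a ONETOMANY/MANYTOONE pair against hdu_to_extcol itself.  A sketch of an association table that would qualify:

from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData(schema='mangadapdb')

# every column belongs to one of the two foreign key constraints, so automap
# can detect the table as a pure many-to-many secondary
hdu_to_extcol = Table(
    'hdu_to_extcol', metadata,
    Column('hdu_pk', Integer, ForeignKey('mangadapdb.hdu.pk'),
           primary_key=True),
    Column('extcol_pk', Integer, ForeignKey('mangadapdb.extcol.pk'),
           primary_key=True),
)

With that shape, the pluralizer below would then produce Hdu.extcols and Extcol.hdus.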

Here is my Base class generation code. 

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)

_pluralizer = inflect.engine()
def pluralize_collection(base, local_cls, referred_cls, constraint):
    referred_name = referred_cls.__name__
    uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
                         referred_name)[1:]
    pluralized = _pluralizer.plural(uncamelized)
    return pluralized

# Grabs engine
db = DatabaseConnection()
engine = db.engine

# Selects schema and automaps it.
metadata = MetaData(schema='mangadapdb')
Base = automap_base(bind=engine, metadata=metadata)
Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName,
             name_for_collection_relationship=pluralize_collection,
             generate_relationship=_gen_relationship)

# Explicitly declare classes
for cl in Base.classes.keys():
    exec('{0} = Base.classes.{0}'.format(cl))






[sqlalchemy] best way to declare one-to-one relationships with automap and non-explicit relationships

2016-02-14 Thread Brian Cherinka
Hi, 

I'm trying to automap a schema, and let automap generate all the classes and 
relationships between my tables.  It seems to work well for all 
relationships except one-to-one.  I know that to set up a one-to-one 
relationship, you must apply the uselist=False keyword in relationship(). 
What's the best way to do that when I'm letting automap generate them, 
without having to manually remove the automap-generated ones and set them 
explicitly?  Here is what I'm trying so far, but it's not working.

onetoones = ['file']

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)

def camelizeClassName(base, tablename, table):
    return str(tablename[0].upper() + re.sub(r'_([a-z])', lambda m:
               m.group(1).upper(), tablename[1:]))

_pluralizer = inflect.engine()
def pluralize_collection(base, local_cls, referred_cls, constraint):
    referred_name = referred_cls.__name__
    uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(),
                         referred_name)[1:]
    pluralized = _pluralizer.plural(uncamelized)
    return pluralized

# Grabs engine
db = DatabaseConnection()
engine = db.engine

# Selects schema and automaps it.
metadata = MetaData(schema='mangadapdb')
Base = automap_base(bind=engine, metadata=metadata)

# Pre-define Dap class.  Necessary so automap knows to join this table to a
# declarative base class from another schema
class Dap(Base):
    __tablename__ = 'dap'

    cube_pk = Column(Integer, ForeignKey(datadb.Cube.pk))
    cube = relationship(datadb.Cube, backref='dap', uselist=False)

# Prepare the base
Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName,
             name_for_collection_relationship=pluralize_collection,
             generate_relationship=_gen_relationship)

# Explicitly declare classes
for cl in Base.classes.keys():
    exec('{0} = Base.classes.{0}'.format(cl))



[sqlalchemy] Re: best way to declare one-to-one relationships with automap and non-explicit relationships

2016-02-14 Thread Brian Cherinka


>
>
> you'd need to implement a generate_relationship function as described at 
> http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/automap.html#custom-relationship-arguments
>  
> which applies the rules you want in order to establish those relationships 
> that you'd like to be one-to-one.
>
>  
Yeah, that's what I tried to do here, but it appeared to do nothing.  That 
documentation isn't entirely clear, I'm afraid.  The direction input doesn't 
have an interfaces.ONETOONE option, so I tried to find an alternative way 
of identifying which tables needed a one-to-one relationship.  Having a 
look, I don't think this function works as-is, because I don't define the 
pair of tables (from local_cls, referred_cls) that are in a one-to-one; I 
only define one of them in my onetoones list.

onetoones = ['file']

def _gen_relationship(base, direction, return_fn, attrname, local_cls,
                      referred_cls, **kw):
    if local_cls.__table__.name in onetoones:
        kw['uselist'] = False
    # make use of the built-in function to actually return the result.
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)
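
A direction-aware variant might work better (a sketch: there is no ONETOONE direction symbol, so treat one-to-one as the ONETOMANY side collapsed with uselist=False, keyed off the referred table that carries the unique foreign key):

from sqlalchemy.ext.automap import generate_relationship
from sqlalchemy.orm import interfaces

onetoones = ['file']

def _gen_relationship(base, direction, return_fn, attrname,
                      local_cls, referred_cls, **kw):
    # collapse the collection side of the pair into a scalar attribute
    if (direction is interfaces.ONETOMANY
            and referred_cls.__table__.name in onetoones):
        kw['uselist'] = False
    return generate_relationship(base, direction, return_fn, attrname,
                                 local_cls, referred_cls, **kw)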
 

>  
>
>  Here is what I'm trying so far, but it's not working. 
>>
>
>
> I don't see anything obviously wrong with it but you'd want to step 
> through with pdb.set_trace() to ensure every aspect of it is doing what 
> you'd expect.   Otherwise "not working" can mean lots of things.
>
>  
Ok.  Well I'll keep digging around.  
 

>  
>
>>  
>>  
>>
>
>
>> onetoones = ['file']
>>
>> def _gen_relationship(base, direction, return_fn, attrname, local_cls, 
>> referred_cls, **kw):
>> if local_cls.__table__.name in onetoones:
>> kw['uselist'] = False
>> # make use of the built-in function to actually return the result.
>> return generate_relationship(base, direction, return_fn, attrname, 
>> local_cls, referred_cls, **kw)
>>
>> def camelizeClassName(base, tablename, table):
>> return str(tablename[0].upper() + re.sub(r'_([a-z])', lambda m: 
>> m.group(1).upper(), tablename[1:]))
>>
>> _pluralizer = inflect.engine()
>> def pluralize_collection(base, local_cls, referred_cls, constraint):
>> referred_name = referred_cls.__name__
>> uncamelized = re.sub(r'[A-Z]', lambda m: "_%s" % m.group(0).lower(), 
>> referred_name)[1:]
>> pluralized = _pluralizer.plural(uncamelized)
>> return pluralized
>>
>> # Grabs engine
>> db = DatabaseConnection()
>> engine = db.engine
>>
>> # Selects schema and automaps it.
>> metadata = MetaData(schema='mangadapdb')
>> Base = automap_base(bind=engine, metadata=metadata)
>>
>> # Pre-define Dap class.  Necessary so automap knows to join this table to 
>> a declarative base class from another schema
>> class Dap(Base):
>> __tablename__ = 'dap'
>>
>> cube_pk = Column(Integer, ForeignKey(datadb.Cube.pk))
>> cube = relationship(datadb.Cube, backref='dap', uselist=False)
>>
>> # Prepare the base
>> Base.prepare(engine, reflect=True, classname_for_table=camelizeClassName, 
>> name_for_collection_relationship=pluralize_collection, 
>> generate_relationship=_gen_relationship)
>>
>> # Explicitly declare classes
>> for cl in Base.classes.keys():
>> exec('{0} = Base.classes.{0}'.format(cl))
>>
>>



[sqlalchemy] Is this a correct way to set up table inheritance? (circular dependency issue on delete attempt)

2016-02-02 Thread Brian Leach
This question is sparked by some trouble that I am having with deleting 
model instances. Please see this 
question: 
http://stackoverflow.com/questions/35163325/sqlalchemy-circular-dependency-on-delete

I have set up my models like below. I have several kinds of "tests" defined 
in my database. Each "test" shares certain columns like 'status', 'date', 
'who performed it', etc. Each different test type then defines columns 
specific to it. 

class HasID(object):
    @declared_attr
    def id(cls):
        return Column('id', Integer, Sequence('test_id_seq'), primary_key=True)
    ...

class TestParent(HasID, Model):
    __tablename__ = 'tests'
    discriminator = Column(String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}
    ...

class FooTest(TestParent, Model):
    __tablename__ = 'footests'
    __mapper_args__ = {'polymorphic_identity': 'footests'}
    id = Column(Integer, ForeignKey('tests.id'), primary_key=True)
    parent_id = Column(Integer, ForeignKey('footests.id'))
    children = relationship('FooTest',
                            foreign_keys='FooTest.id',
                            lazy='joined',
                            join_depth=2,
                            cascade='save-update, merge, delete, delete-orphan')
    ...

class BarTest(TestParent, Model):
    __tablename__ = 'bartests'
    __mapper_args__ = {'polymorphic_identity': 'bartests'}
    id = Column(Integer, ForeignKey('tests.id'), primary_key=True)
    ...



Now, I am wondering if this is a correct way to set up the FooTest, as one 
instance of FooTest may have several child instances of FooTest as 
children. 

I am unable to delete any test instance (BarTest or otherwise) and am 
getting a circular dependency error referring to the FooTest table. 

Am I missing any fundamental concepts related to table inheritance?
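
One suspicion I have (a guess, not a confirmed fix): the children relationship points foreign_keys at FooTest.id, which is the joined-inheritance key to tests.id, while the self-referential column is parent_id.  A pared-down, runnable sketch of that change:

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class TestParent(Base):
    __tablename__ = 'tests'
    id = Column(Integer, primary_key=True)
    discriminator = Column(String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}

class FooTest(TestParent):
    __tablename__ = 'footests'
    __mapper_args__ = {'polymorphic_identity': 'footests'}
    id = Column(Integer, ForeignKey('tests.id'), primary_key=True)
    parent_id = Column(Integer, ForeignKey('footests.id'))
    # point the self-referential collection at parent_id, not at the
    # inheritance key column
    children = relationship('FooTest',
                            foreign_keys=[parent_id],
                            cascade='save-update, merge, delete, delete-orphan')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
parent = FooTest(children=[FooTest()])
session.add(parent)
session.commit()
session.delete(parent)   # children go first, then the parent rows
session.commit()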


Thanks everyone,

Brian Leach



Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-19 Thread Brian Cherinka
Hmm.  So I removed the column definitions and it worked just fine.  This 
doesn't make any sense to me, since this is essentially my original class 
definition, and now it works.  Maybe I had a typo before?  It didn't look 
like it, though.  I'm using a PostgreSQL database, and my table definition 
is very straightforward.  It looks like 

CREATE TABLE datadb.sample (pk serial PRIMARY KEY NOT NULL, tileid INTEGER, 
nsa_mstar REAL, nsa_id INTEGER,   bunch of other column definitions ) 

No, my table does not have a logmstar column, only mstar.  I create the 
property in my Sample class definition.  This is very strange.  

Brian

On Friday, January 15, 2016 at 6:31:23 PM UTC-5, Simon King wrote:
>
> You shouldn’t need to define the columns. Here’s another test script: 
>
> ### 
> import math 
>
> import sqlalchemy as sa 
> import sqlalchemy.orm as saorm 
> from sqlalchemy.ext.hybrid import hybrid_property 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
> engine = sa.create_engine('sqlite:///hybridtest.db') 
>
> engine.execute(""" 
> CREATE TABLE sample ( 
> pk INTEGER NOT NULL, 
> nsa_mstar FLOAT, 
> PRIMARY KEY (pk) 
> ) 
> """) 
>
> class Sample(Base): 
> __tablename__ = 'sample' 
> __table_args__ = {'autoload' : True, 'autoload_with': engine} 
>
> @hybrid_property 
> def nsa_logmstar(self): 
> try: 
> return math.log10(self.nsa_mstar) 
> except ValueError: 
> return -.0 
> except TypeError: 
> return None 
>
> @nsa_logmstar.expression 
> def nsa_logmstar(cls): 
> return sa.func.log(cls.nsa_mstar) 
>
>
> if __name__ == '__main__': 
> sm = saorm.sessionmaker() 
> session = sm() 
> print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9) 
> ### 
>
> What database are you using, and what is your SQL table definition? Does 
> your table already have a nsa_logmstar column? (I don’t think that should 
> matter, but it would be worth checking) 
>
> Simon 
>
>
> > On 15 Jan 2016, at 22:27, Brian Cherinka <havo...@gmail.com 
> > wrote: 
> > 
> > It looks like I needed to define the columns inside my class.  That's 
> the only difference between your class and mine.  And I tested out the 
> query and it now works, and returns the correct number of rows. 
> > 
> > In [4]: print 
> > session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar 
> < 9) 
> > 
> > SELECT datadb.sample.pk AS datadb_sample_pk 
> > FROM datadb.sample 
> > WHERE log(datadb.sample.nsa_mstar) < %(log_1)s 
> > 
> > In [6]: 
> > len(session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar 
> < 9,datadb.Sample.nsa_mstar > 0).all()) 
> > Out[6]: 273 
> > 
> > Do you have any idea why the column definition matters here?  Thanks for 
> all your help. 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 5:02:03 PM UTC-5, Brian Cherinka wrote: 
> > Here is the print immediately after my original class definition: 
> > 
> > print 'sample nsa log mstar', 
> Sample.nsa_logmstar 
> > 
> > and the result 
> > 
> > sample nsa log mstar None 
> > 
> > When I run your script exactly as is, I get the same output as you.   
> > 
> > When I replace my class definition with yours, inside my code, as 
> follows 
> > 
> > class Sample(Base): 
> > __tablename__ = 'sample' 
> > __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  (I 
> needed to add this line in) 
> > 
> > pk = Column(Integer, primary_key=True) 
> > nsa_mstar = Column(Float) 
> > 
> > @hybrid_property 
> > def nsa_logmstar(self): 
> > try: 
> > return math.log10(self.nsa_mstar) 
> > except ValueError: 
> > return -.0 
> > except TypeError: 
> > return None 
> > 
> > @nsa_logmstar.expression 
> > def nsa_logmstar(cls): 
> > return func.log(cls.nsa_mstar) 
> > 
> > now the print statement :  print 'sample nsa log mstar', 
> Sample.nsa_logmstar 
> > returns 
> > 
> > sample nsa log mstar log(mangadatadb.sample.nsa_mstar) 
> > 
> > 
> > On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote: 
> > Does my test script produce the right output for you in your 
> installation? 
>

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Ahh.  Thanks.  Here is the class side then.  Still None.

In [14]: print datadb.Sample.nsa_logmstar
None

Brian

On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote:
>
> "Sample()" is an instance. "Sample" is the class. Try:
>
> print datadb.Sample.nsa_logmstar
>
> Simon
>
> On Fri, Jan 15, 2016 at 1:46 PM, Brian Cherinka <havo...@gmail.com 
> > wrote:
>
>> Hi Simon, 
>>
>> Printing on the class side, I get 
>>
>> In [11]: print datadb.Sample().nsa_logmstar
>> None
>>
>> It looks like it's getting set to None (or remaining None).  I'm not 
>> quite sure what this tells me, except that it's not working.  Printing on 
>> in the instance side, I get
>>
>> In [12]: print cube.sample[0].nsa_mstar
>> 138616.0
>>
>> In [13]: print cube.sample[0].nsa_logmstar
>> 9.14181336239
>>
>> nsa_mstar is a column in my database table, and nsa_logmstar I want to be 
>> simply the log-base10 of that quantity.  
>>
>> If this doesn't give any insight, then it will take me some time to 
>> provide a small script.  This code is embedded into a bunch of stuff.  But 
>> I'll work on it.  
>>
>> Brian
>>
>>
>> On Friday, January 15, 2016 at 5:00:51 AM UTC-5, Simon King wrote:
>>
>>> On Fri, Jan 15, 2016 at 6:16 AM, Brian Cherinka <havo...@gmail.com> 
>>> wrote:
>>>
>>>> I'm trying to set up a hybrid property / expression in a custom class, 
>>>> that I can use in queries. I think I have the syntax correct, however the 
>>>> query returns the entire table, instead of the correct subset of results.  
>>>> And the where clause just indicates True rather than the correct 
>>>> expression. 
>>>>
>>>>
>>>> Here is my hybrid property/expression definition
>>>>
>>>> class Sample(Base,ArrayOps):
>>>>__tablename__ = 'sample'
>>>>__table_args__ = {'autoload' : True, 'schema' : 'datadb'}
>>>>
>>>>def __repr__(self):
>>>>return '<Sample (pk={0})>'.format(self.pk)
>>>>
>>>>@hybrid_property
>>>>def nsa_logmstar(self):
>>>>try: return math.log10(self.nsa_mstar)
>>>>except ValueError as e:
>>>>return -.0
>>>>except TypeError as e:
>>>>return None
>>>>
>>>>@nsa_logmstar.expression
>>>>def nsa_logmstar(cls):
>>>>return func.log(cls.nsa_mstar)  
>>>>
>>>> The session query is
>>>>
>>>> session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
>>>>
>>>> But printing it does not show the appropriate condition. I get
>>>>
>>>> SELECT datadb.sample.pk AS datadb_sample_pk, 
>>>> FROM datadb.sample
>>>> WHERE true 
>>>>
>>>> and the results return the entire table of ~11000 rows instead of the 
>>>> expected 272 rows. What's going on here?  Everything looks correct to me, 
>>>> but I can't figure it out.  
>>>>
>>>> I'm expecting the SQL statement to look like this
>>>>
>>>> select s.pk 
>>>> from datadb.sample as s 
>>>> where log(s.nsa_mstar) < 9;
>>>>
>>>> Any thoughts?  Thanks. 
>>>>
>>>>
>>> I can't see anything obviously wrong with your code, but it looks like 
>>> Sample.nsa_logmstar is not actually resolving to the hybrid property in 
>>> your query. What happens if you "print Sample.nsa_logmstar" just before the 
>>> query?
>>>
>>> Otherwise, please provide a small runnable script that demonstrates the 
>>> problem.
>>>
>>> Simon
>>>



Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Actually, the class definition is entirely what I posted in the original 
message.  I didn't cut anything out of that.  I don't define the columns in 
mine, as you did.  The property nsa_logmstar is not defined anywhere else 
in the class or in any other place in this code, or in any code that 
interacts with this code.

class Sample(Base,ArrayOps):
   __tablename__ = 'sample'
   __table_args__ = {'autoload' : True, 'schema' : 'datadb'}

   def __repr__(self):
       return '<Sample (pk={0})>'.format(self.pk)

   @hybrid_property
   def nsa_logmstar(self):
       try: return math.log10(self.nsa_mstar)
       except ValueError as e:
           return -.0
       except TypeError as e:
           return None

   @nsa_logmstar.expression
   def nsa_logmstar(cls):
       return func.log(cls.nsa_mstar)

My database connection is a singleton and my base is defined inside that, essentially

engine = create_engine(database_connection_string)
Base = declarative_base(bind=engine)

On Friday, January 15, 2016 at 9:43:39 AM UTC-5, Simon King wrote:
>
> What happens if you put the print statement immediately after the class 
> definition? Is there any chance that you've got "nsa_logmstar = None" 
> somewhere in your class definition?
>
> Here's a test script which appears to work:
>
> import math
>
> import sqlalchemy as sa
> import sqlalchemy.orm as saorm
> from sqlalchemy.ext.hybrid import hybrid_property
> from sqlalchemy.ext.declarative import declarative_base
>
> Base = declarative_base()
>
> class Sample(Base):
> __tablename__ = 'sample'
>
> pk = sa.Column(sa.Integer, primary_key=True)
> nsa_mstar = sa.Column(sa.Float)
>
> @hybrid_property
> def nsa_logmstar(self):
> try:
> return math.log10(self.nsa_mstar)
> except ValueError:
> return -.0
> except TypeError:
> return None
>
> @nsa_logmstar.expression
> def nsa_logmstar(cls):
> return sa.func.log(cls.nsa_mstar)
>
>
> if __name__ == '__main__':
> sm = saorm.sessionmaker()
> session = sm()
> print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
>
>
> And here's the output:
>
>
> SELECT sample.pk AS sample_pk
> FROM sample
> WHERE log(sample.nsa_mstar) < :log_1
>
>
> Simon
>
>
> On Fri, Jan 15, 2016 at 2:23 PM, Brian Cherinka <havo...@gmail.com 
> > wrote:
>
>> Ahh.  Thanks.  Here is the class side then.  Still None.
>>
>> In [14]: print datadb.Sample.nsa_logmstar
>> None
>>
>> Brian
>>
>> On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote:
>>>
>>> "Sample()" is an instance. "Sample" is the class. Try:
>>>
>>> print datadb.Sample.nsa_logmstar
>>>
>>> Simon
>>>
>>> On Fri, Jan 15, 2016 at 1:46 PM, Brian Cherinka <havo...@gmail.com> 
>>> wrote:
>>>
>>>> Hi Simon, 
>>>>
>>>> Printing on the class side, I get 
>>>>
>>>> In [11]: print datadb.Sample().nsa_logmstar
>>>> None
>>>>
>>>> It looks like it's getting set to None (or remaining None).  I'm not 
>>>> quite sure what this tells me, except that it's not working.  Printing on 
>>>> in the instance side, I get
>>>>
>>>> In [12]: print cube.sample[0].nsa_mstar
>>>> 138616.0
>>>>
>>>> In [13]: print cube.sample[0].nsa_logmstar
>>>> 9.14181336239
>>>>
>>>> nsa_mstar is a column in my database table, and nsa_logmstar I want to 
>>>> be simply the log-base10 of that quantity.  
>>>>
>>>> If this doesn't give any insight, then it will take me some time to 
>>>> provide a small script.  This code is embedded into a bunch of stuff.  But 
>>>> I'll work on it.  
>>>>
>>>> Brian
>>>>
>>>>
>>>> On Friday, January 15, 2016 at 5:00:51 AM UTC-5, Simon King wrote:
>>>>
>>>>> On Fri, Jan 15, 2016 at 6:16 AM, Brian Cherinka <havo...@gmail.com> 
>>>>> wrote:
>>>>>
>>>>>> I'm trying to set up a hybrid property / expression in a custom 
>>>>>> class, that I can use in queries. I think I have the syntax correct, 
>>>>>> however the query returns the entire table, instead of the correct 
>>>>>> subset 
>>>>>> of results.  And the where clause just indicates True rather than the 
>>>>>> correct expression. 
>>>>>>
>>>>>>
>>>>>> Here is my hybrid property/expression definition
>>>>>>
>>>>>> class Sample(Base,ArrayOps):
>>>>>>__tablename__ = 'sample'
>>>>>>__table_args__ = {'autoload' : True, 'schema' : 'datadb'}
>>>>>>
>>>>>>def __repr__(self):
>>>>>>return '<Sample (pk={0})>'.format(self.pk)
>>>>>>
>>>>>>@hybrid_property
>>>>>>def nsa_logmstar(self):
>>>>>>try:

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
Here is the print immediately after my original class definition:

print 'sample nsa log mstar', Sample.nsa_logmstar 

and the result

sample nsa log mstar None

When I run your script exactly as is, I get the same output as you.  

When I replace my class definition with yours, inside my code, as follows

class Sample(Base):
    __tablename__ = 'sample'
    __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  # (I needed to add this line in)

    pk = Column(Integer, primary_key=True)
    nsa_mstar = Column(Float)

    @hybrid_property
    def nsa_logmstar(self):
        try:
            return math.log10(self.nsa_mstar)
        except ValueError:
            return -.0
        except TypeError:
            return None

    @nsa_logmstar.expression
    def nsa_logmstar(cls):
        return func.log(cls.nsa_mstar)

Now the print statement, print 'sample nsa log mstar', Sample.nsa_logmstar, 
returns 

sample nsa log mstar log(mangadatadb.sample.nsa_mstar)


On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote:
>
> Does my test script produce the right output for you in your installation? 
>
> What does the print statement immediately after the class definition 
> produce? 
>
> Simon 
>
> > On 15 Jan 2016, at 19:10, Brian Cherinka <havo...@gmail.com 
> > wrote: 
> > 
> > Actually, the class definition is entirely what I posted in the original 
> message.  I didn't cut anything out of that.  I don't define the columns in 
> mine, as you did.  The property nsa_logmstar is not defined anywhere else 
> in the class or in any other place in this code, or in any code that 
> interacts with this code. 
> > 
> > class Sample(Base,ArrayOps): 
> >__tablename__ = 'sample' 
> >__table_args__ = {'autoload' : True, 'schema' : 'datadb'} 
> > 
> >def __repr__(self): 
> >return '<Sample (pk={0})>'.format(self.pk)
> > 
> >@hybrid_property 
> >def nsa_logmstar(self): 
> >try: return math.log10(self.nsa_mstar) 
> >except ValueError as e: 
> >return -.0 
> >except TypeError as e: 
> >return None 
> > 
> >@nsa_logmstar.expression 
> >def nsa_logmstar(cls): 
> >return func.log(cls.nsa_mstar)   
> > 
> > My database connection is a singleton and my base is defined inside 
> that, essentially 
> > 
> > engine = create_engine(database_connection_string) 
> > Base = declarative_base(bind=engine) 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 9:43:39 AM UTC-5, Simon King wrote: 
> > What happens if you put the print statement immediately after the class 
> definition? Is there any chance that you've got "nsa_logmstar = None" 
> somewhere in your class definition? 
> > 
> > Here's a test script which appears to work: 
> > 
> > import math 
> > 
> > import sqlalchemy as sa 
> > import sqlalchemy.orm as saorm 
> > from sqlalchemy.ext.hybrid import hybrid_property 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > Base = declarative_base() 
> > 
> > class Sample(Base): 
> > __tablename__ = 'sample' 
> > 
> > pk = sa.Column(sa.Integer, primary_key=True) 
> > nsa_mstar = sa.Column(sa.Float) 
> > 
> > @hybrid_property 
> > def nsa_logmstar(self): 
> > try: 
> > return math.log10(self.nsa_mstar) 
> > except ValueError: 
> > return -.0 
> > except TypeError: 
> > return None 
> > 
> > @nsa_logmstar.expression 
> > def nsa_logmstar(cls): 
> > return sa.func.log(cls.nsa_mstar) 
> > 
> > 
> > if __name__ == '__main__': 
> > sm = saorm.sessionmaker() 
> > session = sm() 
> > print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9) 
> > 
> > 
> > And here's the output: 
> > 
> > 
> > SELECT sample.pk AS sample_pk 
> > FROM sample 
> > WHERE log(sample.nsa_mstar) < :log_1 
> > 
> > 
> > Simon 
> > 
> > 
> > On Fri, Jan 15, 2016 at 2:23 PM, Brian Cherinka <havo...@gmail.com> 
> wrote: 
> > Ahh.  Thanks.  Here is the class side then.  Still None. 
> > 
> > In [14]: print datadb.Sample.nsa_logmstar 
> > None 
> > 
> > Brian 
> > 
> > On Friday, January 15, 2016 at 8:48:30 AM UTC-5, Simon King wrote: 
> > "Sample()" is an instance. "Sample" is the class. Try: 
> > 
> > print datadb.Sample.nsa_logmstar 
> > 
> > Simon 
> > 
> > On Fri, Jan 1

Re: [sqlalchemy] hybrid property / expression returns entire table

2016-01-15 Thread Brian Cherinka
It looks like I needed to define the columns inside my class.  That's the 
only difference between your class and mine.  And I tested out the query 
and it now works, and returns the correct number of rows. 

In [4]: print session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar < 9)

SELECT datadb.sample.pk AS datadb_sample_pk
FROM datadb.sample
WHERE log(datadb.sample.nsa_mstar) < %(log_1)s

In [6]: len(session.query(datadb.Sample.pk).filter(datadb.Sample.nsa_logmstar < 9, datadb.Sample.nsa_mstar > 0).all())
Out[6]: 273

Do you have any idea why the column definition matters here?  Thanks for 
all your help.
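
A quick class-level sanity check (just a sketch) for anyone who hits this:
with the columns declared, the hybrid should render as a SQL expression
instead of evaluating to None.

print 'sample nsa log mstar', Sample.nsa_logmstar
# sample nsa log mstar log(mangadatadb.sample.nsa_mstar)
print session.query(Sample.pk).filter(Sample.nsa_logmstar < 9)
# ... WHERE log(mangadatadb.sample.nsa_mstar) < %(log_1)s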

Brian

On Friday, January 15, 2016 at 5:02:03 PM UTC-5, Brian Cherinka wrote:
>
> Here is the print immediately after my original class definition:
>
> print 'sample nsa log mstar', Sample.nsa_logmstar 
>
> and the result
>
> sample nsa log mstar None
>
> When I run your script exactly as is, I get the same output as you.  
>
> When I replace my class definition with yours, inside my code, as follows
>
> class Sample(Base):
>     __tablename__ = 'sample'
>     __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb'}  (I 
> needed to add this line in)
> 
>     pk = Column(Integer, primary_key=True)
>     nsa_mstar = Column(Float)
>
>     @hybrid_property
>     def nsa_logmstar(self):
>         try:
>             return math.log10(self.nsa_mstar)
>         except ValueError:
>             return -.0
>         except TypeError:
>             return None
>
>     @nsa_logmstar.expression
>     def nsa_logmstar(cls):
>         return func.log(cls.nsa_mstar)
>
> now the print statement :  print 'sample nsa log mstar', 
> Sample.nsa_logmstar
> returns 
>
> sample nsa log mstar log(mangadatadb.sample.nsa_mstar)
>
>
> On Friday, January 15, 2016 at 4:28:31 PM UTC-5, Simon King wrote:
>>
>> Does my test script produce the right output for you in your 
>> installation? 
>>
>> What does the print statement immediately after the class definition 
>> produce? 
>>
>> Simon 
>>
>> > On 15 Jan 2016, at 19:10, Brian Cherinka <havo...@gmail.com> wrote: 
>> > 
>> > Actually, the class definition is entirely what I posted in the 
>> original message.  I didn't cut anything out of that.  I don't define the 
>> columns in mine, as you did.  The property nsa_logmstar is not defined 
>> anywhere else in the class or in any other place in this code, or in any 
>> code that interacts with this code. 
>> > 
>> > class Sample(Base,ArrayOps): 
>> >__tablename__ = 'sample' 
>> >__table_args__ = {'autoload' : True, 'schema' : 'datadb'} 
>> > 
>> >def __repr__(self): 
>> >return '> > 
>> >@hybrid_property 
>> >def nsa_logmstar(self): 
>> >try: return math.log10(self.nsa_mstar) 
>> >except ValueError as e: 
>> >return -.0 
>> >except TypeError as e: 
>> >return None 
>> > 
>> >@nsa_logmstar.expression 
>> >def nsa_logmstar(cls): 
>> >return func.log(cls.nsa_mstar)   
>> > 
>> > My database connection is a singleton and my base is defined inside 
>> that, essentially 
>> > 
>> > engine = create_engine(database_connection_string) 
>> > Base = declarative_base(bind=engine) 
>> > 
>> > Brian 
>> > 
>> > On Friday, January 15, 2016 at 9:43:39 AM UTC-5, Simon King wrote: 
>> > What happens if you put the print statement immediately after the class 
>> definition? Is there any chance that you've got "nsa_logmstar = None" 
>> somewhere in your class definition? 
>> > 
>> > Here's a test script which appears to work: 
>> > 
>> > import math 
>> > 
>> > import sqlalchemy as sa 
>> > import sqlalchemy.orm as saorm 
>> > from sqlalchemy.ext.hybrid import hybrid_property 
>> > from sqlalchemy.ext.declarative import declarative_base 
>> > 
>> > Base = declarative_base() 
>> > 
>> > class Sample(Base): 
>> > __tablename__ = 'sample' 
>> > 
>> > pk = sa.Column(sa.Integer, primary_key=True) 
>> > nsa_mstar = sa.Column(sa.Float) 
>> > 
>> > @hybrid_property 
>> > def nsa_logmstar(self): 
>> > try: 
>> > return math.log10(self.nsa_mstar) 
>> > except ValueError: 
>> > return -.0 
>> > except TypeError: 
>>

[sqlalchemy] Re: best way to query from a tuple of parameters

2015-10-26 Thread Brian Cherinka
Yeah that almost works.   I needed to add an and_ around each of the 
subclauses, otherwise the or_ breaks.  

abcd = or_(and_(table.col_1 == a, table.col_2 == b),
           and_(table.col_1 == c, table.col_2 == d))

For posterity, to loop over my rows I found I could put it in a generator 
over the columns

col1 = list of column 1 
col2 = list of column 2

or_(and_(table.col_1==p, table.col_2==col2[i]) for i,p in enumerate(col1))
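
One more variant for posterity: on backends with row-value comparisons
(PostgreSQL has them), tuple_() can express the pairing directly. A minimal
sketch, assuming the same table / col_1 / col_2 names as above:

from sqlalchemy import tuple_

pairs = zip(col1, col2)   # [(a, b), (c, d), ...]
results = session.query(table).filter(
    tuple_(table.col_1, table.col_2).in_(pairs)).all()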

Thanks for the help.

On Tuesday, October 27, 2015 at 2:05:31 PM UTC+11, Brian Cherinka wrote:
>
>
> What's the best way to query, with the ORM, based on a list of multiple 
> parameters without looping over each parameter tuple?  
>
> I have a list of parameters say:
>
> Col-1, Col-2
> a, b
> c, d
>
> where the combination of the parameters in one row defines a unique table 
> entry.  Such that I would normally do 
>
> object = session.query(table).filter(table.Col-1 == a, table.Col-2 == 
> b).one()  , and 
> object = session.query(table).filter(table.Col-1 == c, table.Col-2 == 
> d).one()
>
> to retrieve the two table objects
>
> Can I run a query that would grab the list of objects all at once, in one 
> query, without having to loop over each row, doing each query, and 
> combining the objects into a list?  Something akin to the 
> table.Col-1.in_([list of values]) but with a combinatorial component to it. 
>  
>
> I know I can do something like 
> session.query(table).filter(table.Col-1.in_([a,c])).all()  to grab all the 
> objects that have column 1 values of a, or c, but I need to constrain those 
> to a+b, and c+d
>
> Does this make sense?
>
> Cheers, Brian
>  
>



[sqlalchemy] best way to query from a tuple of parameters

2015-10-26 Thread Brian Cherinka

What's the best way to query, with the ORM, based on a list of multiple 
parameters without looping over each parameter tuple?  

I have a list of parameters say:

Col-1, Col-2
a, b
c, d

where the combination of the parameters in one row defines a unique table 
entry.  Such that I would normally do 

object = session.query(table).filter(table.Col-1 == a, table.Col-2 == 
b).one()  , and 
object = session.query(table).filter(table.Col-1 == c, table.Col-2 == 
d).one()

to retrieve the two table objects

Can I run a query that would grab the list of objects all at once, in one 
query, without having to loop over each row, doing each query, and 
combining the objects into a list?  Something akin to the 
table.Col-1.in_([list of values]) but with a combinatorial component to it. 
 

I know I can do something like 
session.query(table).filter(table.Col-1.in_([a,c])).all()  to grab all the 
objects that have column 1 values of a, or c, but I need to constrain those 
to a+b, and c+d

Does this make sense?

Cheers, Brian
 



[sqlalchemy] Re: how to tell which tables already joined in a query statement (or best way to dynamically build a query?)

2015-09-10 Thread Brian Cherinka
Hi Michael, 

Thanks for your response.  It helped a lot.  I ended up going with the 
quick and dirty query.from_obj[0] method you described.  That was faster to 
implement and served my purposes exactly. 
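
Roughly what that looks like (a sketch only; _from_obj is a private Query
attribute, so this is quick and dirty by design):

already_joined = bool(query._from_obj) and \
    'tableb' in str(query._from_obj[0]).lower()
if not already_joined:
    query = query.join(TableB)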

Cheers, Brian



[sqlalchemy] get_schema_names doesn't work with SQLite attached databases

2015-09-09 Thread Van Klaveren, Brian N.
Hi,

I'm trying to find the attached databases of a SQLite database. I was expecting 
Inspector.get_schema_names to return something like:


$ sqlite3 foo.db
sqlite> create table a (a_id integer);

sqlite3 bar.db
sqlite> create table b (b_id integer);


from sqlalchemy import create_engine
from sqlalchemy.inspection import inspect

engine = create_engine("sqlite://", echo=True)

engine.execute("attach database 'foo.db' as foo")
engine.execute("attach database 'bar.db' as bar")

refl = inspect(engine)

refl.get_table_names(schema="foo") # works
refl.get_table_names(schema="bar") # works

refl.get_columns("a", schema="foo") # works
refl.get_columns("b", schema="bar") # works

refl.get_schema_names() # doesn't work, returns []



The SQLite dialect doesn't seem to support this today, but it looks like it 
could, via something equivalent to:


@reflection.cache
def get_schema_names(self, connection, **kw):
    dl = connection.execute("PRAGMA database_list")
    return [r[1] for r in dl]
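
In the meantime, a workaround sketch that runs the pragma directly on the
engine gives the same list:

schemas = [row[1] for row in engine.execute("PRAGMA database_list")]
# e.g. [u'main', u'foo', u'bar'] with the two databases attached above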


Is this reasonable? Could it be included in the SQLite dialect?

Brian



[sqlalchemy] Retrieving a row/cursor's description object from a ResultSetProxy

2015-08-10 Thread Van Klaveren, Brian N.
Hi,

I want to get extra type information from a given column after performing a 
query like the following:

results = engine.execute(text("SELECT a, b FROM Attributes"))

It seems the only way to really do this is to use cursor from 
results.cursor.description.

Is this the preferred method, or is there a better alternative?

I ask because my database may have decimals larger than double precision, and 
integers larger than 64 bits, and I'm confused as to what the call:
type(row[0]).

...would return in this case.
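
For reference, a sketch of what I mean (per PEP 249, each entry in
cursor.description is (name, type_code, display_size, internal_size,
precision, scale, null_ok)):

results = engine.execute(text("SELECT a, b FROM Attributes"))
for name, type_code, _, _, precision, scale, _ in results.cursor.description:
    print(name, type_code, precision, scale)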

Brian



[sqlalchemy] how to tell which tables already joined in a query statement (or best way to dynamically build a query?)

2015-08-07 Thread Brian Cherinka

Hi, 

I'm trying to build an SQLalchemy ORM query dynamically based on user 
selected options.  Some of these options come from the same table, but the 
user could select either one or both criteria to filter on.  Since I don't 
know which options the user will select ahead of time, I have to join to 
the same table multiple times.  However this throws an error 

ProgrammingError: (psycopg2.ProgrammingError) table name "TableB" specified 
more than once

when I try to submit the query.  How can I find out which tables have 
already been joined in a query?  Or what's the best way to handle building 
a query based on multiple criteria?   I'm using SQLalchemy 1.0.0. 

Here is my pseudo-code.  

Option 1.  Option 2.  Option 3.   (any or all options can be selected, and 
they all come from the same joined table)

// base table
query = session.query(TableA)

// add on new criteria
if option 1: query = query.join(TableB).filter(TableB.option1 > X )
if option 2: query = query.join(TableB).filter(TableB.option2 > X )
if option 3: query = query.join(TableB).filter(TableB.option3 > X )

However, when attempting query.all(), this throws the above error, if I 
have selected any two options.   What I think it should be is something 
like this...

//base
query = session.query(TableA)
//join
query = query.join(TableB)
// add on new criteria
if option 1: query = query.filter(TableB.option1 > X )
if option 2: query = query.filter(TableB.option2 > X )
if option 3: query = query.filter(TableB.option3 > X )

but I don't want to join to TableB if I don't have to.  I have many 
different tables where this kind of situation applies, and it seems 
inefficient to join to all other tables just in case I may need to filter 
on something.  
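
One pattern I'm considering (a sketch with the placeholder names above) is to
collect the filter criteria first and join only once, if anything was
selected:

filters = []
if option 1: filters.append(TableB.option1 > X)
if option 2: filters.append(TableB.option2 > X)
if option 3: filters.append(TableB.option3 > X)

query = session.query(TableA)
if filters:
    query = query.join(TableB).filter(*filters)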

Any thoughts, help or suggestions?
Thanks, Brian







[sqlalchemy] Reproducible oddity in with_for_update()

2015-06-15 Thread Brian Candler
I have an issue which I have boiled down to a full test case below. This 
test program reproduces the problem with both sqlalchemy 0.9.9 and 1.0.5, 
under python 2.7.6 and ubuntu 14.04, and PyMySQL-0.6.2.

There are a combination of circumstances:

1. After you rollback a session, touching any attribute of an object (even 
just accessing its id) causes the whole object to be re-read from the 
database. That's OK.
2. Reading the object again using a new query and with_for_update() 
generates a fresh query with SELECT .. FOR UPDATE. This is what I expect. 
It also correctly blocks if another client has the row locked.
3. However, once the query has completed, the data seen in the object 
appears to be the value read from the previous query, not the SELECT .. FOR 
UPDATE one.

In the test program, a database object is created with val="abc". Two 
threads both read the row under a lock, append "X" and write it back again. 
So the final answer should be "abcXX", but in fact it's "abcX".

Points to note:

- this has to be run on a proper database (I am using mysql). sqlite 
doesn't support SELECT .. FOR UPDATE.

- I have some workarounds. If instead of reading a new object I do 
db.refresh(v, lockmode="update") then all is fine. However I understood 
that the lockmode=string interface is being deprecated.

Similarly, if I discard the object using db.expire(v) before reading it 
again then it also works correctly. But in any case, I'd like to understand 
why it doesn't work to fetch the new object in the way I am, and I suspect 
a bug. Surely if SQLAlchemy has just issued a SELECT .. FOR UPDATE then the 
object should be updated with the values of that SELECT?
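
A third variant I could try, assuming Query.populate_existing() applies here
(it is meant to overwrite the attributes of objects already in the identity
map with the freshly selected values):

v = db.query(Foo).filter_by(id=id).with_for_update().populate_existing().one()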

Regards,

Brian.

-
from __future__ import absolute_import, division, print_function, unicode_literals
from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

from contextlib import contextmanager
from six.moves.queue import Queue, Empty
from threading import Thread

DEFAULT_DB_URI = 'mysql+pymysql://root@localhost/testdb'

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    val = Column(String(255))

engine = create_engine(DEFAULT_DB_URI, echo=True)
try: Base.metadata.drop_all(engine)
except: pass
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

@contextmanager
def private_session():
    s = Session()
    try:
        yield s
    finally:
        s.rollback()
        s.close()

def runner(ref, omsg, imsg):
    with private_session() as db:
        print("Read object")
        v = db.query(Foo).filter_by(id=ref).one()
        print("Discard session")
        db.rollback()
        print("Get object's id")
        id = v.id
        print("Reload object with FOR UPDATE")
        # db.expire(v)
        v = db.query(Foo).filter_by(id=id).with_for_update().one()
        # Alt: db.refresh(v, lockmode='update')
        print("v.val=%r" % v.val)
        omsg.put("started")
        imsg.get()
        v.val += "X"
        db.commit()

with private_session() as db:
    f = Foo(id=1, val="abc")
    db.add(f)
    db.commit()

o1 = Queue()
i1 = Queue()
o2 = Queue()
i2 = Queue()

t1 = Thread(target=runner, kwargs={"ref": 1, "omsg": o1, "imsg": i1})
t2 = Thread(target=runner, kwargs={"ref": 1, "omsg": o2, "imsg": i2})

t1.start()
assert o1.get(True, 1) == "started"
# Next thread should block on SELECT FOR UPDATE
t2.start()
try:
    o2.get(True, 1)
    raise RuntimeError("This thread should be blocked on SELECT FOR UPDATE")
except Empty:
    pass
# Let first thread complete
i1.put("go")
# Now second thread is unblocked
assert o2.get(True, 1) == "started"
i2.put("go")

t1.join(2)
assert not t1.isAlive()
t2.join(2)
assert not t2.isAlive()

# Check final state
print("*** FINISHED ***")
id = f.id
print("*** RESULTS ***")
print("id=%d" % f.id)
print("val=%r" % f.val)

Base.metadata.drop_all(engine)



[sqlalchemy] correct syntax to pass an sqlalchemy table class into postgresql functions?

2015-05-20 Thread Brian Cherinka
Hi, 

I've built a postgresql function that takes as input a row from a table, 
and returns the sum of a particular column (of type array) between two 
specified indices.  Here is how I've defined my function

CREATE OR REPLACE FUNCTION specres_sum_segment(cube datadb.cube, index1 integer, index2 integer) RETURNS numeric
LANGUAGE plpgsql STABLE
AS $$
DECLARE result numeric;
BEGIN
    select sum(f) from unnest(cube.specres[index1:index2]) as f into result;
    return result;
END; $$;


and here is how it works in psql.  

select c.pk from datadb.cube as c where(specres_sum_segment(c,2000,2005) > 12000);


This works and returns the cube entries where this condition is true.  Now 
I'm trying to call this function with an SQLalchemy query.  I've mapped a 
DeclarativeMeta class called Cube to my datadb.cube table, but when I try 
to run my session query I'm getting an error.   

My sqlalchemy session query is 

session.query(datadb.Cube).filter(func.specres_sum_segment(datadb.Cube, 2000, 2005) == 12000).all()


but I get the error

ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 
'DeclarativeMeta'


What is the right syntax to use when passing a mapped SQLalchemy class into 
a function so postgresql will understand it?  I'm using SQLalchemy 1.0.0 
and PostgreSQL 9.3.  Any help would be appreciated.  Thanks.  
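
One workaround I'm experimenting with (a sketch only, I'm not sure it is
idiomatic): pass the table as a row reference via literal_column(), since
PostgreSQL accepts the table name itself as a composite-typed argument:

from sqlalchemy import func, literal_column

session.query(datadb.Cube).filter(
    func.specres_sum_segment(literal_column('datadb.cube'), 2000, 2005) > 12000).all()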

Cheers, Brian





Re: [sqlalchemy] correct syntax to pass an sqlalchemy table class into postgresql functions?

2015-05-20 Thread Brian Cherinka
Hi Michael, 

Here is the beginning of my Cube class in SQLalchemy.  It also has a bunch 
of properties and methods I'm not printing here, to keep it short   

class Cube(Base, ArrayOps):
    __tablename__ = 'cube'
    __table_args__ = {'autoload' : True, 'schema' : 'mangadatadb',
                      'extend_existing' : True}

    specres = deferred(Column(ARRAY(Float)))

    def __repr__(self):
        return 'Cube (pk={0}, plate={1}, ifudesign={2}, tag={3})'.format(
            self.pk, self.plate, self.ifu.name, self.pipelineInfo.version.version)

Here is the chain (going backwards) that produces my Base

Base = db.Base

db = DatabaseConnection()

and here is my DatabaseConnection class

class DatabaseConnection(object):

    _singletons = dict()

    def __new__(cls, database_connection_string=None, expire_on_commit=True):
        """This overrides the object's usual creation mechanism."""

        if not cls in cls._singletons:
            assert database_connection_string is not None, \
                "A database connection string must be specified!"
            cls._singletons[cls] = object.__new__(cls)

            # ------------------------------------------------------------
            # This is the custom initialization
            # ------------------------------------------------------------
            me = cls._singletons[cls]  # just for convenience (think "self")

            me.database_connection_string = database_connection_string

            # change 'echo' to print each SQL query (for debugging/optimizing/the curious)
            me.engine = create_engine(me.database_connection_string, echo=False)

            me.metadata = MetaData()
            me.metadata.bind = me.engine
            me.Base = declarative_base(bind=me.engine)
            me.Session = scoped_session(sessionmaker(bind=me.engine, autocommit=True,
                                                     expire_on_commit=expire_on_commit))

Cheers, Brian

On Wednesday, May 20, 2015 at 12:51:36 PM UTC-4, Michael Bayer wrote:

  

 On 5/20/15 12:09 PM, Brian Cherinka wrote:
  
 Hi,  

  I've built a postgresql function that takes as input a row from a table, 
 and returns the sum of a particular column (of type array) between two 
 specified indices.  Here is how I've defined my function

   CREATE OR REPLACE FUNCTION specres_sum_segment(cube datadb.cube, index1 
 integer, index2 integer) RETURNS numeric
  LANGUAGE plpgsql STABLE
  AS $$
  
   DECLARE result numeric;
  BEGIN
   select sum(f) from unnest(cube.specres[index1:index2]) as f into result;
   return result;
  END; $$;
  
  
  and here is how it works in psql.  

  select c.pk from datadb.cube as c where(specres_sum_segment(c,2000,2005) > 12000);


  This works and returns the cube entries where this condition is true. 
  Now I'm trying to call this function with an SQLalchemy query.  I've 
 mapped a DeclarativeMeta class called Cube to my datadb.cube table, but 
 when I try to run my session query I'm getting an error.   

  My sqlalchemy session query is 

  
 session.query(datadb.Cube).filter(func.specres_sum_segment(datadb.Cube,2000,2005)==
  
 12000).all()


  but I get the error

  ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 
 'DeclarativeMeta'

  sounds like your use of declarative is incorrect, please share the means 
 by which the Cube class is declared as well as its base.




  
  What is the right syntax to use when passing a mapped SQLalchemy class 
 into a function so postgresql will understand it?  I'm using SQLalchemy 
 1.0.0 and PostgreSQL 9.3.  Any help would be appreciated.  Thanks.  

  Cheers, Brian

  


Re: [sqlalchemy] The use of SQLAlchemy for a long term project

2015-04-20 Thread Van Klaveren, Brian N.
Thanks for the detailed response. I didn't think to look at Red Hat's packages to 
see if they backported security fixes, so that's good to know.

As for the undefined behavior with respect to less-than idiomatic programming, 
I think that's something we'll definitely need to keep in mind and hopefully be 
able to enforce with code reviews.

I've personally been wanting to use SQLAlchemy, but it's important we 
understand implications for decisions like this when the lifetime of code is 
guaranteed to surpass a decade. I'm pretty sure we'll end up using SQLAlchemy 
for the long term.

Thanks again,
Brian


On Apr 18, 2015, at 2:47 PM, Mike Bayer mike...@zzzcomputing.com wrote:



On 4/17/15 6:58 PM, Van Klaveren, Brian N. wrote:
Hi,

I'm investigating the use and dependency on SQLAlchemy for a long-term 
astronomy project. Given Version 1.0 just came out, I've got a few questions 
about it.

1. It seems SQLAlchemy generally EOLs versions after about two releases/years. 
Is this an official policy? Is this to continue with version 1.0 as well? Or is 
it possible 1.0 might be a something of a long-term release.
2. While well documented and typically minimal, SQLAlchemy does have occasional 
API and behavioral changes to be aware of between versions. Is the 1.0 API more 
likely to be stable on the time frame of ~4 years?

Put another way, would you expect that it should be easier to migrate from 
version 1.0 to the 1.4 (or whatever the current version is) of SQLAlchemy in 
five years than it would be to migrate from 0.6 to 1.0 today.

I know these questions are often hard to answer with any certainty, but these 
sorts of projects typically outlive the software they are built on and are 
often underfunded as far as software maintenance goes, so we try to plan 
accordingly.

(Of course, some people just give up and throw everything in VMs behind 
firewalls)
Well the vast majority of bugs that are fixed, like 99% of them, impact only 
new development, that is, they only have a positive impact someone who is 
writing new code, using new features of their database backend, or otherwise 
attempting to do something new; they typically only serve to raise risk and 
decrease stability of code that is not under active development and is 
stabilized on older versions of software.

These kinds of issues mean that some way of structuring tables, mapped classes, 
core SQL or DDL objects, ORM queries, or calls to a Session produce some 
unexpected result, but virtually always, this unexpected result is consistent 
and predictable.   An application that is sitting on 0.5 or 0.6 and is running 
perfectly fine, because it hasn't hit any of these issues, or quite often 
because it has and is working around them (or even relying upon their behavior) 
would not benefit at all from these kinds of fixes being backported, but would 
instead have a greater chance of hitting a regression or a change in 
assumptions if lots of bugfixes were being backported from two or three major 
versions forward.

So it's not like we don't backport issues three or four years back because it's 
too much trouble, it's because these backports wouldn't benefit anyone and they 
would only serve to wreak havoc with old and less maintained applications when 
some small new feature or improvement in behavioral consistency breaks some 
assumption made by that application.

As far as issues that are more appropriate for backporting, which would be 
security fixes and stability enhancements, we almost never have issues like 
that; the issues we have regarding stability, like memory leaks and race 
conditions, again typically occur in conjunction with a user application doing 
something strange and unexpected (e.g. new development), and as far as security 
issues the only issue we ever had like that even resembled a security issue was 
issue 2116 involving limit/offset integers not being escaped, which was 
backported from 0.7 to 0.6.  Users who actually needed enterprise-level 
longevity who happened to be using for example the Red Hat package could see 
the backport for this issue backported all the way to their 0.5 and 0.3 
packages.  But presence of security/memory leak/stability issues in modern 
versions is extremely rare, and we generally only see new issues involving 
memory or stability as a result of new features (e.g. regressions).

There's also the class of issues that involve performance enhancements.   Some 
of these features would arguably be appropriate to backport more than several 
major versions, but again they are often the result of significant internal 
refactorings and definitely would raise risk for an older application not 
undergoing active development.   An older application that wants to take 
advantage of newer performance features would be better off going through the 
upgrade process than risking running on top of a library that is a hybrid of 
very old code and backported newer approaches, which will see

[sqlalchemy] The use of SQLAlchemy for a long term project

2015-04-17 Thread Van Klaveren, Brian N.
Hi,

I'm investigating the use and dependency on SQLAlchemy for a long-term 
astronomy project. Given Version 1.0 just came out, I've got a few questions 
about it.

1. It seems SQLAlchemy generally EOLs versions after about two releases/years. 
Is this an official policy? Is this to continue with version 1.0 as well? Or is 
it possible 1.0 might be a something of a long-term release.
2. While well documented and typically minimal, SQLAlchemy does have occasional 
API and behavioral changes to be aware of between versions. Is the 1.0 API more 
likely to be stable on the time frame of ~4 years?

Put another way, would you expect that it should be easier to migrate from 
version 1.0 to the 1.4 (or whatever the current version is) of SQLAlchemy in 
five years than it would be to migrate from 0.6 to 1.0 today.

I know these questions are often hard to answer with any certainty, but these 
sorts of projects typically outlive the software they are built on and are 
often underfunded as far as software maintenance goes, so we try to plan 
accordingly. 

(Of course, some people just give up and throw everything in VMs behind 
firewalls)

Brian




[sqlalchemy] JSONB double casting

2015-03-18 Thread Brian Findlay
I'm having some difficulty using SQLAlchemy's jsonb operators to produce my 
desired SQL.

Intended SQL:

SELECT *
FROM foo
WHERE foo.data->'key1' ? 'a'

...where `foo.data` is formatted like this:

{
'key1': ['a', 'b', 'c'],
'key2': ['d', 'e', 'f']
}

So, I'm trying to find records where the array associated with `key1` 
contains some value, 'a' in this case.

I thought it'd be a straightforward query, like:

sess.query(Foo).filter(Foo.data['key1'].has_key('a')).all()

But this is yielding:

AttributeError: Neither 'JSONElement' object nor 'Comparator' object 
has an attribute 'has_key'

So I changed the query to:

sess.query(Foo).filter(Foo.data['key1'].cast(JSONB).has_key('a')).all()

But this query produces the following SQL statement:

SELECT *
FROM foo
WHERE (foo.data->>'key1')::JSONB ? 'a'

Here, the `->>` operator is casting the jsonb value associated with key 
`key1` to text, which I then have to cast back to jsonb in order to use the 
`?` operator (jsonb-specific) to check if a string is contained in the 
first value.

Any ideas?

Thanks.



[sqlalchemy] Re: JSONB double casting

2015-03-18 Thread Brian Findlay
Test data attached. Perhaps I'm doing something else wrong?

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.session import Session

from sqlalchemy.schema import Column
from sqlalchemy.types import Integer
from sqlalchemy.dialects.postgresql import JSONB

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    data = Column(JSONB)

database_url = "postgresql://brian@10.0.1.10:5432/test"
e = create_engine(database_url, echo=True)
Base.metadata.create_all(e)

sess = Session(e)

# Insert data

user1 = Foo(id=1, data={'key1': ['a', 'b', 'c'], 'key2': ['d', 'e', 'f']})
user2 = Foo(id=2, data={'key1': ['g', 'h', 'i'], 'key2': ['j', 'k', 'l']})
user3 = Foo(id=3, data={'key1': ['m', 'n', 'o'], 'key2': ['p', 'q', 'r']})

sess.add_all([user1, user2, user3])
sess.commit()

# Tests

# stmt1 = sess.query(Foo).filter(Foo.data['key1'].has_key('a')).all()
# stmt2 = sess.query(Foo).filter(Foo.data['key1'].cast(JSONB).has_key('a')).all()


Re: [sqlalchemy] JSONB double casting

2015-03-18 Thread Brian Findlay
`type_coerce()` did the trick. Thanks, Mike!
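
For the archives, the working query (a sketch against the Foo model from my
test file):

from sqlalchemy import type_coerce

sess.query(Foo).filter(
    type_coerce(Foo.data['key1'], JSONB).has_key('a')).all()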


On Wednesday, March 18, 2015 at 12:55:57 PM UTC-4, Michael Bayer wrote:


 try using the type_coerce() function instead of cast, it should give you 
 the 
 has_key() but won’t change the SQL.   (type_coerce(Foo.data['key'], 
 JSONB).has_key()) 

 just a guess. 







Re: [sqlalchemy] SQL join between two tables from two databases

2015-01-23 Thread Brian Glogower
Simon,

I was able to get the select join working with the following:

select = """
    SELECT hostname, sha256
    FROM hosts
    JOIN environments ON hosts.environment_id = environments.environmentID
    JOIN zones ON environments.zone_id = zones.ZoneID
    JOIN %s.ssh_host_keys USING (hostname)
    WHERE ZoneName = %s
""" % (self.config['db']['private']['database'], zone)
rp = self.session.execute(select)

It might not be the best, but it works. Luckily, I only need to read one
table from the other database.
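
If I later need more tables from the other database, a mapped-class sketch of
the same join (assuming the schema and column types from the Table definition
earlier in the thread) would be:

class SSHHostKey(Base):
    __tablename__ = 'ssh_host_keys'
    __table_args__ = {'schema': 'keys'}

    hostname = Column(VARCHAR(30), primary_key=True)
    sha256 = Column(CHAR(64))

rows = (session.query(Host.hostname, SSHHostKey.sha256)
    .join(SSHHostKey, SSHHostKey.hostname == Host.hostname)
    .all())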



On 21 January 2015 at 12:31, Brian Glogower bglogo...@ifwe.co wrote:

 Simon, thanks for your response. Let me wrap my head around this and try
 it out.

 Brian

 On 21 January 2015 at 04:59, Simon King si...@simonking.org.uk wrote:

 You don't need to convert it to a Table object, but you probably do
 need to add 'schema': 'whatever' to the __table_args__ dictionary.

 In answer to your second question, I very much doubt you can use
 query.join() with 2 DB connections. query.join() simply adds an SQL
 JOIN clause to the query that is eventually sent to the database -
 there's no way of making that work with 2 separate connections.

 As an alternative, I think it should be possible to put the tables
 that exist in a separate schema in a separate SQLAlchemy MetaData
 (they'd need to use a separate declarative Base class). The MetaData
 can hold the default schema for the tables, and I *think* you should
 be able to use tables from different MetaData in query.join(). (I
 haven't tested this though).


 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/basic_use.html#accessing-the-metadata


 http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html#sqlalchemy.schema.MetaData


 http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html#metadata-describing

 Hope that helps,

 Simon

 On Wed, Jan 21, 2015 at 9:09 AM, Brian Glogower bglogo...@ifwe.co
 wrote:
  Hi Michael,
 
  Do I need to redefined mapped class ssh_host_keys as a Table object?
 
  ssh_host_keys = Table('ssh_host_keys', metadata,
  Column('hostname', VARCHAR(30), primary_key=True),
  Column('pub', VARCHAR(1600)),
  Column('sha256', CHAR(64)),
  Column('priv', VARCHAR(2000)),
  schema='keys',
  mysql_engine='InnoDB'
  )
 
  Do I need to convert mapped class 'Host' to a Table object as well? I
 would
  prefer not to touch this class, since it is part of a separate module,
 but
  if needed, it is possible.
 
  class Host(Base):
  __tablename__ = 'hosts'
  __table_args__ = {'mysql_engine': 'InnoDB'}
 
  id = Column(u'HostID', INTEGER(), primary_key=True)
  hostname = Column(String(length=30))
 
  Can you please give an example how to use schema with a query.join(),
 for my
  scenario (two sessions, one for each DB connection)?
 
  Thanks,
  Brian
 
  On 20 January 2015 at 16:12, Michael Bayer mike...@zzzcomputing.com
 wrote:
 
 
 
  Jonathan Vanasco jonat...@findmeon.com wrote:
 
  
  
   On Tuesday, January 20, 2015 at 5:34:27 PM UTC-5, Brian Glogower
 wrote:
  
   Thanks for the idea. Do you have an example?
  
   I don't have a personal example handle, but from the docs...
  
  
  
 http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-literal-sql
  
session.query(User).from_statement(
   ... text(SELECT * FROM users where
 name=:name)).
   \
  
   ... params(name='ed').all()
   [User(name='ed', fullname='Ed Jones', password='f8s7ccs')]
  
  
   So you should be able to do something like:
  
  query = Session.query(Host)\
  .from_statement(
 sqlaclhemy.text(SELECT hostname, sha256 FROM
 DATABASE1.hosts
   LEFT JOIN DATABASE2.ssh_host_keys ssh_host_keys ON
 ssh_host_keys.hostname ==
   hosts.hostname)
  )
 
  why is text() needed here? These could be the Table objects set up
 with
  “schema=‘schema name’” to start with, then you’d just do the join with
  query.join().
 
 

Re: [sqlalchemy] SQL join between two tables from two databases

2015-01-21 Thread Brian Glogower
Simon, thanks for your response. Let me wrap my head around this and try it
out.

Brian

On 21 January 2015 at 04:59, Simon King si...@simonking.org.uk wrote:

 You don't need to convert it to a Table object, but you probably do
 need to add 'schema': 'whatever' to the __table_args__ dictionary.

 In answer to your second question, I very much doubt you can use
 query.join() with 2 DB connections. query.join() simply adds an SQL
 JOIN clause to the query that is eventually sent to the database -
 there's no way of making that work with 2 separate connections.

 As an alternative, I think it should be possible to put the tables
 that exist in a separate schema in a separate SQLAlchemy MetaData
 (they'd need to use a separate declarative Base class). The MetaData
 can hold the default schema for the tables, and I *think* you should
 be able to use tables from different MetaData in query.join(). (I
 haven't tested this though).


 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/basic_use.html#accessing-the-metadata


 http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html#sqlalchemy.schema.MetaData


 http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html#metadata-describing

 Hope that helps,

 Simon

 On Wed, Jan 21, 2015 at 9:09 AM, Brian Glogower bglogo...@ifwe.co wrote:
  Hi Michael,
 
  Do I need to redefined mapped class ssh_host_keys as a Table object?
 
  ssh_host_keys = Table('ssh_host_keys', metadata,
  Column('hostname', VARCHAR(30), primary_key=True),
  Column('pub', VARCHAR(1600)),
  Column('sha256', CHAR(64)),
  Column('priv', VARCHAR(2000)),
  schema='keys',
  mysql_engine='InnoDB'
  )
 
  Do I need to convert mapped class 'Host' to a Table object as well? I
 would
  prefer not to touch this class, since it is part of a separate module,
 but
  if needed, it is possible.
 
  class Host(Base):
  __tablename__ = 'hosts'
  __table_args__ = {'mysql_engine': 'InnoDB'}
 
  id = Column(u'HostID', INTEGER(), primary_key=True)
  hostname = Column(String(length=30))
 
  Can you please give an example how to use schema with a query.join(),
 for my
  scenario (two sessions, one for each DB connection)?
 
  Thanks,
  Brian
 
  On 20 January 2015 at 16:12, Michael Bayer mike...@zzzcomputing.com
 wrote:
 
 
 
  Jonathan Vanasco jonat...@findmeon.com wrote:
 
  
  
   On Tuesday, January 20, 2015 at 5:34:27 PM UTC-5, Brian Glogower
 wrote:
  
   Thanks for the idea. Do you have an example?
  
   I don't have a personal example handle, but from the docs...
  
  
  
 http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-literal-sql
  
session.query(User).from_statement(
   ... text(SELECT * FROM users where name=:name)).
   \
  
   ... params(name='ed').all()
   [User(name='ed', fullname='Ed Jones', password='f8s7ccs')]
  
  
   So you should be able to do something like:
  
  query = Session.query(Host)\
  .from_statement(
 sqlaclhemy.text(SELECT hostname, sha256 FROM
 DATABASE1.hosts
   LEFT JOIN DATABASE2.ssh_host_keys ssh_host_keys ON
 ssh_host_keys.hostname ==
   hosts.hostname)
  )
 
  why is text() needed here? These could be the Table objects set up
 with
  “schema=‘schema name’” to start with, then you’d just do the join with
  query.join().
 
 

Re: [sqlalchemy] SQL join between two tables from two databases

2015-01-21 Thread Brian Glogower
Hi Michael,

Do I need to redefined mapped class ssh_host_keys as a Table object?

ssh_host_keys = Table('ssh_host_keys', metadata,
    Column('hostname', VARCHAR(30), primary_key=True),
    Column('pub', VARCHAR(1600)),
    Column('sha256', CHAR(64)),
    Column('priv', VARCHAR(2000)),
    schema='keys',
    mysql_engine='InnoDB'
)

Do I need to convert mapped class 'Host' to a Table object as well? I would
prefer not to touch this class, since it is part of a separate module, but
if needed, it is possible.

class Host(Base):
    __tablename__ = 'hosts'
    __table_args__ = {'mysql_engine': 'InnoDB'}

    id = Column(u'HostID', INTEGER(), primary_key=True)
    hostname = Column(String(length=30))

Can you please give an example how to use schema with a query.join(), for
my scenario (two sessions, one for each DB connection)?

Thanks,
Brian

On 20 January 2015 at 16:12, Michael Bayer mike...@zzzcomputing.com wrote:



 Jonathan Vanasco jonat...@findmeon.com wrote:

 
 
  On Tuesday, January 20, 2015 at 5:34:27 PM UTC-5, Brian Glogower wrote:
 
  Thanks for the idea. Do you have an example?
 
  I don't have a personal example handle, but from the docs...
 
 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#using-literal-sql
 
   session.query(User).from_statement(
  ... text(SELECT * FROM users where name=:name)).
  \
 
  ... params(name='ed').all()
  [User(name='ed', fullname='Ed Jones', password='f8s7ccs')]
 
 
  So you should be able to do something like:
 
 query = Session.query(Host)\
     .from_statement(
         sqlalchemy.text("SELECT hostname, sha256 FROM DATABASE1.hosts "
             "LEFT JOIN DATABASE2.ssh_host_keys ssh_host_keys "
             "ON ssh_host_keys.hostname = hosts.hostname")
     )

 why is text() needed here? These could be the Table objects set up with
 “schema=‘schema name’” to start with, then you’d just do the join with
 query.join().

