[sqlalchemy] looking for help building relationship that references an intermediate mixer table

2020-03-20 Thread Mark Aquino
I'd like to create a relationship to a table that is linked to a related 
table on a model:

I have a Vessel class, a Well class, and a Batch class.

Vessel and Well are related via a FK on Well (well.vessel_id) but Batch is 
a many to many relationship with Well, e.g.

class Batch(Base):
    __tablename__ = "batch"
    id = Column(Integer, primary_key=True)

class Well(Base):
    __tablename__ = "well"
    id = Column(Integer, primary_key=True)
    vessel_id = Column(Integer, ForeignKey("vessel.id"))
    batches = relationship("Batch", secondary="mix_well_tracked_entity")

Is it possible to make a relationship on Vessel to the Batches linked to 
its wells?

class Vessel(Base):
    __tablename__ = "vessel"
    id = Column(Integer, primary_key=True)
    wells = relationship("Well")
    wells_batches = relationship("Batch", ...???)


select *
from vessel v
join well w on w.vessel_id = v.id
join mix_well_tracked_entity mix1 on mix1.well_id = w.id
join batch b on b.id = mix1.tracked_entity_id;
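In case it helps, here is one way this can be wired up -- a sketch only: the association table's name and columns are assumed from the SQL above, and primary keys / foreign keys are filled in. It uses SQLAlchemy's "composite secondary join" pattern, with `secondary` set to the join of `well` and the association table and `viewonly=True` since the three-table path is read-only:

```python
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine, join
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Association table; name and columns assumed from the SQL in the question.
mix_well_tracked_entity = Table(
    "mix_well_tracked_entity",
    Base.metadata,
    Column("well_id", ForeignKey("well.id"), primary_key=True),
    Column("tracked_entity_id", ForeignKey("batch.id"), primary_key=True),
)

class Batch(Base):
    __tablename__ = "batch"
    id = Column(Integer, primary_key=True)

class Well(Base):
    __tablename__ = "well"
    id = Column(Integer, primary_key=True)
    vessel_id = Column(Integer, ForeignKey("vessel.id"))
    batches = relationship("Batch", secondary=mix_well_tracked_entity)

class Vessel(Base):
    __tablename__ = "vessel"
    id = Column(Integer, primary_key=True)
    wells = relationship("Well")
    # Read-only relationship that hops through well + the association table.
    wells_batches = relationship(
        "Batch",
        secondary=join(
            Well,
            mix_well_tracked_entity,
            Well.id == mix_well_tracked_entity.c.well_id,
        ),
        primaryjoin="Vessel.id == Well.vessel_id",
        secondaryjoin="Batch.id == mix_well_tracked_entity.c.tracked_entity_id",
        viewonly=True,
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    batch = Batch(id=7)
    session.add(Vessel(id=1, wells=[Well(id=1, batches=[batch])]))
    session.commit()
    vessel = session.get(Vessel, 1)
    batch_ids = [b.id for b in vessel.wells_batches]
```

Loading `vessel.wells_batches` then emits essentially the three-way join shown above, with the vessel id as the bound parameter.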

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/54af97fe-9de7-4cb2-a4d9-96b33107c6af%40googlegroups.com.


Re: [sqlalchemy] Re: AM/PM question [ANSWERED]

2020-03-20 Thread Rich Shepard

On Fri, 20 Mar 2020, Jonathan Vanasco wrote:


It doesn't matter when the time is entered or what it is supposed to
reflect. The best option -- by a wide margin -- for storing any time
values is in a TIMESTAMP column ...


Jonathan,

Thanks. I've not needed this information before, and I appreciate your
providing the insight I need.

Best regards,

Rich



Re: [sqlalchemy] Re: AM/PM question

2020-03-20 Thread Jonathan Vanasco
It doesn't matter when the time is entered or what it is supposed to 
reflect.  The best option -- by a wide margin -- for storing any time 
value is a TIMESTAMP column, and for storing date+time values a DATETIME 
column.  These column types exist to streamline data storage and to 
support numerous operations in the database and in your application.

Even though you will only show the user a 12-hour AM/PM option, on 
the backend you should be storing the time in a 24-hour native time object; 
if there is a date associated, you should be storing it in a timestamp.  
That will allow you to easily query records with these fields in the future.

Timestamp/Datetime:
   * Python https://docs.python.org/3/library/datetime.html#datetime-objects
 
   * SQLAlchemy 
https://docs.sqlalchemy.org/en/13/core/type_basics.html?highlight=timestamp#sqlalchemy.types.DateTime

Time:
   * Python - https://docs.python.org/3/library/datetime.html#time-objects
   * SQLAlchemy Column - 
https://docs.sqlalchemy.org/en/13/core/type_basics.html?highlight=timestamp#sqlalchemy.types.Time

If you feel a strong need to handle this otherwise, you can use an Enum 
(https://docs.sqlalchemy.org/en/13/core/type_basics.html?highlight=timestamp#sqlalchemy.types.Enum) 
column to constrain the AM/PM options.  You can also just use a string 
column and enforce the selection with constraints.

Your best path forward, however, is to convert the user input into a Python 
time or datetime object and store that object in the database; then you can 
construct the "display/edit" widgets by setting their defaults to the 
day/month/year/hour/minute of the datetime field in your SQLAlchemy object.
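A minimal sketch of that workflow (all names here are illustrative, not from the thread): store a DateTime column, fold the 12-hour widget input into a datetime on the way in, and format back to 12-hour AM/PM for display:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Sample(Base):
    __tablename__ = "sample"
    id = Column(Integer, primary_key=True)
    collected_at = Column(DateTime)  # stored as a native 24-hour datetime

def to_datetime(date_str, hour12, minute, meridiem):
    """Combine 12-hour widget input ('AM'/'PM') into a datetime for storage."""
    hour24 = hour12 % 12 + (12 if meridiem == "PM" else 0)
    day = datetime.strptime(date_str, "%Y-%m-%d")
    return day.replace(hour=hour24, minute=minute)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Sample(id=1, collected_at=to_datetime("2020-06-01", 11, 0, "AM")))
    session.commit()
    stored = session.get(Sample, 1).collected_at

# Format back to 12-hour AM/PM for the widget defaults.
display = stored.strftime("%I:%M %p")
```

The `hour12 % 12` trick handles the 12 AM -> 0 and 12 PM -> 12 edge cases, which is where hand-rolled AM/PM handling usually goes wrong.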





Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Brian Hill
That works!

Thank you.

Brian

On Friday, March 20, 2020 at 12:11:21 PM UTC-4, Mike Bayer wrote:
>
> OK one more addition to the recipe, please do this:
>
> DB_SCHEMA = "my_foo_schema"
>
> with connectable.connect() as connection:
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
>
>     connection.dialect.default_schema_name = DB_SCHEMA
>
>
> will work completely
>
> alternatively you can use an on connect event on the engine to do the same 
> thing.
>
>
> On Fri, Mar 20, 2020, at 10:48 AM, Brian Hill wrote:
>
> this is what i tried but it generated the whole schema:
> with connectable.connect() as connection:
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
>
>     context.configure(
>         connection=connection,
>         target_metadata=metadata,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
> it works when my schema is 'public', when it matches metadata. my solution 
> will be to set my schema to 'public' and use this simple env.py.
>
> thanks,
>
> brian
>
> On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>
> I just realized that you really don't need to even use schema_translate_map 
> at all here.  If you use a completely straight env.py and simply set 
> search_path=SCHEMA, you should be able to run Alembic in entirely 
> "schemaless" mode; don't include schemas anywhere, nor would you need to set 
> it for the alembic version table.  Postgresql will also CREATE/ALTER in 
> that schema as well.  As long as your environment runs completely in just 
> one schema at a time, this can be set up entirely at the connection level.
>
>
> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
> yes
>  
>
>
> 2. you want to run autogenerate only once
>
>
> yes
>  
>
> 3. you want your migrations generated with None for schema
>
>
> yes
>  
>
> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>
>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>
> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>
>
>
> no, don't use "include_schemas".  you want to run alembic in a mode where 
> it has no idea there are other schemas to use.   i think 
> version_table_schema is OK here but don't use include_schemas.  Also turn 
> on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
> tables it is reflecting.
>
>
> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
> tomorrow to get a better idea of what's going on, but using public as my 
> dev schema to autogenerate against works. i just have to exclude the 
> alembic tables.
>
> again thanks for your help in understanding what i was trying to do.
>
> brian
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
> On Thursday, March 19, 2020 at 5:37:38 PM 

Re: [sqlalchemy] Re: AM/PM question

2020-03-20 Thread Rich Shepard

On Fri, 20 Mar 2020, Jonathan Vanasco wrote:


The common approach to this situation is storing the data as a 24-hour
timestamp in the database/SQLAlchemy and converting it to 12-hour AM/PM
for display/editing.

You could use a "12 hour" option and an enum column for AM/PM, or a string.
You're going to have a much easier time in the long term by using a standard
timestamp, though.


Jonathan,

I understand this working if the time to be entered is the current time, but
not if the time is different. For example, a sample collection time and its
analysis times will be different and, in my application, entered well after
they occurred. That's why I'm asking for guidance.

For example, a sample could be taken on 1 June at 11:00 hours, transported
to the analytical laboratory where it was analyzed on 2 June at 13:50 hours.
The report is sent to the user on 10 June and those times are entered in the
application any time after that.

Regards,

Rich



[sqlalchemy] Re: AM/PM question

2020-03-20 Thread Jonathan Vanasco
The common approach to this situation is storing the data as a 24-hour 
timestamp in the database/SQLAlchemy and converting it to 12-hour AM/PM 
for display/editing.

You could use a "12 hour" option and an enum column for AM/PM, or a string. 
You're going to have a much easier time in the long term by using a standard 
timestamp, though.



[sqlalchemy] AM/PM question

2020-03-20 Thread Rich Shepard

A couple of sqlalchemy classes need to store user-entered times. The
intended user audience is probably unfamiliar with using a 24-hour clock, so
I'll use a ttk.Radiobutton to have the user select AM or PM for the period.

How do I represent this variable in the class definition of columns? The
widget returns a boolean True/False for each choice, AM and PM, so do I use
a String containing 'AM' or 'PM'? Are there examples of this available?

Hope you're all staying healthy,

Rich



Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Mike Bayer
OK one more addition to the recipe, please do this:

DB_SCHEMA = "my_foo_schema"

with connectable.connect() as connection:
    connection.execute(f'create schema if not exists {DB_SCHEMA}')
    connection.execute(f'set search_path to {DB_SCHEMA}')

    connection.dialect.default_schema_name = DB_SCHEMA


will work completely

alternatively you can use an on connect event on the engine to do the same 
thing.
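For reference, the recipe folded into a single-tenant env.py body might look like this -- a sketch under assumptions: Postgres, `DB_SCHEMA` resolved per tenant by your own code, and `metadata` imported from your application (the `myapp.models` path is hypothetical):

```python
from alembic import context
from sqlalchemy import engine_from_config, pool

from myapp.models import metadata  # assumption: your application's MetaData

config = context.config
DB_SCHEMA = "tenant_schema"  # assumption: resolve the tenant name yourself

connectable = engine_from_config(
    config.get_section(config.config_ini_section),
    prefix="sqlalchemy.",
    poolclass=pool.NullPool,
)

with connectable.connect() as connection:
    # Make DB_SCHEMA the connection's default schema so Alembic both
    # reflects from it and emits DDL without schema qualifiers
    # (no include_schemas, no version_table_schema needed).
    connection.execute(f"create schema if not exists {DB_SCHEMA}")
    connection.execute(f"set search_path to {DB_SCHEMA}")
    connection.dialect.default_schema_name = DB_SCHEMA

    context.configure(connection=connection, target_metadata=metadata)

    with context.begin_transaction():
        context.run_migrations()
```

Running `alembic upgrade head` (or `alembic revision --autogenerate`) against a different tenant then only requires changing `DB_SCHEMA`; everything else stays schema-agnostic.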


On Fri, Mar 20, 2020, at 10:48 AM, Brian Hill wrote:
> this is what i tried but it generated the whole schema:
> with connectable.connect() as connection:
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
>
>     context.configure(
>         connection=connection,
>         target_metadata=metadata,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
> 
> it works when my schema is 'public', when it matches metadata. my solution 
> will be to set my schema to 'public' and use this simple env.py.
> 
> thanks,
> 
> brian
> 
> On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>> I just realized that you really don't need to even use schema_translate_map 
>> at all here. If you use a completely straight env.py and simply set 
>> search_path=SCHEMA, you should be able to run Alembic in entirely 
>> "schemaless" mode; don't include schemas anywhere, nor would you need to set 
>> it for the alembic version table. Postgresql will also CREATE/ALTER in that 
>> schema as well. As long as your environment runs completely in just one 
>> schema at a time, this can be set up entirely at the connection level.
>> 
>> 
>> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>>> 
>>> 
>>> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
 
 
 On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
> 
> 
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>> so let me get this straight:
>> 
>> 1. you have many schemas
>> 
> 
> yes
> 
>> 
>> 2. you want to run autogenerate only once
> 
> yes
> 
>> 3. you want your migrations generated with None for schema
> 
> yes
> 
>> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
>> arbitrary?) schema to use as the target, is that right?
> 
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
> 
>> 
>> 
>> As it stands, your autogenerate is not going to look in any schema 
>> except "public", assuming default PG search path, because you did not 
>> set "include_schemas=True".
>> 
> 
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
> 
>> 
>> If you truly want autogenerate to look at one schema and only that 
>> schema, and generate everything as non-schema qualified, then I would 
>> simply set the default schema in PG to that schema name using 
>> search_path:
>> 
>> connection.execute("SET search_path TO my_schema")
>> 
>> that way everything Alembic reflects will be from "my_schema" and it 
>> will see the schema as blank, and it should generate as such. you might 
>> need to disable the schema translate map when autogenerate runs but try 
>> it without doing that first.
>> 
> 
> it's still generating the entire schema in the revision file.
> 
> here's the updated section and it behaves the same with/without schema 
> translate map:
> 
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
> 
> 
 
 
 no, don't use "include_schemas". you want to run alembic in a mode where 
 it has no idea there are other schemas to use. i think 
 version_table_schema is OK here but don't use include_schemas. Also turn 
 on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
 tables it is reflecting.
>>> 
>>> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
>>> tomorrow to get a better idea of what's going on, but using public as my 
>>> dev schema to autogenerate against works. i just have to exclude the 
>>> alembic tables.
>>> 
>>> again thanks 

Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Brian Hill
this is what i tried but it generated the whole schema:
with connectable.connect() as connection:
    connection.execute(f'create schema if not exists {DB_SCHEMA}')
    connection.execute(f'set search_path to {DB_SCHEMA}')

    context.configure(
        connection=connection,
        target_metadata=metadata,
    )

    with context.begin_transaction():
        context.run_migrations()

it works when my schema is 'public', when it matches metadata. my solution 
will be to set my schema to 'public' and use this simple env.py.

thanks,

brian

On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>
> I just realized that you really don't need to even use schema_translate_map 
> at all here.  If you use a completely straight env.py and simply set 
> search_path=SCHEMA, you should be able to run Alembic in entirely 
> "schemaless" mode; don't include schemas anywhere, nor would you need to set 
> it for the alembic version table.  Postgresql will also CREATE/ALTER in 
> that schema as well.  As long as your environment runs completely in just 
> one schema at a time, this can be set up entirely at the connection level.
>
>
> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>
>
>
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>
> so let me get this straight:
>
> 1. you have many schemas
>
>
> yes
>  
>
>
> 2. you want to run autogenerate only once
>
>
> yes
>  
>
> 3. you want your migrations generated with None for schema
>
>
> yes
>  
>
> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  
> arbitrary?) schema to use as the target, is that right?
>
>
> yes one, but it won't be arbitrary, it will be a development schema 
> generated from the latest version (prior to the change we want to 
> autogenerate).
>  
>
>
>
> As it stands, your autogenerate is not going to look in any schema except 
> "public", assuming default PG search path, because you did not set 
> "include_schemas=True".
>
>
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
>
>
> If you truly want autogenerate to look at one schema and only that schema, 
> and generate everything as non-schema qualified, then I would simply set 
> the default schema in PG to that schema name using search_path:
>
> connection.execute("SET search_path TO my_schema")
>
> that way everything Alembic reflects will be from "my_schema" and it will 
> see the schema as blank, and it should generate as such.  you might need to 
> disable the schema translate map when autogenerate runs but try it without 
> doing that first.
>
>
> it's still generating the entire schema in the revision file.
>
> here's the updated section and it behaves the same with/without schema 
> translate map:
>
> with connectable.connect() as connection:
>
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>
>     connection.execute(f"set search_path to {DB_SCHEMA}")
>
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
>
>     with context.begin_transaction():
>         context.run_migrations()
>
>
>
>
> no, don't use "include_schemas".  you want to run alembic in a mode where 
> it has no idea there are other schemas to use.   i think 
> version_table_schema is OK here but don't use include_schemas.  Also turn 
> on INFO or DEBUG logging for the sqlalchemy logger to see exactly what 
> tables it is reflecting.
>
>
> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
> tomorrow to get a better idea of what's going on, but using public as my 
> dev schema to autogenerate against works. i just have to exclude the 
> alembic tables.
>
> again thanks for your help in understanding what i was trying to do.
>
> brian
> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>
> Here's my env.py. Thanks for the help.
> Brian
>
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>
>
>
> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?
>
>
> it's complicated and not one-size-fits-all, if you consider that to be an 
> issue
>
>
>
> My metadata doesn't have a schema and my 
> schema_translate_map={None:'my_schema'}.
>
> This works for migrations but when I use autogenerate the generated 
> revision file is the full schema and not the diff.
>
> It's detecting the alembic version table in the tenant schema. If I run 
> the autogenerate a second time with the new revision file it fails saying 
> Target 

Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Mike Bayer
I just realized that you really don't need to even use schema_translate_map at 
all here. If you use a completely straight env.py and simply set 
search_path=SCHEMA, you should be able to run Alembic in entirely "schemaless" 
mode; don't include schemas anywhere, nor would you need to set it for the 
alembic version table. Postgresql will also CREATE/ALTER in that schema as 
well. As long as your environment runs completely in just one schema at a time, 
this can be set up entirely at the connection level.


On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
> 
> 
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>>> 
>>> 
>>> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
 so let me get this straight:
 
 1. you have many schemas
 
>>> 
>>> yes
>>> 
 
 2. you want to run autogenerate only once
>>> 
>>> yes
>>> 
 3. you want your mirations genrated with None for schema
>>> 
>>> yes
>>> 
 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
 arbitrary?) schema to use as the target, is that right?
>>> 
>>> yes one, but it won't be arbitrary, it will be a development schema 
>>> generated from the latest version (prior to the change we want to 
>>> autogenerate).
>>> 
 
 
 As it stands, your autogenerate is not going to look in any schema except 
 "public", assuming default PG search path, because you did not set 
 "include_schemas=True".
 
>>> 
>>> i had it set to True and it behaves the same. i just took it out based on 
>>> your previous suggestion, but again, it behaves the same.
>>> 
 
 If you truly want autogenerate to look at one schema and only that schema, 
 and generate everything as non-schema qualified, then I would simply set 
 the default schema in PG to that schema name using search_path:
 
 connection.execute("SET search_path TO my_schema")
 
 that way everything Alembic reflects will be from "my_schema" and it will 
 see the schema as blank, and it should generate as such. you might need to 
 disable the schema translate map when autogenerate runs but try it without 
 doing that first.
 
>>> 
>>> it's still generating the entire schema in the revision file.
>>> 
>>> here's the updated section and it behaves the same with/without schema 
>>> translate map:
>>> 
>>> with connectable.connect() as connection:
>>>
>>>     # create DB_SCHEMA if it doesn't exist
>>>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>>>
>>>     # map metadata schema (None) to DB_SCHEMA
>>>     # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})
>>>
>>>     connection.execute(f"set search_path to {DB_SCHEMA}")
>>>
>>>     # set alembic version table location in DB_SCHEMA
>>>     context.configure(
>>>         connection=connection,
>>>         include_schemas=True,
>>>         target_metadata=metadata,
>>>         version_table_schema=DB_SCHEMA,
>>>     )
>>>
>>>     with context.begin_transaction():
>>>         context.run_migrations()
>>> 
>>> 
>> 
>> 
>> no, don't use "include_schemas". you want to run alembic in a mode where it 
>> has no idea there are other schemas to use. i think version_table_schema is 
>> OK here but don't use include_schemas. Also turn on INFO or DEBUG logging 
>> for the sqlalchemy logger to see exactly what tables it is reflecting.
> 
> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
> tomorrow to get a better idea of what's going on, but using public as my dev 
> schema to autogenerate against works. i just have to exclude the alembic 
> tables.
> 
> again thanks for your help in understanding what i was trying to do.
> 
> brian
 On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
> Here's my env.py. Thanks for the help.
> Brian
> 
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>>> Are there known issues with using autogenerate with multi-tenant 
>>> (schema_translate_map)?
>> 
>> it's complicated and not one-size-fits-all, if you consider that to be 
>> an issue
>> 
>> 
>>> 
>>> My metadata doesn't have a schema and my 
>>> schema_translate_map={None:'my_schema'}.
>>> 
>>> This works for migrations but when I use autogenerate the generated 
>>> revision file is the full schema and not the diff.
>>> 
>>> It's detecting the alembic version table in the tenant schema. If I run 
>>> the autogenerate a second time with the new revision file it fails 
>>> saying "Target database is not up to date".
>> 
>> are you setting include_schemas=True in your environment? I'd probably 
>> not do this. Can't help much more without a complete and concise working 
>> example of your env.py, please remove all extraneous