Tried your approach but I can't get past the inspector wall. 

I set the search_path using connection.execute(), then call 
context.configure(connection, ...). In the diff between the metadata and the 
inspector results, my metadata will always contain schema-less tables 
(because search_path is set in advance), while the inspector will either

   1. with include_schemas=True, report tables qualified with their schema, 
   which always ends in the "Detected removed table" result, or 
   2. with include_schemas=False, report only the tables in the 'public' 
   schema, which are useless.

Any pointers on configuring the inspector to disregard schemas or use the 
existing search_path on the connection?
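
For reference, here is a sketch of the relevant part of my env.py (the 
"myapp.models" import and the client1,shared search_path are placeholders 
for my actual setup; the rest follows the standard online-migration template):

```python
# env.py fragment (illustrative; "myapp.models" and the
# client1,shared search_path are placeholders for my actual setup)
from alembic import context
from sqlalchemy import engine_from_config, pool, text

from myapp.models import Base  # hypothetical models module
target_metadata = Base.metadata


def run_migrations_online():
    config = context.config
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        # Point unqualified table names at the tenant schemas before
        # autogenerate compares the metadata against the database.
        connection.execute(text("SET search_path TO client1,shared"))
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # The mismatch described above: True makes the inspector report
            # schema-qualified tables, which diff as "Detected removed table"
            # against my schema-less metadata; False only inspects 'public'.
            include_schemas=False,
        )
        with context.begin_transaction():
            context.run_migrations()


run_migrations_online()
```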

On Tuesday, January 14, 2014 5:40:22 PM UTC+1, Michael Bayer wrote:
>
> this is a hard problem, but my notion is that since you are setting up the 
> “schema” by manipulating search_path, you should be making sure this is set 
> to what it needs to be before autogenerate runs.
>
> if you want to autogenerate for three schemas, you’d modify your env.py to 
> run “context.run_migrations()” three times, each time setting “search_path” 
> on the connection ahead of time to the appropriate value.   That is, I 
> think this should be approached like a “multiple database” problem, where 
> you’d also have an alembic_version table in each schema separately.
>
> I would not get into setting “schema” on the MetaData object as it is more 
> portable just to have it set up on the connection.
>
>
> On Jan 14, 2014, at 9:16 AM, Dimitris Theodorou 
> <dimitris....@gmail.com> 
> wrote:
>
> My progress so far has been to configure my Metadata.schema before 
> importing my Declarative objects (which associates ALL my tables 
> with that schema), and then filter on the tables that I know belong to that 
> particular schema. However, that only adds a limited subset of the tables to 
> the generated version script; I want all modifications in one script.
>
> On Tuesday, January 14, 2014 3:09:55 PM UTC+1, Dimitris Theodorou wrote:
>>
>> Hi,
>>
>> Also posted this question at 
>> http://stackoverflow.com/questions/21109218/alembic-support-for-multiple-postgres-schemas,
>>  
>> not sure what the best place to continue is.
>>
>> My problem is the following:
>>
>> I have one SQLAlchemy model with no schema specifications. In my 
>> database, though, I duplicate several tables over multiple schemas, which 
>> correspond to different application users; each schema contains a subset 
>> of the model's tables. The schema is set at application run-time to the 
>> proper value based on the logged-in user, e.g. with session.execute("SET 
>> search_path TO client1,shared"). I also have a shared schema containing 
>> some common tables; the rest of the tables are duplicated over the 
>> per-user schemas.
>>
>> I want to use Alembic's --autogenerate to migrate all my schemas. The 
>> default --autogenerate behavior detects multiple schemas that do not 
>> exist in the model, and ends up dropping those schemas' tables and 
>> re-creating every table in the default schema.
>>
>> I would really like to use --autogenerate though, with the proper 
>> plumbing to set the schemas correctly. Any suggestions on if/how Alembic's 
>> API can do this?
>> Regards,
>> Dimitris
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
