On Jan 14, 2014, at 4:56 PM, Dimitris Theodorou <dimitris.theodo...@gmail.com> 
wrote:

> Tried your approach but I can't get past the inspector wall. 
> 
> I set the search_path using connection.execute(), then call 
> context.configure(connection, #etc..). In the diff between the metadata and 
> the inspector results, my metadata will always contain schema-less tables 
> (due to setting search_path in advance), while the inspector, with 
> include_schemas=True, will report tables qualified with their schemas (which 
> always ends up with the "Detected removed table" result), and with 
> include_schemas=False will report only the tables in the 'public' schema, 
> which are useless.
> Any pointers on configuring the inspector to disregard schemas or use the 
> existing search_path on the connection?

OK, I know why that is: the engine detects the "default schema" when it first 
connects, and the inspector then takes that schema into account and uses it in 
the information_schema queries, so it's not going to find things.

OK, well, there is a ticket to make what you want possible at the connection 
level; it has a patch attached but needs more work. That would allow you to say 
conn = connection.execution_options(default_schema='foo'), but it's not ready 
yet.  There are also ways you can get this using execution events, e.g. you can 
intercept the inspector queries and inject a different schema name, but that 
will be intricate.
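
To give an idea of the shape of that (a rough sketch only, not something I'm 
recommending here — "engine", the target schema name, and the hardcoded 
'public' default are all assumptions):

from sqlalchemy import event

TARGET_SCHEMA = "schemax"   # hypothetical schema to redirect reflection to

# retval=True means the listener can hand back a modified statement/parameters
@event.listens_for(engine, "before_cursor_execute", retval=True)
def inject_schema(conn, cursor, statement, parameters, context, executemany):
    # crude: wherever the reflection queries bind the detected default
    # schema ('public'), substitute the schema we actually want inspected
    if isinstance(parameters, dict):
        parameters = {
            key: (TARGET_SCHEMA if value == "public" else value)
            for key, value in parameters.items()
        }
    return statement, parameters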

So let's go back to the metadata.  You can make a copy of a given MetaData 
object for a new schema, so maybe do it this way:

from sqlalchemy import MetaData

meta = my_original_metadata
meta_schemax = MetaData()
for table in meta.tables.values():
    # copy each Table into the new MetaData, qualified with the target schema
    table.to_metadata(meta_schemax, schema="schemax")

So then you'll have another MetaData (or you can actually lump them all into 
one MetaData() if you want) with multiple copies of the same Table under 
different schemas.  Then if you run autogenerate with include_schemas=True, it 
would do the entire thing in one pass (optionally using the "include_object" 
hook to limit which schemas/tables are examined).
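
A rough sketch of how that might look in env.py (illustrative only — 
"combined_metadata" is the MetaData holding the schema-qualified copies, 
"connectable" is your Engine, and the schema names are made up):

def include_object(object, name, type_, reflected, compare_to):
    # only compare tables living in the schemas we actually manage
    if type_ == "table":
        return object.schema in ("schemax", "schemay")
    return True

with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=combined_metadata,
        include_schemas=True,
        include_object=include_object,
    )
    with context.begin_transaction():
        context.run_migrations()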

Sorry for the reversal here; if I were just trying it out I'd probably settle 
on something sooner.


> 
> On Tuesday, January 14, 2014 5:40:22 PM UTC+1, Michael Bayer wrote:
> this is a hard problem, but my notion is that since you are setting up the 
> “schema” by manipulating search_path, you should be making sure this is set 
> to what it needs to be before autogenerate runs.
> 
> if you want to autogenerate for three schemas, you’d modify your env.py to 
> run “context.run_migrations()” three times, each time setting “search_path” 
> on the connection ahead of time to the appropriate value.   That is, I think 
> this should be approached like a “multiple database” problem, where you’d 
> also have an alembic_version table in each schema separately.
> 
> I would not get into setting “schema” on the MetaData object as it is more 
> portable just to have it set up on the connection.
> 
> 
> On Jan 14, 2014, at 9:16 AM, Dimitris Theodorou <dimitris....@gmail.com> 
> wrote:
> 
>> My progress so far has been to configure my MetaData.schema before importing 
>> my Declarative objects (which is going to associate ALL my tables with that 
>> schema), and then filter on the tables that I know belong to that particular 
>> schema. However, that only adds a limited subset of the tables to the created 
>> version script; I want to have all modifications in one script.
>> 
>> On Tuesday, January 14, 2014 3:09:55 PM UTC+1, Dimitris Theodorou wrote:
>> Hi,
>> 
>> Also posted this question at 
>> http://stackoverflow.com/questions/21109218/alembic-support-for-multiple-postgres-schemas,
>>  not sure what the best place to continue is.
>> 
>> My problem is the following:
>> 
>> I have one SQLAlchemy model with no schema specifications. In my database, 
>> though, I duplicate several tables over multiple schemas, which correspond to 
>> different application users; each schema contains a subset of the SQLAlchemy 
>> model's tables. The schema is set at application run-time to the proper value 
>> based on the logged-in user, with session.execute("SET search_path TO 
>> client1,shared") for example. I also have a shared schema which contains some 
>> tables; the rest of the tables are duplicated over multiple schemas.
>> 
>> I want to use Alembic's --autogenerate to migrate all my schemas. The 
>> default --autogenerate behavior detects multiple schemas that do not exist in 
>> the model and ends up deleting the schemas and re-creating every table in the 
>> default schema.
>> 
>> I would really like to use --autogenerate though, with the proper plumbing 
>> to set the schemas correctly. Any suggestions on if/how Alembic's API can do 
>> this?
>> 
>> Regards,
>> Dimitris
>> 
> 
> 
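
(For reference, the per-schema "multiple database" style idea from the quoted 
reply above would look roughly like this in env.py — the schema names, 
"connectable" and "target_metadata" are placeholders, and given the reversal 
above I'd now go with the copy-to-schema approach instead:)

SCHEMAS = ["client1", "client2"]   # hypothetical per-user schemas

with connectable.connect() as connection:
    for schema in SCHEMAS:
        # point search_path at this schema (plus the shared one) first
        connection.execute("SET search_path TO %s, shared" % schema)
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table_schema=schema,  # separate alembic_version per schema
        )
        with context.begin_transaction():
            context.run_migrations()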

