Re: Import data to alembic migrations

2022-05-10 Thread Mike Bayer
yes, one particular approach to this issue is discussed here: 
https://alembic.sqlalchemy.org/en/latest/front.html#installation
start with that if nothing else is working (there are many ways to do this).
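One low-tech workaround (an assumption on my part, not necessarily the approach the linked page describes) is to make the directory that holds migration_data.py importable by prepending it to sys.path, e.g. from env.py, since Alembic's version scripts are not imported as part of a package. A self-contained sketch, using a temporary directory to stand in for the alembic/ folder:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Simulate an "alembic/migration_data.py" module living next to env.py;
# in a real env.py you would compute this from __file__ instead.
migrations_dir = Path(tempfile.mkdtemp())
(migrations_dir / "migration_data.py").write_text('SOME_DATA = {"default": 42}\n')

# env.py would do: sys.path.insert(0, str(Path(__file__).parent))
sys.path.insert(0, str(migrations_dir))

# now a plain top-level import works from any migration script
migration_data = importlib.import_module("migration_data")
print(migration_data.SOME_DATA["default"])  # -> 42
```

With this in place a version script can simply do `from migration_data import SOME_DATA` rather than treating the alembic/ directory as a package.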

On Tue, May 10, 2022, at 5:34 AM, Ilya Demidov wrote:
> Hi!
> 
> I have a problem: I need to add a new field to a table and populate it with 
> some default data. I cannot hardcode the data because it differs between the 
> debug and production environments, so I want to store it in a "config file". The 
> problem is that the migration does not see my modules.
> 
> ├───alembic
> │   ├───versions
> │   │   ├───first_migration.py
> │   │   ├───second_migration.py
> │   ├───migration_data.py
> 
> If I do `from alembic.migration_data import SOME_DATA`, it throws an import 
> error while migrating. 
> Is there any good practice for storing migration data? Thanks for any advice.
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/b809c48a-fe48-4323-96ad-fa4efbe0e6bdn%40googlegroups.com
>  
> .



Re: [Renaming Alembic Table]

2022-04-11 Thread Mike Bayer


On Mon, Apr 11, 2022, at 12:47 AM, arvind aithal wrote:
> Is it possible to rename the alembic version table to a customised name? 
> 
> Once we run the migrations it creates an *alembic_version table* which stores 
> the version. 
> 
> After init, when we run the migrate command, by default it generates the 
> alembic_version table. Can this be customised with a different name, e.g. 
> myservice_alembic_version? 
> 
> The purpose is that several different microservices connect to the same db, 
> and we want to avoid losing data by giving each service its own version table. 

yes, in env.py, use the version_table parameter in the configure() call: 
https://alembic.sqlalchemy.org/en/latest/api/runtime.html?highlight=version_table#alembic.runtime.environment.EnvironmentContext.configure.params.version_table
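In env.py that looks roughly like the following (a fragment, not a complete env.py; `myservice_alembic_version` echoes the name suggested in the question):

```python
# env.py (fragment) -- sketch only; "connection" and "target_metadata"
# come from the surrounding env.py as generated by `alembic init`
from alembic import context

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    # store this service's revision identifiers in its own table
    version_table="myservice_alembic_version",
)
```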
  



> 
> Thanks,
> Aravind
> 
> 



Re: alembic upgrade heads

2021-08-25 Thread Mike Bayer
seems like you should ensure that Alembic is installed in the current Python 
environment; it looks like it is not installed correctly.
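A generic way to check what the interpreter running the `alembic` script can actually import (a diagnostic sketch, not part of Alembic):

```python
import importlib.util
import sys

def is_installed(module_name: str) -> bool:
    """Return True if module_name can be found on the current sys.path."""
    return importlib.util.find_spec(module_name) is not None

# Run this with the same interpreter the alembic script uses
# (compare the shebang of /virtual_env/bin/alembic with `which python`).
print(sys.executable)
print(is_installed("alembic"))
```

If the second line prints False, the fix is usually `pip install alembic` inside that same virtualenv.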

On Wed, Aug 25, 2021, at 2:23 PM, Vinod Rao wrote:
> Hi all:
> 
> I have been struggling to solve the following error:
> 
> File "/virtual_env/bin/alembic", line 5, in <module>
>   from alembic.config import main
> ModuleNotFoundError: No module named 'alembic.config'
> 
> Please let me know if any resolution on it.
> Thanks.
> Regards,
> Vinod Rao
> 
> 



Re: Managing multiple tenant "namespaces"

2021-04-16 Thread Mike Bayer


On Fri, Apr 16, 2021, at 7:15 AM, Nikola Radovanovic wrote:
> Hi,
> we have one Postgres DB with multiple schemas:
>  1. general (one schema, containing users and some common data shared 
> between clients/clusters)
>  2. client schemas: c_client_1, c_client_2, c_client_3, etc. (all clients 
> have the same table layout, and have some access to the general schema)
>  3. cluster schemas: cl_cluster_1, cl_cluster_2, cl_cluster_3, etc. (all 
> clusters have the same table layout, and have some access to the general schema)
> Basically, I want to find the best possible way to handle migrations, 
> preferably using one env.py/alembic.ini. 

yes, use the recipe for passing --name, but you can have everything in one 
alembic.ini file and one env.py file, as long as that file knows how to respond 
to the different environments.
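The --name recipe boils down to giving each environment its own section in alembic.ini (the section and path names below are hypothetical):

```ini
; alembic.ini (sketch) -- one file driving three environments
[general]
script_location = migrations/general

[client]
script_location = migrations/client

[cluster]
script_location = migrations/cluster
```

Then `alembic --name client upgrade head` runs the client environment; each one keeps its own versions directory and, via env.py, its own version table.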


> 
> 
> 
> Each client/cluster should have its migration table in its own schema, and 
> general has its own migration table in the general schema. Client and cluster 
> will share a base class with.
> 
> What I noticed is that when using schema_translate_map, the table layout is 
> detected OK, but when I run the migration the next time, it does not detect 
> changes in the tables; it detects the entire tables again. I guess that is 
> because schema_translate_map is not used then.

schema_translate_map is not supported by migrations right now: 
https://github.com/sqlalchemy/alembic/issues/555

use the search_path recipe at 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases
 , this has been tested.






> 
> Thank you in advance.
> 
> Regards
> 




Re: Managing multiple tenant "namespaces"

2021-04-09 Thread Mike Bayer


On Wed, Apr 7, 2021, at 5:18 AM, Nikola Radovanovic wrote:
> Hi,
> I have a bit of an unusual use-case in the software I work on. 
> 
> There are three main "namespaces" (kinds of data): the first one is "general" 
> (not multi-tenant), where we keep some common security and settings data.
> 
> The second one is "client", where we keep clients (which are 
> companies/organisations); this one is multi-tenant, so multiple clients 
> share the same table layout, and there are also relationships between "general" 
> and "client" tables. 
> 
> The third one can be seen as a "cluster", where we keep data "clients" want to 
> share among each other. "Cluster" is multi-tenant, so all "clusters" share the 
> same table layout, which differs somewhat from the one in "client". There 
> might be relationships between "cluster" and "general", but most probably not 
> with "client".

I'm going to assume "client" and "cluster" are just two multitenant systems 
that are independent of each other.

also by "multiple clients share the same table layout" I assume you mean, each 
client has their own database that's theirs alone, where the tables inside of 
it look like the tables inside of another client's database.


> 
> Now, my questions are related to what would be the best way to create a 
> migration management based on Alembic in this case?
>  1. I guess some custom migration script based on the Alembic API

well, you would have custom things to do in the env.py script to set up for the 
tenants.   i don't know how much more customization you would need other than 
connecting to the right databases.


>  2. where to keep the migration-related data table - one in the "general" schema 
> for "general" tables, then each "client" and "cluster" schema has its own

since you have to run migrations for the multitenant schemas many times, each 
"namespace" has to have its own alembic_version table.  So with separate 
versioning tables, the first level of switching is using the --name parameter 
documented at 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#run-multiple-alembic-environments-from-one-ini-file
  .   this means the three namespaces have their own set of revisions.





>  3. how to pass the "clients" and "clusters" for which we want to perform the 
> migration? Shall we read them from the DB itself, or pass them as command line 
> params (by default, do it for all)?

the above recipe should handle this part


>  4. I have not used the Alembic API so far, so what would be the best place to 
> start - apart from the official docs, is there any recommended tutorial or 
> something similar?

So, to do "multitenant" you also have to run the migrations for every 
database. There's a recipe for doing this in terms of PostgreSQL here: 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases
  but the general idea applies to multitenant setups in general: within env.py 
you need to apply the migrations to the "tenant" or "tenants" that you want.   
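The core of that cookbook recipe is a loop in env.py that points search_path at one tenant schema at a time before running the migrations. A rough sketch (the tenant list, engine setup and target_metadata are assumptions, not code from the recipe):

```python
# env.py (fragment) -- per-tenant migration loop, sketch only
from sqlalchemy import text
from alembic import context

def run_migrations_online():
    engine = ...  # built from alembic.ini as usual
    tenant_schemas = ["c_client_1", "c_client_2"]  # hypothetical; often read from the DB

    for schema in tenant_schemas:
        with engine.connect() as connection:
            # unqualified table names now resolve to this tenant's schema
            connection.execute(text("set search_path to %s" % schema))

            context.configure(
                connection=connection,
                target_metadata=target_metadata,
                version_table_schema=schema,  # each tenant keeps its own version table
            )
            with context.begin_transaction():
                context.run_migrations()
```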


> Also, as a side question, I guess separate **declarative_base** classes for 
> "general", "client" and "cluster" would be a good thing, or shall we use one 
> common to all, in which case we have to decide on the fly which tables 
> belong to "clients" and which to "clusters" so we don't mix them.

the important part would be that they use separate MetaData collections.  you 
can share multiple MetaData among one base if you prefer using a pattern such 
as https://docs.sqlalchemy.org/en/14/orm/declarative_config.html#metadata  or 
the one that follows it in 
https://docs.sqlalchemy.org/en/14/orm/declarative_config.html#abstract .


> 
> Thank you in advance.
> 
> Kindest regards
> 




Re: branch merge schema conflict

2021-03-09 Thread Mike Bayer
just so you know, surgical-precision downgrades will be more reliable once we 
get the 1.6 series of Alembic out, where we have a total rewrite of how the 
internal traversal works in order to make upgrades/downgrades across complex 
trees more reliable.  downgrades are not *too* common in production systems.

On Tue, Mar 9, 2021, at 10:15 PM, 'br...@derocher.org' via sqlalchemy-alembic 
wrote:
> c787 did not fail, nor did 60e8 on upgrade.  According to alembic history, 
> alembic ran 1efb before c787.  So the result is I have a view in my database 
> that will never get used because the query is now embedded in code.  The 
> solution is clear.  60e8 should drop a view if there is one.
> 
> For downgrade we get lucky.
> 
> I feel the situation could be more diabolical for downgrade.  Unlike code 
> version control, it seems the precondition does not need to exist, at least 
> for views when you change only the body, not the "view signature".  If for 
> example the c787 upgrade was "create or replace view v1 as select 2, 2" then 
> it gets messy.  I don't even know what downgrade would look like.  I assume 
> alembic would undo the last upgrade applied based on history.
> 
> For a single repository, only one branch gets into master first.  So that 
> defines the order.  When there's a fork, it gets more complicated.
> 
> On Tuesday, March 9, 2021 at 6:12:06 PM UTC-5 mik...@zzzcomputing.com wrote:
>> 
>> 
>> 
>> On Tue, Mar 9, 2021, at 5:06 PM, 'br...@derocher.org' via sqlalchemy-alembic 
>> wrote:
>>> I'm wondering how to solve an issue.
>>> 
>>> One developer created migration 1efb
>>> 
>>> upgrade:
>>>   drop view v1 -- moved to query in the code
>>> 
>>> downgrade:
>>>create view v1 as select 1
>>> 
>>> Another developer created a migration c787
>>> 
>>> upgrade:
>>>create or replace view v1 as select 2
>>> 
>>> downgrade
>>>create or replace view v1 as select 1
>>> 
>>> Now I'm merging them with alembic merge heads.  
>>> 
>>> I get revision = 60e8, down_revision = (c787, 1efb).
>>> 
>>> My question is which migration runs first.  If 1efb goes first, c787 will 
>>> fail.  If c787 goes first the upgrade is smooth.
>>> 
>>> My concern is some databases are at 1efb and some are at c787.  For the 
>>> database at 1efb, how do I prevent a failure?  Can I put some logic in 
>>> upgrade() (and downgrade()) to prevent this?
>> 
>> seems tricky but does c787 actually fail?  it says "create or replace", so 
>> if "v1" was already dropped it would just create a new view. in the 
>> other direction, the view is still named "v1" so 1efb will just drop "v1".
>> 
>> if this is not exactly accurate and the name is different or something you 
>> can always modify 1efb to read "drop view if exists" and "create or replace 
>> view ".if there's no "drop view if exists" on your target DB then I'd 
>> stick a quick reflection query in the migration to check if it exists, 
>> inspect(engine).get_view_definition() should have that.
>> 
>> 
>>> 
>>> 
>>> Thanks,
>>> Brian
>>> 
>>> PS: It's actually a bit more complicated than this because I'm merging git 
>>> forks.
>>> 

>> 
> 




Re: branch merge schema conflict

2021-03-09 Thread Mike Bayer



On Tue, Mar 9, 2021, at 5:06 PM, 'br...@derocher.org' via sqlalchemy-alembic 
wrote:
> I'm wondering how to solve an issue.
> 
> One developer created migration 1efb
> 
> upgrade:
>   drop view v1 -- moved to query in the code
> 
> downgrade:
>create view v1 as select 1
> 
> Another developer created a migration c787
> 
> upgrade:
>create or replace view v1 as select 2
> 
> downgrade
>create or replace view v1 as select 1
> 
> Now I'm merging them with alembic merge heads.  
> 
> I get revision = 60e8, down_revision = (c787, 1efb).
> 
> My question is which migration runs first.  If 1efb goes first, c787 will 
> fail.  If c787 goes first the upgrade is smooth.
> 
> My concern is some databases are at 1efb and some are at c787.  For the 
> database at 1efb, how do I prevent a failure?  Can I put some logic in 
> upgrade() (and downgrade()) to prevent this?

seems tricky but does c787 actually fail?  it says "create or replace", so if 
"v1" was already dropped it would just create a new view. in the other 
direction, the view is still named "v1" so 1efb will just drop "v1".

if this is not exactly accurate and the name is different or something you can 
always modify 1efb to read "drop view if exists" and "create or replace view ". 
   if there's no "drop view if exists" on your target DB then I'd stick a quick 
reflection query in the migration to check if it exists, 
inspect(engine).get_view_definition() should have that.
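In the 1efb migration that guard might look like this (a sketch, not the poster's actual migration; it assumes a SQLAlchemy inspector is available for the target backend):

```python
# migration 1efb (fragment) -- tolerate either merge order
from alembic import op
from sqlalchemy import inspect

def upgrade():
    bind = op.get_bind()
    # only drop the view if this particular database still has it
    if "v1" in inspect(bind).get_view_names():
        op.execute("DROP VIEW v1")
```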


> 
> Thanks,
> Brian
> 
> PS: It's actually a bit more complicated than this because I'm merging git 
> forks.
> 




Re: ask for help on best practices when changing table name from uppercase to lowercase

2021-01-05 Thread Mike Bayer


On Tue, Jan 5, 2021, at 9:50 AM, kz...@us.ibm.com wrote:
> Thanks Mike!
> 
> I understand that it should work when querying over the SQLAlchemy API. The 
> missing part of my use case I forgot to provide is that we only use 
> Alembic/SQLAlchemy to manage schema changes, while the actual queries may come 
> from other query engines like a dashboard, a desktop DB client or a notebook 
> where data scientists know SQL only; the name casing would force users to 
> remember such a convention from the DB design (an unpleasant part of switching 
> RDBMSs).

if these other folks are writing raw SQL then they just use identifiers without 
quotes, with any casing they want in their SQL string, and the names are case 
insensitive.  so there should be no need for them to remember any convention 
since case insensitive means there is no convention in the first place.   they 
can write all uppercase names in their PostgreSQL query if they'd like:

psql (12.4) 
Type "help" for help.

test=> create table foo(colx integer, ColY integer);
CREATE TABLE
test=> SELECT COLX, COLY FROM FOO;
 colx | coly 
------+------
(0 rows)


if they are writing SQLAlchemy Python code, then they would use all lower case 
names at all times.

on the other hand, if the names were created *with quotes*, now the names are 
case sensitive, and users *must* use quotes, uppercase is not enough:

test=> create table "FOO" ("COLX" integer, "COLY" integer); 
CREATE TABLE
test=> select colx from FOO;
ERROR:  relation "foo" does not exist

test=> select colx from "FOO";
ERROR:  column "colx" does not exist


test=> select "COLX" from "FOO";
 COLX 
------
(0 rows)





> 
> A simple question is: if tables were already created via Alembic migrations with 
> uppercase table names, 

"uppercase table names" is not specific enough, were they quoted as UPPERCASE?  
 which database?


> 
> 
> 
> 
> is there any API in Alembic that simply supports a statement like `ALTER TABLE 
> XXX_YYY_ZZZ RENAME TO xxx_yyy_zzz`? I noticed that revision autogenerate is 
> not sensitive to table-name case. Should I manually create a revision for 
> postgres only?

if you have a case sensitive name in Alembic and want to migrate to case 
insensitive, on PostgreSQL you likely could emit 
op.rename_table("XXX_YYY_ZZZ", "xxx_yyy_zzz") and it will just work; the first 
identifier will be quoted and the second one will not.  if you want to make it 
fully explicit, you can do this:

from sqlalchemy.sql import quoted_name
op.rename_table(quoted_name("XXX_YYY_ZZZ", quote=True), 
quoted_name("xxx_yyy_zzz", quote=False))



> 
> On Monday, January 4, 2021 at 6:57:33 PM UTC-5 mik...@zzzcomputing.com wrote:
>> __
>> This is the casing convention of the database and SQLAlchemy does not 
>> consider the name to be in "uppercase" or "lowercase" (or even "mixed case") 
>> but rather "case insensitive", which really means the name is referred 
>> towards without any quoting.When no quoting is applied, there is 
>> essentially no "casing" at all to the identifier that is stored by the 
>> database.
>> 
>> When you refer to the table name in SQLAlchemy, such as in a Table() object, 
>> or in Alembic as in op.create_table(), refer to it as all lower case at all 
>> times in the Python code- this indicates to SQLAlchemy that this is a case 
>> insensitive name and should not be quoted (as long as it has no special 
>> characters, spaces, etc).  SQLAlchemy will automatically render the name 
>> without quotes in DDL, DML and DQL, and render it as UPPERCASE when it 
>> queries DB2's catalogs, and as lower case when it queries PostgreSQL's 
>> catalogs.
>> 
>> 
>> On Mon, Jan 4, 2021, at 4:12 PM, Ke Zhu - kz...@us.ibm.com wrote:
>>> The use case is migrating data from a Db2 instance (where schema is not
>>> managed under Alembic migration) to a postgresql instance (where I plan
>>> to manage schema via Alembic).
>>> 
>>> In Db2, by default, it stores unquoted mixed case identifiers in
>>> Uppercase. While the postgres I got stores unquoted mixed case
>>> identifiers in lowercase. So if I plan to keep such a convention in
>>> postgres (store unquoted mixed case identifiers like table names in
>>> lowercase), what's the suggested practice to migrate an existing table
>>> like "XXX_YYY_ZZZ" to "xxx_yyy_zzz"?
>>> 
>> 
> 


Re: ask for help on best practices when changing table name from uppercase to lowercase

2021-01-04 Thread Mike Bayer
This is the casing convention of the database and SQLAlchemy does not consider 
the name to be in "uppercase" or "lowercase" (or even "mixed case") but rather 
"case insensitive", which really means the name is referred towards without any 
quoting.When no quoting is applied, there is essentially no "casing" at all 
to the identifier that is stored by the database.

When you refer to the table name in SQLAlchemy, such as in a Table() object, or 
in Alembic as in op.create_table(), refer to it as all lower case at all times 
in the Python code- this indicates to SQLAlchemy that this is a case 
insensitive name and should not be quoted (as long as it has no special 
characters, spaces, etc).  SQLAlchemy will automatically render the name 
without quotes in DDL, DML and DQL, and render it as UPPERCASE when it queries 
DB2's catalogs, and as lower case when it queries PostgreSQL's catalogs.


On Mon, Jan 4, 2021, at 4:12 PM, Ke Zhu - k...@us.ibm.com wrote:
> The use case is migrating data from a Db2 instance (where schema is not
> managed under Alembic migration) to a postgresql instance (where I plan
> to manage schema via Alembic).
> 
> In Db2, by default, it stores unquoted mixed case identifiers in
> Uppercase. While the postgres I got stores unquoted mixed case
> identifiers in lowercase. So if I plan to keep such a convention in
> postgres (store unquoted mixed case identifiers like table names in
> lowercase), what's the suggested practice to migrate an existing table
> like "XXX_YYY_ZZZ" to "xxx_yyy_zzz"?
> 



Re: Example of inline autogenerated use?

2020-11-05 Thread Mike Bayer
Not quite given as an example; I guess I could add one as a recipe, but then 
people will be using it, which as you've noted isn't reliable in the general 
sense.  Let's put together how to get the MigrationScript, which it looks like 
you have, with how to make the Operations object programmatically, which is at 
https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations;
 then there's a method invoke() at 
https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.invoke
 , so we can use that to run the ops.  We have to traverse them too; I'll use a 
few assumptions to make it succinct.  It seems to work for simple operations, 
like this:

migrations = produce_migrations(mc, metadata)

operations = Operations(mc)

stack = [migrations.upgrade_ops]
while stack:
    elem = stack.pop(0)
    if hasattr(elem, "ops"):
        stack.extend(elem.ops)
    else:
        # work around Alembic issue #753
        if hasattr(elem, "column"):
            elem.column = elem.column.copy()
        operations.invoke(elem)



full example below

from alembic.migration import MigrationContext
from alembic.autogenerate import produce_migrations
from sqlalchemy.schema import SchemaItem
from sqlalchemy.types import TypeEngine
from sqlalchemy import create_engine, MetaData, Column, Integer, String, Table
import pprint
from alembic.operations import Operations


engine = create_engine("mysql://scott:tiger@localhost/test", echo=True)

with engine.connect() as conn:
    m = MetaData()
    m.reflect(conn)
    m.drop_all(conn)

    conn.execute(
        """
        create table foo (
            id integer not null primary key,
            old_data varchar(50),
            x integer
        )"""
    )

    conn.execute(
        """
        create table bar (
            data varchar(50)
        )"""
    )

metadata = MetaData()
Table(
    "foo",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("data", Integer),
    Column("x", Integer, nullable=False),
)
Table("bat", metadata, Column("info", String(100)))

mc = MigrationContext.configure(engine.connect())

migrations = produce_migrations(mc, metadata)

operations = Operations(mc)

stack = [migrations.upgrade_ops]
while stack:
    elem = stack.pop(0)
    if hasattr(elem, "ops"):
        stack.extend(elem.ops)
    else:
        # work around Alembic issue #753
        if hasattr(elem, "column"):
            elem.column = elem.column.copy()
        operations.invoke(elem)




On Thu, Nov 5, 2020, at 2:14 AM, Jason Pascucci wrote:
> I couldn't lay my hands on an example of using 
> MigrationContext(connect=) and running the results of 
> produce_migrations(Base.metadata, mc). 
> 
> I got close, I think, evaling the results of render_python_code (which failed 
> due to spaces, but I can wrap it into a def, I guess), but I think there's 
> got to be a better way, I'm just not seeing it.
> 
> (NB: I can guarantee that the migrations will always be simple alters to add 
> only, , so not using the whole infrastructure lets it be consistent, and 
> due to the nature of the project, I'd like to make it simple)
> 
> Thanks
> 
> JRP
> 
> 




Re: command.init api ignores the cfg settings

2020-10-09 Thread Mike Bayer
hi there -

this question lacks specifics.   There are no configuration options that are 
relevant to the "alembic init" command in any case so it's not clear what 
config settings you are seeing as "ignored"; init uses only the name of the ini 
file given and this works:

from alembic.config import Config
from alembic import command


alembic_cfg = Config("my_alembic_init.ini")
command.init(alembic_cfg, "./my_test_init")


output:

Creating directory /home/classic/dev/alembic/my_test_init ...  done
  Creating directory /home/classic/dev/alembic/my_test_init/versions ...  done
  Generating /home/classic/dev/alembic/my_test_init/script.py.mako ...  done
  Generating /home/classic/dev/alembic/my_test_init/README ...  done
  File /home/classic/dev/alembic/my_alembic_init.ini already exists, skipping
  Generating /home/classic/dev/alembic/my_test_init/env.py ...  done
  Please edit configuration/connection/logging settings in 
'/home/classic/dev/alembic/my_alembic_init.ini' before proceeding.

Perhaps you're suggesting that the "my_alembic_init.ini" file should be 
*written* to include the options that are present in the Config object?  We are 
using the Python stdlib configparser: 
https://docs.python.org/3/library/configparser.html   and it is actually news 
to me that it even includes an option to "write" the file.  I would suggest 
running command.init(), then programmatically reading the .ini file that's 
generated, as it depends upon lots of settings set up in templates like 
alembic/templates/generic/alembic.ini.mako which are not otherwise present in 
Alembic; then add the options you want, then write the file.   this will get 
you that result, hope this helps.
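The read-then-write step described above needs nothing beyond the stdlib; a sketch (the file name and added option are hypothetical, and the generated ini is faked here with a one-line stand-in):

```python
import configparser
import tempfile
from pathlib import Path

# pretend this is the alembic.ini that command.init() just generated
ini_path = Path(tempfile.mkdtemp()) / "my_alembic_init.ini"
ini_path.write_text("[alembic]\nscript_location = ./my_test_init\n")

parser = configparser.ConfigParser()
parser.read(ini_path)

# add the option that set_main_option() would have put on the Config object
parser["alembic"]["sqlalchemy.url"] = "sqlite:///example.db"

# write the merged settings back out
with ini_path.open("w") as fh:
    parser.write(fh)

print(ini_path.read_text())
```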



On Fri, Oct 9, 2020, at 5:39 PM, 'Carol Guo' via sqlalchemy-alembic wrote:
> I noticed that the command.init api ignores any config settings set by 
> cfg.set_main_option or cfg.set_option. Is there any reason we could not 
> allow the init file generated by the command.init api to use those user 
> config settings?
> 
> 

> --
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/76df5d17-a059-4e42-909d-4d81b132eb3en%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/b88b8093-0f1f-4474-9847-f3a9c4ac9588%40www.fastmail.com.


Re: Use events with expression api

2020-09-30 Thread Mike Bayer
SQL expressions are intercepted by the SQL Execution events described at 
https://docs.sqlalchemy.org/en/13/core/events.html#sql-execution-and-connection-events
 and the main ones are before_execute() and before_cursor_execute(), but it 
depends on what you want to do.   If you want to change how expressions are 
compiled to strings you would redefine their compilation as described at 
https://docs.sqlalchemy.org/en/13/core/compiler.html.
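For example, a minimal sketch of interception at execution time (assuming 
SQLAlchemy is installed; before_cursor_execute has the same signature across 
1.x and 2.0):

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")

captured = []

# before_cursor_execute fires for any statement regardless of whether it
# came from the ORM, Core, or a hand-built SQL expression -- expressions
# have no event hooks of their own; interception happens at execution time.
@event.listens_for(engine, "before_cursor_execute")
def log_sql(conn, cursor, statement, parameters, context, executemany):
    captured.append(statement)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

print(captured[-1])
```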



On Wed, Sep 30, 2020, at 9:27 PM, brian...@blue-newt.com wrote:
> Is there a way to use events with the expression api?
> 
> I see how they're used with ORM and Core, but not expressions.
> 
> Thanks,
> Brian
> 



Re: Stray changes detected only sometimes with revision --autogenerate

2020-09-29 Thread Mike Bayer


Hi, so I added a quick recipe to the site just now so that the "set search 
path" idea is documented to some extent; that is at 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases
 .

Re: autogenerate, if you have many schemas that all contain an exact copy of 
the same schema, then autogenerate implies you'd be running this exactly once 
for just one of the schemas, or a "master" schema, such that you generate the 
.py files that will then be applicable to all the schemas.  The recipe above 
discusses this, and in particular it works in a completely schema agnostic 
fashion, you would set include_schemas=False in the config and allow the 
PostgreSQL search_path to handle all movement between schemas.

then for the actual migration "upgrade" run, you would run those files against 
each schema in sequence, again making use of search_path in order to select 
each schema.
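As an outline (not runnable as-is -- it assumes a configured Alembic project 
whose env.py reads the hypothetical "tenant" attribute and issues SET 
search_path from it, as in the cookbook recipe):

```python
from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")
tenant_schemas = ["tenant_a", "tenant_b"]   # hypothetical schema names

for schema in tenant_schemas:
    # env.py would read this attribute and issue
    # "SET search_path TO <schema>" on its connection before run_migrations()
    cfg.attributes["tenant"] = schema
    command.upgrade(cfg, "head")
```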

as far as how alembic is looking at current tables you would need to turn on 
SQL logging, using the "debug" level, such as in alembic.ini (if you're using 
that):

[logger_sqlalchemy]
level = DEBUG
handlers =
qualname = sqlalchemy.engine



this will emit a lot of queries and result sets.  You'd have to capture that 
logging and then grep through for the "nonexistent" schema - it will be present 
in a result set that Alembic is capturing, most fundamentally the one it uses 
to get all the schema names.
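If logging is configured in code rather than through alembic.ini, the 
equivalent of that ini section is a minimal stdlib call:

```python
import logging

# Programmatic equivalent of the [logger_sqlalchemy] section above:
# DEBUG on "sqlalchemy.engine" echoes both the SQL and the result rows,
# which is what lets you grep for the "nonexistent" schema name.
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)

print(logging.getLogger("sqlalchemy.engine").getEffectiveLevel())
```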




On Tue, Sep 29, 2020, at 4:01 PM, Daniel Krebs wrote:
> Hi Mike,
> 
> thanks a bunch for the quick response!
> 
> >> This is supported by SQLAlchemy more robustly using the 
> >> schema_translate_map feature, see 
> >> https://docs.sqlalchemy.org/en/13/core/connections.html#schema-translating 
> >> .   this feature allows schema names to render explicitly without the 
> >> need to manipulate search_path.
> > 
> > this feature may not work completely with Alembic however, I'm 
> > evaluating that now.
> > 
> > still doesnt shed any light on your sporadic problem, however.
> 
> It's been some time already since I implemented this, so I can't recall the 
> exact reason why I went down that road, but I know for sure that I tried to 
> make it work with schema_translate_map and didn't succeed. I would have very 
> much preferred a solution without patching alembic, but it seemed to be my 
> last resort at the time. I'd be more than happy to find a solution here :)
> 
> 
> > However, when using that approach you'd normally be running 
> autogenerate from just one schema, since you are looking to generate 
> migrations just once and then run them on every tenant.
> >
> > so it's not clear why autogenerate is being run on every tenant 
> explicitly - you'd have just one "model" schema that's the one where you 
> actually run autogenerate upon.   there would be no need to consider 
> other schemas and include_schemas would be set to False.
> 
> Indeed that sounds like a good idea. So you'd suggest to keep one schema 
> just for autogeneration purposes and then apply the migrations to all 
> customer schemas? Is that possible from within alembic or would you wrap 
> some tooling around alembic in order to apply migrations one-by-one to 
> every schema?
> 
> 
> > As far as being able to filter out schemas across many, the current 
> approach is the include_object hook, which is insufficient for many 
> schemas as it does not block the reflection of all the tables.   a new 
> hook include_name is being added in the coming weeks that allows for 
> pre-emptive inclusion or exclusion of specific schema names.
> 
> I tried the include_object hook back then, but I was facing reflection 
> issues if I remember correctly. We're using TimescaleDB [1] in the same 
> database which adds some schemas of its own that alembic doesn't seem to 
> be able to handle.
> 
> 
> > I don't have much insight onto this other than looking at concurrent 
> activities on the database.   "sometimes" usually indicates a race 
> condition of some sort, and the "autogenerate" process is strictly one 
> of reading data.
> 
> Indeed I'm suspecting the same thing. However, I'm not the most 
> knowledgeable DBA, nor do I have an understanding of how alembic actually 
> compares the current state of the DB with the SA models. Could you maybe 
> provide a more concrete point what and where to look for?
> 
> 
> Thank you for taking the time to help!
> 
> 
> Cheers,
> Daniel
> 
> [1] https://www.timescale.com/
> 

Re: Stray changes detected only sometimes with revision --autogenerate

2020-09-29 Thread Mike Bayer


On Tue, Sep 29, 2020, at 3:35 PM, Mike Bayer wrote:
> 
> 
> On Tue, Sep 29, 2020, at 9:17 AM, Daniel Krebs wrote:
>> Hi,
>> 
>> we're having rather strange problems with Alembic 1.4.2 and Postgres 12, 
>> detecting stray changes *sometimes* but also sometimes not. I already dug 
>> through the code, but I increasingly get the feeling that this is rooted 
>> somewhere in the interaction between alembic and Postgres.
>> 
>> But let me explain our setup first. We maintain a set of SQLAlchemy model 
>> definitions that we evolve over time and use alembic to migrate the database 
>> accordingly. For every customer, we add a new Postgres schema to the 
>> database identified by a corresponding UUID (e.g. 
>> a3d74dcc-1634-33a5-ff74-235f3a7c6322). See [1] for our (stripped) env.py.
>> 
>> Since we want all customers to use the same DDL, we want common migrations 
>> that disregard the schema name altogether. It seems that this is not a 
>> supported use-case of alembic, so I hacked around it such that alembic only 
>> sees schema-less objects and I run SET search_path TO "uuid" before each 
>> migration (see env.py [1]). The patch should be rather straight-forward and 
>> can be found at [2].
> 
> This is supported by SQLAlchemy more robustly using the schema_translate_map 
> feature, see 
> https://docs.sqlalchemy.org/en/13/core/connections.html#schema-translating .  
>  this feature allows schema names to render explicitly without the need to 
> manipulate search_path.

this feature may not work completely with Alembic however, I'm evaluating that 
now.

still doesnt shed any light on your sporadic problem, however.


> 
> However, when using that approach you'd normally be running autogenerate from 
> just one schema, since you are looking to generate migrations just once and 
> then run them on every tenant.
> 
> so it's not clear why autogenerate is being run on every tenant explicitly - 
> you'd have just one "model" schema that's the one where you actually run 
> autogenerate upon.   there would be no need to consider other schemas and 
> include_schemas would be set to False.
> 
> As far as being able to filter out schemas across many, the current approach 
> is the include_object hook, which is insufficient for many schemas as it does 
> not block the reflection of all the tables.   a new hook include_name is 
> being added in the coming weeks that allows for pre-emptive inclusion or 
> exclusion of specific schema names.
> 
> 
> 
>> 
>> Now the issue we're facing is that *sometimes* autogenerate detects 
>> changes in one or two customer schemas that are not real [3]. Deleting the 
>> migration and creating a new one often doesn't detect the changes anymore or 
>> for a different customer/schema. The tables that are incorrectly found to 
>> have changed also change over time. 
> 
> I don't have much insight onto this other than looking at concurrent 
> activities on the database.   "sometimes" usually indicates a race condition 
> of some sort, and the "autogenerate" process is strictly one of reading data. 
>  
> 
> 
> 
>> 
>> My current workaround is to autogenerate migrations until alembic "does the 
>> right thing". I know that patching alembic is not the best base upon which 
>> to ask for support, but to be honest I am running out of theories what is 
>> going wrong here. Hope someone can help or point me into the right direction 
>> :)
>> 
>> Cheers,
>> Daniel
>> 
>> [1] 
>> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-env-py
>> [2] 
>> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-alembic_patch-diff
>> [3] 
>> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-log-txt
>> 
>> 
>> 
>> 


Re: Stray changes detected only sometimes with revision --autogenerate

2020-09-29 Thread Mike Bayer


On Tue, Sep 29, 2020, at 9:17 AM, Daniel Krebs wrote:
> Hi,
> 
> we're having rather strange problems with Alembic 1.4.2 and Postgres 12, 
> detecting stray changes *sometimes* but also sometimes not. I already dug 
> through the code, but I increasingly get the feeling that this is rooted 
> somewhere in the interaction between alembic and Postgres.
> 
> But let me explain our setup first. We maintain a set of SQLAlchemy model 
> definitions that we evolve over time and use alembic to migrate the database 
> accordingly. For every customer, we add a new Postgres schema to the database 
> identified by a corresponding UUID (e.g. 
> a3d74dcc-1634-33a5-ff74-235f3a7c6322). See [1] for our (stripped) env.py.
> 
> Since we want all customers to use the same DDL, we want common migrations 
> that disregard the schema name altogether. It seems that this is not a 
> supported use-case of alembic, so I hacked around it such that alembic only 
> sees schema-less objects and I run SET search_path TO "uuid" before each 
> migration (see env.py [1]). The patch should be rather straight-forward and 
> can be found at [2].

This is supported by SQLAlchemy more robustly using the schema_translate_map 
feature, see 
https://docs.sqlalchemy.org/en/13/core/connections.html#schema-translating .   
this feature allows schema names to render explicitly without the need to 
manipulate search_path.
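A minimal sketch of schema_translate_map (SQLAlchemy assumed installed), using 
sqlite's ATTACH as a stand-in for a Postgres tenant schema; the "tenant1" name 
is hypothetical:

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, text
from sqlalchemy.pool import StaticPool

# StaticPool keeps a single connection so the attached database stays visible.
engine = create_engine("sqlite://", poolclass=StaticPool)
with engine.connect() as conn:
    conn.execute(text("ATTACH ':memory:' AS tenant1"))

metadata = MetaData()
employee = Table("employee", metadata, Column("id", Integer, primary_key=True))

# Every schema-less table name is rendered as "tenant1.<name>" at execution
# time -- no search_path manipulation and no per-schema metadata needed.
tenant = engine.execution_options(schema_translate_map={None: "tenant1"})
metadata.create_all(tenant)

with tenant.begin() as conn:
    conn.execute(employee.insert().values(id=1))
with tenant.connect() as conn:
    print(conn.execute(employee.select()).scalar())
```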

However, when using that approach you'd normally be running autogenerate from 
just one schema, since you are looking to generate migrations just once and 
then run them on every tenant.

so it's not clear why autogenerate is being run on every tenant explicitly - 
you'd have just one "model" schema that's the one where you actually run 
autogenerate upon.   there would be no need to consider other schemas and 
include_schemas would be set to False.

As far as being able to filter out schemas across many, the current approach is 
the include_object hook, which is insufficient for many schemas as it does not 
block the reflection of all the tables.   a new hook include_name is being 
added in the coming weeks that allows for pre-emptive inclusion or exclusion of 
specific schema names.
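A sketch of that hook as a plain function so its filtering logic is visible; 
in env.py it would be passed as context.configure(include_name=include_name, 
include_schemas=True). The "master" schema name here is hypothetical:

```python
def include_name(name, type_, parent_names):
    """Pre-emptively skip reflection of every schema except the model one."""
    if type_ == "schema":
        return name == "master"   # hypothetical "model" schema
    # tables, indexes, etc. within an included schema pass through
    return True

print(include_name("master", "schema", {}))
print(include_name("timescaledb_internal", "schema", {}))
```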



> 
> Now the issue we're facing is that *sometimes* autogenerate detects 
> changes in one or two customer schemas that are not real [3]. Deleting the 
> migration and creating a new one often doesn't detect the changes anymore or 
> for a different customer/schema. The tables that are incorrectly found to 
> have changed also change over time. 

I don't have much insight onto this other than looking at concurrent activities 
on the database.   "sometimes" usually indicates a race condition of some sort, 
and the "autogenerate" process is strictly one of reading data.  



> 
> My current workaround is to autogenerate migrations until alembic "does the 
> right thing". I know that patching alembic is not the best base upon which to 
> ask for support, but to be honest I am running out of theories what is going 
> wrong here. Hope someone can help or point me in the right direction :)
> 
> Cheers,
> Daniel
> 
> [1] 
> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-env-py
> [2] 
> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-alembic_patch-diff
> [3] 
> https://gist.github.com/daniel-k/114aa2ac846c02e437b8d86ab89d21ac#file-log-txt
> 
> 
> 
> 



Re: KeyError: 'sqlnotapplicable' when running alembic revision --autogenerate

2020-09-04 Thread Mike Bayer
this warning is the culprit:

SAWarning: index key 'sqlnotapplicable' was not located in columns for table 
'github_active_users'

I believe you should report this to the ibm_db_sa folks. 
https://github.com/ibmdb/python-ibmdbsa/issues

you would need to share the "CREATE TABLE" statements which produce this 
problem.



On Thu, Sep 3, 2020, at 11:54 PM, Ke Zhu - k...@us.ibm.com wrote:
> I added new model, then run
> 
> $ pipenv run alembic revision --autogenerate -m "Add new table 
> TRAVIS_ACTIVE_USERS" --depends-on=
> 
> Then I got this:
> 
> INFO  [alembic.runtime.migration] Context impl Db2Impl.
> INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
> /root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/sqlalchemy/engine/reflection.py:943:
>  SAWarning: index key 'sqlnotapplicable' was not located in columns for table 
> 'github_active_users'
>   "columns for table '%s'" % (flavor, c, table_name)
> Traceback (most recent call last):
>   File "/root/.local/share/virtualenvs/butane-c47Aapyx/bin/alembic", line 8, 
> in <module>
> sys.exit(main())
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/config.py",
>  line 577, in main
> CommandLine(prog=prog).main(argv=argv)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/config.py",
>  line 571, in main
> self.run_cmd(cfg, options)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/config.py",
>  line 551, in run_cmd
> **dict((k, getattr(options, k, None)) for k in kwarg)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/command.py",
>  line 214, in revision
> script_directory.run_env()
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/script/base.py",
>  line 489, in run_env
> util.load_python_file(self.dir, "env.py")
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/util/pyfiles.py",
>  line 98, in load_python_file
> module = load_module_py(module_id, path)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/util/compat.py",
>  line 184, in load_module_py
> spec.loader.exec_module(module)
>   File "", line 728, in exec_module
>   File "", line 219, in _call_with_frames_removed
>   File "butane/warehouse/env.py", line 91, in <module>
> run_migrations_online()
>   File "butane/warehouse/env.py", line 85, in run_migrations_online
> context.run_migrations()
>   File "", line 8, in run_migrations
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/runtime/environment.py",
>  line 846, in run_migrations
> self.get_context().run_migrations(**kw)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/runtime/migration.py",
>  line 509, in run_migrations
> for step in self._migrations_fn(heads, self):
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/command.py",
>  line 190, in retrieve_migrations
> revision_context.run_autogenerate(rev, context)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/api.py",
>  line 442, in run_autogenerate
> self._run_environment(rev, migration_context, True)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/api.py",
>  line 482, in _run_environment
> autogen_context, migration_script
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/compare.py",
>  line 25, in _populate_migration_script
> _produce_net_changes(autogen_context, upgrade_ops)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/compare.py",
>  line 51, in _produce_net_changes
> autogen_context, upgrade_ops, schemas
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/util/langhelpers.py",
>  line 303, in go
> fn(*arg, **kw)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/compare.py",
>  line 83, in _autogen_for_tables
> autogen_context,
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/compare.py",
>  line 170, in _compare_tables
> autogen_context, modify_table_ops, s, tname, t, None
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/util/langhelpers.py",
>  line 303, in go
> fn(*arg, **kw)
>   File 
> "/root/.local/share/virtualenvs/butane-c47Aapyx/lib/python3.7/site-packages/alembic/autogenerate/compare.py",
>  line 493, in _compare_indexes_and_uniques
> conn_indexes = 

Re: Delay between consecutive SQL statements when using Alembic on Jenkins

2020-08-17 Thread Mike Bayer


On Sun, Aug 16, 2020, at 11:10 PM, Michał Guzek wrote:
> I have a problem with delays between consecutive SQL statements when Alembic 
> executes a migration script's upgrade() function on Jenkins:

> def upgrade():
> op.execute("DELETE FROM employee WHERE name='John';") #John also has its 
> primary key ID field equal to 1
> op.execute("INSERT INTO employee (id,name,occupation) VALUES (1, 'Michael', 
> 'Bartender');")
> Basically the second statement can't run to completion because I'm getting an 
> error that ID column would contain duplicate values. Apparently, there is 
> some sort of delay between the first and second op.execute() statement and 
> when the second one is executed, the DB still contains the old entry for John 
> whose ID is also 1.


there's no inherent "delay" between statements.   this would sound like you 
have something going on in the background, such as special triggers and/or 
replication that has some kind of lag.   What else can you learn from looking 
at the (not shown here) error message you're getting?  is the table name what 
you expect?  does the stack trace make sense?   

> I use SQLAlchemy's create_engine() to initiate a DB connection to MySQL in 
> Alembic's run_migration_online() function in its env.py configuration file. I 
> don't experience any such delays or errors on my local machine where I use 
> the exact same codebase and MySQL as well. Also, the problem on Jenkins is 
> intermittent, sometimes the build succeeds when I just hit rebuild.


What else can you say about the MySQL database in use?   what's the exact 
version in use ?  mariadb or mysql?   is it galera?  is there a proxy server or 
scaling server in the middle?  what storage engine is used for these tables ?  
(innodb?)  does the issue happen only when there's lots of load on the machine? 
  was there some configuration change that precipitated this failure beginning 
to happen?

is this jenkins running on a VM or container, does it get built out to 
arbitrary servers? is it running anywhere else without a problem?  

> Do you know what might be the cause of those weird delays on Jenkins?

> Previously I put the DELETE statement in the downgrade() function and run it 
> first by using alembic downgrade -1 and subsequently alembic upgrade head. 
> But I had the same issue, it's as if alembic downgrade updated 
> alembic_version table correctly on time but wasn't able to update my employee 
> table on time, before upgrade() has started... And it also happened only on 
> Jenkins.

> 



Re: 'alembic current' crashes with stack trace

2020-03-26 Thread Mike Bayer
Somewhere in your model, likely inside of a Table() object, there is a keyword 
"index" being used which is invalid. Look for that keyword and remove it.
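For illustration (assuming SQLAlchemy is installed): "index" belongs on a 
Column, or on a standalone Index construct, but not on Table() itself, where 
it is treated as an unknown dialect argument:

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()

# Table("citation", metadata, ..., index=True) would raise
# TypeError: Additional arguments should be named <dialectname>_<argument>
citation = Table(
    "citation",
    metadata,
    Column("id", Integer, primary_key=True),
    # "index" as a Column flag is valid and creates an Index on the table
    Column("id_datasource", Integer, index=True),
)

print(sorted(ix.name for ix in citation.indexes))
```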





On Thu, Mar 26, 2020, at 1:53 PM, Rob Schottland wrote:
> Suddenly, I can't get alembic (1.3.3?) to work. alembic history does work, but 
> my most recent migration crashes with an enormous stack trace (as usual): 
> here's my current pretty-simple migration script:
> 
>> """Citation datasource optional
>> 
>> Revision ID: f5a6e316ee8d
>> Revises: 2a181e783d96
>> Create Date: 2020-03-26 08:54:59.547135
>> 
>> """
>> 
>> # revision identifiers, used by Alembic.
>> revision = 'f5a6e316ee8d'
>> down_revision = '2a181e783d96'
>> branch_labels = None
>> depends_on = None
>> 
>> from alembic import op
>> import sqlalchemy as sa
>> 
>> 
>> def upgrade():
>>  op.alter_column('citation', 'id_datasource',
>>  existing_Type=sa.INTEGER(),
>>  nullable=True)
>>  pass
>> 
>> 
>> def downgrade():
>>  op.alter_column('citation', 'id_datasource',
>>  existing_Type=sa.INTEGER(),
>>  nullable=False)
>>  pass
> 
> Here's the last stanza of the stack trace:
> 
>  File 
> "/home/rschottland/.local/share/virtualenvs/astorb_db-u7fxujye/lib/python3.7/site-packages/sqlalchemy/sql/base.py",
>  line 289, in _validate_dialect_kwargs
>  "named <dialectname>_<argument>, got '%s'" % k
> 
> 
> TypeError: Additional arguments should be named <dialectname>_<argument>, got 
> 'index'
> 
> The whole 69 line stack trace is attached.
> 
> This is the 30th migration for this database going back to 2015. Nothing like 
> this has occurred before. Could my alembic package be corrupted? The previous 
> (successful) migration was only 3 days ago.
> 
> Any thoughts on this would be appreciated.
> 
> 
> 
> 
> 

> 
> 
> *Attachments:*
>  * alembic_stacktrace.txt



Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Mike Bayer
OK one more addition to the recipe, please do this:

 DB_SCHEMA = "my_foo_schema"

 with connectable.connect() as connection:
     connection.execute(f'create schema if not exists {DB_SCHEMA}')
     connection.execute(f'set search_path to {DB_SCHEMA}')

     connection.dialect.default_schema_name = DB_SCHEMA


will work completely

alternatively you can use an on connect event on the engine to do the same 
thing.
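A sketch of that connect event (SQLAlchemy assumed installed; sqlite's PRAGMA 
stands in for Postgres's SET search_path so the example runs anywhere):

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")

# The "connect" event fires once per new DBAPI connection, before it is
# handed out; on Postgres the executed statement would be e.g.
# "SET search_path TO my_foo_schema" (a hypothetical schema name).
@event.listens_for(engine, "connect")
def set_search_path(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

with engine.connect() as conn:
    fk_enabled = conn.execute(text("PRAGMA foreign_keys")).scalar()
print(fk_enabled)
```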


On Fri, Mar 20, 2020, at 10:48 AM, Brian Hill wrote:
> this is what i tried but it generated the whole schema:
> with connectable.connect() as connection:
> 
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>     connection.execute(f'set search_path to {DB_SCHEMA}')
> 
>     context.configure(
>         connection=connection,
>         target_metadata=metadata,
>     )
> 
>     with context.begin_transaction():
>         context.run_migrations()
> 
> it works when my schema is 'public', when it matches metadata. my solution 
> will be to set my schema to 'public' and use this simple env.py.
> 
> thanks,
> 
> brian
> 
> On Friday, March 20, 2020 at 9:49:52 AM UTC-4, Mike Bayer wrote:
>> I just realized that you really dont need to even use schema_translate_map 
>> at all here. If you use a completely straight env.py, and simply set 
>> search_path=SCHEMA, you should be able to run Alembic in entirely 
>> "schemaless" mode; don't include schemas anywhere nor would you need to set 
>> it for the alembic version table. Postgresql will also CREATE/ ALTER in that 
>> schema as well. As long as your environment runs completely in just one 
>> schema at a time, this can be set up entirely at the connection level.
>> 
>> 
>> On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
>>> 
>>> 
>>> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>>>> 
>>>> 
>>>> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>>>>> 
>>>>> 
>>>>> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>>>>>> so let me get this straight:
>>>>>> 
>>>>>> 1. you have many schemas
>>>>>> 
>>>>> 
>>>>> yes
>>>>> 
>>>>>> 
>>>>>> 2. you want to run autogenerate only once
>>>>> 
>>>>> yes
>>>>> 
>>>> 3. you want your migrations generated with None for schema
>>>>> 
>>>>> yes
>>>>> 
>>>>>> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
>>>>>> arbitrary?) schema to use as the target, is that right?
>>>>> 
>>>>> yes one, but it won't be arbitrary, it will be a development schema 
>>>>> generated from the latest version (prior to the change we want to 
>>>>> autogenerate).
>>>>> 
>>>>>> 
>>>>>> 
>>>>>> As it stands, your autogenerate is not going to look in any schema 
>>>>>> except "public", assuming default PG search path, because you did not 
>>>>>> set "include_schemas=True".
>>>>>> 
>>>>> 
>>>>> i had it set to True and it behaves the same. i just took it out based on 
>>>>> your previous suggestion, but again, it behaves the same.
>>>>> 
>>>>>> 
>>>>>> If you truly want autogenerate to look at one schema and only that 
>>>>>> schema, and generate everything as non-schema qualified, then I would 
>>>>>> simply set the default schema in PG to that schema name using 
>>>>>> search_path:
>>>>>> 
>>>>>> connection.execute("SET search_path TO my_schema")
>>>>>> 
>>>>>> that way everything Alembic reflects will be from "my_schema" and it 
>>>>>> will see the schema as blank, and it should generate as such. you might 
>>>>>> need to disable the schema translate map when autogenerate runs but try 
>>>>>> it without doing that first.
>>>>>> 
>>>>> 
>>>>> it's still generating the entire schema in the revision file.
>>>>> 
>>>>> here's the updated section and it behaves the same with/without schema 
>>>>> translate map:
>>>>> 
>>>>> with connectable.connect() as connection:
>>>>> 
>>>>> # create DB_SCHEMA if it doesn't exist
>>>

Re: Autogenerate with Multi-Tenant

2020-03-20 Thread Mike Bayer
I just realized that you really dont need to even use schema_translate_map at 
all here. If you use a completely straight env.py, and simply set 
search_path=SCHEMA, you should be able to run Alembic in entirely "schemaless" 
mode; don't include schemas anywhere nor would you need to set it for the 
alembic version table. Postgresql will also CREATE/ ALTER in that schema as 
well. As long as your environment runs completely in just one schema at a time, 
this can be set up entirely at the connection level.


On Thu, Mar 19, 2020, at 9:15 PM, Brian Hill wrote:
> 
> 
> On Thursday, March 19, 2020 at 8:20:06 PM UTC-4, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
>>> 
>>> 
>>> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>>>> so let me get this straight:
>>>> 
>>>> 1. you have many schemas
>>>> 
>>> 
>>> yes
>>> 
>>>> 
>>>> 2. you want to run autogenerate only once
>>> 
>>> yes
>>> 
>>>> 3. you want your migrations generated with None for the schema
>>> 
>>> yes
>>> 
>>>> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
>>>> arbitrary?) schema to use as the target, is that right?
>>> 
>>> yes one, but it won't be arbitrary, it will be a development schema 
>>> generated from the latest version (prior to the change we want to 
>>> autogenerate).
>>> 
>>>> 
>>>> 
>>>> As it stands, your autogenerate is not going to look in any schema except 
>>>> "public", assuming default PG search path, because you did not set 
>>>> "include_schemas=True".
>>>> 
>>> 
>>> i had it set to True and it behaves the same. i just took it out based on 
>>> your previous suggestion, but again, it behaves the same.
>>> 
>>>> 
>>>> If you truly want autogenerate to look at one schema and only that schema, 
>>>> and generate everything as non-schema qualified, then I would simply set 
>>>> the default schema in PG to that schema name using search_path:
>>>> 
>>>> connection.execute("SET search_path TO my_schema")
>>>> 
>>>> that way everything Alembic reflects will be from "my_schema" and it will 
>>>> see the schema as blank, and it should generate as such. you might need to 
>>>> disable the schema translate map when autogenerate runs but try it without 
>>>> doing that first.
>>>> 
>>> 
>>> it's still generating the entire schema in the revision file.
>>> 
>>> here's the updated section and it behaves the same with/without schema 
>>> translate map:
>>> 
>>> with connectable.connect() as connection:
>>> 
>>>     # create DB_SCHEMA if it doesn't exist
>>>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
>>> 
>>>     # map metadata schema (None) to DB_SCHEMA
>>>     # connection = connection.execution_options(schema_translate_map={None:DB_SCHEMA})
>>> 
>>>     connection.execute(f"set search_path to {DB_SCHEMA}")
>>> 
>>>     # set alembic version table location in DB_SCHEMA
>>>     context.configure(
>>>         connection=connection,
>>>         include_schemas=True,
>>>         target_metadata=metadata,
>>>         version_table_schema=DB_SCHEMA,
>>>     )
>>> 
>>>     with context.begin_transaction():
>>>         context.run_migrations()
>>> 
>>> 
>> 
>> 
>> no, don't use "include_schemas". you want to run alembic in a mode where it 
>> has no idea there are other schemas to use. i think version_table_schema is 
>> OK here but don't use include_schemas. Also turn on INFO or DEBUG logging 
>> for the sqlalchemy logger to see exactly what tables it is reflecting.
> 
> same behavior, entire schema generated. i'll look at using INFO and DEBUG 
> tomorrow to get a better idea of what's going on, but using public as my dev 
> schema to autogenerate against works. i just have to exclude the alembic 
> tables.
> 
> again thanks for your help in understanding what i was trying to do.
> 
> brian
> 
>> 
>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> 
>>>> 
>>>> 
>>>> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>>>>> Here's my env.py. Thanks for the help.
>>>>> Brian

Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Mike Bayer


On Thu, Mar 19, 2020, at 7:41 PM, Brian Hill wrote:
> 
> 
> On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
>> so let me get this straight:
>> 
>> 1. you have many schemas
>> 
> 
> yes
> 
>> 
>> 2. you want to run autogenerate only once
> 
> yes
> 
>> 3. you want your migrations generated with None for the schema
> 
> yes
> 
>> 4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
>> arbitrary?) schema to use as the target, is that right?
> 
> yes one, but it won't be arbitrary, it will be a development schema generated 
> from the latest version (prior to the change we want to autogenerate).
> 
>> 
>> 
>> As it stands, your autogenerate is not going to look in any schema except 
>> "public", assuming default PG search path, because you did not set 
>> "include_schemas=True".
>> 
> 
> i had it set to True and it behaves the same. i just took it out based on 
> your previous suggestion, but again, it behaves the same.
> 
>> 
>> If you truly want autogenerate to look at one schema and only that schema, 
>> and generate everything as non-schema qualified, then I would simply set the 
>> default schema in PG to that schema name using search_path:
>> 
>> connection.execute("SET search_path TO my_schema")
>> 
>> that way everything Alembic reflects will be from "my_schema" and it will 
>> see the schema as blank, and it should generate as such. you might need to 
>> disable the schema translate map when autogenerate runs but try it without 
>> doing that first.
>> 
> 
> it's still generating the entire schema in the revision file.
> 
> here's the updated section and it behaves the same with/without schema 
> translate map:
> 
> with connectable.connect() as connection:
> 
>     # create DB_SCHEMA if it doesn't exist
>     connection.execute(f'create schema if not exists {DB_SCHEMA}')
> 
>     # map metadata schema (None) to DB_SCHEMA
>     # connection = connection.execution_options(schema_translate_map={None:DB_SCHEMA})
> 
>     connection.execute(f"set search_path to {DB_SCHEMA}")
> 
>     # set alembic version table location in DB_SCHEMA
>     context.configure(
>         connection=connection,
>         include_schemas=True,
>         target_metadata=metadata,
>         version_table_schema=DB_SCHEMA,
>     )
> 
>     with context.begin_transaction():
>         context.run_migrations()
> 
> 


no, don't use "include_schemas". you want to run alembic in a mode where it has 
no idea there are other schemas to use. i think version_table_schema is OK here 
but don't use include_schemas. Also turn on INFO or DEBUG logging for the 
sqlalchemy logger to see exactly what tables it is reflecting.



> 
> 
> 
> 
> 
>> 
>> 
>> 
>> On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
>>> Here's my env.py. Thanks for the help.
>>> Brian
>>> 
>>> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>>>> 
>>>> 
>>>> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>>>>> Are there known issues with using autogenerate with multi-tenant 
>>>>> (schema_translate_map)?
>>>> 
>>>> it's complicated and not one-size-fits-all, if you consider that to be an 
>>>> issue
>>>> 
>>>> 
>>>>> 
>>>>> My metadata doesn't have a schema and my 
>>>>> schema_translate_map={None:'my_schema'}.
>>>>> 
>>>>> This works for migrations but when I use autogenerate the generated 
>>>>> revision file is the full schema and not the diff.
>>>>> 
>>>>> It's detecting the alembic version table in the tenant schema. If I run 
>>>>> the autogenerate a second time with the new revision file it fails saying 
>>>>> Target database is not up to date.
>>>> 
>>>> are you setting include_schemas=True in your environment? I'd probably not 
>>>> do this. Can't help much more without a complete and concise working 
>>>> example of your env.py, please remove all extraneous details.
>>>> 
>>>> 
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Brian
>>>>> 

>>>>> --
>>>>> You received this message because you are subscribed to the Google Groups 
>>>>> "sqlalchemy-alembic" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>>>> email 

Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Mike Bayer
so let me get this straight:

1. you have many schemas

2. you want to run autogenerate only once

3. you want your migrations generated with None for the schema

4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? 
arbitrary?) schema to use as the target, is that right?

As it stands, your autogenerate is not going to look in any schema except 
"public", assuming default PG search path, because you did not set 
"include_schemas=True".

If you truly want autogenerate to look at one schema and only that schema, and 
generate everything as non-schema qualified, then I would simply set the 
default schema in PG to that schema name using search_path:

connection.execute("SET search_path TO my_schema")

that way everything Alembic reflects will be from "my_schema" and it will see 
the schema as blank, and it should generate as such. you might need to disable 
the schema translate map when autogenerate runs but try it without doing that 
first.




On Thu, Mar 19, 2020, at 7:09 PM, Brian Hill wrote:
> Here's my env.py. Thanks for the help.
> Brian
> 
> On Thursday, March 19, 2020 at 5:37:38 PM UTC-4, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
>>> Are there known issues with using autogenerate with multi-tenant 
>>> (schema_translate_map)?
>> 
>> it's complicated and not one-size-fits-all, if you consider that to be an 
>> issue
>> 
>> 
>>> 
>>> My metadata doesn't have a schema and my 
>>> schema_translate_map={None:'my_schema'}.
>>> 
>>> This works for migrations but when I use autogenerate the generated 
>>> revision file is the full schema and not the diff.
>>> 
>>> It's detecting the alembic version table in the tenant schema. If I run the 
>>> autogenerate a second time with the new revision file it fails saying Target 
>>> database is not up to date.
>> 
>> are you setting include_schemas=True in your environment? I'd probably not 
>> do this. Can't help much more without a complete and concise working example 
>> of your env.py, please remove all extraneous details.
>> 
>> 
>>> 
>>> Thanks,
>>> 
>>> Brian
>>> 

>> 
> 

> 
> 
> *Attachments:*
>  * env.py



Re: Autogenerate with Multi-Tenant

2020-03-19 Thread Mike Bayer


On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
> Are there known issues with using autogenerate with multi-tenant 
> (schema_translate_map)?

it's complicated and not one-size-fits-all, if you consider that to be an issue


> 
> My metadata doesn't have a schema and my 
> schema_translate_map={None:'my_schema'}.
> 
> This works for migrations but when I use autogenerate the generated revision 
> file is the full schema and not the diff.
> 
> It's detecting the alembic version table in the tenant schema. If I run the 
> autogenerate a second time with the new revision file it fails saying Target 
> database is not up to date.

are you setting include_schemas=True in your environment? I'd probably not do 
this. Can't help much more without a complete and concise working example of 
your env.py, please remove all extraneous details.
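
For the migration-running side (as opposed to autogenerate), one common programmatic pattern is to run the same upgrade once per tenant with the translate map set on the connection. A sketch only: the tenant names and URL are illustrative, and it assumes an env.py that checks `config.attributes["connection"]` as in the cookbook:

```python
# Per-tenant upgrade loop (sketch; tenant names and URL are illustrative).
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost/mydb")
cfg = Config("alembic.ini")

for tenant in ("tenant_a", "tenant_b"):
    with engine.connect() as conn:
        # remap the blank (None) schema onto this tenant's schema
        conn = conn.execution_options(schema_translate_map={None: tenant})
        cfg.attributes["connection"] = conn  # env.py must look for this
        command.upgrade(cfg, "head")
```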


> 
> Thanks,
> 
> Brian
> 




Re: Invoke commands with custom env.py from library

2020-03-05 Thread Mike Bayer


On Thu, Mar 5, 2020, at 1:59 PM, Daniel Cardin wrote:
> 1. I always expected that, with any reasonably non-trivial app, I would 
> inevitably need to fall back to using their env.py. My hope was that for 
> simpler cases, since this is a testing tool, there would be a way to not 
> require any changes to their env.py by default. Ultimately I can document 
> what the recommended changes to a typical env.py would be, and that's not the 
> end of the world.
> 
> 2. I did try to change version_locations, but unless I'm mistaken, what I'm 
> asking for seems like an unsupported use case. Attempting to target the 
> library env with "script_location" and the local versions/ with 
> "version_locations" still seems to look at the wrong location. Given how 
> ScriptDirectory is constructed and how `run_env` directly looks for 
> "env.py", it seems like what I'm asking for is just not an expected use case.

it's true, it is an assumption that env.py is co-located with the versioning 
files, although looking at the source, I don't know that i see why it would 
look in the wrong location, ScriptDirectory pretty clearly looks in "dir" for 
env.py and "version_locations" for the version files.

I'm wondering why these users would not have an env.py set up? or they would, 
and your tool is attempting to do some magic thing on an existing installation 
without them making any changes? If there is truly something blocking 
ScriptDirectory from working this way, maybe I'd suggest monkeypatching, but 
script_directory and version_locations look separate to me and are handled 
separately all the way from ScriptDirectory.from_config().



> 
> On Thursday, March 5, 2020 at 12:21:54 PM UTC-5, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 5, 2020, at 12:02 PM, Daniel Cardin wrote:
>>> So, yes I mean "commands" as in `alembic.command.upgrade()`.
>>> 
>>> The idea would be that the library defines an env.py (e.g. the important 
>>> portion of which would look something like:)
>>> ...
>>> 
>>> connectable = context.config.attributes.get("connection", None)
>>> with connectable.connect() as connection:
>>>     alembic.context.configure(connection=connection)
>>> 
>>>     with alembic.context.begin_transaction():
>>>         alembic.context.run_migrations()
>>> 
>>> which lives at `src/library/foo/env.py`. If the migrations were colocated 
>>> with the env.py, then afaik setting "script_location" to "library:foo" 
>>> would enable me to run `alembic.command.upgrade()` and it would just work. 
>>> However in this case (if possible), the migrations/versions/* would live at 
>>> an arbitrary other location (e.g. the code which has installed this 
>>> library).
>> 
>> Ok two thoughts on that:
>> 
>> 1. usually the way this goes is that the project that has the migrations 
>> does everything normally, it has its own env.py, and inside that env.py, 
>> that's where it imports *your* env.py. that is, instead of having the usual 
>> env.py it would only have:
>> 
>> # env.py
>> 
>> from daniel_cardins.library.env import run_migrations
>> 
>> run_migrations()
>> 
>> 2. the locations of the version files vs. the env.py script are actually 
>> separate in the config. there is a script_location that it uses to find the 
>> home base where env.py is, however there is also a version_locations config 
>> that can have any number of other directories in it and these supersede 
>> script_location for the version files themselves.
>> 
>> 
>> 
>> I think in general, if the end-user has a bunch of version files set up, 
>> it's probably not a big deal that they have a stub "env.py" right there that 
>> just calls out to your library. that's something very clear that people 
>> looking at a project that uses your library can understand quickly if they 
>> already know alembic.
>> 
>> 
>> 
>> 
>>> 
>>> General workflow:
>>> * person working on project "foo" invokes some cli/command/whatever which 
>>> requests an upgrade
>>> * library does whatever requisite setup
>>> * library invokes `alembic.command.upgrade()`
>>> * upgrade() ends up routing the code through the library's `env.py`
>>> * the context of the migration command is targeting the project "foo"'s 
>>> local migrations/versions folder
>>> 
>>> The specifics of the above are just based on my knowledge of alembic, so if 
>>> there's another process i could be doing

Re: Invoke commands with custom env.py from library

2020-03-05 Thread Mike Bayer


On Thu, Mar 5, 2020, at 12:02 PM, Daniel Cardin wrote:
> So, yes I mean "commands" as in `alembic.command.upgrade()`.
> 
> The idea would be that the library defines an env.py (e.g. the important 
> portion of which would look something like:)
> ...
> 
> connectable = context.config.attributes.get("connection", None)
> with connectable.connect() as connection:
>     alembic.context.configure(connection=connection)
> 
>     with alembic.context.begin_transaction():
>         alembic.context.run_migrations()
> 
> which lives at `src/library/foo/env.py`. If the migrations were colocated 
> with the env.py, then afaik setting "script_location" to "library:foo" would 
> enable me to run `alembic.command.upgrade()` and it would just work. However 
> in this case (if possible), the migrations/versions/* would live at an 
> arbitrary other location (e.g. the code which has installed this library).

Ok two thoughts on that:

1. usually the way this goes is that the project that has the migrations does 
everything normally, it has its own env.py, and inside that env.py, that's 
where it imports *your* env.py. that is, instead of having the usual env.py it 
would only have:

# env.py

from daniel_cardins.library.env import run_migrations

run_migrations()

2. the locations of the version files vs. the env.py script are actually 
separate in the config. there is a script_location that it uses to find the 
home base where env.py is, however there is also a version_locations config 
that can have any number of other directories in it and these supersede 
script_location for the version files themselves.



I think in general, if the end-user has a bunch of version files set up, it's 
probably not a big deal that they have a stub "env.py" right there that just 
calls out to your library. that's something very clear that people looking at a 
project that uses your library can understand quickly if they already know 
alembic.
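
A hedged sketch of the two pieces described above (module and directory names are made up). The host project's migrations/env.py becomes a two-line stub, and the library drives the command with `script_location` and `version_locations` pointed at different places:

```python
# The host project's migrations/env.py reduces to a stub:
#
#     from mylib.alembic_env import run_migrations
#     run_migrations()
#
# while the library invokes commands with the two locations split apart.
from alembic import command
from alembic.config import Config

cfg = Config()
cfg.set_main_option("script_location", "mylib:alembic")          # where env.py lives
cfg.set_main_option("version_locations", "migrations/versions")  # host's version files
cfg.set_main_option("sqlalchemy.url", "postgresql://localhost/mydb")
command.upgrade(cfg, "head")
```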




> 
> General workflow:
> * person working on project "foo" invokes some cli/command/whatever which 
> requests an upgrade
> * library does whatever requisite setup
> * library invokes `alembic.command.upgrade()`
> * upgrade() ends up routing the code through the library's `env.py`
> * the context of the migration command is targeting the project "foo"'s local 
> migrations/versions folder
> 
> The specifics of the above are just based on my knowledge of alembic, so if 
> there's another process i could be doing where env.py isn't "invoked" so much 
> as the above code block is just called normally, then that's ideal.
> 
> On Thursday, March 5, 2020 at 9:36:09 AM UTC-5, Mike Bayer wrote:
>> 
>> 
>> On Thu, Mar 5, 2020, at 8:08 AM, Daniel Cardin wrote:
>>> I am attempting to write a library which invokes alembic commands, while 
>>> referencing the migrations of a separate package which has installed said 
>>> library.
>>> 
>>> The intent here, is for the library to invoke the alembic commands with an 
>>> env.py defined in that package. This seems to work through 
>>> config.get("script_location", "package_name:foldername")
>>> but then obviously expects the actual migrations to be colocated at the 
>>> same location.
>>> 
>>> My guess would be, if this is possible at all, that there'd be something I 
>>> could put in the env.py which would reconfigure it to execute the 
>>> `context.run_migrations()` migration context (and therefore search path) 
>>> back at the original call site.
>>> 
>>> I realize that this won't always work, given that env.py is often likely 
>>> customized enough such that a generic one wouldn't be able to execute them, 
>>> but per your cookbook suggestions about programmatic invocation of 
>>> commands, this sort of thing requires the user to opt into changing their 
>>> env.py to use
>>> connectable = context.config.attributes.get("connection", None)
>>> in order to make use of a connection handed to it.
>> 
>> I'm not following all the moving pieces here, like when you say "commands", 
>> I assume you mean the commands in alembic.command, like 
>> alembic.command.upgrade(). when you say "an env.py defined in that package", 
>> I guess you mean in the separate package that is using your library. 
>> 
>> this doesn't seem different from what Alembic itself does, "that package" 
>> would have a directory where the env.py and its migration files are present? 
>> So... I'm not really sure what you're trying to do beyond call an Alembic 
>> command that is against a partic

Re: Invoke commands with custom env.py from library

2020-03-05 Thread Mike Bayer


On Thu, Mar 5, 2020, at 8:08 AM, Daniel Cardin wrote:
> I am attempting to write a library which invokes alembic commands, while 
> referencing the migrations of a separate package which has installed said 
> library.
> 
> The intent here, is for the library to invoke the alembic commands with an 
> env.py defined in that package. This seems to work through 
> config.get("script_location", "package_name:foldername")
> but then obviously expects the actual migrations to be colocated at the same 
> location.
> 
> My guess would be, if this is possible at all, that there'd be something I 
> could put in the env.py which would reconfigure it to execute the 
> `context.run_migrations()` migration context (and therefore search path) back 
> at the original call site.
> 
> I realize that this won't always work, given that env.py is often likely 
> customized enough such that a generic one wouldn't be able to execute them, 
> but per your cookbook suggestions about programmatic invocation of commands, 
> this sort of thing requires the user to opt into changing their env.py to use
> connectable = context.config.attributes.get("connection", None)
> in order to make use of a connection handed to it.

I'm not following all the moving pieces here, like when you say "commands", I 
assume you mean the commands in alembic.command, like 
alembic.command.upgrade(). when you say "an env.py defined in that package", I 
guess you mean in the separate package that is using your library. 

this doesn't seem different from what Alembic itself does, "that package" would 
have a directory where the env.py and its migration files are present? So... 
I'm not really sure what you're trying to do beyond call an Alembic command 
that is against a particular directory.

if OTOH by "commands" you mean migration operations, like you want to call 
op.create_table() yourself, OK, but I don't really understand enough to 
formulate an answer for you.




> 




Re: Alembic: Change the order of the columns for a table

2020-03-04 Thread Mike Bayer


On Wed, Mar 4, 2020, at 10:27 AM, David Siller wrote:
> Hello Mike,
> 
> and thank you very much for the solution. It is working flawlessly for 
> ordering the columns.
> 
> For others finding this thread: Mike created an entry in the Alembic cookbook 
> <https://alembic.sqlalchemy.org/en/latest/cookbook.html#apply-custom-sorting-to-table-columns-within-create-table>.
> 
> There is just one (minor) flaw, where I currently don't know how to fix it, 
> but I can adjust the generated script manually:
> I'm using the naming schema for constraints and indexes as described in the 
> documentation <https://alembic.sqlalchemy.org/en/latest/naming.html>. While 
> the names for e.g. foreign keys, indexes or primary key constraints are kept 
> with your solution, UniqueConstraints are strangely duplicated, once with the 
> provided name and once just with the column name. This happens just for 
> UniqueConstraints. So you have e.g. something like the following in the 
> generated script:
> 
> sa.UniqueConstraint("iso_name"),
> sa.UniqueConstraint("iso_name", name=op.f("uq_country_iso_name")),
> 
> So "uq_country_iso_name" in this case is the name provided by the naming 
> schema, while the first UniqueConstraint is generated as duplication.
> 
> Maybe any hint on what I'm doing wrong? Or maybe a bug in the 
> Column.copy-method (although the Column.constraints-set on the copied column 
> seems to be empty; also the copied columns-list as in your example contains 
> the correct UniqueConstraint with the correct name; so maybe it happens in 
> the CreateTableOp)?

are you using the unique=True flag on the Column itself? that would be why. It 
would probably work to set col.unique=False, col.index=False before applying 
the Column to the new operation so that these don't double up for the 
constraints that we are already getting from the table. let me know if that 
works and we can update the demo.
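
An untested sketch of that adjustment folded into the cookbook recipe (same names as the recipe; resetting the two shorthand flags on the copied columns is the only change):

```python
from alembic.autogenerate import rewriter
from alembic.operations import ops
from sqlalchemy import Column

writer = rewriter.Rewriter()

@writer.rewrites(ops.CreateTableOp)
def order_columns(context, revision, op):
    columns = []
    for col in op.columns:
        if isinstance(col, Column):
            col = col.copy()
            # clear the shorthand flags so only the named, Table-level
            # UniqueConstraint / Index objects are rendered
            col.unique = False
            col.index = False
        columns.append(col)
    return ops.CreateTableOp(op.table_name, columns, schema=op.schema, **op.kw)
```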


> 
> Thanks again for your solution
> David
> 
> 
> 
> Am Dienstag, 3. März 2020 15:54:17 UTC+1 schrieb Mike Bayer:
>> 
>> 
>> On Tue, Mar 3, 2020, at 3:36 AM, David Siller wrote:
>>> Hello,
>>> 
>>> first and foremost: thank you for SQLAlchemy and Alembic. I've worked with 
>>> a lot of ORMs, but only with these two I feel very comfortable and I'm 
>>> doing a lot of crazy stuff with it.
>>> 
>>> The current problem that I have: I'm currently creating a lot of tables 
>>> with a lot of mixins, as most of them have e.g. an id-column or columns for 
>>> created_at-dates etc. However it seems that I can't control the order of 
>>> the columns, resulting in e.g. the primary-key-id-column to show up in the 
>>> middle of the other columns or the created_at-column at the beginning and 
>>> the created_by-column at the end, especially when I autogenerate the 
>>> versioning scripts.
>> 
>> Declarative with mixins would hopefully be able to do a little better than 
>> that, if the issue is just amongst a small handful of columns maybe there's 
>> a way to fix it at that level. This ordering is actually controlled by an 
>> attribute on every column called "_creation_order". It gets set when the 
>> column is first created based on a global counter and that is how the 
>> sorting of the columns works within Declarative; it sorts on that attribute 
>> and sends the Column objects to Table in that order. 
>> 
>> also, I don't know how the column ordering that comes from the mappings 
>> would be different if you rendered metadata.create_all() versus using 
>> alembic autogenerate, it's the same Table objects.
>> 
>> 
>> 
>>> 
>>> 
>>> 
>>> But alembic has also a method in this case (as always ;-)). So I created a 
>>> Rewriter-method for the CreateTableOps in the env.py-file, re-order the 
>>> columns in the correct way and reassign this ordered list of columns to 
>>> op.columns. Unfortunately this doesn't work. Somehow it either uses 
>>> whatever is already the to_table()-method (?) or ... something else. So I 
>>> tried to create a new operation in the rewriter with the ordered list and 
>>> returned this operation instead. But then I get an the error: 
>>> sqlalchemy.exc.ArgumentError: Column object 'id' already assigned to Table 
>>> 'user'.
>> 
>> 
>> so the CreateTableOp and a lot of the other ops have an "_orig" attribute 
>> that points to the "original" object being autogenerated. For this object, 
>> it's "_orig_table". These objects did not anticipate being rewr

Re: Alembic: Change the order of the columns for a table

2020-03-03 Thread Mike Bayer


On Tue, Mar 3, 2020, at 3:36 AM, David Siller wrote:
> Hello,
> 
> first and foremost: thank you for SQLAlchemy and Alembic. I've worked with a 
> lot of ORMs, but only with these two I feel very comfortable and I'm doing a 
> lot of crazy stuff with it.
> 
> The current problem that I have: I'm currently creating a lot of tables with 
> a lot of mixins, as most of them have e.g. an id-column or columns for 
> created_at-dates etc. However it seems that I can't control the order of the 
> columns, resulting in e.g. the primary-key-id-column to show up in the middle 
> of the other columns or the created_at-column at the beginning and the 
> created_by-column at the end, especially when I autogenerate the versioning 
> scripts.

Declarative with mixins would hopefully be able to do a little better than 
that, if the issue is just amongst a small handful of columns maybe there's a 
way to fix it at that level. This ordering is actually controlled by an 
attribute on every column called "_creation_order". It gets set when the column 
is first created based on a global counter and that is how the sorting of the 
columns works within Declarative; it sorts on that attribute and sends the 
Column objects to Table in that order. 

also, I don't know how the column ordering that comes from the mappings would 
be different if you rendered metadata.create_all() versus using alembic 
autogenerate, it's the same Table objects.



> 
> 
> 
>  But alembic has also a method in this case (as always ;-)). So I created a 
> Rewriter-method for the CreateTableOps in the env.py-file, re-order the 
> columns in the correct way and reassign this ordered list of columns to 
> op.columns. Unfortunately this doesn't work. Somehow it either uses whatever 
> is already the to_table()-method (?) or ... something else. So I tried to 
> create a new operation in the rewriter with the ordered list and returned 
> this operation instead. But then I get an the error: 
> sqlalchemy.exc.ArgumentError: Column object 'id' already assigned to Table 
> 'user'.


so the CreateTableOp and a lot of the other ops have an "_orig" attribute that 
points to the "original" object being autogenerated. For this object, it's 
"_orig_table". These objects did not anticipate being rewritten at the level at 
which you are attempting to rewrite. So to make the columns-in-place get recognized, 
I have to delete _orig_table:

op._orig_table = None

then it tries to build the Table again, as we want it to do. but then we have 
the same problem which is that these columns are from your existing mapping and 
they are already associated with a table. 

There's a bunch of ways to go here but none of them would be something people 
could normally figure out without asking me here. I think the cleanest way is 
to copy the columns using their copy() method, and then to avoid dealing with 
_orig_table, make a new CreateTableOp:

@writer.rewrites(ops.CreateTableOp)
def order_columns(context, revision, op):

    special_names = {"id": -100, "created_at": 1001, "updated_at": 1002}

    cols_by_key = [
        (
            special_names.get(col.key, index)
            if isinstance(col, Column)
            else 2000,
            col.copy(),
        )
        for index, col in enumerate(op.columns)
    ]

    columns = [
        col for idx, col in sorted(cols_by_key, key=lambda entry: entry[0])
    ]

    return ops.CreateTableOp(op.table_name, columns, schema=op.schema, **op.kw)


let me just stick this in the recipes now because people wouldn't know to do 
this.




> 
> The code I'm using is the following:
> 
> from operator import itemgetter
> from alembic.autogenerate import rewriter
> from alembic.operations.ops import CreateTableOp
> from sqlalchemy.sql.schema import Column
> 
> 
> writer = rewriter.Rewriter()
> 
> 
> @writer.rewrites(CreateTableOp)
> def order_columns(context, revision, op):
>     """Reorder the columns before creating a table."""
>     preordered = []
>     for col in op.columns:
>         k = 0  # key used for ordering later on
>         if not isinstance(col, Column):
>             k = 99  # all constraints and indexes stay at the end of the definition
>         elif col.primary_key and col.name == 'id':
>             k = 1
>         # + a lot of other ordering constraints for other columns
>         else:
>             k = 2  # all non-id columns
>         preordered.append((k, col))  # building my ordered list
> 
>     # Now sort the list and extract only the column objects.
>     # This list is indeed correctly sorted, just what I want:
>     ordered_column_list = [itm[1] for itm in sorted(preordered, key=itemgetter(0))]
> 
>     # Creating a new operation and returning it does not work, as it results
>     # in an error:
>     # ArgumentError: Column object '...' already assigned to Table '...'
>     # new_op = CreateTableOp(op.table_name, ordered_column_list, schema=op.schema)
>     # return new_op
> 
>     # Reassigning the ordered column list does not work either; it seems to be
>     # ignored:
>     op.columns = ordered_column_list
>     return op
> 
> [...]
> 
> def run_migrations_online():
> [...]
> with connectable.connect() as 

Re: Re: Re: Re: Integrate PyHive and Alembic

2020-02-27 Thread Mike Bayer


On Thu, Feb 27, 2020, at 2:49 PM, Ke Zhu - k...@us.ibm.com wrote:
> On Wed, 2020-02-26 at 11:07 -0500, Mike Bayer wrote:
>> oh, that issue is talking about rowcount. Alembic does not need rowcount to 
>> function correctly. I see that Alembic is doing this now, however there is a 
>> dialect-level flag called "supports_sane_rowcount", if this were False, 
>> Alembic should be checking this and skipping that particular check.
> 
> I see. It would be great if Alembic respected the "supports_sane_rowcount" 
> flag specified in a SQLAlchemy dialect when updating/deleting the revision. If 
> this is the suggested design, I can patch it via a PR.

yes.
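For reference, the flag is advertised at the class level of the dialect; a hedged sketch (the HiveDialect class and its name are hypothetical, while the two attribute names are real SQLAlchemy dialect flags):

```python
from sqlalchemy.engine.default import DefaultDialect

class HiveDialect(DefaultDialect):
    name = "hive"  # assumed dialect name
    # the driver's cursor.rowcount is not meaningful, so callers (such as
    # Alembic's version-table UPDATE check) should skip rowcount verification
    supports_sane_rowcount = False
    supports_sane_multi_rowcount = False
```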


> 
> For the specific dialect, it just needs to use this option instead of 
> returning "-1". I can document this in the issue 
> https://github.com/dropbox/PyHive/issues/315

the pyhive dialect advertises the attribute correctly right?

> 
>> is the "rowcount" check the only issue here? this part can be changed on the 
>> Alembic side.
> 
> Besides the above issue, I want to get input on 
> https://github.com/dropbox/PyHive/issues/314 as well. How could I use the 
> SQLAlchemy API to create a table like this SQL does:
> CREATE TABLE hello_acid (key int, value int)
> STORED AS ORC TBLPROPERTIES ('transactional'='true');
> I've no idea how to produce the `STORED AS` and `TBLPROPERTIES` parts for a 
> Hive table. Right now I just use the SQLAlchemy compiler to append them to 
> CreateTable.

There are a few paths to make this work, depending on where and how far you 
want to go.


Assuming these are *optional* arguments for the tables that you create with the 
pyhive dialect, the official way is for the dialect to be made to support these 
(I thought we had a generic "suffixes" hook right now, but we don't). 
The way they work can be seen in the PostgreSQL dialect. Start with the 
keywords you want; here I might call them pyhive_stored_as="ORC" and 
pyhive_tblproperties="('transactional'='true')". Then they'd be set up as 
"stored_as" and "tblproperties" in the dialect's construct_arguments, like 
PostgreSQL does here: 

https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/base.py#L2407

then you consume these in post_create_table:

https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/base.py#L2037


if you need Alembic's alembic_version table to have these keywords, I'm not 
sure we have a keyword for that right now; however, you can use metadata events 
to intercept when alembic_version is built up, or you can intercept the 
compiler as you're doing now.

if, alternatively, all pyhive tables need these keywords unconditionally, then 
you'd just implement post_create_table in the compiler to add them in.
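The compiler-interception route can be sketched like this; a minimal, hedged example of appending the Hive clauses to every CREATE TABLE (the hook name and the hard-coded clauses are illustrative, and note that registering @compiles(CreateTable) this way affects all tables globally):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import CreateTable

@compiles(CreateTable)
def _add_hive_suffixes(element, compiler, **kw):
    # render the standard DDL, then tack the Hive clauses onto the end
    ddl = compiler.visit_create_table(element, **kw)
    return ddl.rstrip() + "\nSTORED AS ORC TBLPROPERTIES ('transactional'='true')\n\n"

metadata = MetaData()
hello_acid = Table(
    "hello_acid", metadata,
    Column("key", Integer),
    Column("value", Integer),
)
print(str(CreateTable(hello_acid)))
```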



> 
> 
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/ab42854af49a98c83fd3693c92dbd9ad5c59b9df.camel%40us.ibm.com
>  
> <https://groups.google.com/d/msgid/sqlalchemy-alembic/ab42854af49a98c83fd3693c92dbd9ad5c59b9df.camel%40us.ibm.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/a1d41899-547d-4399-95ab-bb4c31c04feb%40www.fastmail.com.


Re: How to enable verbosity on “alembic upgrade head”?

2020-02-27 Thread Mike Bayer
SQL logging will render all the SQL being emitted which should be enough to see 
everything the script is doing against the database. that is, if you ran 
op.alter_column(), you'd see "ALTER COLUMN..." in the log. not sure what else 
you are looking to see.

do you think you're having a deadlock in Python and not against the database? 
there's no good way to debug that without using something like pdb or embedding 
print statements. that's a very unusual kind of issue in a tool like alembic 
and there are no known issues which could cause this. 
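For completeness, the engine logger can also be enabled directly in env.py, bypassing alembic.ini's logging sections entirely; a minimal sketch (the format string is arbitrary):

```python
import logging

logging.basicConfig(format="%(levelname)s %(name)s %(message)s")
# INFO renders each SQL statement; DEBUG would also render result rows
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)
```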



On Thu, Feb 27, 2020, at 1:12 PM, doi zece wrote:
> OK I managed to get some logging (it seems my alembic env.py was missing the 
> fileConfig(config.config_file_name)). But I'm still not getting what I want.
> 
> Basically I want a log entry for every single operation in the migration 
> script. And this, if possible, without having to add log statements all over 
> the place. Is that possible? 
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/b0dc3a7e-cb00-456c-9e51-51dc56294380%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/44b071d5-15ff-4013-9c34-fe18bcc3b8db%40www.fastmail.com.


Re: Re: Re: Integrate PyHive and Alembic

2020-02-26 Thread Mike Bayer


On Tue, Feb 25, 2020, at 11:06 AM, Ke Zhu - k...@us.ibm.com wrote:
> According to Apache Hive, Update/Delete can only be performed on tables that 
> support ACID. See 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Update
> 
> > this is a database that can change schema structures but not 
> > insert/update/delete rows? is there some kind of SQL layer that has to be 
> > in use that isn't there when you create structures?
> 
> Yes! Basically it separates the data store and the data model so that you can 
> update the table schema as metadata while the actual data can be 
> loaded/streamed into an external data store w/o a SQL interface.
> 
> > that wouldn't be very easy but also it would be way better to store the 
> > version info in the target DB itself.
> 
> totally understand this. I see this design as an advantage as well, just like 
> other schema management tools such as sqitch. I'm just looking for 
> possibilities since I've seen it uses the same connection to execute schema 
> changes and versioning changes in a migration context.
> 
> > I don't know anything about Hive/Presto, but they *are* databases so I'd 
> > assume you can put data in them
> 
> Yes to Hive, when using PyHive, it just needs to fix  
> https://github.com/dropbox/PyHive/issues/315

oh, that issue is talking about rowcount. Alembic does not need rowcount to 
function correctly. I see that Alembic is doing this now, however there is a 
dialect-level flag called "supports_sane_rowcount", if this were False, Alembic 
should be checking this and skipping that particular check.

is the "rowcount" check the only issue here? this part can be changed on the 
Alembic side.


> 
> Meanwhile, the answer will be no for PrestoSQL, which is just a SQL query 
> engine that delegates the data model and data store to query targets 
> (MySQL/Postgres/Kafka/Elasticsearch etc.) via connectors.
> 
> On Mon, 2020-02-24 at 18:28 -0500, Mike Bayer wrote:
>> 
>> 
>> On Mon, Feb 24, 2020, at 3:44 PM, Ke Zhu - k...@us.ibm.com wrote:
>>> Mike,
>>> 
>>> Thanks for the pointers. I've figured out the programming part and 
>>> discovers more things when integrating PyHive:
>>> 
>>> 1. It requires the table alembic_version to be transactional so that it can 
>>> do Update/Delete when upgrade/downgrade. which is challege for Hive3 which 
>>> has limited ACID support. Since I could not figure out a way to program a 
>>> transactional table via SQLAlechemy API (see 
>>> https://github.com/dropbox/PyHive/issues/314), it ended up a decoration to 
>>> patch sqlalchemy.schema.CreateTable by appending table properties.
>> 
>> it's not a hard requirement that there's a DB transaction in use, you could 
>> run alembic in an "autocommit" mode and that shouldn't cause any problem. 
>> you just won't be able to roll back if something fails. im not completely 
>> sure what you mean by "the table to be transactional so that it can do 
>> update/delete" but from Alembic's point of view it just needs to run 
>> INSERT/UPDATE/DELETE but there doesn't have to be any ACID guarantees.
>> 
>> 
>>> 2. PyHive doesn't fully support Update/Delete for Hive/Presto yet. it's 
>>> easy to patch PyHive but the key problem is it doesn't support 
>>> transactional DDL like rollback in Hive3.
>>> 
>>> Although I've managed to get `alembic upgrade` and `alembic downgrade` to 
>>> work on Hive 3, it's still not a fully transactional experience (e.g., 
>>> changed schema sccessfully but failed to update table alembic_version).
>> 
>> so..this is a database that can change schema structures but not 
>> insert/update/delete rows? im not following. is there some kind of SQL layer 
>> that has to be in use that isn't there when you create structures?
>> 
>> 
>>> 
>>> I wonder if there's any design direction in Alembic to allowing storing the 
>>> version table `alembic_version` in another db when dealing with non-RDBMS 
>>> SQL engine (Hive/Presto). e.g., supporting a postgres db to store table 
>>> alembic while delivering the actual changes to Hive. I had a PoC to using 
>>> multi-db template to manage the table `alembic_version` in a RDBMS while 
>>> denying any operation on table `alembic_version` in Hive/Presto. it works 
>>> now but does it sound right?
>> 
>> that wouldn't be very easy but also it would be way better to store the 
>> version info in the target DB itself. I don't know anything about 
>> Hive/Presto, but they *are* databases so I'd assume you can put da

Re: Re: Integrate PyHive and Alembic

2020-02-24 Thread Mike Bayer


On Mon, Feb 24, 2020, at 3:44 PM, Ke Zhu - k...@us.ibm.com wrote:
> Mike,
> 
> Thanks for the pointers. I've figured out the programming part and discovered 
> more things when integrating PyHive:
> 
> 1. It requires the table alembic_version to be transactional so that it can 
> do Update/Delete on upgrade/downgrade, which is a challenge for Hive3, which 
> has limited ACID support. Since I could not figure out a way to program a 
> transactional table via the SQLAlchemy API (see 
> https://github.com/dropbox/PyHive/issues/314), it ended up as a decorator to 
> patch sqlalchemy.schema.CreateTable by appending table properties.

it's not a hard requirement that there's a DB transaction in use; you could run 
alembic in an "autocommit" mode and that shouldn't cause any problem. You just 
won't be able to roll back if something fails. I'm not completely sure what you 
mean by "the table to be transactional so that it can do update/delete", but 
from Alembic's point of view it just needs to run INSERT/UPDATE/DELETE; there 
don't have to be any ACID guarantees.


> 2. PyHive doesn't fully support Update/Delete for Hive/Presto yet. It's easy 
> to patch PyHive, but the key problem is that it doesn't support transactional 
> DDL like rollback in Hive3.
> 
> Although I've managed to get `alembic upgrade` and `alembic downgrade` to 
> work on Hive 3, it's still not a fully transactional experience (e.g., the 
> schema changed successfully but the update to the table alembic_version failed).

so... this is a database that can change schema structures but not 
insert/update/delete rows? I'm not following. Is there some kind of SQL layer 
that has to be in use that isn't there when you create structures?


> 
> I wonder if there's any design direction in Alembic toward allowing the 
> version table `alembic_version` to be stored in another db when dealing with a 
> non-RDBMS SQL engine (Hive/Presto), e.g., supporting a postgres db to store 
> the alembic_version table while delivering the actual changes to Hive. I had a 
> PoC using the multi-db template to manage the table `alembic_version` in an 
> RDBMS while denying any operation on the table `alembic_version` in 
> Hive/Presto. It works now, but does it sound right?

that wouldn't be very easy but also it would be way better to store the version 
info in the target DB itself. I don't know anything about Hive/Presto, but they 
*are* databases so I'd assume you can put data in them.


> 
> On Sat, 2020-01-25 at 18:19 -0500, Mike Bayer wrote:
>> 
>> 
>> On Fri, Jan 24, 2020, at 1:56 PM, Ke Zhu wrote:
>>> Just discovered this post when trying to do exact same thing (besides 
>>> planning to support one more dialect).
>>> 
>>> > Anywhere in your hive dialect, simply put the above code that you have 
>>> > (using the correct imports of course). 
>>> 
>>> Does it mean it must introduce a dependency on alembic (since it uses 
>>> alembic.ddl.impl.DefaultImpl) in a package (e.g., PyHive) that supports 
>>> sqlalchemy interfaces?
>> 
>> well you have to put it in a try/except ImportError block so that if alembic 
>> isn't installed, it silently passes. there's a github issue to add support 
>> for real entrypoints but it hasn't been that critical.
>> 
>>> 
>>> If not, is there any guidance to support this at the alembic level in a 
>>> pluggable way? E.g., declare a HiveImpl class in the `env.py` of a project 
>>> that uses alembic?
>> 
>> you could put one in your env.py also but if you are the person working on 
>> the dialect you can have this built in, see the example in 
>> sqlalchemy-redshift: 
>> https://github.com/sqlalchemy-redshift/sqlalchemy-redshift/blob/master/sqlalchemy_redshift/dialect.py#L27
>>  
>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_sqlalchemy-2Dredshift_sqlalchemy-2Dredshift_blob_master_sqlalchemy-5Fredshift_dialect.py-23L27=DwMFaQ=jf_iaSHvJObTbx-siA1ZOg=e0JaCS5rMieNhsG1YWcybg=froqtQlu2FU5RhgkzLXlily86IEGWTyf6iZZV2219n8=0MY0coM-F1OHHH0XURBJmXbbzbboExEXzrx_GekfzqA=>
>> 
>> 
>> 
>>> 
>>> PS: I raised this question 
>>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__stackoverflow.com_questions_59887588_how-2Dto-2Dadd-2Dnew-2Ddialect-2Dto-2Dalembic-2Dbesides-2Dbuilt-2Din-2Ddialects=DwMFaQ=jf_iaSHvJObTbx-siA1ZOg=e0JaCS5rMieNhsG1YWcybg=froqtQlu2FU5RhgkzLXlily86IEGWTyf6iZZV2219n8=5iBXdz3bhfClgt2qA9bV1Q-KzEuG4P2n0KsHFx6w4VI=>
>>>  in stackoverflow but raised this group is a better place to get help.
>>> 
>>> 
>>> On Friday, February 10, 2017 at 9:45:38 AM UTC-5, mike bayer wrote:
>>>> 
>>>> 
>>>> On 02/10/2017 07:41 AM, Alexander Peletz wrote: 
>&g

Re: Migrate alembic_version table from Public to tenant schema

2020-02-18 Thread Mike Bayer


On Tue, Feb 18, 2020, at 1:17 PM, Brian Hill wrote:
> Is there a way to move the alembic_version table in the Public schema 
> (postgres) to a specific schema (mult-tenant) as part of a migration?

I highly doubt this is possible in the general case without the raw SQL in your 
env.py, because as the migration runs, the environment needs to update the 
table; if it moves schemas, then it won't be updating the correct table 
anymore. You would have to manipulate the internal state of the 
MigrationContext within the migration script so that this doesn't happen. 
However, the migration is also occurring inside of a transaction, so it's very 
likely that you wouldn't actually be able to fully drop the old table in every 
case, there could be other side effects of this as well, and I can't guarantee 
this internal manipulation will always work for new Alembic releases.


> 
> I want alembic to read the inital version from Public.alembic_version and 
> then write the new upgraded version to the specific schema.alembic_version.
> 
> The only way I can get it to work is executing raw sql after the migration, 
> and this won't work for future migrations.
> 
> with context.begin_transaction():
> context.run_migrations()
> context.execute(f'alter table Public.alembic_version set schema {DB_SCHEMA}')


I think what you can do here is check for alembic_version in the old schema and 
in the new one, and then move the table after the migrations, but only if it 
needs to be moved. Basically have the migrations always run with the schema 
wherever it is to start with, then move it after the fact.

you'd have to detect which schema to use before you configure the context, 
however, since you need to pass it the schema for the alembic_version table.
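The detection step can be sketched roughly as follows; the function name and the candidate-list shape are illustrative, not an Alembic API (demonstrated against SQLite, whose default schema is named "main"):

```python
from sqlalchemy import create_engine, inspect, text

def find_version_schema(connection, candidates):
    """Return the first schema in `candidates` that already contains an
    alembic_version table, or None if it exists in none of them."""
    insp = inspect(connection)
    for schema in candidates:
        if "alembic_version" in insp.get_table_names(schema=schema):
            return schema
    return None

# quick demonstration against an in-memory SQLite database
engine = create_engine("sqlite://")
with engine.begin() as conn:
    assert find_version_schema(conn, ["main"]) is None
    conn.execute(text(
        "CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)"))
    assert find_version_schema(conn, ["main"]) == "main"
```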



> 
> Do I have to flatten/rebase my versions after this?
> 
> Thanks,
> 
> Brian
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/9ceb8556-e2c3-4631-84cf-8ba636c31a24%40googlegroups.com
>  
> .

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/96a4236d-2f57-4291-9332-f9f02c88694b%40www.fastmail.com.


Re: Using enums with schema_translate_map

2020-02-17 Thread Mike Bayer
Here is your SQL output, enum is created correctly:

CREATE TYPE test_schema.enum1 AS ENUM ('One', 'Two')


however the table does not refer to the correct enum:

CREATE TABLE test_schema.table1 (
id SERIAL NOT NULL,
type1 enum1,
PRIMARY KEY (id)
)


this is a SQLAlchemy bug, https://github.com/sqlalchemy/sqlalchemy/issues/5158; 
it will be fixed in 1.3.14


On Mon, Feb 17, 2020, at 2:51 PM, Mike Bayer wrote:
> mm nope we have plenty of tests for this here:
> 
> https://github.com/sqlalchemy/sqlalchemy/blob/master/test/dialect/postgresql/test_compiler.py#L223
> 
> 
> 
> On Mon, Feb 17, 2020, at 2:49 PM, Mike Bayer wrote:
>> it's possible also that PG Enum CREATE TYPE doesn't work with 
>> schema_translate_map, would need to evaluate that on the SQLAlchemy side.
>> 
>> On Mon, Feb 17, 2020, at 2:45 PM, Mike Bayer wrote:
>>> schema_translate_map is not yet supported with all Alembic ops:
>>> 
>>> https://github.com/sqlalchemy/alembic/issues/555
>>> 
>>> you will need to fill in the "schema" parameter explicitly when you call 
>>> upon op.create_table()
>>> 
>>> 
>>> 
>>> On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
>>>> I'm having trouble using enums in conjunction with schema_translate_map 
>>>> for postgres migrations.
>>>> 
>>>> My model, single table, single enum.
>>>> 
>>>> import enum
>>>> from sqlalchemy import MetaData, Enum, Column, Integer
>>>> from sqlalchemy.ext.declarative import declarative_base
>>>> 
>>>> 
>>>> metadata = MetaData()
>>>> Base = declarative_base(metadata=metadata)
>>>> 
>>>> 
>>>> 
>>>> 
>>>> class Enum1(enum.Enum):
>>>> One = 1
>>>> Two = 2
>>>> 
>>>> 
>>>> 
>>>> 
>>>> class Table1(Base):
>>>>  __tablename__ = 'table1'
>>>>  id = Column(Integer, primary_key=True)
>>>>  type1 = Column(Enum(Enum1))
>>>> 
>>>> Version file.
>>>> 
>>>> 
>>>> """initial revision
>>>> 
>>>> 
>>>> Revision ID: 844dd0269c1b
>>>> Revises: 
>>>> Create Date: 2020-02-17 12:23:31.125308
>>>> 
>>>> 
>>>> """
>>>> from alembic import op
>>>> import sqlalchemy as sa
>>>> 
>>>> 
>>>> 
>>>> 
>>>> # revision identifiers, used by Alembic.
>>>> revision = '844dd0269c1b'
>>>> down_revision = None
>>>> branch_labels = None
>>>> depends_on = None
>>>> 
>>>> 
>>>> 
>>>> 
>>>> def upgrade():
>>>> # ### commands auto generated by Alembic - please adjust! ###
>>>>  op.create_table('table1',
>>>>  sa.Column('id', sa.Integer(), nullable=False),
>>>>  sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
>>>>  sa.PrimaryKeyConstraint('id')
>>>> )
>>>> # ### end Alembic commands ###
>>>> 
>>>> 
>>>> 
>>>> 
>>>> def downgrade():
>>>> # ### commands auto generated by Alembic - please adjust! ###
>>>>  op.drop_table('table1')
>>>> # ### end Alembic commands ###
>>>> 
>>>> 
>>>> Run migration function from env.py (creating foo schema):
>>>> 
>>>> 
>>>> 
>>>> def run_migrations_online():
>>>>  connectable = engine_from_config(
>>>>  config.get_section(config.config_ini_section),
>>>>  prefix="sqlalchemy.",
>>>>  poolclass=pool.NullPool,
>>>> )
>>>> 
>>>> 
>>>> with connectable.connect() as connection:
>>>> 
>>>> 
>>>> # use different schema
>>>>  connection.execute(f'create schema if not exists foo')
>>>>  connection = connection.execution_options(
>>>>  schema_translate_map={None: 'foo'}
>>>> )
>>>> 
>>>> 
>>>>  context.configure(
>>>>  connection=connection,
>>>>  target_metadata=target_metadata,
>>>>  include_schema=True,
>>>> )
>>>> 
>>>> 
>>>> with context.begin_transaction():
>>>>  context.run_migrations()
>>>> 
>>>> 
>>>> I get the following error when i run *al

Re: Using enums with schema_translate_map

2020-02-17 Thread Mike Bayer
mm nope we have plenty of tests for this here:

https://github.com/sqlalchemy/sqlalchemy/blob/master/test/dialect/postgresql/test_compiler.py#L223



On Mon, Feb 17, 2020, at 2:49 PM, Mike Bayer wrote:
> it's possible also that PG Enum CREATE TYPE doesn't work with 
> schema_translate_map, would need to evaluate that on the SQLAlchemy side.
> 
> On Mon, Feb 17, 2020, at 2:45 PM, Mike Bayer wrote:
>> schema_translate_map is not yet supported with all Alembic ops:
>> 
>> https://github.com/sqlalchemy/alembic/issues/555
>> 
>> you will need to fill in the "schema" parameter explicitly when you call 
>> upon op.create_table()
>> 
>> 
>> 
>> On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
>>> I'm having trouble using enums in conjunction with schema_translate_map for 
>>> postgres migrations.
>>> 
>>> My model, single table, single enum.
>>> 
>>> import enum
>>> from sqlalchemy import MetaData, Enum, Column, Integer
>>> from sqlalchemy.ext.declarative import declarative_base
>>> 
>>> 
>>> metadata = MetaData()
>>> Base = declarative_base(metadata=metadata)
>>> 
>>> 
>>> 
>>> 
>>> class Enum1(enum.Enum):
>>> One = 1
>>> Two = 2
>>> 
>>> 
>>> 
>>> 
>>> class Table1(Base):
>>>  __tablename__ = 'table1'
>>>  id = Column(Integer, primary_key=True)
>>>  type1 = Column(Enum(Enum1))
>>> 
>>> Version file.
>>> 
>>> 
>>> """initial revision
>>> 
>>> 
>>> Revision ID: 844dd0269c1b
>>> Revises: 
>>> Create Date: 2020-02-17 12:23:31.125308
>>> 
>>> 
>>> """
>>> from alembic import op
>>> import sqlalchemy as sa
>>> 
>>> 
>>> 
>>> 
>>> # revision identifiers, used by Alembic.
>>> revision = '844dd0269c1b'
>>> down_revision = None
>>> branch_labels = None
>>> depends_on = None
>>> 
>>> 
>>> 
>>> 
>>> def upgrade():
>>> # ### commands auto generated by Alembic - please adjust! ###
>>>  op.create_table('table1',
>>>  sa.Column('id', sa.Integer(), nullable=False),
>>>  sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
>>>  sa.PrimaryKeyConstraint('id')
>>> )
>>> # ### end Alembic commands ###
>>> 
>>> 
>>> 
>>> 
>>> def downgrade():
>>> # ### commands auto generated by Alembic - please adjust! ###
>>>  op.drop_table('table1')
>>> # ### end Alembic commands ###
>>> 
>>> 
>>> Run migration function from env.py (creating foo schema):
>>> 
>>> 
>>> 
>>> def run_migrations_online():
>>>  connectable = engine_from_config(
>>>  config.get_section(config.config_ini_section),
>>>  prefix="sqlalchemy.",
>>>  poolclass=pool.NullPool,
>>> )
>>> 
>>> 
>>> with connectable.connect() as connection:
>>> 
>>> 
>>> # use different schema
>>>  connection.execute(f'create schema if not exists foo')
>>>  connection = connection.execution_options(
>>>  schema_translate_map={None: 'foo'}
>>> )
>>> 
>>> 
>>>  context.configure(
>>>  connection=connection,
>>>  target_metadata=target_metadata,
>>>  include_schema=True,
>>> )
>>> 
>>> 
>>> with context.begin_transaction():
>>>  context.run_migrations()
>>> 
>>> 
>>> I get the following error when I run *alembic upgrade head*:
>>> 
>>> 
>>> sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type 
>>> "enum1" does not exist
>>> LINE 4: type1 enum1, 
>>> ^
>>> 
>>> 
>>> [SQL: 
>>> CREATE TABLE foo.table1 (
>>>  id SERIAL NOT NULL, 
>>>  type1 enum1, 
>>>  PRIMARY KEY (id)
>>> )
>>> 
>>> 
>>> ]
>>> (Background on this error at: http://sqlalche.me/e/f405)
>>> 
>>> It works when I comment out schema_translate_map.
>>> 
>>> I feel like I'm missing something fundamental about using 
>>> schema_translate_map for multi-tenant/schema.
>>> 

>>> --
>>> You received this message because you are subscribed to the Google Groups 
>>> "sqlalchemy-alembic

Re: Using enums with schema_translate_map

2020-02-17 Thread Mike Bayer
it's possible also that PG Enum CREATE TYPE doesn't work with 
schema_translate_map, would need to evaluate that on the SQLAlchemy side.

On Mon, Feb 17, 2020, at 2:45 PM, Mike Bayer wrote:
> schema_translate_map is not yet supported with all Alembic ops:
> 
> https://github.com/sqlalchemy/alembic/issues/555
> 
> you will need to fill in the "schema" parameter explicitly when you call upon 
> op.create_table()
> 
> 
> 
> On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
>> I'm having trouble using enums in conjunction with schema_translate_map for 
>> postgres migrations.
>> 
>> My model, single table, single enum.
>> 
>> import enum
>> from sqlalchemy import MetaData, Enum, Column, Integer
>> from sqlalchemy.ext.declarative import declarative_base
>> 
>> 
>> metadata = MetaData()
>> Base = declarative_base(metadata=metadata)
>> 
>> 
>> class Enum1(enum.Enum):
>>     One = 1
>>     Two = 2
>> 
>> 
>> class Table1(Base):
>>     __tablename__ = 'table1'
>>     id = Column(Integer, primary_key=True)
>>     type1 = Column(Enum(Enum1))
>> 
>> Version file.
>> 
>> 
>> """initial revision
>> 
>> 
>> Revision ID: 844dd0269c1b
>> Revises: 
>> Create Date: 2020-02-17 12:23:31.125308
>> 
>> 
>> """
>> from alembic import op
>> import sqlalchemy as sa
>> 
>> 
>> 
>> 
>> # revision identifiers, used by Alembic.
>> revision = '844dd0269c1b'
>> down_revision = None
>> branch_labels = None
>> depends_on = None
>> 
>> 
>> 
>> 
>> def upgrade():
>>     # ### commands auto generated by Alembic - please adjust! ###
>>     op.create_table('table1',
>>         sa.Column('id', sa.Integer(), nullable=False),
>>         sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
>>         sa.PrimaryKeyConstraint('id')
>>     )
>>     # ### end Alembic commands ###
>> 
>> 
>> def downgrade():
>>     # ### commands auto generated by Alembic - please adjust! ###
>>     op.drop_table('table1')
>>     # ### end Alembic commands ###
>> 
>> 
>> Run migration function from env.py (creating foo schema):
>> 
>> 
>> 
>> def run_migrations_online():
>>     connectable = engine_from_config(
>>         config.get_section(config.config_ini_section),
>>         prefix="sqlalchemy.",
>>         poolclass=pool.NullPool,
>>     )
>> 
>> 
>>     with connectable.connect() as connection:
>> 
>> 
>>         # use different schema
>>         connection.execute(f'create schema if not exists foo')
>>         connection = connection.execution_options(
>>             schema_translate_map={None: 'foo'}
>>         )
>> 
>> 
>>         context.configure(
>>             connection=connection,
>>             target_metadata=target_metadata,
>>             include_schema=True,
>>         )
>> 
>> 
>>         with context.begin_transaction():
>>             context.run_migrations()
>> 
>> 
>> I get the following error when I run *alembic upgrade head*:
>> 
>> 
>> sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type 
>> "enum1" does not exist
>> LINE 4: type1 enum1, 
>> ^
>> 
>> 
>> [SQL: 
>> CREATE TABLE foo.table1 (
>>  id SERIAL NOT NULL, 
>>  type1 enum1, 
>>  PRIMARY KEY (id)
>> )
>> 
>> 
>> ]
>> (Background on this error at: http://sqlalche.me/e/f405)
>> 
>> It works when I comment out schema_translate_map.
>> 
>> I feel like I'm missing something fundamental about using 
>> schema_translate_map for multi-tenant/schema.
>> 

>> --
>> You received this message because you are subscribed to the Google Groups 
>> "sqlalchemy-alembic" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/sqlalchemy-alembic/f8513719-4dd0-4bb2-b769-ff874e2017f2%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/sqlalchemy-alembic/f8513719-4dd0-4bb2-b769-ff874e2017f2%40googlegroups.com?utm_medium=email_source=footer>.
> 
> 

> --
>  You received this message because you are subscribed to the Google Groups 
> "sqlalchemy-alembic" group.
>  To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>  To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy-alembic/1ffea337-b61f-4712-9b90-c73056895e45%40www.fastmail.com
>  
> <https://groups.google.com/d/msgid/sqlalchemy-alembic/1ffea337-b61f-4712-9b90-c73056895e45%40www.fastmail.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/078033be-3f96-4ebe-8be7-b23e3842c312%40www.fastmail.com.


Re: Using enums with schema_translate_map

2020-02-17 Thread Mike Bayer
schema_translate_map is not yet supported with all Alembic ops:

https://github.com/sqlalchemy/alembic/issues/555

you will need to fill in the "schema" parameter explicitly when you call upon 
op.create_table()
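
As a rough sketch (not from the original thread), the version file below passes the schema explicitly to both the table and the Enum; the "foo" schema name matches the env.py example quoted further down. Imports are deferred so the snippet stays importable even without Alembic installed; a real migration file would import them at module level.

```python
def upgrade():
    # Deferred imports; a real version file puts these at the top of the module.
    from alembic import op
    import sqlalchemy as sa

    op.create_table(
        'table1',
        sa.Column('id', sa.Integer(), nullable=False),
        # schema= on the Enum makes the CREATE TYPE target explicit as well
        sa.Column('type1', sa.Enum('One', 'Two', name='enum1', schema='foo'),
                  nullable=True),
        sa.PrimaryKeyConstraint('id'),
        schema='foo',  # explicit schema instead of relying on schema_translate_map
    )
```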



On Mon, Feb 17, 2020, at 12:47 PM, Brian Hill wrote:
> I'm having trouble using enums in conjunction with schema_translate_map for 
> postgres migrations.
> 
> My model, single table, single enum.
> 
> import enum
> from sqlalchemy import MetaData, Enum, Column, Integer
> from sqlalchemy.ext.declarative import declarative_base
> 
> 
> metadata = MetaData()
> Base = declarative_base(metadata=metadata)
> 
> 
> 
> 
> class Enum1(enum.Enum):
>     One = 1
>     Two = 2
> 
> 
> class Table1(Base):
>     __tablename__ = 'table1'
>     id = Column(Integer, primary_key=True)
>     type1 = Column(Enum(Enum1))
> 
> Version file.
> 
> 
> """initial revision
> 
> 
> Revision ID: 844dd0269c1b
> Revises: 
> Create Date: 2020-02-17 12:23:31.125308
> 
> 
> """
> from alembic import op
> import sqlalchemy as sa
> 
> 
> 
> 
> # revision identifiers, used by Alembic.
> revision = '844dd0269c1b'
> down_revision = None
> branch_labels = None
> depends_on = None
> 
> 
> 
> 
> def upgrade():
>     # ### commands auto generated by Alembic - please adjust! ###
>     op.create_table('table1',
>         sa.Column('id', sa.Integer(), nullable=False),
>         sa.Column('type1', sa.Enum('One', 'Two', name='enum1'), nullable=True),
>         sa.PrimaryKeyConstraint('id')
>     )
>     # ### end Alembic commands ###
> 
> 
> def downgrade():
>     # ### commands auto generated by Alembic - please adjust! ###
>     op.drop_table('table1')
>     # ### end Alembic commands ###
> 
> 
> Run migration function from env.py (creating the foo schema):
> 
> 
> 
> def run_migrations_online():
>     connectable = engine_from_config(
>         config.get_section(config.config_ini_section),
>         prefix="sqlalchemy.",
>         poolclass=pool.NullPool,
>     )
> 
>     with connectable.connect() as connection:
> 
>         # use different schema
>         connection.execute('create schema if not exists foo')
>         connection = connection.execution_options(
>             schema_translate_map={None: 'foo'}
>         )
> 
>         context.configure(
>             connection=connection,
>             target_metadata=target_metadata,
>             include_schemas=True,
>         )
> 
>         with context.begin_transaction():
>             context.run_migrations()
> 
> 
> I get the following error when i run *alembic upgrade head*:
> 
> 
> sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) type 
> "enum1" does not exist
> LINE 4: type1 enum1, 
> ^
> 
> 
> [SQL: 
> CREATE TABLE foo.table1 (
>  id SERIAL NOT NULL, 
>  type1 enum1, 
>  PRIMARY KEY (id)
> )
> 
> 
> ]
> (Background on this error at: http://sqlalche.me/e/f405)
> 
> It works when I comment out schema_translate_map.
> 
> I feel like I'm missing something fundamental about using 
> schema_translate_map for multi-tenant/schema.
> 




Re: Integrate PyHive and Alembic

2020-01-25 Thread Mike Bayer


On Fri, Jan 24, 2020, at 1:56 PM, Ke Zhu wrote:
> Just discovered this post when trying to do the exact same thing (besides 
> planning to support one more dialect).
> 
> > Anywhere in your hive dialect, simply put the above code that you have 
> > (using the correct imports of course). 
> 
> Does it mean it must introduce dependency to alembic (since it uses 
> alembic.ddl.impl.DefaultImpl) in a package (.e.g, pyHive) that supports 
> sqlalchemy interfaces?

well you have to put it in a try/except ImportError block so that if alembic 
isn't installed, it silently passes. there's a github issue to add support for 
real entrypoints but it hasn't been that critical.
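
A minimal sketch of that guard as it might sit in a dialect module; if Alembic is not installed the import fails silently and nothing is registered:

```python
try:
    from alembic.ddl.impl import DefaultImpl
except ImportError:
    DefaultImpl = None  # Alembic not installed; skip registering the impl

if DefaultImpl is not None:
    class HiveImpl(DefaultImpl):
        # DefaultImpl's metaclass registers this class for the "hive"
        # dialect name as a side effect of class creation.
        __dialect__ = "hive"
```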

> 
> If not, is there any guidance to support this at alembic level in a 
> plug-gable way? E.g., declare a HiveImpl class in `env.py` of a project uses 
> alembic?

you could put one in your env.py also but if you are the person working on the 
dialect you can have this built in, see the example in sqlalchemy-redshift: 
https://github.com/sqlalchemy-redshift/sqlalchemy-redshift/blob/master/sqlalchemy_redshift/dialect.py#L27



> 
> PS: I raised this question 
> <https://stackoverflow.com/questions/59887588/how-to-add-new-dialect-to-alembic-besides-built-in-dialects>
>  on Stack Overflow, but realized this group is a better place to get help.
> 
> 
> On Friday, February 10, 2017 at 9:45:38 AM UTC-5, mike bayer wrote:
>> 
>> 
>> On 02/10/2017 07:41 AM, Alexander Peletz wrote: 
>> > Hello, 
>> > 
>> > I would like to use Alembic to manage my Hive Metastore. I have 
>> > installed, PyHive, SqlAlchemy, and Alembic. I am able to create a 
>> > functional engine object using the 'hive' dialect in sqlalchemy, however 
>> > I cannot get Alembic to recognize this dialect. The problem appears to 
>> > be a lack of a HiveImpl class in the Alembic package. I attempted to 
>> > resolve this by creating an alembic/ddl/hive.py module and pasting the 
>> > following code into that module: 
>> > 
>> > 
>> > from .impl import DefaultImpl 
>> > 
>> > class HiveImpl(DefaultImpl): 
>> > __dialect__ = 'hive' 
>> 
>> 
>> 
>> you don't actually have to create a "hive.py" file. Anywhere in your 
>> hive dialect, simply put the above code that you have (using the correct 
>> imports of course). The DefaultImpl uses a metaclass that will allow 
>> the "hive" name to be available to alembic as a result of this class 
>> being created. 
>> 
>> 
>> > 
>> > 
>> > 
>> > 
>> > I simply want to be able to execute raw SQL against a Hive instance (no 
>> > ORM implementation needed) and I was hoping to use Alembic to manage the 
>> > minimum upgrade/downgrade functionality. Are there any simple edits I 
>> > can make to the Alembic source code to allow me to achieve this goal? 
>> > 
>> > 
>> > 
>> > Thanks, 
>> > 
>> > Alexander 
>> > 
>> > -- 
> 




Re: Automating charset encoding/collation conversion for MySQL db.

2019-12-04 Thread Mike Bayer


On Tue, Dec 3, 2019, at 7:50 PM, Jens Troeger wrote:
> Hello,
> 
> Using a MySQL database, keeping Unicode strings under control turned out to 
> be a bit of a challenge. I could have sworn that character encoding and 
> collation are set up correctly, i.e.
> 
> +--------------------------+----------------------+
> | @@character_set_database | @@collation_database |
> +--------------------------+----------------------+
> | utf8                     | utf8_general_ci      |
> +--------------------------+----------------------+
> 
> Alas, that turned out not to be the case. So now I’m looking at a few 
> databases where random tables and columns are back to latin1 character set. 
> Trying to get to a solution that takes a db, checks & switches its 
> charset/encoding then iterates over all tables, checks & switches their 
> charset/encoding then iterates over the columns and (for appropriate types) 
> checks & switches their charset/encoding.
> 
> I can do that by issuing SQL statements directly:
> 
> SELECT @@character_set_database, @@collation_database;
> 
> then if need be
> 
> ALTER DATABASE db_name CHARACTER SET "utf8" COLLATE "utf8_general_ci";
> 
> Similarly for tables (link 
> ) and table 
> columns (link ). 
> 
> However, I wonder if there is a more elegant way with SQLAlchemy than issuing 
> SQL directly?


you can get lists of all table names as well as the MySQL-specific options 
using the inspector:

https://docs.sqlalchemy.org/en/13/core/reflection.html#fine-grained-reflection-with-inspector

from there you still need to run conn.execute("ALTER ...") yourself.
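
A sketch of what that combination could look like; the function name, signature, and charset defaults here are my own illustrative choices, not from the thread, and the import is deferred so the snippet stays importable without SQLAlchemy:

```python
def convert_tables_to_charset(engine, charset="utf8", collation="utf8_general_ci"):
    """Reflect table names via the inspector, then ALTER each table."""
    from sqlalchemy import inspect, text  # deferred; normally module-level

    inspector = inspect(engine)
    with engine.begin() as conn:
        for table in inspector.get_table_names():
            # CONVERT TO also rewrites per-column charsets, unlike a bare
            # ALTER TABLE ... CHARACTER SET, which only changes the default.
            conn.execute(text(
                f"ALTER TABLE `{table}` CONVERT TO CHARACTER SET {charset} "
                f"COLLATE {collation}"
            ))
```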


> 
> I have also noticed that some string types use Unicode (which maps to 
> varchar) and others use UnicodeText (which maps to mediumtext). I vaguely 
> remember that there were size constraints, I think a maximum of 65k across 
> all columns of a table, whereas mediumtext doesn't have that 65k size 
> constraint?

mysql documentation would help you there



> 
> 
> Disregarding that, converting from UnicodeText to Unicode should not impact 
> the value, right? While cleaning up the character set encoding and collation, 
> I thought that it may be good to get the string types right.


this would be an ALTER TABLE and modern MySQL versions should be fairly robust 
as far as warning if text is being truncated ( I would create a restorable dump 
of all affected databases first), however it might be a slow running operation 
so I would expect the database may be unavailable to other applications while 
some of these operations run.



> 
> Much thanks!
> Jens
> 




Re: extending enum type natively in postgres using alembic

2019-10-31 Thread Mike Bayer


On Thu, Oct 31, 2019, at 8:45 AM, Alexander wrote:
> Dear colleagues,
> 
> I would like to extend enum type in postgres using alembic and currently i 
> have to do the following:
> 
> name = 'my_type'
> old_enum = sa.dialects.postgresql.ENUM('value1', 'value2', name=name)
> new_enum = sa.dialects.postgresql.ENUM('value1', 'value2', 'value3', 
> name=name) 
> 
> op.execute(f'ALTER TYPE {name} RENAME TO _{name}')
> new_enum.create(op.get_bind())
> op.execute(f'ALTER TABLE table1 ALTER COLUMN col1 TYPE {name} USING 
> col1::text::{name}')
> op.execute(f'DROP TYPE _{name}')
> 
> Since 9.1 postgres supports adding new values to enum natively (added queries 
> below), is it possible to perform such request in alembic?
> ALTER TYPE enum_type ADD VALUE 'new_value'; -- appends to list
> ALTER TYPE enum_type ADD VALUE 'new_value' BEFORE 'old_value';
> ALTER TYPE enum_type ADD VALUE 'new_value' AFTER 'old_value';


you are using op.execute() already to emit the "ALTER TYPE" command directly. 
Just emit the "ALTER TYPE .. ADD VALUE" commands via op.execute() instead. 

I happen to recall that there is a restriction on "ADD VALUE" that it can't run 
in a transaction block, and even though you didn't note this, I would assume 
this is the actual problem you are having (if this is the case, please include 
details like these, that is, what you tried and why it is not working, as we 
would not usually know what the problem you are having is otherwise). 

To deal with this restriction Alembic has a fairly recent feature as of 1.2.0 
called autocommit_block:

https://alembic.sqlalchemy.org/en/latest/api/runtime.html#alembic.runtime.migration.MigrationContext.autocommit_block


the documentation above refers to the specific example of emitting "ADD VALUE" 
on PostgreSQL.
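
Put together, a migration using that feature might look roughly like this sketch, reusing the "my_type" / third-value names from the question; the import is deferred only so the snippet stays importable outside a migration run:

```python
def upgrade():
    from alembic import op  # a real version file imports this at module level

    # ADD VALUE cannot run inside a transaction block on older PostgreSQL,
    # so Alembic 1.2+ lets you step outside the migration transaction:
    with op.get_context().autocommit_block():
        op.execute("ALTER TYPE my_type ADD VALUE 'value3'")
```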





> 
> -- 
> Kind regards,
> Alexander.
> 




Re: Snowflake odities and ReplaceableObject

2019-10-11 Thread Mike Bayer
I think it would be simplest to use a regular expression to pull out the symbol 
names from the declared function in order to write out the DROP text.

Otherwise, just add additional arguments to ReplaceableObject. It's a recipe so 
you should change it directly, I wouldn't create new superclasses or anything 
like that. 
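
That regular-expression approach might look like the following; `drop_signature` is a name I've made up, and it assumes simple `NAME TYPE` argument pairs as in the examples below:

```python
import re

def drop_signature(create_signature: str) -> str:
    """Turn a CREATE-style signature like 'myfunc(TABLENAME VARCHAR)'
    into the DROP-style 'myfunc(VARCHAR)' that Snowflake expects."""
    match = re.match(r"\s*(\w+)\s*\((.*)\)\s*$", create_signature, re.DOTALL)
    name, args = match.groups()
    # Keep only the type token (the last word) of each 'NAME TYPE' pair.
    types = [arg.split()[-1] for arg in args.split(",") if arg.strip()]
    return "{}({})".format(name, ", ".join(types))

# drop_signature("myfunc(TABLENAME VARCHAR)") -> "myfunc(VARCHAR)"
```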




On Fri, Oct 11, 2019, at 2:58 AM, Scott wrote:
> I am using alembic with Snowflake.
> 
> In terms of DDL migrations I need to cater for SECURED VIEWS, it is pretty 
> trivial to extend ReplaceableObject to deal with this.
> 
> The other thing I am finding is that Snowflake insists that DROP FUNCTION be 
> provided with a list of the argument types along with the function name, so 
> with a function:
> 
>  CREATE FUNCTION myfunc(TABLENAME VARCHAR)
>  RETURNS VARCHAR
>  
> 
> this does not work:
> 
>  DROP FUNCTION myfunc
> 
> I need to go
> 
>  DROP FUNCTION myfunc(VARCHAR)
> 
> But I cannot go
> 
>  DROP FUNCTION myfunc(TABLENAME VARCHAR)
> 
> So in terms of using ReplaceableObject, to CREATE I need:
> 
> ReplaceableObject(
>  "myfunc(TABLENAME VARCHAR)",
>  """
>  RETURNS VARCHAR
> LANGUAGE javascript
> ...
> 
> But to DROP I need:
> 
> ReplaceableObject(
>  "myfunc(VARCHAR)",
>  """
>  RETURNS VARCHAR
> LANGUAGE javascript
> ...
> 
> I can hack something together with a superclass, but though I would mention 
> here in case someone has a cleaner solution.
> 
> Cheers, Scott
> 




Re: Paritioned Table Migration

2019-08-29 Thread Mike Bayer


On Thu, Aug 29, 2019, at 8:55 AM, Stephan Gerhard wrote:
> Hi,
> 
> I am wondering whether it is possible to use Alembic to define a migration 
> where I add a partitioned table - a feature that is supported in Postgres 12 
> - e.g. using range partitions.
> https://www.2ndquadrant.com/en/blog/partitioning-enhancements-in-postgresql-12/
> 
> How would I write this as an Alembic migration?

you would use op.execute():

e.g.

def upgrade():
    op.execute(
        "CREATE TABLE data_values_4_10 PARTITION OF data_values "
        "FOR VALUES FROM (4) TO (10)"
    )

https://alembic.sqlalchemy.org/en/latest/ops.html?highlight=op%20execute#alembic.operations.Operations.execute



> 
> Thanks,
> Stephan
> 




Re: How to rename column on MySQL without deleting existing data?

2019-08-27 Thread Mike Bayer


On Tue, Aug 27, 2019, at 6:15 AM, Mike wrote:
> *Setup:*
> mysql> SELECT version();
> 
> * 5.7.27-0ubuntu0.18.04.1
> * Python 3.6.8
> * SQLAlchemy 1.3.6
> * Alembic 1.0.11
> 
> 
> *models.py:*
> class Experiments(db.Model):
>     id = db.Column(db.Integer, primary_key=True)
>     # country = db.Column(db.String(100))
>     countryland = db.Column(db.String(100))
> 
>     insert_date = db.Column(db.DateTime, index=True,
>                             default=datetime.utcnow)
> 
> 
> *SQL data:*
> INSERT INTO experiments (country)
> VALUES ('US'),
>  ('Canada'),
>  ('Mexico'),
>  ('Brazil'),
>  ('Argentina'),
>  ('Spain'),
>  ('Portugal'),
>  ('France'),
>  ('Germany'),
>  ('UK');
> 
> 
> *migration file:*
> def upgrade():
>     # ### commands auto generated by Alembic - please adjust! ###
> 
>     # op.add_column('experiments', sa.Column('countryland', 
>     #     sa.String(length=100), nullable=True))
>     # op.drop_column('experiments', 'country')
> 
>     op.alter_column('experiments', column_name='country', 
>                     new_column_name='countryland')


hi there -

for MySQL the alter_column() operation requires that you pass the existing 
information about the column as well:

op.alter_column('table', column_name='a', new_column_name='b', 
existing_type=String, existing_nullable=False, existing_server_default=...)

see 
https://alembic.sqlalchemy.org/en/latest/ops.html?highlight=alter_column#alembic.operations.Operations.alter_column
 for the full list of parameters.
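
Applied to the migration quoted below, the rename could be sketched like this; the `existing_*` values mirror the model's `db.String(100)` nullable column, and the imports are deferred only so the snippet stays importable without Alembic:

```python
def upgrade():
    from alembic import op  # a real version file imports these at module level
    import sqlalchemy as sa

    # MySQL implements renames via CHANGE COLUMN, which restates the full
    # column definition, hence the existing_* parameters.
    op.alter_column(
        'experiments',
        column_name='country',
        new_column_name='countryland',
        existing_type=sa.String(length=100),
        existing_nullable=True,
    )
```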



> 
>  # ### end Alembic commands ###
> 
> 
> 
> *Problem:*
> I want to rename the column name from *"country"* to *"countryland"*, but I'm 
> getting this error in my terminal. Can anybody help me, please?
> 
> $ flask db upgrade
> 
> [2019-08-27 11:43:26,155] INFO in __init__: microblog startup
> INFO [alembic.runtime.migration] Context impl MySQLImpl.
> INFO [alembic.runtime.migration] Will assume non-transactional DDL.
> INFO [alembic.runtime.migration] Running upgrade 1f866a4fa5a0 -> 
> 69e7cfcf87f1, Testing column rename 02.
> ERROR [root] Error: All MySQL CHANGE/MODIFY COLUMN operations require the 
> existing type.
> 
> 
> Electronically yours,
> Mike
> 
> 




Re: Alembic: Varying database names per environment?

2019-08-19 Thread Mike Bayer


On Mon, Aug 19, 2019, at 7:24 AM, Scott wrote:
> On Monday, August 19, 2019 at 9:58:19 AM UTC+10, Mike Bayer wrote:
>> 
>> 
>> On Sun, Aug 18, 2019, at 6:50 PM, Scott wrote:
>>> Looking to use Alembic to manage migrations.
>>> 
>>> We currently have different database names in each environment, so for dev, 
>>> test and prod we have db_dev, db_test and db_prod respectively.
>>> 
>>> Is this database naming scheme going to be compatible with Alembic or am I 
>>> better off looking to drop the environment suffix?
>> 
>> these are three different URLs and if the concern is putting them into one 
>> alembic.ini file, easy enough using separate sections: 
>> https://alembic.sqlalchemy.org/en/latest/cookbook.html#run-multiple-alembic-environments-from-one-ini-file
>> 
>> though usually staging and production DBs have a password you're looking to 
>> keep private, and you'd have a separate alembic.ini on your staging and prod 
>> servers. but either way it's all doable
> 
> Thanks for your reply.
> 
> The databases in question will in fact contain the same schema, table, view, 
> etc. objects. We develop in dev, then promote code and database changes to 
> test and then on to prod. This seems like a pretty straightforward use case 
> for Alembic; each DB will have its own version and when we promote code from 
> dev to test and then on to prod the relevant head would be retrieved from git 
> (along with application code) and can be applied to the target database in 
> order to bring it up to the correct version.
> 
> In our case however, with manual deployment we included a variable in the 
> database name and change this per environment. So when we promote code we 
> need the changes we made to db_*dev*.schema1.table1 to be made to 
> db_*test*.schema1.table1. I think this is a different concept to what 
> "sections" provides.

So when we talk about "db_dev.schema.table", the first element in that path is 
usually the database name that is part of what you put in the database URL, 
like:

dbtype://user:pass@hostname/db_dev

I don't recall if you said you were using Microsoft SQL Server, where we do 
support a lesser used case where the "schema" given in the Table / Metadata 
definition has the dot within it, e.g.:

Table('foo', metadata, ..., schema="db_dev.schema1")

so you'd need to confirm that this is the pattern you're doing.

If it were me, I'd try to avoid having the dotted db/schema in the 
Table/Metadata itself and opt for the database URL to instead establish a 
connection that provides the correct database name default to DDL operations. 
If this is SQL server, you'd have different logins which provide for this (see 
https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-login-transact-sql?view=sql-server-2017
 ).


However SQLAlchemy supports compound "database.schema" tokens, and if you want 
these to be dynamic you would use the schema translation feature: 
https://docs.sqlalchemy.org/en/13/core/connections.html#translation-of-schema-names
 in conjunction with some additional directives in your env.py to set this up, 
and the env.py would then interact either with an -x option 
(https://alembic.sqlalchemy.org/en/latest/api/runtime.html?#alembic.runtime.environment.EnvironmentContext.get_x_argument
 ) or you would read the config directly, defining a variable name of your 
choice within separate alembic.ini sections.

With schema translation, your Python code refers to a fixed name inside of the 
"schema" tokens for tables; the alternate db name / schema name is substituted 
in at runtime using the execution_options(schema_translate_map) feature. 
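
A sketch of how an env.py might wire those pieces together; the `db_name` -x key, the fixed "db_dev" token, and the function signature are illustrative choices of mine, and the import is deferred so the snippet stays importable outside an Alembic run:

```python
def run_migrations_online(connectable, target_metadata=None):
    from alembic import context  # module-level in a real env.py

    # e.g.:  alembic -x db_name=db_test upgrade head
    x_args = context.get_x_argument(as_dictionary=True)
    target_db = x_args.get("db_name", "db_dev")

    with connectable.connect() as connection:
        # Code and metadata always say "db_dev"; the map rewrites it at runtime.
        connection = connection.execution_options(
            schema_translate_map={"db_dev": target_db}
        )
        context.configure(connection=connection,
                          target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()
```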

If you have an arbitrary number of "schema" names inside each database, like 
tenant schemas or something where you can't easily code the "translate_map" to 
have explicit entries for each schema, there are still more techniques where 
the functionality of "translate_map" can be implemented using events that would 
use your own regular-expression scheme against the DDL and SQL statements in 
order to affect the database.schemaname rendering.



> 
> If I was going to manually create the upgrade/downgrade scripts every time I 
> could continue to use a variable to compute the database name, but I could 
> never use autogenerate as this would bring in a specific database name and I 
> would no longer be able to move my code between environments.
> 
> I suspect the safest approach will be if we drop the environment suffix from 
> our table names. This will be easier all around.

I would consider alternately that the database connection would set up the 
default database name "db_dev" / "db_test" as implicitly used when you refer to 
a table a

Re: Alembic: Varying database names per environment?

2019-08-18 Thread Mike Bayer


On Sun, Aug 18, 2019, at 6:50 PM, Scott wrote:
> Looking to use Alembic to manage migrations.
> 
> We currently have different database names in each environment, so for dev, 
> test and prod we have db_dev, db_test and db_prod respectively.
> 
> Is this database naming scheme going to be compatible with Alembic or am I 
> better off looking to drop the environment suffix?

these are three different URLs and if the concern is putting them into one 
alembic.ini file, easy enough using separate sections: 
https://alembic.sqlalchemy.org/en/latest/cookbook.html#run-multiple-alembic-environments-from-one-ini-file

though usually staging and production DBs have a password you're looking to 
keep private, and you'd have a separate alembic.ini on your staging and prod 
servers. but either way it's all doable







> 
> Thanks, Scott
> 




Re: Python 3.7 - RuntimeError: generator raised StopIteration

2019-06-28 Thread Mike Bayer


On Fri, Jun 28, 2019, at 4:01 PM, gbr wrote:
> I've upgraded my application to Python 3.7 and to the latest version of 
> Alembic which triggers an exception when `context.get_current_revision()` is 
> called.
> 
> ```
>  File "app.py", line 395, in check_database_version
>  current_rev = context.get_current_revision()
>  File "~/env-py3.7/lib/python3.7/site-packages/alembic/runtime/migration.py", 
> line 239, in get_current_revision
>  heads = self.get_current_heads()
>  File "~/env-py3.7/lib/python3.7/site-packages/alembic/runtime/migration.py", 
> line 289, in get_current_heads
>  row[0] for row in self.connection.execute(self._version.select())
>  File "~/env-py3.7/lib/python3.7/site-packages/alembic/runtime/migration.py", 
> line 289, in 
>  row[0] for row in self.connection.execute(self._version.select())
> RuntimeError: generator raised StopIteration
> ```
> 
> I think this is due to PEP 479.
>> PEP 479 is enabled for all code in Python 3.7, meaning that StopIteration 
>> exceptions raised directly or indirectly in coroutines and generators are 
>> transformed into RuntimeError exceptions. (Contributed by Yury Selivanov in 
>> bpo-32670.)
> 
> I changed `ResultProxy.__iter__()` to:


> 
> ```
> class ResultProxy:
>     ...
>     def __iter__(self):
>         while True:
>             row = self.fetchone()
>             if row is None:
>                 # raise StopIteration
>                 return
>             else:
>                 yield row

The source code for ResultProxy is in SQLAlchemy, not Alembic. The 
"StopIteration" you seem to be referring to was removed four years ago in 
https://github.com/sqlalchemy/sqlalchemy/commit/6ab120558078bdcbfbe06d2ca55bd7a0d417bbb4
 , so I would suggest upgrading to a modern version of SQLAlchemy.


> ```
> 
> which seems to resolve the problem. Is this a problem with Alembic and an 
> incompatibility with Python 3.7, and is this expected to be fixed soon?
> 
> 




Re: merging old versions

2019-06-20 Thread Mike Bayer


On Thu, Jun 20, 2019, at 1:45 PM, Michael Merickel wrote:
> I think the basic idea is to create a database and codebase in the state of 
> the target revision. Then autogenerate a migration from nothing to that 
> revision - just like you would do when starting to use alembic from an 
> existing schema. From there you can change the slug on it so that it works as 
> the down_revision of later migrations and clear out the old unused migrations 
> that you're replacing.
> 
> - Michael
> 
> On Thu, Jun 20, 2019 at 2:37 AM Chris Withers  wrote:
>> Hi All,
>> 
>>  I have some versions that make use of a third-party package I no longer 
>>  use; how do I collapse down alembic revisions that have already been 
>>  executed everywhere?
>> 
>>  I found 
>> https://stackoverflow.com/questions/34491914/alembic-how-to-merge-all-revision-files-to-one-file
>>  
>>  but that doesn't feel right...


I think that's what the stackoverflow answer says too. At the moment that might 
be the most expedient approach. However I can see that Alembic might benefit 
from having a special option to autogenerate a model into Python code assuming 
no database to start with. Feel free to propose it, though Alembic is 
suffering from a lack of contributors right now.


>> 
>>  Chris
>> 
> 
> 
> -- 
> 
> Michael
> 




Re: How to handle `default` column values before/after commit()

2019-05-06 Thread Mike Bayer
On Mon, May 6, 2019 at 12:06 AM  wrote:
>
> Suppose the following code:
>
> # We define a base for all DB objects, currently empty.
> class _Base:
> pass
>
> Base = declarative_base(cls=_Base, metadata=MetaData(naming_convention=…))
>
> # Then the objects.
> class User(Base):
> __tablename__ = "users"
>
> id = Column(UUID(), default=uuid.uuid4, primary_key=True)
> …
>
> For the majority of code this works well and the id is initialized whenever 
> the object is committed. However, there are cases when I need to get a hold 
> of a new object’s id and that happens before the commit. In such cases the id 
> is not set yet, and I have extra code which sets the id manually.
>
> That feels crummy to me.
>
> Now I wonder if that’s poor implementation because a new object should always 
> be committed before use, or if I should perhaps expand the _Base class, for 
> example:
>
> class _Base:
>
> def __init__(self, *args, **kwargs):
> super().__init__(*args, **kwargs)
> if hasattr(self, "id"):
> self.id = uuid.uuid4()
>
> That way, every object would have an id assigned and if the object is loaded 
> from the db then that initial id would be overwritten. Not pretty either, but 
> maybe less crummy than the current implementation.
>
> What are your thoughts?

yes but here is the most canonical approach:

import uuid

from sqlalchemy import Column
from sqlalchemy import event
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class HasUUID(object):
    id = Column(UUID(), default=uuid.uuid4, primary_key=True)


@event.listens_for(HasUUID, "init", propagate=True)
def init(obj, arg, kw):
    obj.id = uuid.uuid4()


class User(HasUUID, Base):
    __tablename__ = 'user'


u1 = User()

print(u1.id)



Alternatively, you can write an event that scans for columns that
contain a Python-level default and invokes them. I think I did this
for someone once but it's apparently not in the wiki; in any case,
doing this using class-level inheritance is IMO the cleanest way, since
your User class takes on the role of HasUUID explicitly.
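
A sketch of that alternative (assuming SQLAlchemy 1.4+; the "init" event and
mapper inspection API are real, but the Account model and listener name here
are made up for illustration): scan mapped columns for callable Python-level
defaults and invoke them at construction time.

```python
import uuid

import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Account(Base):
    __tablename__ = "account"
    id = sa.Column(sa.String(36), primary_key=True,
                   default=lambda: str(uuid.uuid4()))
    name = sa.Column(sa.String(50))


@event.listens_for(Base, "init", propagate=True)
def eager_defaults(target, args, kwargs):
    # Invoke callable Python-level column defaults at construction time,
    # skipping attributes that were passed to the constructor explicitly.
    for col in sa.inspect(type(target)).columns:
        default = col.default
        if default is not None and default.is_callable and col.key not in kwargs:
            # ColumnDefault wraps the callable to accept an execution
            # context argument; None suffices outside of a flush.
            setattr(target, col.key, default.arg(None))


acct = Account(name="example")
print(acct.id)  # populated immediately, before any flush/commit
```

Unlike the HasUUID mixin, this variant needs no changes to the mapped classes
themselves, at the cost of a little per-construction introspection.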






>
> Much thanks!
> Jens
>



Re: Error while running “superset db upgrade” after changing metadata DB to vertica

2019-03-04 Thread Mike Bayer
Hi there -

I'm not familiar with "Superset", nor does Alembic have built-in
support for the "Vertica" database. It looks like the Vertica
SQLAlchemy driver doesn't have Alembic support implemented, which at the
very least would be a small stub DefaultImpl subclass. You would
need to consult the developers of the Vertica SQLAlchemy driver
to get help with your application.


On Mon, Mar 4, 2019 at 4:42 PM rishi reddy  wrote:
>
> I am trying to change the metadata DB to Vertica from sqlite and I am facing 
> an issue while running "superset db upgrade" as part of bringing up Superset 
> application
>
> Steps to reproduce:
>
> 1) Installed Superset using pip (pip install superset)
>
> 2) Created "superset_config.py" with below content:
>
> #-
> # Superset specific config
> #-
> ROW_LIMIT = 5000
>
> SUPERSET_WEBSERVER_PORT = 8088
> #-
>
> #-
> # Flask App Builder configuration
> #-
> # Your App secret key
> SECRET_KEY = '\2\1thisismyscretkey\1\2\e\y\y\h'
>
> # The SQLAlchemy connection string to your database backend
> # This connection defines the path to the database that stores your
> # superset metadata (slices, connections, tables, dashboards, ...).
> # Note that the connection information to connect to the datasources
> # you want to explore are managed directly in the web UI
> #SQLALCHEMY_DATABASE_URI = 'sqlite:path/to/superset.db'
> SQLALCHEMY_DATABASE_URI = 
> 'vertica+vertica_python://dbadmin:password@x.x.x.x/vertica9'
> # Flask-WTF flag for CSRF
> WTF_CSRF_ENABLED = True
> # Add endpoints that need to be exempt from CSRF protection
> WTF_CSRF_EXEMPT_LIST = []
> # A CSRF token that expires in 1 year
> WTF_CSRF_TIME_LIMIT = 60 * 60 * 24 * 365
>
> # Set this API key to enable Mapbox visualizations
> MAPBOX_API_KEY = ''
>
> 3) Added superset_config.py path to PYTHONPATH
>
> export 
> PYTHONPATH=/usr/local/lib/python3.6/site-packages/superset/:/usr/local/bin/python3
>
> 4) Installed SqlAlchemy Vertica drivers. pip install sqlalchemy-vertica-python
>
> 5) Ran "fabmanager create-admin --app superset" which successfully created 
> admin user in vertica
>
> 6) While running "superset db upgrade", I am seeing below error:
>
> [root@user ~]# superset db upgrade
> Loaded your LOCAL configuration at 
> [/usr/local/lib/python3.6/site-packages/superset/superset_config.py]
> Traceback (most recent call last):
>   File "/usr/local/bin/superset", line 15, in 
> cli()
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 722, in 
> __call__
> return self.main(*args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/flask/cli.py", line 557, in 
> main
> return super(FlaskGroup, self).main(*args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 697, in 
> main
> rv = self.invoke(ctx)
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in 
> invoke
> return _process_result(sub_ctx.command.invoke(sub_ctx))
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in 
> invoke
> return _process_result(sub_ctx.command.invoke(sub_ctx))
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 895, in 
> invoke
> return ctx.invoke(self.callback, **ctx.params)
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in 
> invoke
> return callback(*args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 17, 
> in new_func
> return f(get_current_context(), *args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/flask/cli.py", line 412, in 
> decorator
> return __ctx.invoke(f, *args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 535, in 
> invoke
> return callback(*args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/flask_migrate/cli.py", line 
> 134, in upgrade
> _upgrade(directory, revision, sql, tag, x_arg)
>   File "/usr/local/lib/python3.6/site-packages/flask_migrate/__init__.py", 
> line 95, in wrapped
> f(*args, **kwargs)
>   File "/usr/local/lib/python3.6/site-packages/flask_migrate/__init__.py", 
> line 280, in upgrade
> command.upgrade(config, revision, sql=sql, tag=tag)
>   File "/usr/local/lib/python3.6/site-packages/alembic/command.py", line 276, 
> in upgrade
> script.run_env()
>   File "/usr/local/lib/python3.6/site-packages/alembic/script/base.py", line 
> 475, in run_env
> util.load_python_file(self.dir, "env.py")
>   File "/usr/local/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 
> 90, in load_python_file
> module = load_module_py(module_id, path)
>   File "/usr/local/lib/python3.6/site-packages/alembic/util/compat.py", line 

Re: UserDefinedType CREATE INDEX hook

2019-01-24 Thread Mike Bayer
On Thu, Jan 24, 2019 at 4:17 AM Nikola Radovanovic  wrote:
>
> Hi all,
> I have a need to use PostGIS extension in order to get support for objects 
> like Point and Polygon provided by geoalchemy2 in PostgreSql.
>
> Now, the issue I encounter is that while migrating a model which uses 
> Point/Polygon (classes derived from UserDefinedType), geoalchemy2 tries to 
> create indexes for those, although I don't set index=True when declaring 
> properties in the model, nor do I have an instruction to create them in the 
> migration script. Is there a way to 'override' this behavior somehow - like 
> setting index to true in the definition and manually adding index creation in 
> the migration script? Or at least to find out where this request comes from, 
> so I can track this further?

My first impression is that if geoalchemy2 is creating indexes that go
along with datatypes, they should allow this to be customizable. Is
geoalchemy2 still maintained? You should at least see if they can
support a flag to turn this off if someone wants to customize things.

I haven't looked at the GIS stuff in years, but what are these indexes
that you don't want which ga2 needs to create? If you just need a
simple datatype instruction, I would skip using the ga2 types in
your migration script and instead use a simple UserDefinedType that you
make yourself which emits POINT/POLYGON/whatever; you just need that
string name if you don't need anything else. All the other things that
ga2 does involve expression activity, which you also don't need in
migration scripts.
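
A minimal sketch of that suggestion (plain SQLAlchemy, no geoalchemy2): a
UserDefinedType that only knows its DDL name, which is all a migration script
needs. The table and type names here are illustrative.

```python
import sqlalchemy as sa
from sqlalchemy.schema import CreateTable
from sqlalchemy.types import UserDefinedType


class Point(UserDefinedType):
    """Bare type that only emits its DDL name; no expression support,
    which migration scripts don't need anyway."""

    cache_ok = True  # type has no state, safe for statement caching

    def get_col_spec(self, **kw):
        return "POINT"


places = sa.Table(
    "places", sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("location", Point()),
)

# Rendered DDL contains the raw POINT type name.
print(CreateTable(places))
```

In an Alembic script this would be used directly in op.create_table() /
op.add_column() in place of the geoalchemy2 type.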

>
> Thank you in advance.
>
> Best regards
>



Re: Can I ignore it when the result of the generation is empty?

2019-01-11 Thread Mike Bayer
one thing to note with "empty" migrations is that autogenerate does
not pick up things like changes of server default or datatypes unless
you set some flags to turn it on,
https://alembic.sqlalchemy.org/en/latest/api/runtime.html?highlight=compare_server_default#alembic.runtime.environment.EnvironmentContext.configure.params.compare_server_default
 
https://alembic.sqlalchemy.org/en/latest/api/runtime.html?#alembic.runtime.environment.EnvironmentContext.configure.params.compare_type

so keeping that in mind, if you want to skip generating empty
migrations, use the recipe at:
https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-generate-empty-migrations-with-autogenerate
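
The cookbook recipe linked above boils down to a process_revision_directives
hook passed to context.configure() in env.py, which drops the generated script
when its upgrade operations are empty. A sketch (the real recipe additionally
guards on config.cmd_opts.autogenerate so it only applies to autogenerate runs):

```python
def process_revision_directives(context, revision, directives):
    # When autogenerate produced no operations, clear the directives list
    # so that no revision file is written at all.
    if directives and directives[0].upgrade_ops.is_empty():
        directives[:] = []
```

The hook is wired up via
context.configure(..., process_revision_directives=process_revision_directives).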


On Fri, Jan 11, 2019 at 4:33 AM  wrote:
>
> Can I skip the migration when the result of autogeneration is empty?
> I use it in Docker, where a startup command upgrades the database
> version to the latest.
> The database version is upgraded at startup, and a revision that only
> updates version_num is generated.
> Sorry for my bad English
>
> from alembic import op
> import sqlalchemy as sa
>
>
> # revision identifiers, used by Alembic.
> revision = 'c9bc72b06651'
> down_revision = '713561aabb78'
> branch_labels = None
> depends_on = None
>
>
> def upgrade():
> # ### commands auto generated by Alembic - please adjust! ###
> pass
> # ### end Alembic commands ###
>
>
> def downgrade():
> # ### commands auto generated by Alembic - please adjust! ###
> pass
> # ### end Alembic commands ###
>



Re: Multi-tenant and alembic question(s)

2018-12-23 Thread Mike Bayer
On Sun, Dec 23, 2018 at 7:40 AM Nikola Radovanovic  wrote:
>
> Hi,
> First of all, thank you for these great tools making my developer's life so 
> much easier and pleasant.
>
> I need suggestions/help on how to use multi-tenant schemas in Alembic. I 
> have a few projects under development that require multi-tenancy, in such a 
> way that some common/shared data is stored in a general (public) schema 
> shared by multiple tenant schemas. Each schema contains the same set of 
> tables - no exceptions. I followed some docs and managed to find this one 
> (but it's rather old and I'm not sure if it is still valid). Also, using 
> some other links (I can't recall which exactly), I managed to put together a 
> similar solution (see the attached env.py), but in that case there is an 
> alembic migration version table in each schema. This is not a big issue, but 
> it left me with a feeling of incompleteness, like I am missing something 
> important. I would like to use pure SQLAlchemy/Alembic without Flask.

hi there -

the env.py you attached is mostly the right idea. I see the Flask
import, which I would think you can remove if you know what it is you
need from the current_app.extensions['migrate'].configure_args map; I
don't know what they put in there, so maybe just add a print()
statement to see what they are adding and we can go over what options
might be useful.

you do want the alembic version table to be local per schema because
in theory you can add a new tenant which is now by definition at the
base version, which means different schemas can be on different
versions simultaneously.

if you wanted one alembic_version table for the whole thing you'd need
to devise a way to get each upgrade() method in an individual revision
to run across all tenants, which can be done but is more complicated
and kind of more scary because you either need one giant transaction
for all the tenants, or if you are committing per schema then you can
easily have a situation where things are out of sync, e.g.
alembic_version table does not represent the actual state of all
tenants.

for the "public" schema it's likely best to keep that as a totally
separate set of version files, so that an individual Alembic version
script deals only with one kind of database and has just a single
upgrade() / downgrade() method.  if you wanted to write version
scripts that can affect both the public and all the tenant schemas
you'd again need to add some more switching around within env.py as we
see in 
https://github.com/sqlalchemy/alembic/blob/master/alembic/templates/multidb/env.py#L104.
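
The per-schema switching described above rests on SQLAlchemy's
schema_translate_map execution option, which lets one set of Table objects
target any schema. A standalone sketch (schema names are made up; SQLite
ATTACH stands in for real PostgreSQL schemas, and an env.py would apply the
same option around context.run_migrations()):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
items = sa.Table("items", metadata,
                 sa.Column("id", sa.Integer, primary_key=True))

engine = sa.create_engine("sqlite://")
conn = engine.connect()

for schema in ("tenant_a", "tenant_b"):
    # Emulate a per-tenant schema; on PostgreSQL these would be real schemas.
    conn.execute(sa.text(f"ATTACH DATABASE ':memory:' AS {schema}"))
    # The same metadata is rendered against each tenant schema in turn:
    # schema-less tables (None) are translated to the given schema name.
    metadata.create_all(
        conn.execution_options(schema_translate_map={None: schema}),
        checkfirst=False,
    )
```

Each tenant ends up with its own copy of the tables from a single shared
MetaData, which is exactly what a per-tenant env.py loop does per revision.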


>
> Does anyone have a suggestion?
>
> Thank you all in advance
>



Re: many to many relation with foreign key to composite primary key

2018-12-13 Thread Mike Bayer
>> roles_id BIGINT NOT NULL,
>> PRIMARY KEY (users_id, roles_id),
>> FOREIGN KEY(users_id) REFERENCES users (id),
>> FOREIGN KEY(roles_id) REFERENCES roles (id)
>> )
>>
>> CREATE TABLE roles_permissions (
>> roles_id BIGINT NOT NULL,
>> permissions_id BIGINT NOT NULL,
>> PRIMARY KEY (roles_id, permissions_id),
>> FOREIGN KEY(roles_id) REFERENCES roles (id),
>> FOREIGN KEY(permissions_id) REFERENCES permissions (id)
>> )
>>
>> CREATE TABLE users_permissions (
>>     users_id BIGINT NOT NULL,
>> permissions_id BIGINT NOT NULL,
>> PRIMARY KEY (users_id, permissions_id),
>> FOREIGN KEY(users_id) REFERENCES users (id),
>> FOREIGN KEY(permissions_id) REFERENCES permissions (id)
>> )
>>
>> My mistake was mixing up, in my mind, the object-mapping information and
>> the relational constraints of the database.
>> But why can't Alembic create this model in autogeneration mode?
>>
>> Best regards
>>
>> Le jeu. 13 déc. 2018 à 14:44, Mike Bayer  a écrit :
>>>
>>> On Thu, Dec 13, 2018 at 7:11 AM patrick payet  wrote:
>>> >
>>> > Hi Mike,
>>> > Thanks for your answer.
>>> > My problem is a classical one for application development: I want
>>> > to implement authentication and authorisation for a web application.
>>> > I want to understand why Alembic can't create the model in autogeneration
>>> > mode and how to modify this model to resolve the problem.
>>> > With this model I have a join table to provide a many-to-many
>>> > relationship between users and roles (user groups), for example.
>>> > This allows me to access all roles for one user. For this, I declare a
>>> > roles attribute in the users class and I want to make a foreign key with
>>> > this join table to reach the roles.
>>> > This join table has a composite primary key (users.id foreign key and
>>> > roles.id foreign key) and I don't know how to make the link (foreign
>>> > key) with Alembic between users.roles and this composite primary key.
>>>
>>> the link is the user_roles table and the FOREIGN KEY constraints it
>>> has between the users and roles tables. You can't locate a unique row
>>> in user_roles given only the users table, because it has no column that
>>> refers to user_roles.roles_id - at least that's how I'm parsing your
>>> words as given. However, CREATE TABLE statements will show exactly
>>> what you mean, in case it is something different that just happens to
>>> sound the same when described.
>>>
>>> > When I declare just one element of this primary key, I have the error
>>> > quoted previously (there is no unique constraint matching given keys
>>> > for referenced table "users_roles").
>>> > I will send you the CREATE TABLE statements by email in a moment.
>>>
>>> yup
>>>
>>>
>>>
>>> > Best regards,
>>> > Patrick
>>> >
>>> > Le mer. 12 déc. 2018 à 19:40, Mike Bayer  a 
>>> > écrit :
>>> >>
>>> >> On Wed, Dec 12, 2018 at 1:01 PM patrick payet  wrote:
>>> >> >
>>> >> > I had a SQLAlchemy model like -
>>> >> > 
>>> >> > class User(DeclarativeBase):
>>> >> > __tablename__ = 'users'
>>> >> >
>>> >> > id = Column(BigInteger, primary_key=True)
>>> >> > email = Column(String(100), unique=True, nullable=False)
>>> >> > name = Column(String(100), nullable=False)
>>> >> > hashed_password = Column(String(100), nullable=False)
>>> >> > is_admin = Column(BOOLEAN, default=False)
>>> >> > is_active = Column(BOOLEAN, default=True)
>>> >> > created = Column(DateTime, default=datetime.now)
>>> >> > modified = Column(DateTime, default=datetime.now, 
>>> >> > onupdate=datetime.now)
>>> >> > roles = relationship('Role', secondary=users_roles, 
>>> >> > back_populates='users', cascade="all, delete-orphan")
>>> >> > permissions = relationship('Permission', 
>>> >> > secondary=users_permissions, back_populates='users',
>>> >> >cascade="all

Re: many to many relation with foreign key to composite primary key

2018-12-13 Thread Mike Bayer
On Thu, Dec 13, 2018 at 7:11 AM patrick payet  wrote:
>
> Hi Mike,
> Thanks for your answer.
> My problem is a classical one for application development: I want to
> implement authentication and authorisation for a web application.
> I want to understand why Alembic can't create the model in autogeneration
> mode and how to modify this model to resolve the problem.
> With this model I have a join table to provide a many-to-many relationship
> between users and roles (user groups), for example.
> This allows me to access all roles for one user. For this, I declare a roles
> attribute in the users class and I want to make a foreign key with this join
> table to reach the roles.
> This join table has a composite primary key (users.id foreign key and
> roles.id foreign key) and I don't know how to make the link (foreign key)
> with Alembic between users.roles and this composite primary key.

the link is the user_roles table and the FOREIGN KEY constraints it
has between the users and roles tables. You can't locate a unique row
in user_roles given only the users table, because it has no column that
refers to user_roles.roles_id - at least that's how I'm parsing your
words as given. However, CREATE TABLE statements will show exactly
what you mean, in case it is something different that just happens to
sound the same when described.

> When I declare just one element of this primary key, I have the error
> quoted previously (there is no unique constraint matching given keys
> for referenced table "users_roles").
> I will send you the CREATE TABLE statements by email in a moment.

yup



> Best regards,
> Patrick
>
> Le mer. 12 déc. 2018 à 19:40, Mike Bayer  a écrit :
>>
>> On Wed, Dec 12, 2018 at 1:01 PM patrick payet  wrote:
>> >
>> > I had a SQLAlchemy model like -
>> > 
>> > class User(DeclarativeBase):
>> > __tablename__ = 'users'
>> >
>> > id = Column(BigInteger, primary_key=True)
>> > email = Column(String(100), unique=True, nullable=False)
>> > name = Column(String(100), nullable=False)
>> > hashed_password = Column(String(100), nullable=False)
>> > is_admin = Column(BOOLEAN, default=False)
>> > is_active = Column(BOOLEAN, default=True)
>> > created = Column(DateTime, default=datetime.now)
>> > modified = Column(DateTime, default=datetime.now, 
>> > onupdate=datetime.now)
>> > roles = relationship('Role', secondary=users_roles, 
>> > back_populates='users', cascade="all, delete-orphan")
>> > permissions = relationship('Permission', secondary=users_permissions, 
>> > back_populates='users',
>> >cascade="all, delete-orphan")
>> >
>> > def __repr__(self):
>> > return "" % 
>> > (self.name, self.email, self.hashed_password)
>> >
>> >
>> > class Role(DeclarativeBase):
>> > __tablename__ = 'roles'
>> >
>> > id = Column(BigInteger, primary_key=True)
>> > name = Column(String(100), unique=True, nullable=False)
>> > users = relationship('User', secondary=users_roles, 
>> > back_populates='roles',cascade="all, delete-orphan")
>> > permissions = relationship('Permission', secondary=roles_permissions, 
>> > back_populates='roles',
>> >cascade="all, delete-orphan")
>> >
>> > def __repr__(self):
>> > return "" % self.name
>> >
>> >
>> > class Permission(DeclarativeBase):
>> > __tablename__ = 'permissions'
>> >
>> > id = Column(BigInteger, primary_key=True)
>> > name = Column(String(100), unique=True, nullable=False)
>> > description = Column(String(255))
>> > users = relationship('User', secondary=users_permissions, 
>> > back_populates='permissions',
>> >  cascade="all, delete-orphan")
>> > roles = relationship('Role', secondary=roles_permissions, 
>> > back_populates='permissions',
>> >  cascade="all, delete-orphan")
>> >
>> > def __repr__(self):
>> > return "" % (self.name, 
>> > self.description)
>> > 
>> > However, Alembic is not generating the correct upgrade, i try to make it
>> > 
>> >
>> > def upgrade():
>> > # ### commands auto generated by Alembic - please adjust! ###
>> > # op.drop_table('users')
>> > # ### 

Re: many to many relation with foreign key to composite primary key

2018-12-12 Thread Mike Bayer
On Wed, Dec 12, 2018 at 1:01 PM patrick payet  wrote:
>
> I had a SQLAlchemy model like -
> 
> class User(DeclarativeBase):
> __tablename__ = 'users'
>
> id = Column(BigInteger, primary_key=True)
> email = Column(String(100), unique=True, nullable=False)
> name = Column(String(100), nullable=False)
> hashed_password = Column(String(100), nullable=False)
> is_admin = Column(BOOLEAN, default=False)
> is_active = Column(BOOLEAN, default=True)
> created = Column(DateTime, default=datetime.now)
> modified = Column(DateTime, default=datetime.now, 
> onupdate=datetime.now)
> roles = relationship('Role', secondary=users_roles, 
> back_populates='users', cascade="all, delete-orphan")
> permissions = relationship('Permission', secondary=users_permissions, 
> back_populates='users',
>cascade="all, delete-orphan")
>
> def __repr__(self):
> return "" % 
> (self.name, self.email, self.hashed_password)
>
>
> class Role(DeclarativeBase):
> __tablename__ = 'roles'
>
> id = Column(BigInteger, primary_key=True)
> name = Column(String(100), unique=True, nullable=False)
> users = relationship('User', secondary=users_roles, 
> back_populates='roles',cascade="all, delete-orphan")
> permissions = relationship('Permission', secondary=roles_permissions, 
> back_populates='roles',
>cascade="all, delete-orphan")
>
> def __repr__(self):
> return "" % self.name
>
>
> class Permission(DeclarativeBase):
> __tablename__ = 'permissions'
>
> id = Column(BigInteger, primary_key=True)
> name = Column(String(100), unique=True, nullable=False)
> description = Column(String(255))
> users = relationship('User', secondary=users_permissions, 
> back_populates='permissions',
>  cascade="all, delete-orphan")
> roles = relationship('Role', secondary=roles_permissions, 
> back_populates='permissions',
>  cascade="all, delete-orphan")
>
> def __repr__(self):
> return "" % (self.name, 
> self.description)
> 
> However, Alembic is not generating the correct upgrade, i try to make it
> 
>
> def upgrade():
> # ### commands auto generated by Alembic - please adjust! ###
> # op.drop_table('users')
> # ### end Alembic commands ###
>
> op.create_table('roles',
> sa.Column('id', sa.BIGINT(), autoincrement=True, 
> nullable=False),
> sa.Column('name', sa.String(100), autoincrement=False, 
> nullable=False),
> sa.Column('users', BigInteger),
> sa.Column('permissions', BigInteger),
> sa.PrimaryKeyConstraint('id'),
> sa.UniqueConstraint('name', name='roles_name_key')
> # sa.ForeignKeyConstraint(['users'], 
> ['users_roles.users_id'], ondelete='CASCADE'),
> # sa.ForeignKeyConstraint(['permissions'], 
> ['roles_permissions.permissions_id'], ondelete='CASCADE')
> )
> op.create_table('permissions',
> sa.Column('id', sa.BIGINT(), autoincrement=True, 
> nullable=False),
> sa.Column('name', sa.String(100), autoincrement=False, 
> nullable=False),
> sa.Column('description', sa.String(255)),
> sa.Column('users', BigInteger),
> sa.Column('roles', BigInteger),
> sa.PrimaryKeyConstraint('id')
> # sa.ForeignKeyConstraint(['users'], 
> ['users_permissions.users_id'], ondelete='CASCADE'),
> # sa.ForeignKeyConstraint(['roles'], 
> ['role_permissions.roles_id'], ondelete='CASCADE')
> )
> op.create_table('users_roles',
> sa.Column('users_id', BigInteger, 
> sa.ForeignKey('users.id'), primary_key=True),
> sa.Column('roles_id', BigInteger, 
> sa.ForeignKey('roles.id'), primary_key=True)
> )
> op.create_table('users_permissions',
> sa.Column('users_id', BigInteger, 
> sa.ForeignKey('users.id'), primary_key=True),
> sa.Column('permissions_id', BigInteger, 
> sa.ForeignKey('permissions.id'), primary_key=True)
> )
> op.create_table('roles_permissions',
> sa.Column('roles_id', BigInteger, 
> sa.ForeignKey('roles.id'), primary_key=True),
> sa.Column('permissions_id', BigInteger, 
> sa.ForeignKey('permissions.id'), primary_key=True)
> )
> op.drop_constraint('users_name_key', 'users')
> op.add_column('users', sa.Column('is_admin', sa.BOOLEAN, 
> autoincrement=False, nullable=False, default=False))
> op.add_column('users', sa.Column('is_active', sa.BOOLEAN, 
> autoincrement=False, nullable=False, default=True))
> op.add_column('users', sa.Column('created', sa.DateTime, 
> 
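
For contrast with the hand-written migration above, a sketch of the mapping
this thread converges on (plain SQLAlchemy 1.4+, simplified column set). Note
that cascade="all, delete-orphan" is not legal on a many-to-many relationship
(SQLAlchemy raises an error for it), so the relationships below omit it; the
association table is a plain Table whose composite primary key is made of the
two foreign keys.

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Association table: composite PK of two FKs, no mapped class needed,
# and no relationship columns inside users/roles themselves.
users_roles = sa.Table(
    "users_roles", Base.metadata,
    sa.Column("users_id", sa.ForeignKey("users.id"), primary_key=True),
    sa.Column("roles_id", sa.ForeignKey("roles.id"), primary_key=True),
)


class User(Base):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(100), nullable=False)
    roles = relationship("Role", secondary=users_roles, back_populates="users")


class Role(Base):
    __tablename__ = "roles"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(100), unique=True, nullable=False)
    users = relationship("User", secondary=users_roles, back_populates="roles")


engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="patrick", roles=[Role(name="admin")]))
    session.commit()
```

Autogenerate handles this layout cleanly because the relationships live only
in the ORM; the tables carry nothing but real columns and constraints.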

Re: What would be the proper way to implement a post migration hook ?

2018-12-12 Thread Mike Bayer
On Wed, Dec 12, 2018 at 10:40 AM  wrote:
>
> Dear Alembic User Community,
>
> I am looking for suggestion for the best way to address the following problem:
>
> We have a use case where we'd like to make sure some SQL queries are always 
> executed after any set migration.
> Our particular use case is with PostgreSQL. We have several users who can 
> create new tables and types in a database.
> But we want to make sure the owner of those objects is a group all those 
> users belong to. And we'd rather make that
> transparent so it does not get forgotten in some migrations.
>
> So far we managed to do this by overriding the do_run_migrations() function 
> in the env.py (cf. code below).
> That has some good and bad qualities:
>
> - The "hook" runs once whether we pass 1 or more migrations;
> - The migrations are applied, even if this last step fails;
> - It does not work in "online" mode unless we force a commit (offline mode w. 
> `--sql` does emit a commit afterwards);
>
> So I wondered if someone would have encountered similar use cases and if 
> there would be other / better ways to achieve this ?
> In particular, a better way to know which command is being invoked, because 
> you don't want to apply this "hook" for commands other
> than upgrade and downgrade.
>
> Here is the code snippet:
>
> def do_run_migrations(context):
> with context.begin_transaction():
> context.run_migrations()
> command_name = context.config.cmd_opts.cmd[0].__name__
> if command_name in ('downgrade', 'upgrade', ):
> context.execute('REASSIGN OWNED BY "{user}" TO "schema-admin"'
> .format(user=settings.ALEMBIC_DB.username))
> context.execute('COMMIT')  # Does not work "online", unless we do 
> this

There's support for running a hook after each migration:

https://alembic.sqlalchemy.org/en/latest/api/runtime.html?highlight=on_version_apply#alembic.runtime.environment.EnvironmentContext.configure.params.on_version_apply

although to use that hook and figure out how to just run for the
"last" step would require comparing the step given to the last step
requested.

if you are looking to run this only once at the end, what you're doing
should not require that "COMMIT" unless you are running
transaction_per_migration=True. I tried it here, and with
transactional_ddl=True the COMMIT is emitted just once at the end of
the block, so I cannot reproduce your "does not work in 'online'" case
without transaction_per_migration=True.

More generally, if what you're doing is a step that should be before
*any* commit, which is what it seems like, use a commit event:

https://docs.sqlalchemy.org/en/latest/core/events.html?highlight=connectionevent#sqlalchemy.events.ConnectionEvents.commit

@event.listens_for(connection, "commit")
def on_commit(conn):
    conn.execute("ddl...")
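
A runnable version of that idea (the listener body here just records the call
against an in-memory SQLite engine; real code would issue the REASSIGN
statement in place of the append):

```python
import sqlalchemy as sa
from sqlalchemy import event

engine = sa.create_engine("sqlite://")
observed = []


@event.listens_for(engine, "commit")
def before_commit(conn):
    # Fired each time a Transaction on this engine is committed;
    # per-commit SQL (e.g. REASSIGN OWNED) would go here.
    observed.append("commit")


with engine.begin() as conn:
    conn.execute(sa.text("CREATE TABLE t (x INTEGER)"))

print(observed)
```

Because the hook is tied to the engine rather than to env.py control flow, it
runs for every commit regardless of which Alembic command triggered it.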


>
>
> Regards,
> Nicolas.
>



Re: alembic autogenerated migrations do not see current tables

2018-11-21 Thread Mike Bayer
you need to have include_schemas=True in your env.py:

https://alembic.zzzcomputing.com/en/latest/api/runtime.html?highlight=include_schemas#alembic.runtime.environment.EnvironmentContext.configure.params.include_schemas
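
In env.py that flag goes into context.configure(); a sketch of the online-mode
body (connectable and target_metadata are assumed to come from the surrounding
env.py setup, so this fragment is not runnable on its own):

```python
from alembic import context


def run_migrations_online(connectable, target_metadata):
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # Compare tables across all schemas, not just the default
            # search path, so autogenerate sees the existing objects.
            include_schemas=True,
        )
        with context.begin_transaction():
            context.run_migrations()
```

When include_schemas=True is used, an include_object hook is often added as
well to limit which schemas autogenerate compares.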


On Wed, Nov 21, 2018 at 11:49 AM Riccardo Magliocchetti
 wrote:
>
> Hello,
>
> for some reason at each migration alembic does not find any table or index and
> creates them from scratch. Assuming this is not something other people are
> seeing, the only thing peculiar to our setup is that we are using postgresql
> schemas.
>
> e.g. I've added a deleted column and the diff between the autogenerated
> migrations is something like this (edited):
>
> --- 103ed8fb3bca_.py2018-11-21 15:28:38.117308742 +0100
> +++ 96adf3b68cc9_.py2018-11-21 17:08:02.132233306 +0100
> @@ -1,27 +1,23 @@
>   """empty message
>
> -Revision ID: 103ed8fb3bca
> -Revises:
> -Create Date: 2018-11-21 09:58:27.765255
> +Revision ID: 96adf3b68cc9
> +Revises: 6204f8cedea2
> +Create Date: 2018-11-21 16:08:02.128288
>
> [...]
>
> # revision identifiers, used by Alembic.
> -revision = '103ed8fb3bca'
> -down_revision = None
> +revision = '96adf3b68cc9'
> +down_revision = '6204f8cedea2'
> @@ -95,6 +90,7 @@
>   sa.Column('xmin', sa.Integer(), server_default=FetchedValue(),
> nullable=True, system=True),
>   sa.Column('name', sa.UnicodeText(), nullable=False),
>   sa.Column('organization_id', sa.Integer(), nullable=False),
> +sa.Column('deleted', sa.Boolean(), nullable=True),
>   sa.ForeignKeyConstraint(['organization_id'],
> ['organization.organization.id'], ),
>   sa.PrimaryKeyConstraint('id'),
>   sa.UniqueConstraint('id', 'organization_id'),
>
> Any hint on what could be wrong? We are using alembic 1.0.2.
>
> Thanks in advance
>
> --
> Riccardo Magliocchetti
> @rmistaken
>
> http://menodizero.it
>



Re: Support for "redo"

2018-11-14 Thread Mike Bayer
On Wed, Nov 14, 2018 at 6:07 AM  wrote:
>
> Hi all!
>
> Is there support in Alembic for anything like the "redo" feature present in 
> Ruby on Rails' Active Record Migrations tool? I often find myself running an 
> "upgrade", followed by a "downgrade" and then finally another upgrade, in 
> order to be sure the migration's rollback function is properly implemented. 
> With the said tool there is a command-line switch that does a similar thing 
> (i.e. downgrading a certain number of migrations and applying them again):
>
> $ rails db:migrate:redo STEP=3
> Source: https://edgeguides.rubyonrails.org/active_record_migrations.html
>
> I know that this would be a very minor helper, considering a simple bash 
> script can do the trick. Nevertheless one may find it useful to find this in 
> the docs and maybe even adapt their current development workflow? I'd like to 
> know what your thoughts are on this.

Usually, this falls under the realm of "testing" and what people want
are automated fixtures that run through all the upgrades individually,
and the downgrades also, if supported.   We have one of these in
OpenStack which ensures that the developer creates a test case for
every upgrade.   A set of test fixtures for Alembic would be nice to
have, but is not something I have the resources to develop right now.

As for a manual flag that doesn't seem like that common of a use case
to me.   A lot of folks don't even implement downgrades since they are
not that practical for production systems, and in any case, this is
not really a complete test that the migration worked, only that it
didn't raise any errors.






>
> Regards,
> Samuel.
>



Re: Alembic op.alter_column deletes all rows in the database table in error

2018-10-08 Thread Mike Bayer
can you provide SQLAlchemy statement output please, set "level = INFO"
under [logger_sqlalchemy] in alembic.ini

op.alter_column() does not emit DELETE and nothing in Alembic emits
the DELETE statement anywhere except upon the alembic_version table
itself.
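For reference, the relevant alembic.ini section (as in the default template, with the level raised) looks roughly like:

```ini
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
```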


On Mon, Oct 8, 2018 at 10:31 AM Richard  wrote:
>
> I'm using postgres version 10, alembic 1.0.0, sqlalchemy 1.2.12. I'm aware 
> ALTER COLUMN in SQL doesn't delete rows, but op.alter_column is doing that.
>
> My customer & daily reading models look like:
>
>
> class Customer(DeclarativeBase):
>     __tablename__ = 'customers'
>     id = Column(Integer, primary_key=True)
>     electricity_readings = relationship(
>         'ElectricityMeterReading', cascade='all,delete-orphan',
>         backref=backref('customer', cascade='all')
>     )
>     gas_readings = relationship(
>         'GasMeterReading', cascade='all,delete-orphan',
>         backref=backref('customer', cascade='all')
>     )
>     daily_smart_meter_readings = relationship(
>         'DailyMeterReading',
>         cascade='all,delete-orphan',
>         backref=backref('customer', cascade='all')
>     )
>
> class DailyMeterReading(DeclarativeBase):
>     __tablename__ = 'daily_smart_meter_readings'
>     id = Column(Integer, primary_key=True)
>     customer_pk = Column(
>         Integer, ForeignKey('customers.id'), nullable=False, index=True
>     )
>     reading = Column(Float, nullable=False)
>     reading_at = Column(UtcDateTime, nullable=False, index=True)
>
>
>
>
>
>
>
> On Monday, October 8, 2018 at 3:26:11 PM UTC+1, Mike Bayer wrote:
>>
>> Hi there -
>>
>> I have no idea what you are seeing.  An actual ALTER COLUMN
>> operation does not delete rows.  Of course, if you are using SQLite
>> and batch mode, that might affect things, but you have not specified
>> this.   Please specify complete information including log output,
>> stack traces, database in use, sample schema, etc.
>>
>> On Mon, Oct 8, 2018 at 9:36 AM Richard  wrote:
>> >
>> >
>> > Note that if I do the same op.alter_column on another table which has a 
>> > customer FK, it works fine and does not delete all the rows.
>> >
>> > On Monday, October 8, 2018 at 1:50:38 PM UTC+1, Richard wrote:
>> >>
>> >> I have an alembic migration which renames a FK column on a table from 
>> >> 'customer_id' to 'customer_pk'.
>> >>
>> >> I used to have more in the migration file but narrowed it down to this 
>> >> code causing all the rows to be deleted.
>> >>
>> >> def upgrade():
>> >>     op.alter_column(
>> >>         'daily_smart_meter_readings', column_name='customer_id',
>> >>         new_column_name='customer_pk',
>> >>     )
>> >>
>> >>
>> >> I'm using alembic==1.0.0 and Python 3.6.4.
>> >>
>> >> Is there something wrong with the above code or is this a bug in the 
>> >> library?
>> >>
>> >> Thanks
>> >
>



Re: Alembic op.alter_column deletes all rows in the database table in error

2018-10-08 Thread Mike Bayer
Hi there -

I have no idea what you are seeing.  An actual ALTER COLUMN
operation does not delete rows.  Of course, if you are using SQLite
and batch mode, that might affect things, but you have not specified
this.   Please specify complete information including log output,
stack traces, database in use, sample schema, etc.

On Mon, Oct 8, 2018 at 9:36 AM Richard  wrote:
>
>
> Note that if I do the same op.alter_column on another table which has a 
> customer FK, it works fine and does not delete all the rows.
>
> On Monday, October 8, 2018 at 1:50:38 PM UTC+1, Richard wrote:
>>
>> I have an alembic migration which renames a FK column on a table from 
>> 'customer_id' to 'customer_pk'.
>>
>> I used to have more in the migration file but narrowed it down to this code 
>> causing all the rows to be deleted.
>>
>> def upgrade():
>>     op.alter_column(
>>         'daily_smart_meter_readings', column_name='customer_id',
>>         new_column_name='customer_pk',
>>     )
>>
>>
>> I'm using alembic==1.0.0 and Python 3.6.4.
>>
>> Is there something wrong with the above code or is this a bug in the library?
>>
>> Thanks
>



Re: File logger fails when used with a Gunicorn/Pyramid .ini file.

2018-09-26 Thread Mike Bayer
On Wed, Sep 26, 2018 at 4:43 AM  wrote:
>
> I haven’t used env.py before, you’re talking about this: 
> https://pypi.org/project/env.py/ , correct?

env.py is an integral part of your Alembic project space and you are
using it.  Please see the tutorial at
https://alembic.zzzcomputing.com/en/latest/tutorial.html#the-migration-environment
. This should likely also be a feature, so
https://bitbucket.org/zzzeek/alembic/issues/509/add-support-for-the-here-s-token-when
has been added.

>
> On Wednesday, September 26, 2018 at 10:06:31 AM UTC+10, Mike Bayer wrote:
>>
>> oh, you know that's in the logging.  logging.fileConfig() is used for
>> that and it's actually in your env.py.  Just add "defaults" to that
>> line in your env.py.
>>
>> https://docs.python.org/2/library/logging.config.html#logging.config.fileConfig
>>
>>



Re: Migrating PEP-435 Enums

2018-09-24 Thread Mike Bayer
you don't gain much since it only works on Postgresql anyway.   Also,
the syntax you suggested wouldn't work, because Postgresql needs to
know the name of the enumeration.

This is part of why all the "enum" issues for alembic are just open.
  The way PG does it vs. MySQL is immensely different, and then none
of the other databases have an ENUM type.Your request for an
"op.alter_column()" directive is basically asking for those issues to
be done.   I'm on a long-term search for code contributors who can
work on that stuff; ENUM is going to be very hard to make work front to
back in all cases.






On Mon, Sep 24, 2018 at 2:49 PM Alex Rothberg  wrote:
>
> is there no way to get this alter statement without writing raw sql?
> e.g. something like: op.alter_column("my_table", "my_column", 
> existing_type=ENUM(...), type_=ENUM()) ?
>
> On Monday, September 24, 2018 at 2:36:52 PM UTC-4, Mike Bayer wrote:
>>
>> Postgresql ENUMs are entirely different from any other database so it
>> matters a lot.  For PG, you'd want to be doing op.execute("ALTER TYPE
>> myenum ..."), full syntax is at
>> https://www.postgresql.org/docs/9.1/static/sql-altertype.html
>> On Mon, Sep 24, 2018 at 12:45 PM Alex Rothberg  wrote:
>> >
>> > Assuming that I am using the PEP-435 enum feature in SQLA, e.g.:
>> > class InvitationReason(str, enum.Enum):
>> >     ORIGINAL_ADMIN = "ORIGINAL_ADMIN"
>> >     FIRM_USER = "FIRM_USER"
>> >     ...
>> >
>> > reason = db.Column(db.Enum(InvitationReason), nullable=False)
>> >
>> > and I want to add / change the values in the enum. I know that alembic 
>> > won't auto generate the migration. Given that, what is the simplest way to 
>> > specify the migration by hand? I am using postgres, if that matters.
>> >
>



Re: File logger fails when used with a Gunicorn/Pyramid .ini file.

2018-09-24 Thread Mike Bayer
On Sun, Sep 23, 2018 at 9:53 PM  wrote:
>
> Hello,
>
> In my project.ini file I have configured logging to use a file logger as 
> follows:
>
> [loggers]
> keys = root, …, alembic
>
> [handlers]
> keys = console, file
>
> [formatters]
> keys = generic
>
> [logger_root]
> level = INFO
> handlers = console
> qualname =
>
> [logger_alembic]
> level = INFO
> handlers =
> qualname = alembic
>
> [handler_console]
> class = StreamHandler
> args = (sys.stderr,)
> level = NOTSET
> formatter = generic
>
> [handler_file]
> class = FileHandler
> args = ("%(here)s/project.log", "a")
> level = INFO
> formatter = generic
>
> Using this .ini file within my project works fine: gunicorn reads the config 
> and expands its values. In particular, it will add the "here" variable when 
> expanding the "args" value in gunicorn/glogging.py (source)
>
> defaults = CONFIG_DEFAULTS.copy()
> defaults['__file__'] = cfg.logconfig
> defaults['here'] = os.path.dirname(cfg.logconfig)
> fileConfig(cfg.logconfig, defaults=defaults, disable_existing_loggers=False)
>
> Alembic, however, instantiates a plain fileConfig without passing extra 
> defaults. And that causes the following failure:
>
> configparser.InterpolationMissingOptionError: Bad value substitution: option 
> 'args' in section 'handler_file' contains an interpolation key 'here' which 
> is not a valid option name. Raw value: '("%(here)s/project.log", "a)'
>
> So now I could just hardcode a path in the .ini file and that would work. But 
> the "here" is actually quite handy and so I was wondering if a PR would be 
> acceptable? Or is there another suggested solution for this?

Looking at the source code, we already have "here":

if self.config_file_name:
    here = os.path.abspath(os.path.dirname(self.config_file_name))
else:
    here = ""
self.config_args['here'] = here
file_config = SafeConfigParser(self.config_args)
if self.config_file_name:
    file_config.read([self.config_file_name])
else:
    file_config.add_section(self.config_ini_section)
return file_config

so...how are you getting this .ini file over to alembic?





>
> Thanks!
> Jens
>



Re: online vs offline produce different "python stack traces" and general question

2018-09-07 Thread Mike Bayer
you can change the nullability of a column with op.alter_column().
There's no need to create a separate constraint.

On Thu, Sep 6, 2018 at 8:07 PM, HP3  wrote:
> Sorry:
>
> nullable = False
>



Re: online vs offline produce different "python stack traces" and general question

2018-09-06 Thread Mike Bayer
On Thu, Sep 6, 2018 at 1:14 PM, HP3  wrote:
> Hello,
>
> As I try alembic (for the first time), I ran into the following stack traces
> when attempting to conduct a "nasty" migration.
>
> I call it "nasty" because the original model was based on sqlalchemy v0.9 +
> python 2.7 and the new model is sqlalchemy v1.2 + python 3.6.
> ON TOP OF THAT, it's a massive table and model restructuring. Backend is
> postgres.
>
> I am thinking that I can leverage alembic to do the heavy lifting of
> creating/dropping stuff while I can interject my own bulk_inserts and
> bulk_updates and preserve the existing data.  Something like:
>
> // versions/deadbeef_nasty.py
>
> def upgrade():
>     op.drop_table(...)
>     op.drop_table(...)
>     ...
>
>     # Here is where I am planning to add my bulk_inserts and bulk_updates
>
>     op.add_column(...)
>     op.drop_column(...)
>     ...
>
>
>
> Is this a reasonable approach?

yes


>
> With regards to the stack traces, they differ between online (first stack)
> and offline (second). Hence, this post.
> Does that make sense?


you're getting two totally different error conditions.  In the second
case, something is up with the naming_convention you've passed to
MetaData somewhere; this should be a dictionary value.  If that's not
the case then please confirm the version of SQLAlchemy in use.  0.9 is
very old so you might need to upgrade or at least test that the
SQLAlchemy version resolves.


>
> Question: are online migrations run within a single transaction or not?

this depends on if the database supports transactional DDL (Postgresql
does) and what you have the transaction_per_migration flag set
towards: 
http://alembic.zzzcomputing.com/en/latest/api/runtime.html?highlight=transaction_per_migration#alembic.runtime.environment.EnvironmentContext.configure.params.transaction_per_migration
defaults to False so all migrations are in one transaction for
Postgresql.
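As a sketch, that flag is set in env.py (template variable names assumed):

```python
# env.py fragment (sketch): run each migration file in its own transaction
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    transaction_per_migration=True,
)
```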



>
> Thanks!!!
>
>
> $ alembic upgrade head
>
> /Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/psycopg2/__init__.py:144:
> UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in
> order to keep installing from binary please use "pip install
> psycopg2-binary" instead. For details see:
> .
>
>  """)
>
> INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
>
> INFO [alembic.runtime.migration] Will assume transactional DDL.
>
> INFO [alembic.runtime.migration] Running upgrade 55ebe08ba589 ->
> b6c3cd0ef4cb, version 1
>
> Traceback (most recent call last):
>
>  File "/Users/hp3/.pyenv/versions/server_py2/bin/alembic", line 11, in
> 
>
>  sys.exit(main())
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/config.py",
> line 486, in main
>
>  CommandLine(prog=prog).main(argv=argv)
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/config.py",
> line 480, in main
>
>  self.run_cmd(cfg, options)
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/config.py",
> line 463, in run_cmd
>
>  **dict((k, getattr(options, k, None)) for k in kwarg)
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/command.py",
> line 254, in upgrade
>
>  script.run_env()
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/script/base.py",
> line 427, in run_env
>
>  util.load_python_file(self.dir, 'env.py')
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/util/pyfiles.py",
> line 81, in load_python_file
>
>  module = load_module_py(module_id, path)
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/util/compat.py",
> line 135, in load_module_py
>
>  mod = imp.load_source(module_id, path, fp)
>
>  File "alembic/env.py", line 72, in 
>
>  run_migrations_online()
>
>  File "alembic/env.py", line 67, in run_migrations_online
>
>  context.run_migrations()
>
>  File "", line 8, in run_migrations
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/runtime/environment.py",
> line 836, in run_migrations
>
>  self.get_context().run_migrations(**kw)
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/runtime/migration.py",
> line 330, in run_migrations
>
>  step.migration_fn(**kw)
>
>  File
> "/Users/hp3/Documents/python_workspace/Plannotate2_Cloud/src/server/alembic/versions/b6c3cd0ef4cb_version_1.py",
> line 38, in upgrade
>
>  op.drop_table('groups_history')
>
>  File "", line 8, in drop_table
>
>  File "", line 3, in drop_table
>
>  File
> "/Users/hp3/.pyenv/versions/2.7.14/envs/server_py2/lib/python2.7/site-packages/alembic/operations/ops.py",
> line 1187, in drop_table
>
>  operations.invoke(op)
>
>  File

Re: Run some migration revisions outside of a transaction

2018-08-10 Thread Mike Bayer
On Fri, Aug 10, 2018 at 2:00 PM, Michał Bultrowicz
 wrote:
> Hey!
>
> What approach would you recommend if we want to run some revisions outside
> of a transaction, but then run the other ones normally?
>
> Let me illustrate that. Let's say we have revisions 1,2,3,4,5. The database
> is currently at revision 1, so we need to run 2-5. The problem is that 4 is
> an operation that can't be in a transation, specifically CREATE INDEX
> CONCURRENTLY from Postgres (synchronous creation of the index would block
> writes to a table, which is unacceptable in production).
> So we'd like to run, 2 and 3 in a transaction, then do 4 outside of a
> transactionand wait for it, then run 5 in a transaction.


so you want to set transaction_per_migration:

http://alembic.zzzcomputing.com/en/latest/api/runtime.html?highlight=transaction_per_migration#alembic.runtime.environment.EnvironmentContext.configure.params.transaction_per_migration

you're still safe if 1,2,3 or 5 fail since they are in their own
transaction and it will stop at that migration if a problem happens.

then in migration 4, inside the migration script get a separate
connection, which has to be an ".autocommit' connection with psycopg2:

with op.get_bind().engine.connect().execution_options(
    isolation_level='AUTOCOMMIT'
) as conn:
    conn.execute("CREATE INDEX CONCURRENTLY")







>
> In run_migrations_online (we don't use offline) there's this bit
> with sql_engine.connect() as connection:
>     with context.begin_transaction():
>         context.run_migrations()
>
> Can we get access to the list of revisions here and bundle them up however
> we want?
>
> Best regards,
> Michał Bultrowicz
>



Re: autogenerate revision fails in a multi-db project generated with: `alembic init --template multidb alembic`

2018-08-10 Thread Mike Bayer
On Fri, Aug 10, 2018 at 6:47 AM,   wrote:
> Hello
>
> I'm trying to use alembic to generate a migration file in a multi-db
> environment. I first created the project using `alembic init --template
> multidb alembic`
> and then hack the `env.py` script (didn't touch to `run_migration_online`)
> to suit my needs.
>
> Here is the error I got after invoking `alembic --raiseerr revision
> --autogenerate -m 'First revision`:

have you also received the alembic.ini from that run, or was there a
previous alembic.ini present that prevented the new one from being
generated?   The alembic.ini for multidb requires a "databases"
configuration:

databases = engine1, engine2

[engine1]
sqlalchemy.url = driver://user:pass@localhost/dbname

[engine2]
sqlalchemy.url = driver://user:pass@localhost/dbname2


I can reproduce your error if that section is missing.   Restore it,
and also put valid URLs in there, and then it works.   If this is not
how you are configuring, then you need to alter the env.py to locate
your database URLs somehow.





>
> [...]
> Generating
> /home/rdebroiz/Quantivly/services/alembic/versions/201808-10-T12-30-48-first_revision-cb8c6da27574.py
> ... FAILED
> Traceback (most recent call last):
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/pyfiles.py",
> line 15, in template_to_file
> output = template.render_unicode(**kw).encode(output_encoding)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/template.py",
> line 471, in render_unicode
> as_unicode=True)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py",
> line 838, in _render
> **_kwargs_for_callable(callable_, data))
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py",
> line 873, in _render_context
> _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/mako/runtime.py",
> line 899, in _exec_template
> callable_(context, *args, **kwargs)
> File "alembic_script_py_mako", line 62, in render_body
> File "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/re.py", line 212,
> in split
> return _compile(pattern, flags).split(string, maxsplit)
> TypeError: expected string or bytes-like object
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "/home/rdebroiz/.virtualenvs/quantivly/bin/alembic", line 11, in
> 
> sys.exit(main())
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py",
> line 486, in main
> CommandLine(prog=prog).main(argv=argv)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py",
> line 480, in main
> self.run_cmd(cfg, options)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/config.py",
> line 463, in run_cmd
> **dict((k, getattr(options, k, None)) for k in kwarg)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/command.py",
> line 180, in revision
> revision_context.generate_scripts()
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/command.py",
> line 179, in 
> script for script in
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/autogenerate/api.py",
> line 480, in generate_scripts
> yield self._to_script(generated_revision)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/autogenerate/api.py",
> line 412, in _to_script
> **template_args)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/script/base.py",
> line 574, in generate_revision
> **kw
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/script/base.py",
> line 439, in _generate_template
> **kw
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/messaging.py",
> line 47, in status
> ret = fn(*arg, **kw)
> File
> "/home/rdebroiz/.virtualenvs/quantivly/lib/python3.6/site-packages/alembic/util/pyfiles.py",
> line 24, in template_to_file
> "template-oriented traceback." % fname)
> alembic.util.exc.CommandError: Template rendering failed; see
> /tmp/tmph4dvwm5j.txt for a template-oriented traceback.
>
>
> I kept the original `script.py.mako` template:
>
> <%!
> import re
>
> %>"""${message}
>
> Revision ID: ${up_revision}
> Revises: ${down_revision | comma,n}
> Create Date: ${create_date}
>
> """
> from alembic import op
> import sqlalchemy as sa
> ${imports if imports else ""}
>
> # revision identifiers, used by Alembic.
> revision = ${repr(up_revision)}
> down_revision = ${repr(down_revision)}
> branch_labels = ${repr(branch_labels)}
> depends_on = ${repr(depends_on)}
>
>
> def upgrade(engine_name):
>     globals()["upgrade_%s" % engine_name]()
>
>
> def downgrade(engine_name):
>     globals()["downgrade_%s" % engine_name]()
>

Re: integrate with cherrypy / sqlalchemy

2018-08-01 Thread Mike Bayer
On Tue, Jul 31, 2018 at 3:12 AM, Amin M  wrote:
> Thanks Mike, really helped me out.
>
> I've done the following, in env.py
>
> import sys
> import os
>
> project_root = os.path.abspath('./')  # get Cherrypy root
> sys.path.insert(0, project_root)  # add it to Path, so we can import models,
> and read Cherrypy configurations
>
>
> def get_models():
>     """
>     get cherrypy models (SQLAlchemy)
>     :return:
>     """
>     from models.data_model import BASE
>     from models.comment import Comment
>     from models.file import File
>     from models.user import User
>     return [BASE.metadata]
>
>
> def get_db_configurations():
>     """
>     read database configurations from cherrypy
>     :return:
>     """
>     from helpers.config import config_data
>
>     sqlalchemy_url = '%s://%s:%s@%s:%s/%s' % (
>         config_data['database']['type'],
>         config_data['database']['user'],
>         config_data['database']['password'],
>         config_data['database']['host'],
>         config_data['database']['port'],
>         config_data['database']['db'],
>     )
>
>     return sqlalchemy_url
>
>
> Works like a charm, though I would love to know more about
> "now for the import above to work, your Python application needs to be
> part of the Python environment.  Usually folks build their web
> application with a setup.py file and then they install it into a local
> virtual environment.  If you aren't doing that, you may have to alter
> your PYTHONPATH environment variable outside, or use sys.path inside
> env.py in order to add where it can locate your models for import.
> "
>
> Could you point me to documentations or examples of this? I only know what a
> venv is.

make your application into a package, here's an overview:
http://python-packaging.readthedocs.io/en/latest/
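For illustration, the packaging step that makes `from models.data_model import BASE` importable from env.py can be as small as a one-file setup.py; this is a sketch following the guide linked above, where the project name and package layout are hypothetical:

```python
# setup.py -- minimal packaging sketch.  After "pip install -e ." inside
# the activated virtualenv, the application's packages (models/, helpers/,
# ...) are on sys.path, so env.py can import them without any sys.path
# manipulation.
from setuptools import setup, find_packages

setup(
    name="myapp",              # hypothetical project name
    version="0.1",
    packages=find_packages(),  # picks up models/, helpers/, etc.
)
```

With the project installed in editable mode, the `sys.path.insert(...)` lines in env.py become unnecessary.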




>
> Thanks a lot.
>
> On Monday, July 30, 2018 at 9:35:17 PM UTC+3, Mike Bayer wrote:
>>
>> the env.py needs two things to work:
>>
>> 1. a way of connecting to the database.   if you just want to give it
>> a database URL that is fixed, you can stick that in alembic.ini under
>> sqlalchemy.url and you are done.  If you want to use cherrypy's
>> configurational facilities, then you'd need to add code to env.py that
>> calls upon cherrypy's configuration.   But just putting the database
>> URL into alembic.ini is a quick way just to get started.
>>
>> 2. for autogenerate to work (which is optional, but everyone wants it)
>> you have to "import" your model into it, like "from myapp.models
>> import BASE", then refer to BASE.metadata which is set up as the
>> "target_metadata" for autogenerate:
>>
>> from myapp.models import BASE
>> target_metadata = BASE.metadata
>>
>> now for the import above to work, your Python application needs to be
>> part of the Python environment.  Usually folks build their web
>> application with a setup.py file and then they install it into a local
>> virtual environment.  If you aren't doing that, you may have to alter
>> your PYTHONPATH environment variable outside, or use sys.path inside
>> env.py in order to add where it can locate your models for import.
>>
>>
>> You might want to ask on cherrypy's mailing list for more specifics on
>> these two points, they should be able to tell you.
>>
>>
>>
>>
>>
>> On Mon, Jul 30, 2018 at 7:30 AM, Amin M  wrote:
>> > hello, I have a cherrypy application with the following structure
>> > controllers/
>> > models/
>> > views/
>> > app.py
>> >
>> >
>> > and my models exist in models folder.
>> > My BASE model contains the following:
>> >
>> > from sqlalchemy import create_engine
>> > from sqlalchemy.ext.declarative import declarative_base
>> > from sqlalchemy.orm import scoped_session, sessionmaker
>> >
>> > from helpers.config import config_data
>> >
>> > # base ORM model
>> > BASE = declarative_base()
>> > ENGINE = create_engine('%s://%s:%s@%s:%s/%s' %
>> >(
>> >config_data['database']['type'],
>> >config_data['database']['user'],
>> >config_data['database']['password'],
>> >config_data['database']['host'],
>> >config_data['database']['port'],
>> >conf

Re: KeyError with diamond-shaped branch-merge dependency graph

2018-05-28 Thread Mike Bayer
it's likely a bug.   It looks like you've done some investigation
already.I don't offhand have any understanding of the issue
either, I would need to build up a reproduction case and then stare at
it for several hours to figure out what's happening.So first step
is if you can make a bug report at
https://bitbucket.org/zzzeek/alembic/issues?status=new=open.
Creating a test case is elaborate here, the graph you have above is
likely enough to get started.The next step, which I can do, or you
are welcome to work on if you have the motivation, is to create a test
of this case in tests/test_version_traversal.py, similar to a test
like this one: 
https://bitbucket.org/zzzeek/alembic/src/b702e057aeaa640c03232c77405dc5415ac8a670/tests/test_version_traversal.py#lines-557

these bugs are often hard to fix, so I would need to find some time to
work on it.  if you have a workaround for now that would be good.
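For reference, the "noop" revision used as a workaround in the quoted message below is simply an empty migration script placed between the branchpoint and the mergepoint; a sketch, where the revision identifiers are illustrative placeholders:

```python
"""No-op revision inserted between the branchpoint and the mergepoint.

The identifier values below are illustrative placeholders.
"""

# revision identifiers, used by Alembic.
revision = 'noop'
down_revision = 'a'
branch_labels = None
depends_on = None


def upgrade():
    # intentionally empty: this revision exists only to break up the
    # diamond shape in the dependency graph
    pass


def downgrade():
    pass
```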



On Mon, May 28, 2018 at 11:13 AM, Lucas Kahlert  wrote:
> Hallo,
>
> we are using Alembic in a rather big project with various database revisions
> and stumbled over a problem with diamond-shaped branch-merges.
>
> We have the following diamond-shaped dependency tree with three branches:
>
> "master" Branch
> "addon 1" branch
> "addon 2" branch
>
> The graph looks as follows:
>
>   |||
>   |a|
>   |  . | .  |
>   | .  |  . |
>   |.   |   .|
>   x1   |   y1
>   |.   |   .|
>   | .  |  . |
>   |  . | .  |
>   |b|
>   |||
>   |||
>   x2   cy2
>
> Straight lines represent down_revision and dotted lines mean depends_on
> relationships. The addon branches should not be merged into the master
> branch. Hence, we use the depends_on directive as described in the Alembic
> docs Branch Dependencies. We have various constructs of this shape in our
> dependency graph.
>
> If we run alembic upgrade heads, we get an KeyError when applying the
> "Merge" migration:
>
>   File "/home/lucas/dev/torql/api/torql/migrations/env.py", line 115, in
> 
> run_migrations_online()
>   File "/home/lucas/dev/torql/api/torql/migrations/env.py", line 109, in
> run_migrations_online
> context.run_migrations()
>   File "", line 8, in run_migrations
>   File
> "/home/lucas/dev/torql/.venv/lib/python3.6/site-packages/alembic/runtime/environment.py",
> line 797, in run_migrations
> self.get_context().run_migrations(**kw)
>   File
> "/home/lucas/dev/torql/.venv/lib/python3.6/site-packages/alembic/runtime/migration.py",
> line 319, in run_migrations
> head_maintainer.update_to_step(step)
>   File
> "/home/lucas/dev/torql/.venv/lib/python3.6/site-packages/alembic/runtime/migration.py",
> line 486, in update_to_step
> self._delete_version(delrev)
>   File
> "/home/lucas/dev/torql/.venv/lib/python3.6/site-packages/alembic/runtime/migration.py",
> line 437, in _delete_version
> self.heads.remove(version)
> KeyError: 'a'
>
> Alembic cannot delete the branchpoint revision because it is not included in
> the heads set of the alembic.runtime.migration.HeadMaintainer instance.
>
> The problem can be solved by inserting a "Noop" revision between the Branch-
> and Mergepoint:
>
>   |||
>   |a|
>   |  . | .  |
>   | .  |  . |
>   |.   |   .|
>   x1  noop  y1
>   |.   |   .|
>   | .  |  . |
>   |  . | .  |
>   |b|
>   |||
>   |||
>   x2   cy2
>
> Can someone explain the problem to us or is this a bug in Alembic?
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: is this a bug ?target_metadata

2018-03-26 Thread Mike Bayer
all three of your files do:

   from database import Base


so that is just one Base object, one metadata object.  they are all the same.

if you wanted them separate you'd need to call upon "Base =
declarative_base()" in each module individually.
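A sketch of that separation, using the module names from the quoted project below (each model module calls declarative_base() itself, so each gets its own MetaData; test2.py and test3.py would do the same):

```python
# models/test1.py -- sketch: a module-local Base instead of a shared one
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()  # local Base -> local Base.metadata


class User1(Base):
    __tablename__ = 'users1'
    id1 = Column(Integer, primary_key=True)
    name1 = Column(String(50), unique=True)


# env.py would then meaningfully use the sequence form:
#   target_metadata = [test1.Base.metadata, test2.Base.metadata,
#                      test3.Base.metadata]
```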



On Sat, Mar 24, 2018 at 1:11 PM,   wrote:
> myproject just like this:
>
> ├─migrate
>
> │  ├─versions
>
> │  │  └─94d0834e4282_.py
>
> │  │  └─__pycache__
>
> │  └─evn.py
>
> │  └─README
>
> │  └─evn.py
>
> │  └─cript.py.mako
>
> ├─models
> │  └─test1.py
>
> │  └─test2.py
>
> │  └─test3.py
>
> ├─__pycache__
>
> ├─alembic.ini
>
> ├─database.py
>
> └─main.py
>
>
> database.py:
>
> from sqlalchemy import create_engine
> from sqlalchemy.orm import scoped_session, sessionmaker
> from sqlalchemy.ext.declarative import declarative_base
>
> engine = create_engine('mysql+mysqlconnector://plan:plan@mysql/test',
> convert_unicode=True)
> db_session = scoped_session(sessionmaker(autocommit=False,
>  autoflush=False,
>  bind=engine))
> Base = declarative_base()
> Base.query = db_session.query_property()
>
> def init_db():
>     # import all modules here that might define models so that
>     # they will be registered properly on the metadata.  Otherwise
>     # you will have to import them first before calling init_db()
>     Base.metadata.create_all(bind=engine)
>
>
> #test1.py:
>
> from sqlalchemy import Column, Integer, String
> from database import Base
>
> class User1(Base):
>     __tablename__ = 'users1'
>     id1 = Column(Integer, primary_key=True)
>     name1 = Column(String(50), unique=True)
>     email1 = Column(String(120), unique=True)
>
>     def __init__(self, name=None, email=None):
>         self.name = name
>         self.email = email
>
>     def __repr__(self):
>         return '<User1 %r>' % (self.name1)
>
>
> #test2.py:
>
> from sqlalchemy import Column, Integer, String
> from database import Base
>
> class User2(Base):
>     __tablename__ = 'users2'
>     id1 = Column(Integer, primary_key=True)
>     name2 = Column(String(50), unique=True)
>     email2 = Column(String(120), unique=True)
>
>     def __init__(self, name=None, email=None):
>         self.name = name
>         self.email = email
>
>     def __repr__(self):
>         return '<User2 %r>' % (self.name2)
>
>
> #test3.py
>
> from sqlalchemy import Column, Integer, String
> from database import Base
>
> class User3(Base):
>     __tablename__ = 'users3'
>     id1 = Column(Integer, primary_key=True)
>     name3 = Column(String(50), unique=True)
>     email3 = Column(String(120), unique=True)
>
>     def __init__(self, name=None, email=None):
>         self.name = name
>         self.email = email
>
>     def __repr__(self):
>         return '<User3 %r>' % (self.name3)
>
>
>
>
>
> #database.py  just set database url
>
> sqlalchemy.url = mysql+mysqlconnector://plan:plan@mysql/test
>
> #alembic.ini just set database url
>
> sqlalchemy.url = mysql+mysqlconnector://plan:plan@mysql/test
>
>
> #env.py just set target_metadata
>
> import os
> import sys
> sys.path.append(os.path.dirname(os.path.abspath(__file__)) + "/../")
> from models import test1,test2,test3
> target_metadata = test1.Base.metadata
>
>
>
> this code Autogenerating Multiple MetaData collections all set test2 test3.
>
> the documentation say:
>
> Autogenerating Multiple MetaData collections
>
> The target_metadata collection may also be defined as a sequence if an
> application has multiple MetaData collections involved:
>
> from myapp.mymodel1 import Model1Base
> from myapp.mymodel2 import Model2Base
> target_metadata = [Model1Base.metadata, Model2Base.metadata]
>
>
> i can't use env.py\target_metadata like this:
>
> import os
> import sys
> sys.path.append(os.path.dirname(os.path.abspath(__file__)) + "/../")
> from models import test1,test2,test3
> target_metadata =
> [test1.Base.metadata,test2.Base.metadata,test3.Base.metadata]
>
>
>
> i use env.py\target_metadata like this:
>
> import os
> import sys
> sys.path.append(os.path.dirname(os.path.abspath(__file__)) + "/../")
> from models import test1,test2,test3
> target_metadata = test1.Base.metadata
>
>
> all build test1 test2 test3.
>
>
> is 

Re: mssql clustered index on non-pk field

2018-03-19 Thread Mike Bayer
so process_revision_directives is a pathway here to the extent that
you need to modify the autogenerate process such that a table like
this is generated for you automatically.

However, Alembic's normal mode of usage is that the "autogenerate"
tool is only a time-saver, and is not intended to produce a finished
result in every case.   The files it produces contain a comment to
this effect that you should "please modify as needed" the migration
script that is created.

Whether or not you need autogenerate to automate this, the first step
would be to come up with the appropriate op.create_table and
op.alter_table directives that accomplish what you need, and then
manually work these into a migration script as a proof of concept.  If
this is just one or two tables in your whole application that need to
be generated this way, you'd just leave it at that.

The "uniqueidentifier" part of this you can get through the
sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER datatype.   The "ON
[PRIMARY]" part I'm not sure if the SQL server dialect supports that
syntax right now.   If you need that syntax then we'd have to alter
SQLAlchemy to support that, unless you were to render the DDL entirely
as a string.

let me know if that gets you started.
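As a starting point, the "table first, clustered index, then primary key" sequence from the quoted DDL below might be expressed with Alembic directives roughly like this; a sketch, where names mirror the example DDL and `mssql_clustered` is the SQL Server dialect option for clustered indexes:

```python
# sketch of a hand-written migration for the two-step approach:
# create the table without a PK, add the clustered index, then add the
# primary key as a separate ALTER so SQL Server does not cluster it
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER


def upgrade():
    op.create_table(
        'test_temp_table',
        sa.Column('test_temp_table_clustered_id', UNIQUEIDENTIFIER,
                  nullable=True),
        sa.Column('test_temp_table_id', sa.Integer, nullable=False),
        sa.Column('test_temp_column', sa.NVARCHAR(100), nullable=True),
    )
    # mssql_clustered=True renders CREATE CLUSTERED INDEX
    op.create_index(
        'arbitrary_name', 'test_temp_table',
        ['test_temp_table_clustered_id'], mssql_clustered=True,
    )
    # PK added afterwards, so it is created nonclustered
    op.create_primary_key(
        'pk_test_temp_table', 'test_temp_table', ['test_temp_table_id'],
    )


def downgrade():
    op.drop_table('test_temp_table')
```

(The "ON [PRIMARY]" filegroup clause is not covered here, per the caveat above.)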


On Mon, Mar 19, 2018 at 12:08 PM, marc beirne  wrote:
> From issue here, closed as usage question:
>
> https://bitbucket.org/zzzeek/alembic/issues/485/mssql-clustered-index-non-primary
>
> Original question:
>
> Is there a solution for autogenerating in a single pass MSSQL tables with a
> primary key which isn't the clustered index, and a clustered index on a
> different column.
>
> Just defining a second clustered index column creates an exception because
> MSSQL is automatically setting the primary key to be the clustered index,
> setting index=False on that column definition doesn't help.
>
> If you manually run two revisions, one without the pks and then add the pks
> in later everything is fine but I can't see a way to configure the
> autogeneration to push constraint definitions to later statements, which
> would work, or force MSSQL not to create the clustered index on the PK.
>
>
> Original Response:
>
> if you can show me the DDL you want to emit I can help get you there.
>
> this is a usage question so far so let's pick it up on
> https://groups.google.com/forum/#!forum/sqlalchemy-alembic. thanks!
>
>
>
> This is an example of the DDL that  think someone would need to do this
>
>> CREATE TABLE [dbo].[test_temp_table](
>>     [test_temp_table_clustered_id] [uniqueidentifier] NULL,
>>     [test_temp_table_id] [int] NOT NULL,
>>     [test_temp_column] [nvarchar](100) NULL
>> ) ON [PRIMARY]
>>
>>
>> CREATE CLUSTERED INDEX [arbitrary_name] ON [dbo].[test_temp_table]
>> (
>>     [test_temp_table_clustered_id] ASC
>> )
>>
>> ALTER TABLE test_temp_table
>>     ADD CONSTRAINT pk_test_temp_table
>>     PRIMARY KEY (test_temp_table_id)
>
>
> The point is that instead of the primary key being declared inline in the
> table definition, it needs to be added after the clustered index is created
> because otherwise MSSQL will automatically add a clustered index on the pk
> column..
>
> Looking at the documentation, I'd think this might be possible through
>
> "process_revision_directives"
>
> But when I try and use this to split out the table generation and primary
> key addition, I need to add to the UpgradeOps.ops list:
> ops.CreatePrimaryKeyOp(*stuff)
>
> if I do this, the generation fails later at
> alembic.autogenerate.render._render_primary_key
> with a NotImplemented written there.
>
> I don't really understand how this project fits together so I'm not sure
> whether that is really something that's not implemented or just a crossed
> wire, or how to hack it either way.
>
> Best Regards,
>
> Marc
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


not announcing minor releases on the mailing list or twitter for now

2018-03-06 Thread Mike Bayer
Hi all -

Some of you may have noticed that several SQLAlchemy releases have
gone out in the past few weeks without my customary email on these
lists, nor without a tweet from the SQLAlchemy twitter account.

The reason for this is that I have created scripts that
comprehensively run through the many, many steps used to produce a
SQLAlchemy release, including building the documentation, changing
dates, writing the CHANGES file, applying all the tags and such, with
the goal being that for me to personally release SQLAlchemy is no
longer a life-changing event;   I can instead release as needed, e.g.
every few weeks if needed or if urgent changes have been committed
even every few days, without it becoming a major time sink and more
importantly without my needing to be "on" mentally; I can do it
automatically and not worry too much about things going wrong if I'm
tired or not able to concentrate deeply.

Previously, the burden of releasing would cause me to wait longer, and
more critically the release would continually get pushed back as more
little bug reports and pull requests kept coming in and I would want
them all "on board" so that I wouldn't have to worry about releasing
again.  This was a dysfunctional way of operating, and while we can
probably thank "slight dysfunction" as one of the reasons SQLAlchemy
even exists, making releases much easier is now allowing me to just
put out a release without waiting to be alert and while knowing I can
get to the next round of pending issues and release them just as
easily.

I am still adding a brief blog post to the site per release, and that
part is yet to be "automated", although I would like to make this part
of the automated process as well.

To that end, the production of the blog post would also perform a copy
operation out to the google groups mailing list, and also to the
SQLAlchemy twitter account.  Automating these tasks is not something I
have the time to do right now, as just getting the bulk of the release
process was enough. All of SQLAlchemy's infrastructure has been
fully scripted at this point, including all web servers and CI servers
are built and maintained using Ansible scripts, and the release
scripts use bash scripting.  Contributors who have an interest in
being part of SQLAlchemy's "infrastructure" project and who may wish
to help with automating mailing list and twitter posts can be added to
the group that has access to the source code for these tools.

thanks all for listening!

- mike



-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Vertica support for Alembic

2018-02-08 Thread Mike Bayer
I dont know anything about Vertica, however if there is a SQLAlchemy
dialect in the wild for Vertica, they can add the Alembic migration
bits on that end, which starts with a small stub implementation.   For
example see redshift at
https://github.com/sqlalchemy-redshift/sqlalchemy-redshift/blob/master/sqlalchemy_redshift/dialect.py#L23
for an example of such a stub.

On Thu, Feb 8, 2018 at 9:31 AM, Kylie Hunter  wrote:
> Hi all
>
> Are there any plans for Vertica support in Alembic? Or is there perhaps an
> alternative?
>
> Thanks,
> Kylie
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


SQLAlchemy 1.2.1 released

2018-01-15 Thread Mike Bayer
SQLAlchemy release 1.2.1 is now available.

Release 1.2.1 fixes some minor issues for the newly released 1.2
series, including a few small regressions as well as some issues
reported with the new "selectin" relationship and polymorphic loader
features.

Changelog for 1.2.1 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_2_1

SQLAlchemy 1.2.1 is available on the Download Page at:
http://www.sqlalchemy.org/download.html

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: A migrate thinks there's always a change with an Indexed CamelCase field since 0.92 (change 421 - 4cdb25b)

2018-01-02 Thread Mike Bayer
On Tue, Jan 2, 2018 at 2:18 PM, Seaders Oloinsigh  wrote:
> This change is the culprit,
>
> https://github.com/zzzeek/alembic/commit/4cdb25bf5d70e6e8a789c75c59a2a908433674ce#diff-e9d21bbd0f4be415139b75917420ad1c
>
> I've a few auto-generated tables, that are one-to-one mappings with an
> external data feed.  For any fields that have indexes set on them, the
> mapping messes up, and thinks there's always a change.

this issue is added as
https://bitbucket.org/zzzeek/alembic/issues/472/quoting-leaking-into-index-names-due-to
and the fix is working through code review at
https://gerrit.sqlalchemy.org/#/c/zzzeek/alembic/+/619/.

>
> Consider the very simple class,
>
>
> class User(db.Model):
> id = db.Column(db.Integer, primary_key=True)
> registrationNumber = db.Column(db.Integer, index=True)
>
>
> The index for "registrationNumber" is named "ix_user_registrationNumber".
>
> *But* in compare.py, when it's running through the added, changed, and
> removed indexes, it compares "metadata_names" with "conn_names".
>
> *Before* this change, metadata_names used c.name as its "key", same as
> conn_names, but *after* this change, metadata_names use
> c.md_name_to_sql_name(autogen_context),
> whereas conn_names continues to use the "simple" c.name.
>
> I'd say for the majority of use cases, where field names are pep8 standard,
> and lower case with underscores, instead of camel case, there's no issue,
> but when you have a situation like "registrationNumber" here, when you look
> at what happens inside that md_name_to_sql_name,
>
> *Eventually*, we end up in,
>
> https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/sql/compiler.py#L3026
>
> The check for "requires_quotes" checks a number of things, but eventually
> checks if the lowercase version of the field is equal to the field name,
> i.e. in this case, is "ix_user_registrationNumber" equal to
> "ix_user_registrationnumber" which is obviously false, and that check then
> quotes that field.  This means, the key for the metadata_name turns in to
> "`ix_user_registrationNumber`", quoted, with ticks, but conn_name is still
> "ix_user_registrationNumber", not quoted, no ticks.
>
> The result of this is that, at every attempted migrate, you get
>
> INFO  [alembic.autogenerate.compare] Detected added index
> 'ix_user_registrationNumber' on '['registrationNumber']'
> INFO  [alembic.autogenerate.compare] Detected removed index
> 'ix_user_registrationNumber' on 'user'
>
> And no matter how many times you run an upgrade, you still get that result.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


SQLAlchemy 1.2.0 released

2017-12-27 Thread Mike Bayer
SQLAlchemy release 1.2.0 is now available.

Release 1.2.0 is the first official release in the 1.2 series, after
three beta releases. The release represents the past eighteen months
of new feature development since version 1.1 was released.

The 1.2 series has a large mix of features and behavioral enhancements
spread across both the Core and ORM components, including in the area
of ORM relationship loading, the behavior of the Core IN operator, new
SQL constructs such as DELETE..FROM and MySQL's INSERT..ON DUPLICATE
KEY UPDATE, as well as a new connection pooling feature that provides
for pessimistic connection testing upon checkout.

The migration notes and changelog should cover everything to look for,
however some major changes include:

* Lazy loaders and deferred eager loaders now use the baked query
system for a major reduction in Python call overhead

* A new eager loader called "SELECT IN" loading, similar to subquery
eager loading but uses an IN clause with primary keys for
significantly faster performance in many situations

* A variant of "SELECT IN" loading for polymorphic mappings, which
allows the sub-table rows of a heterogeneous result of joined
inheritance classes to be loaded en-masse without using a LEFT OUTER
JOIN up front.

* The IN operator now allows empty lists in a performant manner,
revising a long standing restriction on these expressions

* A new "expanding IN" clause that allows for arbitrary lists of items
to be late-bound to an IN operator just like a single bound parameter

* Support for psycopg2's recently added "fast execution" helper,
allowing for batching of DML statements into larger chunks for a great
reduction in network overhead

* A new connection pool parameter pool_pre_ping, which pings
connections upon checkout, thereby eliminating the issue of stale
connections in a connection pool

* Support for table and column comments in DDL and reflection

* Major reworks of both the cx_Oracle and pyodbc MSSQL and pymssql
dialects for greater stability and performance.
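Of the features listed above, the new pool_pre_ping parameter is a one-line change; a quick sketch, using an in-memory SQLite URL for illustration:

```python
from sqlalchemy import create_engine

# pool_pre_ping=True issues a lightweight "ping" on each connection
# checkout and transparently replaces connections found to be stale,
# eliminating errors from connections dropped by the server
engine = create_engine("sqlite://", pool_pre_ping=True)
```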

There are many more behavioral improvements and changes, all of which
are backwards-compatible for but as is always the case, may be
surprising in those cases where the sudden appearance of a feature, an
assertion that wasn't present earlier, or a fix of a previously
undefined or buggy behavior may produce unexpected results or error
messages for an existing application. Users should please carefully
review the full series of migration notes at
http://www.sqlalchemy.org/docs/latest/changelog/migration_12.html to
see the full expanse of things that have changed and should always
fully test existing applications against the 1.2 series before
promoting to production.

The complete changelog for 1.2.0 as well as all the beta releases is
at http://www.sqlalchemy.org/changelog/CHANGES_1_2_0; we'd like to
thank the many contributors who helped with this release.

SQLAlchemy 1.2.0 is available on the Download Page at:
http://www.sqlalchemy.org/download.html

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: unit testing migration code

2017-12-15 Thread Mike Bayer
don't think so but haven't looked. We have a fixture in Openstack
that does that kind of thing, but it's not non-openstack friendly.   I
think you've found your next project!
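The four-step fixture described in the quoted steps below might be sketched with the Alembic command API roughly like this; the ini path, random-name scheme, and dispatch hook are all illustrative:

```python
# sketch of a per-test migration fixture: fresh randomly-named SQLite
# database, stepped upgrades via the Alembic command API, then cleanup
import os
import uuid

from alembic import command
from alembic.config import Config
from alembic.script import ScriptDirectory


def run_migrations_stepwise(ini_path="alembic.ini"):  # path is illustrative
    db_path = "test_%s.db" % uuid.uuid4().hex         # 1. fresh database
    cfg = Config(ini_path)
    cfg.set_main_option("sqlalchemy.url", "sqlite:///%s" % db_path)
    try:
        script = ScriptDirectory.from_config(cfg)
        # walk_revisions() yields head -> base; reverse for upgrade order
        revisions = list(script.walk_revisions("base", "heads"))[::-1]
        for rev in revisions:
            command.upgrade(cfg, rev.revision)        # 2. one step at a time
            # 3. a dispatch hook could run e.g. a hypothetical
            #    test_<revision>_does_thing() against the database here
    finally:
        if os.path.exists(db_path):
            os.remove(db_path)                        # 4. drop the database
```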


On Fri, Dec 15, 2017 at 1:42 AM, Chris Withers <ch...@withers.org> wrote:
> Has anyone done anything like an equivalent of the following but for
> Alembic?
>
> https://github.com/plumdog/django_migration_testcase
>
> cheers,
>
> Chris
>
>
> On 01/12/2017 15:06, Mike Bayer wrote:
>>
>> it's kind of a PITA but in Openstack we have fixtures which actually
>> run all the alembic (or sqlalchemy-migrate) migrations on a new
>> database.  Some of the more ambitious projects even write unit tests
>> in between each migration that use inspect() to check that the
>> database state is what's expected.
>>
>> so to do things like that, you need a fixture which can:
>>
>> 1. create a new database (you probably need to produce a randomized
>> name for concurrency)
>> 2. invoke alembic to each revision individually (you can do this
>> through alembic API:
>> http://alembic.zzzcomputing.com/en/latest/api/commands.html
>> 3. have a dispatch which can call upon test cases linked to that rev,
>> like "def test_aabbccddee_does_thing_one()"
>> 4. drops the database
>>
>>
>> and...that's how you do it !
>>
>>
>>
>> On Thu, Nov 30, 2017 at 1:54 PM, Chris Withers <ch...@simplistix.co.uk>
>> wrote:
>>>
>>> Hi All,
>>>
>>> How would I add test coverage for this sort of code?
>>>
>>>
>>> https://coveralls.io/builds/14408741/source?filename=mortar_mixins%2Fmigrations.py
>>>
>>> cheers,
>>>
>>> Chris
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "sqlalchemy-alembic" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: alembic_version creation fails when in different new schema (Postgres)

2017-12-06 Thread Mike Bayer
On Wed, Dec 6, 2017 at 5:34 PM, Jules Olléon  wrote:
> Hello,
>
> I have an issue with schema creation when the alembic_version table is not
> in the default (public) schema on Postgres.
>
> We have a setup where multiple services use the same Postgres DB (to
> simplify ops work) but we are trying to isolate each service in its own
> Postgres schema.
> The `public` schema is already used by another service with its own
> alembic_version table, so I used the `version_table_schema` parameter to put
> alembic_version in the right schema (the one that belongs to my service). I
> ran into some interesting issues (but figured it out, see
> https://stackoverflow.com/questions/40577640/flask-migrate-using-different-postgres-schemas-table-args-schema-te)
> but eventually got pretty much everything working *assuming the schema
> already exists*.
>
> I'm now trying to setup tests that run all migrations on a fresh DB part on
> our CI/CD pipeline for additional confidence, and ran into this issue: for a
> fresh DB my schema doesn't exist yet, and migrations are thus failing. How
> to create the schema initially?
>
> I was hoping to use alembic for this and modified my initial migration to
> do:
>
> op.execute('CREATE SCHEMA IF NOT EXISTS data')
>
> but this is not sufficient since the alembic_version table itself is stored
> in that schema ('data'), and alembic tries to create the version table
> before running the initial migration, which fails since the schema doesn't
> exist:

so if you are using the "version_table_schema" parameter to set this
schema, that means you know what this schema is inside of your env.py.
So put "connection.execute("CREATE SCHEMA IF NOT EXISTS data")" right
there, in your env.py when you first get the connection and before you
call "context.begin_transaction()".
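In env.py that could look roughly like the following sketch, with the schema name matching the quoted example below (on SQLAlchemy 1.4+, wrap the raw statement in sqlalchemy.text()):

```python
# sketch of the relevant portion of env.py: ensure the schema exists
# before Alembic creates its version table inside it
def run_migrations_online():
    # "connectable" is the engine env.py already builds, e.g. via
    # engine_from_config(...) in the generated template
    with connectable.connect() as connection:
        connection.execute("CREATE SCHEMA IF NOT EXISTS data")

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table_schema="data",
            include_schemas=True,
        )

        with context.begin_transaction():
            context.run_migrations()
```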





>
> INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
> INFO  [alembic.runtime.migration] Will assume transactional DDL.
> Traceback (most recent call last):
>   File "/usr/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line
> 1182, in _execute_context
> context)
>   File "/usr/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line
> 470, in do_execute
> cursor.execute(statement, parameters)
> psycopg2.ProgrammingError: schema "data" does not exist
> LINE 2: CREATE TABLE data.alembic_version (
>  ^
>
> I'd love to keep all DB setup within Alembic, any idea how to fix this?
> Otherwise I'll need to create the schema separately before running alembic
> migrations :(.
>
> Thanks,
> Jules
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy-alembic" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy-alembic+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



Re: Weird Alembic Behavior

2017-10-30 Thread Mike Bayer
On Mon, Oct 30, 2017 at 11:23 PM,   wrote:
> I have 2 migration scripts.
>
> The first 1 (base) is this
>
> from models import EntityProperty
> from contextlib import contextmanager
>
> # revision identifiers, used by Alembic.
> revision = 'ecbde8fa83e3'
> down_revision = None
> branch_labels = None
> depends_on = None
>
> from alembic import op   # noqa
> import sqlalchemy as sa  # noqa
>
>
> @contextmanager
> def session_scope():
>     session = sa.orm.session.Session(bind=op.get_bind())
>     try:
>         yield session
>         session.commit()
>     except:
>         session.rollback()
>         raise
>     finally:
>         session.close()
>
>
> def _upgrade(session):
>     properties = session.query(EntityProperty).filter(
>         ~EntityProperty._ptype.in_(["AI", "AO", "AV"])
>     ).all()
>
>     for _property in properties:
>         _property._cov_increment = None
>
>
> def upgrade():
>     with session_scope() as session:
>         _upgrade(session)
>
>
> def downgrade():
>     pass
>
>
> This script queries the EntityPropertyTable and does some op on it.
>
> The second migration script is
>
> revision = 'ab47480a7be7'
> down_revision = u'ecbde8fa83e3'
> branch_labels = None
> depends_on = None
>
> from alembic import op   # noqa
> import sqlalchemy as sa  # noqa
> from sqlalchemy.sql import expression  # needed for expression.true() below
>
>
> def upgrade():
>     # add bacnet_object to EP
>     with op.batch_alter_table(u'entityproperties', schema=None) as batch_op:
>         batch_op.execute("PRAGMA foreign_keys=OFF;")
>         batch_op.add_column(
>             sa.Column(
>                 'bacnet_object', sa.Boolean(), nullable=True,
>                 server_default=expression.true()))
>
>
> def downgrade():
>     with op.batch_alter_table(u'entityproperties', schema=None) as batch_op:
>         batch_op.drop_column('bacnet_object')
>
>
> This script does operate in batch mode and adds a column called
> 'bacnet_object' to the EntityProperty table.
>
> Now when I try to downgrade to base 'downgrade -r base' I get the following
> error
>
> no such column: entityproperties.bacnet_object
>
> This error is while executing script 1. The execution of script 2 proceeds
> without any issues. When I look up the generated SQL for script1 I
> find this in the SQL statement
> Enter code here..
> entityproperties.bacnet_object AS entityproperties_bacnet_object
>
> The downgrade of script2 does indeed remove the bacnet_object column from
> the entityproperty table. Why is the SQL generated for script1 still looking
> for the bacnet_object column in the EntityProperty table?

This would mean your mappings used in script #1 still include the
bacnet_object column.

You haven't given me any information on how EntityProperty gets
created but it would appear that it runs table reflection upon import.
  This import would occur before any migration script runs, so when
script 2 drops the column you now have a stale mapping.

The general antipattern here is that you're referring to your
application's models within a migration script.   Since each migration
file represents a point in time of your schema design, one that might
be very different from what the current model is, to run any SQL
queries in a migration file you should create Table metadata directly
in the migration that explicitly includes the tables and columns you
need to query against, which would always exist as of that migration
version.
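For example, a migration that needs to run the query from the thread's first script could declare a lightweight, point-in-time table inline rather than importing the application's EntityProperty model. This is a sketch only; the table and column names are taken loosely from the snippets above and may not match the real schema:

```python
import sqlalchemy as sa

# Point-in-time definition: only the columns this migration actually touches,
# as they existed at this revision -- not the live application model.
entityproperties = sa.table(
    "entityproperties",
    sa.column("id", sa.Integer),
    sa.column("_ptype", sa.String),
)


def select_non_analog_ids(connection):
    # Same filter as the thread's _upgrade(), but against the inline table,
    # so dropping a column in a later revision cannot make this query stale.
    return [
        row[0]
        for row in connection.execute(
            sa.select(entityproperties.c.id).where(
                ~entityproperties.c._ptype.in_(["AI", "AO", "AV"])
            )
        )
    ]
```

Inside an actual migration, `connection` would come from `op.get_bind()`.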


>



Re: Alembic autogenerate ignore non mapped tables.

2017-10-15 Thread Mike Bayer
On Sat, Oct 14, 2017 at 11:16 PM, Srikanth Bemineni
 wrote:
> Hi,
>
> When I auto generate a migration using alembic, it automatically drops some
> dynamically created tables which are in the DB schema. These tables are
> dynamically created by my application. These class don't inherit declarative
> base. They are added to declarative base by the application during run time
> , but not during migration creation. How can alembic ignore these tables.?

check out the include_object parameter
http://alembic.zzzcomputing.com/en/latest/api/runtime.html#alembic.runtime.environment.EnvironmentContext.configure.params.include_object
which is the catchall hook for "ignore item" rules.
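For instance, if the dynamically created tables share a recognizable naming pattern (the `tmp_` prefix here is an assumption for illustration), the hook in env.py might look like:

```python
# Hypothetical include_object hook for env.py; the "tmp_" prefix is an
# assumption -- use whatever identifies your runtime-created tables.
def include_object(object_, name, type_, reflected, compare_to):
    if type_ == "table" and name.startswith("tmp_"):
        return False  # ignore dynamically created tables
    return True

# wired up in env.py via:
# context.configure(..., include_object=include_object)
```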




>
> Srikanth B
>



SQLAlchemy 1.2.0b3 Released

2017-10-13 Thread Mike Bayer
SQLAlchemy release 1.2.0b3 is now available.

Release 1.2.0b3 is possibly the last beta release before 1.2.0 final
is released. The release of 1.2.0b3 was delayed by two major factors.
One was the release of cx_Oracle 6.0, which is a complete rework of
the underlying client library for cx_Oracle that in turn required a
lot of work on the SQLAlchemy side to accommodate, which resulted in
SQLAlchemy's cx_Oracle dialect itself being reworked and modernized to
make more complete use of patterns that were introduced in the 5.x
series of cx_Oracle. The other was made possible by the advent of
Microsoft SQL Server for Linux, which has allowed SQL Server testing
to finally be part of SQLAlchemy's continuous integration
environment; as a result of this testing effort the library and test
suite have had many long-standing SQL Server issues repaired.

On top of those two efforts were the usual stream of issue fixes that
are backported into the 1.1.x series as well as additional behavioral
adjustments new to the 1.2 series.

The "What's New in SQLAlchemy 1.2?" [1] document as well as the
Changelog [2] will detail these changes as well as the changes since
the 1.1 series overall. Users should carefully review these documents
when testing applications against the 1.2 series, including for the
move from 1.2.0b2 to 1.2.0b3. We'd like to thank the many contributors
who helped with this release.

SQLAlchemy 1.2.0b3 is available on the Download Page. [3]

[1] http://www.sqlalchemy.org/docs/latest/changelog/migration_12.html
[2] http://www.sqlalchemy.org/changelog/CHANGES_1_2_0b3
[3] http://www.sqlalchemy.org/download.html



Re: importing from myapp

2017-10-03 Thread Mike Bayer
On Mon, Oct 2, 2017 at 9:06 PM, hatch  wrote:
> from myapp.mymodel import Base
> target_metadata = Base.metadata
>
> http://alembic.zzzcomputing.com/en/latest/autogenerate.html
>
> What is the preferred way to access myapp? As far as I can tell it's not
> normally available using this structure (doing the above gives ImportError),
> without altering sys.path. Is it expected that "myapp" is installed? Or
> another approach?

it's expected that "myapp" is installed in some way, for development I
usually just set PYTHONPATH or set up a virtualenv with pip install
-e.


>



Re: Recommended logger for migration scripts

2017-10-02 Thread Mike Bayer
On Sun, Oct 1, 2017 at 7:41 PM,   wrote:
> I've used in my generated migration script:
>
> log = logging.getLogger(__name__)
> log.setLevel(logging.INFO)
>
> for now and that seems to work; quick glance at Alembic’s code indicates
> that’s what Alembic uses as well. The util/messaging.py is used in other
> contexts.

inside your environment's env.py, Alembic places this:

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

that's reading the [loggers] and other logging sections from your
alembic.ini file.   You can add your own logger name to that file so
that your own logger is usable and also configurable without changing
any source code, such as where you added log.setLevel(logging.INFO).

You can use any logger by name, like alembic's is
logging.getLogger("alembic"), and as you noted you can get one with
your own namespace using logging.getLogger(__name__).
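As a sketch, registering a hypothetical "myapp" logger in alembic.ini would follow the same pattern as the stock file's existing logging sections (only the additions are shown; the [handlers] and [formatters] sections stay as generated):

```ini
# additions to the stock alembic.ini -- "myapp" is a placeholder name
[loggers]
keys = root, sqlalchemy, alembic, myapp

[logger_myapp]
level = INFO
handlers =
qualname = myapp
```

With this in place, `logging.getLogger("myapp")` in a migration script picks up the level and handlers from the ini file, with no setLevel() call in code.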




>
> Jens
>
>
>
> On Sunday, October 1, 2017 at 5:03:31 PM UTC+10, jens.t...@gmail.com wrote:
>>
>> Hi,
>>
>> I’d like to output some logging info from my migration script, which was
>> generated by Alembic. Other than Python’s own logging module, what is the
>> recommended way to hook into Alembic’s logger to use it’s formatting?
>>
>> Thanks!
>> Jens
>>



Re: Modeling One-to-One relationships and Unique constraints

2017-09-06 Thread Mike Bayer
On Wed, Sep 6, 2017 at 3:12 AM,   wrote:
> Hello,
>
> I am a bit puzzled over modeling One-to-One relationships. The example in
> the documentation says to use uselist flag on the relationship declaration.
> That does make sense to ensure scalars on both sides of the relationship.
> However, that's just an ORM flag and does not necessarily translate to the
> DB (e.g. using Alembic).
>
> In this context I miss the mention of a unique constraint. Could the uselist
> flag not be derived if a unique constraint was specified on the foreign key
> column? For example:
>
> class Parent(Base):
> __tablename__ = 'parent'
> id = Column(Integer, primary_key=True)
> child_id = Column(Integer, ForeignKey('child.id'), unique=True)
> child = relationship("Child", back_populates="parent")
>
> class Child(Base):
> __tablename__ = 'child'
> id = Column(Integer, primary_key=True)
> parent = relationship("Parent", back_populates="child") # uselist=False
> redundant?
>
> Here, the Parent.child_id column has a unique constraint which narrows the
> Many-to-One to a One-to-One relationship. Would this not make the uselist
> flag redundant?
>
> Or am I missing something?

One can build a one-to-one relationship *without* a unique constraint
being present, hence the uselist flag as a public accessor still has a
purpose.   Additionally, if your Parent class linked to Child using a
composite foreign key where only some of the columns had unique=True,
that again means having uselist as explicit is useful.Add to that,
sometimes the table has been reflected, and it's only in the last few
years that SQLAlchemy has had the ability to reflect unique
constraints, and even then, not on all backends.

So there's many cases where a unique constraint may or may not be
present, yet we still need the uselist flag.   None of these cases are
the very specific case you have here.   The precedent for uselist
setting itself to false implicitly would be that a standard
many-to-one, which involves foreign keys that link directly to a
primary key, and the relationship() construct performs a lot of
heuristics when it is configured in order to figure this out.  These
heuristics took years to get right.

Overall, it is optional from an ORM point of view to set unique
constraints on Table metadata, even if the backing database does have
those constraints in place.   But this is not the case for a primary
key, which is explicitly required by the ORM.   So it is easier for
the ORM to rely upon making decisions for uselist based on a primary
key constraint, which is a very reliable source of information, versus
the presence of a unique constraint, which is an unreliable source of
information (in that it is optional, might not be set, might not be
reflected, might be present in the database and/or the model and not
actually on the other side).

So to minimize confusion and try to stick to the principle of "There
should be one-- and preferably only one --obvious way to do it.",
there is no feature right now to attempt to auto-determine uselist
based on the combination of the presence of a unique constraint
combined with the correct primaryjoin geometry.
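To make the explicit form concrete, here is a minimal sketch following the example classes from the question above; the point is simply that `uselist=False` is stated on the relationship and is not inferred from `unique=True`:

```python
import sqlalchemy as sa
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()


class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    child_id = sa.Column(sa.Integer, sa.ForeignKey("child.id"), unique=True)
    # many-to-one: scalar by the primary-key heuristics described above
    child = relationship("Child", back_populates="parent")


class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    # stated explicitly: the unique=True on Parent.child_id is not consulted
    parent = relationship("Parent", back_populates="child", uselist=False)


configure_mappers()
```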




>
> Thanks!
> Jens
>



SQLAlchemy 1.1.14 released

2017-09-05 Thread Mike Bayer
SQLAlchemy release 1.1.14 is now available.

This release includes a variety of fixes for the ORM. Two of the
issues are critical stability issues involving garbage-collection
related bugs that are more likely to be seen within the Pypy
interpreter however are not necessarily limited to this platform.

Changelog for 1.1.14 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_1_14

SQLAlchemy 1.1.14 is available on the Download Page at:
http://www.sqlalchemy.org/download.html



Re: Rename an existing constraint

2017-08-30 Thread Mike Bayer
On Wed, Aug 30, 2017 at 7:18 AM,   wrote:
> Given an Alembic migration, would you recommend the following code to rename
> constraints?
>
> from alembic import op
> import sqlalchemy as sa
> from srv.orm.meta import NAMING_CONVENTION # as per Pylons cookiecutter
> template
>
> def upgrade():
>
> connection = op.get_bind()
> engine = connection.engine
> metadata = sa.MetaData(naming_convention=NAMING_CONVENTION)
>
> for table_name in engine.table_names():
> table = sa.Table(table_name, metadata, autoload_with=connection)
> for fk in table.foreign_keys:
> op.drop_constraint(fk.name, table_name, type_="foreignkey")
> fk.name = None
> op.invoke(CreateForeignKeyOp.from_constraint(fk))
>
> For downgrade() we'd create a metadata without naming_convention, thus
> falling back to the db's naming (which is where we're coming from).
>
> However, how would I go about iterating over indexes, unique constraints,
> and check constraints of a table?

same idea, the table has table.indexes for the Index objects,
table.constraints in fact has all of ForeignKeyConstraint,
UniqueConstraint, CheckConstraint so you could iterate just that and
do an isinstance() to take the right course of action.
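A rough sketch of that iteration; the table and constraint names here are made up for illustration, and a real migration would drop/recreate each constraint rather than just collect names:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
t = sa.Table(
    "example", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("email", sa.String(50)),
    sa.Column("age", sa.Integer),
    sa.UniqueConstraint("email", name="uq_example_email"),
    sa.CheckConstraint("age >= 0", name="ck_example_age"),
)


def classify_constraints(table):
    # table.constraints holds primary key, foreign key, unique and check
    # constraints together; isinstance() picks the course of action.
    kinds = {"unique": [], "check": [], "foreign_key": [], "primary_key": []}
    for c in table.constraints:
        if isinstance(c, sa.ForeignKeyConstraint):
            kinds["foreign_key"].append(c.name)
        elif isinstance(c, sa.UniqueConstraint):
            kinds["unique"].append(c.name)
        elif isinstance(c, sa.CheckConstraint):
            kinds["check"].append(c.name)
        elif isinstance(c, sa.PrimaryKeyConstraint):
            kinds["primary_key"].append(c.name)
    return kinds
```

Indexes are separate: iterate `table.indexes` for the Index objects.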


>
> Thank you!
>
>
> On Wednesday, August 30, 2017 at 7:20:32 AM UTC+10, jens.t...@gmail.com
> wrote:
>>
>> Thank you, Mike!
>>
>> I’ll take a closer look at your proposed code this week.
>>
>> I am curious though: not even MySQL has a rename feature, is that because
>> of consistency? PostgreSQL adds ALTER TABLE … RENAME CONSTRAINT with 9.2
>> though. (Boy, I keep running into issues that keep pushing me towards
>> Postgres.)
>>
>> Jens
>
>
>



Re: Rename an existing constraint

2017-08-29 Thread Mike Bayer
Unfortunately there's no RENAME CONSTRAINT directive, so dropping and
recreating is how it has to go.

Alembic and SQLAlchemy together do have the ability to return lists of
constraints and produce CREATE instructions for them.   Recently I did
some of this related to foreign keys, and the same idea would apply to
indexes, unique constraints, CHECK constraints.   Here's two ways to
do that to give you some ideas:


from alembic.operations.ops import CreateForeignKeyOp
from alembic.migration import MigrationContext
from alembic.operations import Operations

with engine.connect() as conn:

ctx = MigrationContext.configure(conn)
op = Operations(ctx)
m = MetaData(naming_convention={...})

t = Table(table_name, m, autoload_with=conn)
for fk in t.foreign_keys:
op.drop_constraint(
fk.name, table_name, type_="foreignkey")

# give fk the name we want
fk.name = "somename"

# or name it None to get the naming convention
# fk.name = None

create_fk = CreateForeignKeyOp.from_constraint(fk)

op.invoke(create_fk)


or using the inspector, more manual / explicit:

from sqlalchemy import inspect
with engine.connect() as conn:

    ctx = MigrationContext.configure(conn)
    op = Operations(ctx)
    insp = inspect(conn)
    fks = insp.get_foreign_keys(table_name)

    for fk in fks:
        op.drop_constraint(
            fk['name'], table_name, type_="foreignkey")

    for fk in fks:
        op.create_foreign_key(
            fk['name'], table_name,
            fk['referred_table'],
            fk['constrained_columns'],
            fk['referred_columns'],
            onupdate=fk['options'].get('onupdate'),
            ondelete=fk['options'].get('ondelete'),
            deferrable=fk['options'].get('deferrable'),
            initially=fk['options'].get('initially'),
        )







On Tue, Aug 29, 2017 at 7:01 AM,   wrote:
> Hello,
>
> I started out migrating my db schema forward using Alembic, and without any
> constraint naming convention. That caused constraints to be named using
> MySQL’s default naming. Alas, now I added a naming convention (see doc here)
> which is different than the db’s naming.
>
> So now I face a mix of current db-style constraint names, and new
> convention-style constraint names. That’ll cause me a headache.
>
> What is the recommended way of renaming all existing constraint names? Is
> dropping/creating them the only way, or did I fail finding a
> “rename_constraint()” function in Alembic?
>
> Thanks!
> Jens
>



Re: first time migration with existing database ?

2017-08-08 Thread Mike Bayer
On Tue, Aug 8, 2017 at 6:14 AM, Sandeep Srinivasa  wrote:
> hi guys,
> I'm just taking over a legacy piece of code where tables were hand created
> (SQL).
>
> I'm trying to migrate this over to alembic. I dont have (want to use) any
> python models, so this is alembic by itself.
>
> I'm unsure of two things:
> 1. How do I create the first migration - for my first migration, I want to
> snapshot the existing structure of the database. Most of the autogenerate
> takes place only if there is an existing set of python models. How do I do
> this in my case

that is a fantastic question and I don't think anyone has asked that
before.So I worked up an example, we should add it to the
"Cookbook" documentation.   This will print out just the python code
you'd stick inside of upgrade() by hand, with some more it could write
the whole migration file out at once but I think this will do what you
need:

from sqlalchemy import Column, Integer, ForeignKey, create_engine, \
String, Index, MetaData
from sqlalchemy.ext.declarative import declarative_base
from alembic.operations import ops
from alembic.autogenerate import render_python_code


def make_a_database(engine):
Base = declarative_base()

class A1(Base):
__tablename__ = 'a1'
id = Column(Integer, primary_key=True)

class A2(Base):
__tablename__ = 'a2'
id = Column(Integer, primary_key=True)

class B(Base):
__tablename__ = 'b'
id = Column(Integer, primary_key=True)
a1_id = Column(ForeignKey('a1.id'))
a2_id = Column(ForeignKey('a2.id'))

class C(Base):
__tablename__ = 'c'

foo = Column(String(50), primary_key=True)
bar = Column(String(10))

Index("c_bar", C.bar)

Base.metadata.create_all(engine)


def metadata_to_migration(metadata):
    migration_script = ops.MigrationScript(
        'initial_rev',
        ops.UpgradeOps(
            ops=[
                ops.CreateTableOp.from_table(table)
                for table in metadata.sorted_tables
            ]
            + [
                ops.CreateIndexOp.from_index(index)
                for table in metadata.sorted_tables
                for index in table.indexes
            ]
        ),
        ops.DowngradeOps(),
    )

    print(render_python_code(migration_script.upgrade_ops))


engine = create_engine("sqlite://")
make_a_database(engine)

reflected_metadata = MetaData()
reflected_metadata.reflect(engine)
metadata_to_migration(reflected_metadata)




> 2. How do I apply the migration ... without applying the migration. Since I
> need alembic to start managing my db, I realize I need to get it to create a
> "migrate_version" . However, I cant really apply the migration. How do I do
> this ?

For this use case you use the "alembic stamp" command, the section at
http://alembic.zzzcomputing.com/en/latest/cookbook.html#building-an-up-to-date-database-from-scratch
talks about this command.




>
> Thanks guys!
> regards
> sandeep
>



Re: Bypass current version check when stamping.

2017-08-05 Thread Mike Bayer
On Aug 5, 2017 9:05 AM, "Philippe Lavoie" <philippe.lavo...@gmail.com>
wrote:

Ah ok, for sure you're thinking about covering more cases than I am.


consider if you have multiple, independent branches (eg are not connected
down to the root ) and you only want to stamp for one of them.   If it sees
unfamiliar identifiers in the table it doesn't know whether or not to
delete it or leave it alone.



Thanks for your reply :)

On Sat, Aug 5, 2017 at 9:02 AM, Mike Bayer <clas...@zzzcomputing.com> wrote:

>
>
> On Aug 5, 2017 8:40 AM, "Philippe Lavoie" <philippe.lavo...@gmail.com>
> wrote:
>
> Let's say you're on branch and you have a migration, you test it out, it
> works: everything is cool. The version in `public.alembic_version` is your
> latest migration.
> Before you merge it you have to switch to a different branch to work on
> something else and you also have a migration there.
>
> You want to use `alembic stamp` to move back to the previous state, but
> `alembic` will complain that it can't find the revision stored in
> `public.alembic_version`, since it was in your other branch.
>
> The work around is to just drop the table and redo the stamp. I think it
> would be convenient for `alembic` to bypass this check, what do you think?
>
>
>
> It's unfortunately not possible in the general case because a stamp may or
> may not affect existing rows in the table depending on their relationship
> to the stamp you are specifying.  Without the version files present, it
> doesn't know anything about them.  If you're sure about the rows which go
> away, you can DELETE them (no need for DROP).
>
>
> I suppose a "purge-all" option to "stamp" could add this delete as a
> convenience but it would need to carry appropriate warnings.
>
>
>
>




Re: Bypass current version check when stamping.

2017-08-05 Thread Mike Bayer
On Aug 5, 2017 8:40 AM, "Philippe Lavoie" 
wrote:

Let's say you're on branch and you have a migration, you test it out, it
works: everything is cool. The version in `public.alembic_version` is your
latest migration.
Before you merge it you have to switch to a different branch to work on
something else and you also have a migration there.

You want to use `alembic stamp` to move back to the previous state, but
`alembic` will complain that it can't find the revision stored in
`public.alembic_version`, since it was in your other branch.

The work around is to just drop the table and redo the stamp. I think it
would be convenient for `alembic` to bypass this check, what do you think?



It's unfortunately not possible in the general case because a stamp may or
may not affect existing rows in the table depending on their relationship
to the stamp you are specifying.  Without the version files present, it
doesn't know anything about them.  If you're sure about the rows which go
away, you can DELETE them (no need for DROP).


I suppose a "purge-all" option to "stamp" could add this delete as a
convenience but it would need to carry appropriate warnings.
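As a sketch of the DELETE alternative, using a plain sqlite3 connection and the standard single-column alembic_version layout (on a real deployment the table may live in a non-default schema, and the revision id here is made up):

```python
import sqlite3


def purge_revision(conn, revision):
    # Remove only the stale revision row rather than dropping the
    # whole alembic_version table; other branches' rows survive.
    conn.execute(
        "DELETE FROM alembic_version WHERE version_num = ?", (revision,)
    )


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) PRIMARY KEY)")
conn.execute("INSERT INTO alembic_version VALUES ('abc123')")
conn.execute("INSERT INTO alembic_version VALUES ('def456')")
purge_revision(conn, "abc123")
```

After the delete, a fresh `alembic stamp` can record whichever revision the current branch expects.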




SQLAlchemy 1.1.13 released

2017-08-03 Thread Mike Bayer
SQLAlchemy release 1.1.13 is now available.

This release modifies the just-released fix for Oracle "WITH_UNICODE"
mode based on feedback from cx_Oracle developers, to resolve a
performance regression caused by this change.

Changelog for 1.1.13 is at: https://www.sqlalchemy.org/changelog/CHANGES_1_1_13

SQLAlchemy 1.1.13 is available on the Download Page at:
https://www.sqlalchemy.org/download.html



SQLAlchemy 1.0.19 released

2017-08-03 Thread Mike Bayer
SQLAlchemy release 1.0.19 is now available.

This release modifies the just-released fix for Oracle "WITH_UNICODE"
mode based on feedback from cx_Oracle developers, to resolve a
performance regression caused by this change.

Changelog for 1.0.19 is at:  https://www.sqlalchemy.org/changelog/CHANGES_1_0_19

SQLAlchemy 1.0.19 is available on the Download Page at:
https://www.sqlalchemy.org/download.html



SQLAlchemy 1.1.12 released

2017-07-24 Thread Mike Bayer
SQLAlchemy release 1.1.12 is now available.

This release repairs an issue in the test suite that would prevent the
suite from passing on Python version 3.6.2. The release is made at the
same time as that of release 1.0.18 and 1.2.0b2. Also included are
fixes related to ORM use of JSON NULL values, subquery eager loading,
and a stability enhancement involving the identity map when used under
high concurrency.

Changelog for 1.1.12 is at: https://www.sqlalchemy.org/changelog/CHANGES_1_1_12

SQLAlchemy 1.1.12 is available on the Download Page at:
https://www.sqlalchemy.org/download.html



SQLAlchemy 1.0.18 released

2017-07-24 Thread Mike Bayer
SQLAlchemy release 1.0.18 is now available.

This release repairs an issue in the test suite that would prevent the
suite from passing on Python version 3.6.2. The release is made at the
same time as that of release 1.1.12 and 1.2.0b2. Additionally, a
backport from release 1.1.7 allowing cx_Oracle 5.3 to work correctly
is included in this release.

Changelog for 1.0.18 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_18

SQLAlchemy 1.0.18 is available on the Download Page at:
http://www.sqlalchemy.org/download.html



Re: Altering the behavior of AddColumn

2017-07-07 Thread Mike Bayer
More straightforward would be a pull request that adds a "mysql_after"
keyword to both op.add_column() and op.modify_column().

On Fri, Jul 7, 2017 at 1:33 AM,   wrote:
>
>
> On Tuesday, August 27, 2013 at 9:31:29 AM UTC-5, Michael Bayer wrote:
>>
>>
>> On Aug 26, 2013, at 11:49 PM, Samer Atiani  wrote:
>>
>> Hello,
>>
>> All my tables have create_date and update_date columns in them, and I like
>> to keep these columns as the last two columns in any table for convention
>> reasons. However, when you use alembic autogenerate to generate schema
>> migrations to add columns, the columns are always added to the end of the
>> table. With time, the create_date and update_date columns in my tables will
>> end up being in the middle of the column list in my MySQL database.
>>
>> So I was trying to alter this behavior by making alembic use MySQL's
>> "ALTER TABLE x ADD COLUMN y AFTER z" feature to always try to add columns
>> before create_date or update_date. The only way I could find out how is to
>> override the visit_add_column method after EnvironmentContext configuration.
>> I achieved this by changing the "run_migrations_online()" method in
>> alembic/env.py so that it looks like:
>>
>>
>> However, this feels quite brittle and is probably touching internal stuff
>> that it shouldn't touch. My question to you is: is there a better way to
>> achieve this? I looked at SQLAlchemy Core Events, but I couldn't find an
>> event that could correspond to adding columns, nor is the alembic code
>> firing any events I could see when it add columns (unlike, for example, when
>> it emits a CREATE TABLE statements).
>>
>>
>>
>> AddColumn is a SQL expression element so the standard way we want to
>> override those is to use @compiles
>> (http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html#changing-the-default-compilation-of-existing-constructs).
>> As you might have noticed, Alembic is already using that system internally
>> in order to provide compile rules for constructs.   So we want to override
>> that, however the roadblock is that the @compiles system currently doesn't
>> have a nice way of letting us override the @compiles of an existing
>> @compiles (should be added as a feature).  So there's a slight bit of
>> internals we need to get at Alembic's already present @compiles rule for
>> now, but the rest is straightforward:
>>
>> from sqlalchemy.ext.compiler import compiles
>> from alembic.ddl.base import AddColumn
>>
>> # ideally, the @compiles system would have some way of getting
>> # us the "existing" @compiles decorator, so this part is the
>> # hack
>> specs = AddColumn.__dict__.get('_compiler_dispatcher').specs
>> existing_dispatch = specs.get('mysql', specs['default'])
>>
>> @compiles(AddColumn, "mysql")
>> def add_column(element, compiler, **kw):
>>     text = existing_dispatch(element, compiler, **kw)
>>     if "after" in element.column.info:
>>         text += " AFTER %s" % element.column.info['after']
>>     return text
>>
>> from sqlalchemy import Column, Integer
>> from alembic.migration import MigrationContext
>> from alembic.operations import Operations
>>
>> ctx = MigrationContext.configure(dialect_name="mysql", opts={"as_sql": True})
>> op = Operations(ctx)
>>
>> op.add_column("t", Column('y', Integer))
>>
>> op.add_column("t", Column('x', Integer, info={"after": "y"}))
>
>
>
> I'm interested in applying a similar solution to handle alter columns.
>
> The problem I'm seeing is that  MySQLImpl.alter_column() does not pass *kw
> to MySQLChangeColumn or  MySQLModifyColumn. So there doesn't seem to be a
> simple way that I can write a @compiles(MySQLChangeColumn, 'mysql')
> decorated function, and the interface would have to be something like
> element.info  instead of element.column.info too.   Also, can you explain
> why I have to use MySQLChangeColumn/MySQLModifyColumn instead of AlterColumn
> in the @compiles decorator?
>
> Would it be possible to get support for this into alembic proper?  I can get
> it to work, but I have to monkey patch MySQLImpl.
>
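The delegation pattern in Mike's example above — look up the compile rule that is already registered for a dialect, wrap it, and append a dialect-specific clause — can be sketched standalone without Alembic or SQLAlchemy installed. Everything here (the `specs` dict, the renderer signature taking a plain dict) is a simplified stand-in for the real `_compiler_dispatcher` internals, not Alembic's actual API:

```python
# Simplified stand-in for SQLAlchemy's @compiles dispatch table:
# dialect name -> renderer function. "default" plays the role of the
# generic compile rule that ships with the library.
specs = {
    "default": lambda element: "ALTER TABLE %s ADD COLUMN %s %s" % (
        element["table"], element["name"], element["type"]),
}

# Grab whatever rule is currently registered for MySQL, falling back to
# the generic one -- mirroring specs.get('mysql', specs['default']).
existing_dispatch = specs.get("mysql", specs["default"])

def add_column_mysql(element):
    # Delegate to the wrapped rule, then append the MySQL-only clause.
    text = existing_dispatch(element)
    if "after" in element:
        text += " AFTER %s" % element["after"]
    return text

# Register the wrapper as the new MySQL rule.
specs["mysql"] = add_column_mysql

print(specs["mysql"]({"table": "t", "name": "x", "type": "INTEGER", "after": "y"}))
# prints: ALTER TABLE t ADD COLUMN x INTEGER AFTER y
```

The key design point is that the wrapper captures the previously registered rule in a closure before overwriting the registry entry, so the original rendering logic still runs first.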



SQLAlchemy 1.1.11 Released

2017-06-19 Thread mike bayer



SQLAlchemy release 1.1.11 is now available.

Release 1.1.11 includes a series of fixes providing 
forwards-compatibility with a variety of new behaviors in supported 
databases. A few other core fixes and one ORM-related fix are also included.


Changelog for 1.1.11 is at: 
http://www.sqlalchemy.org/changelog/CHANGES_1_1_11


SQLAlchemy 1.1.11 is available on the Download Page at: 
http://www.sqlalchemy.org/download.html




Re: Migration fails when changing to single table inheritance

2017-06-07 Thread mike bayer



On 06/07/2017 04:44 PM, Michael wrote:
Hi all, I have a class called MediaChapter(Base), which I've refactored 
into MediaBase(Base) and MediaChapter(MediaBase). When I run the 
migration, I see:


psycopg2.IntegrityError: insert or update on table "mediachapter" violates
foreign key constraint "fk_mediachapter_id_mediabase"
DETAIL: Key (id)=(570) is not present in table "mediabase".


here's the real error with the SQL:

sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) insert or 
update on table "mediachapter" violates foreign key constraint 
"fk_mediachapter_id_mediabase"

DETAIL:  Key (id)=(570) is not present in table "mediabase".
 [SQL: 'ALTER TABLE mediachapter ADD CONSTRAINT 
fk_mediachapter_id_mediabase FOREIGN KEY(id) REFERENCES mediabase (id)']


the error means that your "mediachapter" table contains an id, "570", 
which is not present in the "mediabase" table.


it looks like you are starting with a populated "mediachapter" table 
then adding a new table "mediabase".  Before you create the constraint, 
you need to run an INSERT on "mediabase" that selects from 
"mediachapter", like:


INSERT INTO mediabase (id, col1, col2, ...) SELECT id, col1, col2, .. 
FROM mediachapter
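In the migration itself that backfill would run after op.create_table('mediabase', ...) and before the foreign key constraint is created, typically via op.execute(). A minimal sketch of building that INSERT ... SELECT follows; the column list is an assumption taken from the MediaBase model posted in this thread, and in practice only columns that actually exist on the old "mediachapter" table can be selected (a newly introduced discriminator like media_type would need a constant value instead):

```python
# Columns assumed to be shared between the old single-table "mediachapter"
# and the new parent "mediabase" -- illustrative, not verified against the
# poster's actual schema.
SHARED_COLUMNS = ["id", "media_type", "uuid", "localizedname", "url",
                  "language_id", "presenter_name", "source_material"]

def backfill_sql(parent, child, columns):
    """Build the INSERT ... SELECT that copies shared rows into the parent
    table, so the new foreign key can be created without violations."""
    col_list = ", ".join(columns)
    return "INSERT INTO %s (%s) SELECT %s FROM %s" % (
        parent, col_list, col_list, child)

# Inside upgrade() this would be:
#     op.execute(backfill_sql("mediabase", "mediachapter", SHARED_COLUMNS))
print(backfill_sql("mediabase", "mediachapter", SHARED_COLUMNS))
```

Ordering is the whole point: create the parent table, copy the rows up, and only then add the FK from "mediachapter.id" to "mediabase.id".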









I'm not sure exactly what this means because I create a mediabase.id.

I did
    ws-alembic -c kjvrvg/conf/development.ini -x packages=all revision --auto -m "MediaBase-MediaChapter subclassing"


-- SUCCESS

then
    ws-alembic -c kjvrvg/conf/development.ini -x packages=all upgrade head


-- FAILED


Here are my models, MediaBase:

class MediaBase(Base):
    #: The table in the database
    __tablename__ = "mediabase"

    #: Database primary key for the row (running counter)
    id = Column(Integer, autoincrement=True, primary_key=True)

    # table inheritance
    media_type = Column(String(32), nullable=False)

    #: Publicly exposed non-guessable
    uuid = Column(UUID(as_uuid=True), default=uuid4)

    localizedname = Column(Unicode(128), default=None)

    #: url
    url = Column(Unicode(384), default=None)

    # full iso language-locale identifier i.e. zh-Hans-US
    language_id = Column(String(16), default=None)

    # name of the person presenting the material
    presenter_name = Column(Unicode(64), default=None)
    source_material = Column(Unicode(128), default=None)

    # table inheritance
    __mapper_args__ = {'polymorphic_on': media_type}

    def __repr__(self):
        """Shell and debugger presentation."""
        return '{} ({}) {} <{}>'.format(
            self.localizedname, self.language_id, str(self.uuid), self.url)

    def __str__(self):
        """Python default and admin UI string presentation."""
        return '{} ({}) presenter: {} source: {} <{}>'.format(
            self.localizedname, self.language_id, self.presenter_name,
            self.source_material, self.url)


and MediaChapter:

class MediaChapter(MediaBase):

    #: The table in the database
    __tablename__ = "mediachapter"

    __mapper_args__ = {'polymorphic_identity': 'chapter'}

    id = Column(Integer, ForeignKey('mediabase.id'), primary_key=True)

    #: Which chapter this media is part of
    chapter_id = Column(Integer, ForeignKey('chapter.id'))
    chapter = relationship("Chapter", back_populates="mediachapter")




and finally here is the auto-generated migration. I put all files in a 
github Gist.


https://gist.github.com/mazz/7d63e521316859f4ae852e5cea5d84eb



Any suggestions?
Mazz

"""MediaBase-MediaChapter subclassing

Revision ID: a00980918d75
Revises: e74ba4203098
Create Date: 2017-06-07 16:10:29.807437

"""

# revision identifiers, used by Alembic.
revision = 'a00980918d75'
down_revision = 'e74ba4203098'
branch_labels = None
depends_on = None

import datetime
import websauna.system.model.columns
from sqlalchemy.types import Text  # Needed for proper creation of JSON
# fields as Alembic inserts an astext_type=Text() row

from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('mediabase',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('media_type', sa.String(length=32), nullable=False),
        sa.Column('uuid', postgresql.UUID(as_uuid=True), nullable=True),
        sa.Column('localizedname', sa.Unicode(length=128), nullable=True),
        sa.Column('url', sa.Unicode(length=384), nullable=True),
        sa.Column('language_id', sa.String(length=16), nullable=True),
        sa.Column('presenter_name', sa.Unicode(length=64), nullable=True),
        sa.Column('source_material', sa.Unicode(length=128), nullable=True),
        sa.PrimaryKeyConstraint('id', name=op.f('pk_mediabase'))
    )
    op.alter_column('group', 'created_at',
        existing_type=postgresql.TIMESTAMP(timezone=True),
        type_=websauna.system.model.columns.UTCDateTime(),
        existing_nullable=True)
    op.alter_column('group', 'updated_at',
        existing_type=postgresql.TIMESTAMP(timezone=True),
        type_=websauna.system.model.columns.UTCDateTime(),
        existing_nullable=True)
 

SQLAlchemy 1.1.8 Released

2017-03-31 Thread mike bayer



SQLAlchemy release 1.1.8 is now available.

Release 1.1.8 is only a few days past 1.1.7; however, it is being released 
early in order to deliver a few fixes requested by downstream projects. 
In particular, a regression from the 1.0.x series involving the 
sqlalchemy.ext.mutable extension is fixed.


Changelog for 1.1.8 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_1_8

SQLAlchemy 1.1.8 is available on the Download Page at: 
http://www.sqlalchemy.org/download.html




SQLAlchemy 1.1.7 Released

2017-03-27 Thread mike bayer



SQLAlchemy release 1.1.7 is now available.

Release 1.1.7 includes several bugfixes, one of which addresses an issue 
introduced in 1.1.6 as a result of performance enhancements for joined 
eager loading, which presents as a race condition under certain 
multithreaded scenarios. Fixes also include an important adjustment in 
the cx_Oracle dialect to accommodate changes in cx_Oracle release 5.3, 
resolution of an extremely small memory leak which could occur in the 
relatively unusual case that an arbitrarily high number of savepoints 
were established on a single Connection object, as well as an important 
fix in the "schema translate" feature.


Changelog for 1.1.7 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_1_7

SQLAlchemy 1.1.7 is available on the Download Page at: 
http://www.sqlalchemy.org/download.html




SQLAlchemy 1.0.17 Released

2017-01-18 Thread mike bayer

SQLAlchemy release 1.0.17 is now available.

The 1.0.x series is well into maintenance mode and most fixes and 
enhancements are targeted only at the 1.1.x series. This release 
includes only one ORM related fix as well as repairs to deprecation 
warnings that are more prevalent with the Python 3.6 interpreter.


Changelog for 1.0.17 is at: 
http://www.sqlalchemy.org/changelog/CHANGES_1_0_17


SQLAlchemy 1.0.17 is available on the Download Page at:

http://www.sqlalchemy.org/download.html



  1   2   >