[sqlalchemy] SQLAlchemy 1.0.1 Released
SQLAlchemy release 1.0.1 is now available. This is a quick bug-fix release that repairs several regressions identified in the 1.0.0 release that were not found during the beta period. All users of 1.0.0 are encouraged to upgrade to 1.0.1.

Key elements of this release include: fixes for the NEVER_SET symbol leaking into queries in some cases; fixes to SQLite when using DDL in conjunction with referential integrity enabled; a fix to the EXISTS construct which primarily impacts queries that use special datatypes; and repairs to the Firebird dialect regarding the new LIMIT/OFFSET features.

In order to accommodate some of these fixes, there are three additional behavioral changes in 1.0.1: a new warning is emitted when using DDL with SQLite in conjunction with mutually-dependent foreign keys (e.g. a reference cycle); a new warning is emitted when running ORM relationship comparisons when the target object contains the value None for any of the Python-side column values; and a change regarding which data values are used within a relationship comparison that uses the != operator, making the behavior consistent with that of the == operator in the same context. The migration notes contain updates for all three of these changes, and each is linked directly from the changelog, which should be carefully reviewed.

Changelog for 1.0.1 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_1

SQLAlchemy 1.0.1 is available on the download page at: http://www.sqlalchemy.org/download.html

-- You received this message because you are subscribed to the Google Groups sqlalchemy group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy. For more options, visit https://groups.google.com/d/optout.
[sqlalchemy] Access __tablename__ in server_defaults?
Would it somehow be possible to access the __tablename__ in server_default? What I'm looking for is something like this:

    class PositionMixin(object):
        @declared_attr
        def position(cls):
            return Column(
                Integer,
                server_default=text("nextval('%(__tablename__)s_id_seq')"))
Re: [sqlalchemy] Access __tablename__ in server_defaults?
If you are using __tablename__, what happens if you just refer to cls.__tablename__ in that method?

On 4/23/15 3:46 PM, Jacob Magnusson wrote:
> Would it somehow be possible to access the __tablename__ in server_default? What I'm looking for is something like this:
>
>     class PositionMixin(object):
>         @declared_attr
>         def position(cls):
>             return Column(
>                 Integer,
>                 server_default=text("nextval('%(__tablename__)s_id_seq')"))
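Following the reply's suggestion, a minimal sketch of what this could look like. The Widget class is a hypothetical model for illustration; the imports shown are the SQLAlchemy 1.4+ locations (older versions import declarative_base and declared_attr from sqlalchemy.ext.declarative):

```python
from sqlalchemy import Column, Integer, text
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class PositionMixin(object):
    @declared_attr
    def position(cls):
        # cls here is the mapped class itself, so its __tablename__
        # is available by the time this attribute is evaluated
        return Column(
            Integer,
            server_default=text("nextval('%s_id_seq')" % cls.__tablename__))

class Widget(PositionMixin, Base):  # hypothetical model
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
```

The server_default then embeds the per-class table name, e.g. Widget's position column gets nextval('widget_id_seq').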
[sqlalchemy] Add information to reflected tables
I have a couple of tables that I want to reflect. The first is a data table where one of the columns is a foreign key to the second table. If I used SQLAlchemy declarative_base, a query might look something like this:

    session.query(Client.name, Suburb.label).join(Suburb)
    # In the Client class there is an attribute
    # suburb_id = Column(Integer, ForeignKey(Suburb.id))

However, this foreign key is not specified in the schema (we're using postgres 9.2), but we know all the columns that look like something_id are foreign keys, so I've been defining them that way using SQLAlchemy.

My problem is, although we have a fixed number of property tables (suburb, country, join_date, ...) - each data table (per client) can have a different set of columns. This hasn't been much of a problem so far, since we only have a few *types* of client data tables, so the combinations have been limited. However, I'd like to cater for changes in the future.

If I reflect the table using SQLAlchemy, the resultant table will not have the ForeignKey columns compared to if I did it manually. Is there a way to add these in after reflection? Or is my only option to use reflected tables and explicit join conditions? Something like:

    client_table_1 = Table('client_table_1', metadata, autoload=True,
                           autoload_with=engine, schema='client_1')
    session.query(client_table_1.c.name, Suburb.label).join(
        Suburb, client_table_1.c.suburb_id == Suburb.id)
    # Explicit joins only from now on
Re: [sqlalchemy] Add information to reflected tables
On 4/23/15 5:15 PM, Jeffrey Yan wrote:
> I have a couple of tables that I want to reflect. The first is a data table where one of the columns is a foreign key to the second table. If I used SQLAlchemy declarative_base, a query might look something like this:
>
>     session.query(Client.name, Suburb.label).join(Suburb)
>     # In the Client class there is an attribute
>     # suburb_id = Column(Integer, ForeignKey(Suburb.id))
>
> However, this foreign key is not specified in the schema (we're using postgres 9.2), but we know all the columns that look like something_id are foreign keys, so I've been defining them that way using SQLAlchemy.
>
> My problem is, although we have a fixed number of property tables (suburb, country, join_date, ...) - each data table (per client) can have a different set of columns. This hasn't been much of a problem so far, since we only have a few *types* of client data tables, so the combinations have been limited. However, I'd like to cater for changes in the future.
>
> If I reflect the table using SQLAlchemy, the resultant table will not have the ForeignKey columns compared to if I did it manually. Is there a way to add these in after reflection?
You can do this by instrumenting the process of the Table being built up. It's weird enough that I had to write a demo to verify it works, so here it is:

    from sqlalchemy import create_engine
    from sqlalchemy import event
    from sqlalchemy import MetaData, ForeignKey, Column
    import re

    e = create_engine("postgresql://scott:tiger@localhost/test")
    conn = e.connect()
    trans = conn.begin()
    conn.execute("create table foo (id integer primary key)")
    conn.execute("create table bar (id integer primary key, foo_id integer)")

    metadata = MetaData()

    @event.listens_for(Column, "before_parent_attach")
    def associate_fk(column, table):
        # if you want to limit the event's scope; a good idea
        # else this will take place for Column objects everywhere
        if table.metadata is not metadata:
            return
        m = re.match(r"(.+)_id", column.name)
        if m:
            tname = m.group(1)
            column.append_foreign_key(ForeignKey('%s.id' % tname))

    metadata.reflect(conn)

    foo = metadata.tables['foo']
    bar = metadata.tables['bar']

    assert bar.c.foo_id.references(foo.c.id)

> Or is my only option to use reflected tables and explicit join conditions? Something like:
>
>     client_table_1 = Table('client_table_1', metadata, autoload=True,
>                            autoload_with=engine, schema='client_1')
>     session.query(client_table_1.c.name, Suburb.label).join(
>         Suburb, client_table_1.c.suburb_id == Suburb.id)
>     # Explicit joins only from now on
[sqlalchemy] query with use of index
    create table posts (
        id serial primary key,
        view_count integer default 0 not null,
        vote_count integer default 0 not null
    );

    create index on posts((case view_count when 0 then 0 else vote_count::float / view_count end));

    class Post(SQL_DB.meta):
        __table__ = sa.Table('posts', SQL_DB.meta.metadata, autoload=True)

I need to query Post, ordered by vote_count/view_count, with use of the index.
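No answer appears in the thread, but the general requirement is that Postgres will only consider an expression index when the ORDER BY expression matches the indexed expression. A minimal Core-level sketch of building the matching CASE expression (this uses the 1.4+/2.0 positional select() and dictionary case() forms; on 1.0-era SQLAlchemy the spellings were select([...]) and case([...])):

```python
import sqlalchemy as sa

# Lightweight stand-in for the posts table, matching the DDL above
posts = sa.table(
    'posts',
    sa.column('id'),
    sa.column('view_count'),
    sa.column('vote_count'),
)

# Reproduce the indexed expression: a simple CASE on view_count
# (value=... renders "CASE posts.view_count WHEN ..."), falling back
# to vote_count cast to float divided by view_count.
ratio = sa.case(
    {0: 0},
    value=posts.c.view_count,
    else_=sa.cast(posts.c.vote_count, sa.Float) / posts.c.view_count,
)

stmt = sa.select(posts.c.id).order_by(ratio.desc())
print(stmt)
```

With the ORM, the same ratio expression could be passed to query(Post).order_by(ratio.desc()); whether the planner actually uses the index is then a matter of checking EXPLAIN output against the index definition.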
Re: [sqlalchemy] Generating Correlated Subqueries
On Sunday, April 19, 2015 at 7:08:41 PM UTC-4, Michael Bayer wrote:
>> If I want *all* of the Thing object, I change it to this:
>>
>>     q = s.query(Thing, func.count(Comment.type)).\
>>         filter(Thing.creation >= datetime.date(2015, 4, 19)).\
>>         filter(Thing.creation < datetime.date(2015, 4, 26)).\
>>         filter(Comment.target_id == Thing.id).\
>>         filter(Comment.creation >= datetime.date(2015, 4, 19)).\
>>         filter(Comment.creation < datetime.date(2015, 4, 26)).\
>>         filter(Comment.type == 5).\
>>         group_by(Thing.id).\
>>         order_by(func.count(Comment.type).desc())
>>
>> I get the original problem. Is there a way to accomplish that without specifying the columns by hand?
>
> That's a restriction of SQL (unless you are using MySQL with its legacy settings). The bad way is just to group_by(Thing), which will group by all of its columns. This is a poor performer and not considered to be very correct in SQL practice. The better way is to, as perhaps you were suggesting originally, use a subquery (though not correlated here); the form we'd be looking for is explained in terms of SQL in an old article I still like to link to here: http://weblogs.sqlteam.com/jeffs/archive/2005/12/14/8546.aspx. I'm enough of a fan of this form that it is part of the ORM tutorial in this example: http://docs.sqlalchemy.org/en/rel_1_0/orm/tutorial.html#using-subqueries , so you'd be looking to emulate the form seen here.

Ah. I thought that s.query(Thing, ...) was shorthand for s.query(<all the properties in Thing>), mapped into a Thing object, but it must be something else. Thanks!
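The subquery form referenced above can be sketched as follows. The Thing / Comment models here are minimal stand-ins inferred from the query in the thread (column types and the SQLite engine are assumptions for the sake of a self-contained demo); the aggregate happens in a subquery grouped on a single column, which is then joined back to Thing so the full entity comes back without grouping over all of its columns:

```python
import datetime
import sqlalchemy as sa
from sqlalchemy import func
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Thing(Base):  # stand-in model
    __tablename__ = 'thing'
    id = sa.Column(sa.Integer, primary_key=True)
    creation = sa.Column(sa.Date)

class Comment(Base):  # stand-in model
    __tablename__ = 'comment'
    id = sa.Column(sa.Integer, primary_key=True)
    target_id = sa.Column(sa.Integer, sa.ForeignKey('thing.id'))
    type = sa.Column(sa.Integer)
    creation = sa.Column(sa.Date)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
s = Session(engine)

# seed one Thing with two type-5 comments
d = datetime.date(2015, 4, 20)
s.add(Thing(id=1, creation=d))
s.add_all([Comment(target_id=1, type=5, creation=d),
           Comment(target_id=1, type=5, creation=d)])
s.commit()

# aggregate in a subquery, grouped only on target_id
counts = (
    s.query(Comment.target_id,
            func.count(Comment.type).label('comment_count'))
    .filter(Comment.creation >= datetime.date(2015, 4, 19))
    .filter(Comment.creation < datetime.date(2015, 4, 26))
    .filter(Comment.type == 5)
    .group_by(Comment.target_id)
    .subquery()
)

# join the counts back to Thing; no GROUP BY over Thing's columns needed
q = (
    s.query(Thing, counts.c.comment_count)
    .join(counts, Thing.id == counts.c.target_id)
    .filter(Thing.creation >= datetime.date(2015, 4, 19))
    .filter(Thing.creation < datetime.date(2015, 4, 26))
    .order_by(counts.c.comment_count.desc())
)

for thing, n in q:
    print(thing.id, n)  # prints: 1 2
```

Each row of q is a (Thing, count) tuple, so the full mapped Thing object is available alongside its per-thing comment count.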