Re: [sqlalchemy] Entity name - multiple schema - relationship

2014-05-06 Thread Julien Meyer
My real database schema is a little more complex.
In reality, I have one database per company. In each database, I have 
multiple schemas that contain the same table structure.

The "schema name execution argument" solution will not work when I need 
to access more than one schema per request.

Horizontal sharding could work: one engine per schema, with the search 
path set when creating the engine. During request processing, I can 
identify which schema to use and, with set_shard() on the Query object 
(not found in the documentation, is that normal?), easily select the 
right shard.

But I don't know how to make a cross-schema query in this case?
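
To make it concrete, here is a rough sketch of the setup I have in mind (the 
engine URL, the schema / society names and the chooser functions are only 
placeholders, and Customer stands in for one of my mapped classes):

from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.horizontal_shard import ShardedSession

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customer'
    id = Column(Integer, primary_key=True)
    name = Column(String)

def engine_for_schema(url, schema):
    # one engine per schema; every new DBAPI connection gets its search_path set
    engine = create_engine(url)

    @event.listens_for(engine, 'connect')
    def set_search_path(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        # the schema name comes from our own configuration, so plain
        # string formatting is acceptable here
        cursor.execute("SET search_path TO %s" % schema)
        cursor.close()

    return engine

shards = {
    'society_1': engine_for_schema('postgresql://user:pwd@host/company_db', 'society_1'),
    'society_2': engine_for_schema('postgresql://user:pwd@host/company_db', 'society_2'),
}

session = ShardedSession(
    shards=shards,
    # stub choosers: default writes to one shard, search all shards for lookups
    shard_chooser=lambda mapper, instance, clause=None: 'society_1',
    id_chooser=lambda query, ident: list(shards),
    query_chooser=lambda query: list(shards),
)

# during request processing, pin a query to the schema identified for the request
customers = session.query(Customer).set_shard('society_2').all()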

On Monday, May 5, 2014 at 19:12:06 UTC+2, Michael Bayer wrote:

 part of a feature that will make this kind of thing more direct is the 
 “schema name execution argument” feature, which is 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/2685/default-schema-as-an-execution-argument
 .

 This application is somewhat of a “multi-tenancy” application; technically 
 it’s horizontally partitioned, but if you know “society” up front and for the 
 duration of an operation, you can just set that and be done with it.

 Assuming this is the case an easy way to do this for now is just to set 
 the “search path” on your postgresql connection before such an operation 
 proceeds.   That way when you refer to table X or Y, it will be in terms of 
 whatever search path you’ve set, see 5.7.3 at 
 http://www.postgresql.org/docs/8.1/static/ddl-schemas.html.

 There’s no need in that case to use any kind of explicit “horizontal 
 sharding”.  Only if you need queries that are going to refer to multiple 
 schemas at once does the HS feature come into play (and if that were the 
 case I’d look into PG table inheritance).




 On May 5, 2014, at 8:41 AM, Julien Meyer julien...@gmail.com wrote:

 I need some help and advice to create a mapping.

 The context:
 - Multiple schemas on PostgreSQL (dynamic number and name) that store the 
 same tables.
 - SQLAlchemy used in a Pyramid web application.

 Example:
 A table Customer and a table CustomerOrder (linked by customer.id), and 
 one schema per society (i.e., company), not known until runtime.


 I read the documentation about horizontal sharding, vertical sharding and 
 entity names, but I'm a little confused about the right solution for my 
 problem.

 If I use entity names, I don't know how to configure the relationship 
 between my two dynamic classes, because I need to specify a class at 
 configuration time but I only know the real subclasses at runtime.

 If I use horizontal sharding, I need one engine per schema (and to use 
 search_path). The shard configuration will be (or seems to be) tricky.

 If I use vertical sharding, I also need one engine per schema and to 
 re-configure the session several times with a new binds mapping.

 I did some Google searches for my use case, but it's not a common one and 
 I didn't find any helpful posts.

 I also posted the question on Stack Overflow last year, but my solution 
 doesn't really work: 
 http://stackoverflow.com/questions/20212165/one-entity-in-multiple-schemas-how-to-switch-schema-on-runtime

 Thanks in advance.







Re: [sqlalchemy] exists() adding additional from clause

2014-05-06 Thread gbr
I'm running into a similar problem again. This time, correlate_except 
doesn't change anything.

Consider this example

from sqlalchemy import *
metadata = MetaData()
product = Table('product', metadata, Column('id', Integer), 
Column('other_id', Integer))
other = Table('other', metadata, Column('id', Integer))

e2 = product.alias('e2')

sel = select(
columns=[e2.c.id, func.max(e2.c.other_id)],
from_obj=other,
group_by=[e2.c.id],
correlate=False # doesn't make a difference
).correlate_except(product, e2) # doesn't make a difference either
e1 = sel.join(
e2, e2.c.other_id == other.c.id
)
print e1
# (SELECT e2.id AS id, max(e2.other_id) AS max_1 
# FROM product AS e2, other GROUP BY e2.id) JOIN product AS e2 ON 
e2.other_id = other.id

# What I want is
#
# SELECT e2.id, max(e2.other_id)
# FROM other
# JOIN product as e2 on e2.other_id = other.id
# GROUP BY e2.id


How can I get SQLA to create the query I want?


On Thursday, May 1, 2014 8:20:30 AM UTC+10, gbr wrote:

 correlate_except(table) did the trick. I thought I had tried it before, 
 but something must have gone wrong. Now it works.

 Thanks for your help.

 On Wednesday, April 30, 2014 11:53:04 PM UTC+10, Michael Bayer wrote:


 On Apr 30, 2014, at 8:37 AM, gbr doub...@directbox.com wrote: 

  For some reason, an exists() where clause which is meant to refer to an 
 outer element is pulling the outer element's table into the query. 
  
  What I need is as follows 
  
  SELECT anon1.id, anon1.value from ( 
SELECT DISTINCT ON (pp.id) pp.id AS id, pp.rev_id AS rev_id, 
 pp.deleted 
FROM prod as pp 
WHERE (( 
select max (revision_id) FROM rev1 
WHERE exists ( 
  select 1 from prod where pp.id = prod.id 
) 
  ) = pp.rev_id and pp.deleted = false 
) ORDER BY pp.id, pp.rev_id DESC 
  ) as anon1 
  
  The problem is when I create the 
  
exists().where(pp.id == prod.id) 
  
  part which renders into 

exists (select 1 from prod as prod_1, prod as pp where pp.id = 
 prod_1.id) 
  
  which is not the same any more. How can I prevent SQLA from doing so (I 
 tried from_obj argument, played around with correlate, tried 
 exists(select), but none of it worked)? Also, it seems in the inner-most 
 where clause (exist), I actually need an alias to the 2nd select (the 
 select distinct), which I only get once the query is created. How can I get 
 this translated to SQLA code? 

 when you see the “from x, y” pattern it usually means the statement is 
 referring to columns with the wrong parent object in some way, or that 
 correlation is not taking effect as expected. 

 The exists() object and the underlying select() object should always 
 “correlate” automatically, that is if the SELECT is against “x, y” and you 
 place that SELECT embedded into another query within the columns or WHERE 
 clause that is also querying “x”, “x” will be removed from the FROM list of 
 the inner select and it will use correlation. 

 To force the behavior of correlation you can use the correlate() or 
 correlate_except() methods.  Check the docs for these. Otherwise please 
 share some very basic model setups in conjunction with very simple code 
 that illustrates how you are trying to produce this query (just a “print 
 query” is suitable, no database is needed). 






[sqlalchemy] Best way to set up sqlite_pragma for all applications

2014-05-06 Thread Bao Niu

I am seeking some advice on best practice regarding setting up 
sqlite_pragma across applications.

I have the following code which I currently put in each and every module 
across my applications, wherever such module uses SQLAlchemy.

@sqlalchemy.event.listens_for(sqlalchemy.engine.Engine, "connect")
def set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()



It does the job, but is there a way to centralize this so I only have this 
snippet in one place for all modules?
I'm not sure whether simply factoring the snippet out into a separate module 
will do the job, because each module will use an independent sqlalchemy 
module, right? So setting ModuleA's sqlite_pragma wouldn't have any effect 
on ModuleB's sqlite_pragma. Am I right here?

Hopefully some pros can give me some simple but practical advice here. Many 
thanks.



Re: [sqlalchemy] exists() adding additional from clause

2014-05-06 Thread gbr
I think I just figured it out. The join clause needs to go to the from_obj 
table...

sel = select(
columns=[e2.c.id, func.max(e2.c.other_id)],
from_obj=other.join(
  e2, e2.c.other_id == other.c.id
),
group_by=[e2.c.id]
)
e1 = sel


This renders the query as expected. This behaviour is fairly unexpected 
though. Is this mentioned somewhere in the documentation? An example like 
this would be great (if one doesn't already exist)...
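
For reference, printing the corrected statement gives roughly the SQL I was 
after (exact column labelling may differ slightly):

print e1
# SELECT e2.id, max(e2.other_id) AS max_1
# FROM other JOIN product AS e2 ON e2.other_id = other.id
# GROUP BY e2.id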

On Tuesday, May 6, 2014 7:07:16 PM UTC+10, gbr wrote:

 I'm running into a similar problem again. This time, correlate_except 
 doesn't change anything.

 Consider this example

 from sqlalchemy import *
 metadata = MetaData()
 product = Table('product', metadata, Column('id', Integer), 
 Column('other_id', Integer))
 other = Table('other', metadata, Column('id', Integer))

 e2 = product.alias('e2')

 sel = select(
 columns=[e2.c.id, func.max(e2.c.other_id)],
 from_obj=other,
 group_by=[e2.c.id],
 correlate=False # doesn't make a difference
 ).correlate_except(product, e2) # doesn't make a difference either
 e1 = sel.join(
 e2, e2.c.other_id == other.c.id
 )
 print e1
 # (SELECT e2.id AS id, max(e2.other_id) AS max_1 
 # FROM product AS e2, other GROUP BY e2.id) JOIN product AS e2 ON 
 e2.other_id = other.id

 # What I want is
 #
 # SELECT e2.id, max(e2.other_id)
 # FROM other
 # JOIN product as e2 on e2.other_id = other.id
 # GROUP BY e2.id


 How can I get SQLA to create the query I want?


 On Thursday, May 1, 2014 8:20:30 AM UTC+10, gbr wrote:

 correlate_except(table) did the trick. I thought I had tried it before, 
 but something must have gone wrong. Now it works.

 Thanks for your help.

 On Wednesday, April 30, 2014 11:53:04 PM UTC+10, Michael Bayer wrote:


 On Apr 30, 2014, at 8:37 AM, gbr doub...@directbox.com wrote: 

  For some reason, an exists() where clause which is meant to refer to 
 an outer element is pulling the outer element's table into the query. 
  
  What I need is as follows 
  
  SELECT anon1.id, anon1.value from ( 
SELECT DISTINCT ON (pp.id) pp.id AS id, pp.rev_id AS rev_id, 
 pp.deleted 
FROM prod as pp 
WHERE (( 
select max (revision_id) FROM rev1 
WHERE exists ( 
  select 1 from prod where pp.id = prod.id 
) 
  ) = pp.rev_id and pp.deleted = false 
) ORDER BY pp.id, pp.rev_id DESC 
  ) as anon1 
  
  The problem is when I create the 
  
exists().where(pp.id == prod.id) 
  
  part which renders into 

exists (select 1 from prod as prod_1, prod as pp where pp.id = 
 prod_1.id) 
  
  which is not the same any more. How can I prevent SQLA from doing so 
 (I tried from_obj argument, played around with correlate, tried 
 exists(select), but none of it worked)? Also, it seems in the inner-most 
 where clause (exist), I actually need an alias to the 2nd select (the 
 select distinct), which I only get once the query is created. How can I get 
 this translated to SQLA code? 

 when you see the “from x, y” pattern it usually means the statement is 
 referring to columns with the wrong parent object in some way, or that 
 correlation is not taking effect as expected. 

 The exists() object and the underlying select() object should always 
 “correlate” automatically, that is if the SELECT is against “x, y” and you 
 place that SELECT embedded into another query within the columns or WHERE 
 clause that is also querying “x”, “x” will be removed from the FROM list of 
 the inner select and it will use correlation. 

 To force the behavior of correlation you can use the correlate() or 
 correlate_except() methods.  Check the docs for these. Otherwise please 
 share some very basic model setups in conjunction with very simple code 
 that illustrates how you are trying to produce this query (just a “print 
 query” is suitable, no database is needed). 






Re: [sqlalchemy] Best way to set up sqlite_pragma for all applications

2014-05-06 Thread Simon King
On Tue, May 6, 2014 at 10:14 AM, Bao Niu niuba...@gmail.com wrote:

 I am seeking some advice on best practice regarding setting up sqlite_pragma
 across applications.

 I have the following code which I currently put in each and every module
 across my applications, wherever such module uses SQLAlchemy.

 @sqlalchemy.event.listens_for(sqlalchemy.engine.Engine, "connect")
 def set_sqlite_pragma(dbapi_connection, connection_record):
     cursor = dbapi_connection.cursor()
     cursor.execute("PRAGMA foreign_keys=ON")
     cursor.close()


In the example above, you are attaching an event listener to the
Engine *class*, which means it will be called any time any engine in
your application connects to the database. If you wanted, you could
restrict it to a single engine by attaching the event to the engine
instance instead, something like:

engine = create_engine(dburi)

@sqlalchemy.event.listens_for(engine, 'connect')
def handle_connect(dbapi_connection, connection_record):
    # your code here
    pass




 It does the job, but is there a way to centralize this so I only have this
 snippet in one place for all modules?
 I'm not sure whether simply factoring the snippet out into a separate module
 will do the job, because each module will use an independent sqlalchemy
 module, right? So setting ModuleA's sqlite_pragma wouldn't have any effect
 on ModuleB's sqlite_pragma. Am I right here?

I'm not sure I understand what you are saying here. Within a single
process, the sqlalchemy library will only be loaded once, no matter
how many different modules import sqlalchemy. By attaching an event
handler to sqlalchemy.engine.Engine, you are asking for that code to
run for any engine in that process.


 Hopefully some pros can give me some simple but practical advice here. Many
 thanks.


There shouldn't be anything wrong with putting that event handler in a
library module, as long as you are sure to import that module from
somewhere else in your application.

People are often (rightfully) bothered by code that has import-time
side-effects. In this case, simply by importing your library module
you would be globally altering the sqlite behaviour for your process.
It might be nicer if your library module looked more like this:


def _set_sqlite_pragma(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

def enable_sqlite_foreign_keys():
    sqlalchemy.event.listen(sqlalchemy.engine.Engine, 'connect',
                            _set_sqlite_pragma)


...and then call the enable_sqlite_foreign_keys function somewhere in
your application setup code.
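
For example, assuming the functions above live in a module called 
myapp.sqlite_support (module and database names here are just placeholders), 
your setup code could do:

import sqlalchemy

from myapp import sqlite_support

# register the listener once, process-wide
sqlite_support.enable_sqlite_foreign_keys()

engine = sqlalchemy.create_engine('sqlite:///myapp.db')
# every SQLite connection made by any engine in this process now
# enforces foreign keys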

Hope that helps,

Simon



Re: [sqlalchemy] Entity name - multiple schema - relationship

2014-05-06 Thread Michael Bayer
set_shard is a special method added by the horizontal sharding extension.

you can do cross-schema queries if you organize the schema names in terms of 
which ones apply to the dynamic shard and which ones to the fixed shard, if 
that's how it works.

If OTOH you literally need to join against multiple, dynamically named shards 
at one time, then you need to spell those out explicitly.  It gets uglier, but 
if you want a Table that is linked on the fly to an explicit schema, you can 
use table.tometadata(), see 
http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html?highlight=tometadata#sqlalchemy.schema.Table.tometadata.
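
a quick sketch of that (table / schema names here are just illustrative):

from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
customer = Table('customer', metadata, Column('id', Integer, primary_key=True))

# copy the Table definition into another MetaData collection, pinned to an
# explicit schema; both copies can then be used in one cross-schema query
tenant_metadata = MetaData()
customer_s1 = customer.tometadata(tenant_metadata, schema='society_1')
customer_s2 = customer.tometadata(tenant_metadata, schema='society_2')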



On May 6, 2014, at 3:06 AM, Julien Meyer julien.mey...@gmail.com wrote:

 My real database schema is a little more complex.
 In reality, I have one database per company. In each database, I have multiple 
 schemas that contain the same table structure.
 
 The "schema name execution argument" solution will not work when I need to 
 access more than one schema per request.
 
 Horizontal sharding could work: one engine per schema, with the search path 
 set when creating the engine. During request processing, I can identify which 
 schema to use and, with set_shard() on the Query object (not found in the 
 documentation, is that normal?), easily select the right shard.
 
 But I don't know how to make a cross-schema query in this case?
 
 On Monday, May 5, 2014 at 19:12:06 UTC+2, Michael Bayer wrote:
 part of a feature that will make this kind of thing more direct is the 
 schema name execution argument feature, which is 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/2685/default-schema-as-an-execution-argument.
 
 This application is somewhat of a multi-tenancy application; technically 
 it's horizontally partitioned, but if you know society up front and for the 
 duration of an operation, you can just set that and be done with it.
 
 Assuming this is the case an easy way to do this for now is just to set the 
 search path on your postgresql connection before such an operation 
 proceeds.   That way when you refer to table X or Y, it will be in terms of 
 whatever search path you've set, see 5.7.3 at 
 http://www.postgresql.org/docs/8.1/static/ddl-schemas.html.
 
 There's no need in that case to use any kind of explicit horizontal 
 sharding.  Only if you need queries that are going to refer to multiple 
 schemas at once does the HS feature come into play (and if that were the case 
 I'd look into PG table inheritance).
 
 
 
 
 On May 5, 2014, at 8:41 AM, Julien Meyer julien...@gmail.com wrote:
 
 I need some help and advice to create a mapping.
 
 The context:
 - Multiple schemas on PostgreSQL (dynamic number and name) that store the 
 same tables.
 - SQLAlchemy used in a Pyramid web application.
 
 Example:
 A table Customer and a table CustomerOrder (linked by customer.id), and one 
 schema per society (i.e., company), not known until runtime.
 
 
 I read the documentation about horizontal sharding, vertical sharding and 
 entity names, but I'm a little confused about the right solution for my problem.
 
 If I use entity names, I don't know how to configure the relationship 
 between my two dynamic classes, because I need to specify a class at 
 configuration time but I only know the real subclasses at runtime.
 
 If I use horizontal sharding, I need one engine per schema (and to use 
 search_path). The shard configuration will be (or seems to be) tricky.
 
 If I use vertical sharding, I also need one engine per schema and to 
 re-configure the session several times with a new binds mapping.
 
 I did some Google searches for my use case, but it's not a common one and I 
 didn't find any helpful posts.
 
 I also posted the question on Stack Overflow last year, but my solution 
 doesn't really work: 
 http://stackoverflow.com/questions/20212165/one-entity-in-multiple-schemas-how-to-switch-schema-on-runtime
 
 Thanks in advance.
 
 
 


[sqlalchemy] ORDER BY occurring multiple times

2014-05-06 Thread Gerald Thibault
I am trying to query for a model joined to another model via a one-to-many 
relationship. When I try to order this, I end up with an ordered subquery, 
but the results from the subquery are then ordered again, which results in 
a filesort (without the second ORDER BY, everything is handled cleanly with 
indexes). How can I eliminate the second ORDER BY from this?

Source:

import sys

from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship, backref, joinedload


e = create_engine('sqlite:////tmp/test.db', echo=True)
Base = declarative_base()
Base.metadata = MetaData(e)


class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)

class SubItem(Base):
    __tablename__ = 'subitems'
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey(Item.id))

    item = relationship(Item, backref='subitems')


if __name__ == '__main__':
    Base.metadata.drop_all()
    Base.metadata.create_all()

    session = Session(e)

    sites = session.query(Item) \
        .options(joinedload('subitems')) \
        .order_by(Item.id) \
        .limit(20) \
        .offset(20) \
        .all()

generated query:

SELECT anon_1.items_id AS anon_1_items_id, subitems_1.id AS subitems_1_id, 
subitems_1.item_id AS subitems_1_item_id 
FROM (
SELECT items.id AS items_id 
FROM items 
ORDER BY items.id
LIMIT ? 
OFFSET ?) AS anon_1 
LEFT OUTER JOIN subitems AS subitems_1 
ON anon_1.items_id = subitems_1.item_id 
ORDER BY anon_1.items_id




Re: [sqlalchemy] ORDER BY occurring multiple times

2014-05-06 Thread Michael Bayer

you’d have to construct the joinedload() as a join(), create the subquery 
manually, and use contains_eager() to establish the joined rows as a collection 
load.   With some relational databases, if the combination of the inner 
subquery and the joined rows aren’t subject to an ORDER BY, the ordering of the 
results is non-deterministic, so in the general case this ORDER BY is necessary.
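
a rough, untested sketch of that, using the Item / SubItem models from your 
example below:

from sqlalchemy.orm import aliased, contains_eager

# the limited / ordered subquery against items only
inner = session.query(Item).order_by(Item.id).limit(20).offset(20).subquery()
item_alias = aliased(Item, inner)

# join the subquery to subitems ourselves; contains_eager() tells the ORM
# the joined rows populate the 'subitems' collection, and no outer ORDER BY
# is emitted
sites = (
    session.query(item_alias)
    .outerjoin(item_alias.subitems)
    .options(contains_eager(item_alias.subitems))
    .all()
)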


On May 6, 2014, at 3:43 PM, Gerald Thibault dieselmach...@gmail.com wrote:

 I am trying to query for a model joined to another model via a one-to-many 
 relationship. When I try to order this, I end up with an ordered subquery, 
 but the results from the subquery are then ordered again, which results in a 
 filesort (without the second ORDER BY, everything is handled cleanly with 
 indexes). How can I eliminate the second ORDER BY from this?
 
 Source:
 
 import sys
 
 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import Session, relationship, backref, joinedload
 
 
 e = create_engine('sqlite:////tmp/test.db', echo=True)
 Base = declarative_base()
 Base.metadata = MetaData(e)
 
 
 class Item(Base):
 __tablename__ = 'items'
 id = Column(Integer, primary_key=True)
 
 class SubItem(Base):
 __tablename__ = 'subitems'
 id = Column(Integer, primary_key=True)
 item_id = Column(Integer, ForeignKey(Item.id))
 
 item = relationship(Item, backref='subitems')
 
 
 if __name__ == '__main__':
 Base.metadata.drop_all()
 Base.metadata.create_all()
 
 session = Session(e)
 
 sites = session.query(Item) \
 .options(joinedload('subitems')) \
 .order_by(Item.id) \
 .limit(20) \
 .offset(20) \
 .all()
 
 generated query:
 
 SELECT anon_1.items_id AS anon_1_items_id, subitems_1.id AS subitems_1_id, 
 subitems_1.item_id AS subitems_1_item_id 
 FROM (
 SELECT items.id AS items_id 
 FROM items 
 ORDER BY items.id
 LIMIT ? 
 OFFSET ?) AS anon_1 
 LEFT OUTER JOIN subitems AS subitems_1 
 ON anon_1.items_id = subitems_1.item_id 
 ORDER BY anon_1.items_id
 
 
 



[sqlalchemy] Create database with sqlalchemy 0.8.0 and postgres

2014-05-06 Thread Michael Costello
Hello.

Setup:
  python 2.7.6
  postgres 9.3.4
  sqlalchemy 0.9.4 (also, 0.8.4)
  pg8000 1.9.8

I am attempting to create a database using sqlalchemy with the above tools 
and the following code:

from sqlalchemy import create_engine

dburl = "postgresql+pg8000://user:pas...@db.foo.com:5432/postgres"

engine = create_engine(dburl)

conn = engine.connect()
conn.execute("COMMIT")
conn.execute("CREATE DATABASE qux")
conn.close()

but I receive the following error:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
'CREATE DATABASE cannot run inside a transaction block') 'CREATE DATABASE 
qux' ()

However, running the same code against the same database but using 
sqlalchemy version 0.8.0 works.

Is there something I can do to get 0.9.4 to work for me?

-mc



Re: [sqlalchemy] Create database with sqlalchemy 0.8.0 and postgres

2014-05-06 Thread Michael Bayer

On May 6, 2014, at 6:09 PM, Michael Costello michael7coste...@gmail.com wrote:

 Hello.
 
 Setup:
   python 2.7.6
   postgres 9.3.4
   sqlalchemy 0.9.4 (also, 0.8.4)
   pg8000 1.9.8
 
 I am attempting to create a database using sqlalchemy with the above tools 
 and the following code:
 
 from sqlalchemy import create_engine
 
 dburl = "postgresql+pg8000://user:pas...@db.foo.com:5432/postgres"
 
 engine = create_engine(dburl)
 
 conn = engine.connect()
 conn.execute("COMMIT")
 conn.execute("CREATE DATABASE qux")
 conn.close()
 
 but I receive the following error:
 
 sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 
 'CREATE DATABASE cannot run inside a transaction block') 'CREATE DATABASE 
 qux' ()
 
 However, running the same code against the same database but using sqlalchemy 
 version 0.8.0 works.
 
 Is there something I can do to get 0.9.4 to work for me?


I can't imagine why that would work differently on 0.8.0 because the 
transactional behavior is the same on the SQLAlchemy side.  

Running this test with the latest pg8000 1.9.8:

from sqlalchemy import create_engine

e = create_engine("postgresql+pg8000://scott:tiger@localhost/test", echo=True)
conn = e.connect()
conn.execute("COMMIT")
conn.execute("create database foo")

output on 0.9.4:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 'CREATE 
DATABASE cannot run inside a transaction block') 'create database foo' ()

output on 0.8.0:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 'CREATE 
DATABASE cannot run inside a transaction block') 'create database foo' ()

output on 0.8.4:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('ERROR', '25001', 'CREATE 
DATABASE cannot run inside a transaction block') 'create database foo' ()


etc.

so i think perhaps your pg8000 version has changed.

To achieve this you should use psycopg2 and use psycopg2's autocommit mode.  
See 
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#postgresql-isolation-level
 and 
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#psycopg2-transaction-isolation-level;
 with psycopg2 you can use 
conn.execution_options(isolation_level="AUTOCOMMIT").execute("CREATE DATABASE qux").
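
e.g. something along these lines (the URL is a placeholder):

from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@db.foo.com:5432/postgres")
conn = engine.connect()
# AUTOCOMMIT: the CREATE DATABASE statement runs outside of any transaction block
conn.execution_options(isolation_level="AUTOCOMMIT").execute("CREATE DATABASE qux")
conn.close()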


