[sqlalchemy] Re: queuepool timeout's with mod_wsgi under heavier load

2010-01-15 Thread Graham Dumpleton
FWIW, thread on mod_wsgi list about this is at:

  http://groups.google.com/group/modwsgi/browse_frm/thread/c6e65603c8e75a30

Graham

On Jan 15, 4:10 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Damian wrote:
  Hi,

  Every few days, when we experience higher loads we get sqlalchemy's

  TimeoutError: QueuePool limit of size 5 overflow 10 reached,
  connection timed out, timeout 30

  Along with that I see an increase in (2-3 a minute):

  (104)Connection reset by peer: core_output_filter: writing data to the
  network

  and

   (32)Broken pipe: core_output_filter: writing data to the network

  in my apache error logs.

  Having checked over my pylons code a few times, the Session.remove()
  should always be called.  I'm worried that the broken pipe or
  connection reset by peer mean that remove isn't being called.

 sounds more like the higher load means more concurrent DB activity, PG
 then takes more time to return results, more reqs then come in,
 connections aren't available since they're all busy.   AFAIK the broken
 pipe stuff doesn't kill off the Python threads in the middle of their
 work...if they threw an exception, hopefully you have Session.remove()
 happening within a finally:.   Since you're on PG, just do a ps listing
 on your database machine, or better yet use pgtop, and you'll see just
 what happens when a spike comes in.
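
For illustration, a minimal sketch of that pattern (the engine URL, pool
numbers and the WSGI wrapper here are assumptions, not the poster's actual
code):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    # hypothetical URL; pool numbers mirror the limits in the error message
    engine = create_engine("postgresql://user:pw@localhost/app",
                           pool_size=5, max_overflow=10, pool_timeout=30)
    Session = scoped_session(sessionmaker(bind=engine))

    def session_cleanup(app):
        # WSGI middleware: the finally block hands the connection back to
        # the pool no matter how the request ends (exception, broken pipe...)
        def wrapped(environ, start_response):
            try:
                return app(environ, start_response)
            finally:
                Session.remove()
        return wrapped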





  The server is running mod_wsgi with apache's mpm_worker with the
  following config:

  <IfModule mpm_worker_module>
      StartServers         16
      MaxClients          480
      MinSpareThreads      50
      MaxSpareThreads     300
      ThreadsPerChild      30
      MaxRequestsPerChild   0
  </IfModule>

  and using mod_wsgi's daemon mode:

    WSGIDaemonProcess somename user=www-data group=www-data processes=4
  threads=32

  Is this somehow overkill?  The server is a well speced quad core with
  8 gigs of ram and fast hard drives.

  It also runs the database server (postgres).

  Has anyone else experienced this kind of problem?  I've cross posted
  this to both the mod_wsgi and sqlalchemy mailing lists - hope that's
  ok as I believe this may be relevant to both groups.

  Thanks,
  Damian




[sqlalchemy] MySQL Connector and BIT test case (+patch)

2010-01-15 Thread geertjanvdk
Hello,

We had a problem with BIT data types in MySQL Connector/Python. A fix
was pushed today, but I still can't make the test_bit_50 test (below)
work.
It appears that what comes back from MySQL through RowProxy(?) is
'empty'. Can anyone hit me with the cluebat and help me figure out
what's wrong?

Also added here below is a patch for the test case and the
mysqlconnector dialect.

I still have 6 errors when running the MySQL dialect tests (fixing
these first). Pretty good stuff!

-Geert

======================================================================
ERROR: test.dialect.test_mysql.TypesTest.test_bit_50
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/Users/geert/Projects/sqlalchemy-trunk/lib/sqlalchemy/test/testing.py", line 236, in maybe
    return fn(*args, **kw)
  File "/Users/geert/Projects/sqlalchemy-trunk/test/dialect/test_mysql.py", line 338, in test_bit_50
    roundtrip([0] * 8)
  File "/Users/geert/Projects/sqlalchemy-trunk/test/dialect/test_mysql.py", line 334, in roundtrip
    print "Found %s" % list(row)
  File "/Users/geert/Projects/sqlalchemy-trunk/lib/sqlalchemy/engine/base.py", line 1575, in __iter__
    yield func(row)
  File "/Users/geert/Projects/sqlalchemy-trunk/lib/sqlalchemy/engine/base.py", line 1825, in getcol
    return processor(row[index])
  File "/Users/geert/Projects/sqlalchemy-trunk/lib/sqlalchemy/dialects/mysql/base.py", line 561, in process
    for i in map(ord, value):
TypeError: argument 2 to map() must support iteration
-------------------- >> begin captured stdout << ---------------------
Storing [0, 0, 0, 0, 0, 0, 0, 0]
Expected [0, 0, 0, 0, 0, 0, 0, 0]




Index: test/dialect/test_mysql.py
===================================================================
--- test/dialect/test_mysql.py  (revision 6622)
+++ test/dialect/test_mysql.py  (working copy)
@@ -1199,11 +1199,22 @@
     def teardown_class(cls):
         metadata.drop_all()
 
+    @testing.fails_on('mysql+mysqlconnector', 'uses pyformat')
     def test_expression(self):
         format = testing.db.dialect.paramstyle == 'format' and '%s' or '?'
         self.assert_compile(
             matchtable.c.title.match('somstr'),
             "MATCH (matchtable.title) AGAINST (%s IN BOOLEAN MODE)" % format)
+
+    @testing.fails_on('mysql+mysqldb', 'uses format')
+    @testing.fails_on('mysql+oursql', 'uses format')
+    @testing.fails_on('mysql+pyodbc', 'uses format')
+    @testing.fails_on('mysql+zxjdbc', 'uses format')
+    def test_expression(self):
+        format = '%(title_1)s'
+        self.assert_compile(
+            matchtable.c.title.match('somstr'),
+            "MATCH (matchtable.title) AGAINST (%s IN BOOLEAN MODE)" % format)
 
     def test_simple_match(self):
         results = (matchtable.select().
Index: lib/sqlalchemy/dialects/mysql/mysqlconnector.py
===================================================================
--- lib/sqlalchemy/dialects/mysql/mysqlconnector.py (revision 6622)
+++ lib/sqlalchemy/dialects/mysql/mysqlconnector.py (working copy)
@@ -66,11 +66,15 @@
         return connection.connection.get_characterset_info()
 
     def _extract_error_code(self, exception):
-        m = re.compile(r"\(.*\)\s+(\d+)").search(str(exception))
-        c = m.group(1)
-        if c:
-            return int(c)
-        else:
+        try:
+            return exception.orig.errno
+        except AttributeError:
             return None
+
+    def _compat_fetchall(self, rp, charset=None):
+        return rp.fetchall()
 
+    def _compat_fetchone(self, rp, charset=None):
+        return rp.fetchone()
+
 dialect = MySQL_mysqlconnector





[sqlalchemy] Book

2010-01-15 Thread werner

Researching TurboGears stuff I came across a page on Amazon.

Sqlalchemy: Database Access Using Python (Paperback)
by Mark Ramm (Author), Michael Bayer (Author)

To be available in Feb 2010.

Is it really that close?

Werner





[sqlalchemy] Re: using merge() with composite key

2010-01-15 Thread Kent

 you need to flip around where you put the delete-orphan rule - see my
 previous email on the subject.




In other words:
order_mapper = mapper(Order, order_table, properties=dict(
    orderdetails=relation(OrderDetail,
                          cascade='all,delete-orphan', single_parent=True,
                          backref='parentorder')))
orderdetail_mapper = mapper(OrderDetail, order_detail_table)

=

Thank you very much.

That also solved the earlier problem with merge.  I don't need to
supply both portions of the composite key... it works that out.






[sqlalchemy] Re: Column property vs. Python (class) property for calculated columns

2010-01-15 Thread bojanb
Thanks Mike. I must admit I don't understand why that code works, but
it does. I guess that's the Alchemy in SQLAlchemy :-)

However, I have issues with the difference in NULL value semantics
between Python and SQL. I.e. if a calculated column is defined via a
column_property as price*amount, then the result will be NULL if either
value is NULL. In Python, however, None*something throws a
TypeError, so the hybrid property's getter function needs to be filled
with lots of ifs.

Also, this solution can't be used for date calculations, as timedelta
objects are needed. So I guess I will stick with a mix of Python
properties and column_properties.
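
As a concrete illustration of those ifs (a sketch with made-up class and
column names, not code from this thread):

    class OrderLine(object):
        # assume qty and price are mapped Column attributes
        @property
        def amount(self):
            # mirror SQL's NULL propagation by hand on the Python side
            if self.qty is None or self.price is None:
                return None
            return self.qty * self.price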

On Jan 14, 4:23 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 bojanb wrote:
  Let's say I want to have a simple calculated property in my class, eg.
  amount which is just qty * price.

  I can define it as a column_property in a mapper which makes it
  available in all database operations, eg. I can write session.query
  (myclass).filter_by(amount>1000) which will create the correct WHERE
  clause qty*price>1000.

  However, the attribute is None until the object is flushed to the
  database, ie.
  myclass.qty = 2
  myclass.price = 500
  print myclass.amount

  will return None if flush was not issued.

  If I use a Python property function to define it, it will be
  immediately available (and always up to date); however, I cannot query
  on a Python property.

 ultimately the value of this attribute is derived from other attributes
 which are mapped.   So there is a very simple and clever way to get both
 in that case which you can see if you look at
 examples/derived_attributes/attributes.py.
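
The idea in that example, roughly sketched here (paraphrased rather than
copied from the distribution; class and column names are made up), is a
descriptor that returns a SQL expression when accessed on the class and a
plain Python value when accessed on an instance:

    class derived(object):
        def __init__(self, func):
            self.func = func
        def __get__(self, instance, owner):
            if instance is None:
                return self.func(owner)    # class access -> SQL expression
            return self.func(instance)     # instance access -> Python value

    class Item(object):
        # assume qty and price are mapped Column attributes
        @derived
        def amount(self):
            return self.qty * self.price

    # session.query(Item).filter(Item.amount > 1000) -> WHERE qty * price > 1000
    # item.amount                                     -> computed in Python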



  Is there a way to have best of both worlds? Or should I just define
  the calculated property twice, eg. Python property named 'amount' and
  a column_property named 'db_amount' and then work with the first but
  use the second for querying?




[sqlalchemy] finding the relation target class dynamically

2010-01-15 Thread Kent
Suppose you have a simple child-parent relationship defined as such:

parent_mapper = mapper(Parent, parent_table,
properties=dict(children=relation(Child)))

I would like to use introspection on the Parent obj to dynamically
determine that its children property is a collection of objects of
Child class.

In other words,

p = Parent()

p.children is an empty collection (list) []

Is there a way to determine what Class this collection expects to
hold?  How do I use the p object to derive that p.children expects
Child objects?

Thanks in advance if anyone can help.
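
One way this is commonly done (a hedged sketch, not an answer taken from
the thread) is to ask the class's mapper for the relation property and read
the target class off its mapper:

    from sqlalchemy.orm import class_mapper

    prop = class_mapper(Parent).get_property('children')
    target_cls = prop.mapper.class_     # -> Child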







[sqlalchemy] Name clash in SqlSoup

2010-01-15 Thread PauloS
Hi all,

I have a postgres table with a column named query. This seems to
conflict with the built-in 'query' property of the SqlSoup instance.

>>> print db.ahr_relat._table.c.has_key('query')
True
>>> print db.ahr_relat.c.has_key('query')
False
>>> print db.ahr_relat.query
<sqlalchemy.orm.scoping.query object at 0x04FD7130>

How can I circumvent this?

Thanks in advance,
--
Paulo





[sqlalchemy] Joined table inheritance without a discriminator

2010-01-15 Thread Ian
All,

I have two applications: one uses Python with Sqlalchemy and the other
uses Java with Hibernate. There is a slight mismatch between the
joined table inheritance strategies: with Hibernate a discriminator is
not required.

The Sqlalchemy documentation says, in the Joined Table Inheritance
section (http://www.sqlalchemy.org/docs/05/mappers.html#joined-table-
inheritance):

While there are some “tricks” to work around the requirement that
there be a discriminator column, they are more complicated to
configure when one wishes to load polymorphically.

What are these tricks, and where are they documented? I'm not really
all that interested in polymorphic querying in the Python application.

Thanks,

Ian Johnson




[sqlalchemy] Prepared Statements in Postgresql

2010-01-15 Thread mozillalives
Hello Everyone,

I am new to both sqlalchemy and elixir, but I have been using them for
the past couple of weeks and I really like them. But I have a question
about prepared statements for Postgresql.

For one specific application, I am doing a bunch of inserts
(200,000+). From what I can tell, it looks like these are not prepared
statements. I rewrote the code to issue prepared statements and this
cuts the insertion time in half, but the code is crude. My questions
are:

Is there a way to tell sqlalchemy or the engine (which would be
psycopg2, correct?) to use prepared statements?

I've noticed some opinions online indicating that psycopg2 does not
have prepared statement support (e.g. -
http://www.cerias.purdue.edu/site/blog/post/beware_sql_injections_due_to_missing_prepared_statement_support/)
- can I plug another engine into sqlalchemy that does?

If I can't do any of the above and just need to prepare the statements
manually, is there at least a method in sqlalchemy to properly quote
my data before sending it to postgres?

Thanks,
Phil




Re: [sqlalchemy] Book

2010-01-15 Thread Michael Bayer
werner wrote:
 Researching TurboGears stuff I came across a page on Amazon.

  Sqlalchemy: Database Access Using Python (Paperback)
 by Mark Ramm (Author), Michael Bayer (Author)

 To be available in Feb 2010.

 Is it really that close?

it's not.




 Werner





Re: [sqlalchemy] Name clash in SqlSoup

2010-01-15 Thread Michael Bayer
PauloS wrote:
 Hi all,

 I have a postgres table with a column named query. This seems to
 conflict with builtin 'query' property from the SqlSoup instance.

 >>> print db.ahr_relat._table.c.has_key('query')
 True
 >>> print db.ahr_relat.c.has_key('query')
 False
 >>> print db.ahr_relat.query
 <sqlalchemy.orm.scoping.query object at 0x04FD7130>

 How can I circumvent this?

use db.ahr_relat.c.query ?



 Thanks in advance,
 --
 Paulo





Re: [sqlalchemy] Joined table inheritance without a discriminator

2010-01-15 Thread Michael Bayer
Ian wrote:
 All,

 I have two applications: one uses Python with Sqlalchemy and the other
 uses Java with Hibernate. There is a slight mis-match between the
 joined table inheritance strategy: with Hibernate a discriminator is
 not required.

 The Sqlalchemy documentation says, in the Joined Table Inheritance
 section (http://www.sqlalchemy.org/docs/05/mappers.html#joined-table-
 inheritance):

 While there are some “tricks” to work around the requirement that
 there be a discriminator column, they are more complicated to
 configure when one wishes to load polymorphically.

 What are these tricks, and where are they documented? I'm not really
 all that interested in polymorphic querying in the Python application.

if you don't care about polymorphic querying, you should be able to leave
all the polymorphic options blank in your mapping.   you would have to
take care to ensure your queries are only against specific subclasses, or,
if against a superclass, contain criteria that will limit the results to
only superclass instances.

the tricks at the moment are to provide a view that produces an
effective polymorphic_on column.  It is necessary to create an OUTER
JOIN to all joined-subclass tables as well, which is how Hibernate's
version works, or alternatively to use a UNION that selects among joins. 
We have a function that can generate the UNION version.  I'm not sure
which is more efficient but they are both pretty awful.

i.e. in SQL, the OUTER JOIN version looks like:

SELECT base.*,
   (CASE WHEN subtable1.id IS NOT NULL THEN 'subtable1'
         WHEN subtable2.id IS NOT NULL THEN 'subtable2'
         WHEN subtable3.id IS NOT NULL THEN 'subtable3'
    END) AS polymorphic_type,
   subtable1.*,
   subtable2.*,
   subtable3.*
FROM base LEFT OUTER JOIN subtable1 ON base.id=subtable1.id
LEFT OUTER JOIN subtable2 ON base.id=subtable2.id
LEFT OUTER JOIN subtable3 ON base.id=subtable3.id

you'd create a selectable along the lines of :

join = base.outerjoin(subtable1).outerjoin(subtable2).outerjoin(subtable3)
stmt = select([base, subtable1, subtable2, subtable3,
               case([(subtable1.c.id != None, subtable1), ...etc

for the UNION version, the polymorphic_union function can generate the
right SQL expression here.  you'd configure the base mapper the same way
the docs describe polymorphic concrete loading, and your
polymorphic_union looks something like:

person_join = polymorphic_union(
    {
        'engineer': people.join(engineers),
        'manager': people.join(managers),
        'person': people.select(people.c.type == 'person'),
    }, None, 'pjoin')
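
A sketch of wiring such a union into the mappers (assuming person_join
exposes a usable discriminator column named 'type', and that
Person/Engineer/Manager map to people/engineers/managers; not the thread's
actual code):

    person_mapper = mapper(Person, people,
                           with_polymorphic=('*', person_join),
                           polymorphic_on=person_join.c.type,
                           polymorphic_identity='person')
    mapper(Engineer, engineers, inherits=person_mapper,
           polymorphic_identity='engineer')
    mapper(Manager, managers, inherits=person_mapper,
           polymorphic_identity='manager')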









 Thanks,

 Ian Johnson




Re: [sqlalchemy] Prepared Statements in Postgresql

2010-01-15 Thread Michael Bayer
mozillalives wrote:
 Hello Everyone,

 I am new to both sqlalchemy and elixir, but I have been using them for
 the past couple of weeks and I really like them. But I have a question
 about prepared statements for Postgresql.

 For one specific application, I am doing a bunch of inserts
 (200,000+). From what I can tell, it looks like these are not prepared
 statements. I rewrote the code to issue prepared statements and this
 cuts the insertion time in half, but the code is crude. My question's
 are:

how did you use prepared statements in Python if you don't know whether
psycopg2 uses prepared statements?  was this in another language, or did
you implement a raw socket connection to your database?


 Is there a way to tell sqlalchemy or the engine (which would be
 psycopg2, correct?) to use prepared statements?

to efficiently execute the same statement many times, use the
executemany style of execution - the tutorial describes this at
http://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multiple-statements
.   I don't think that psycopg2 actually uses prepared statements for
this purpose but I am not sure.  The DBAPI executemany() method is used.
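
A minimal sketch of that executemany style (the table, column names and
engine are hypothetical): a list of parameter dictionaries is passed to a
single execute() call, and the DBAPI's executemany() is used underneath:

    conn = engine.connect()   # engine: an existing Engine
    conn.execute(
        mytable.insert(),
        [{"name": "row %d" % i, "qty": i} for i in range(200000)]
    )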


 I've noticed some opinions online indicating that psycopg2 does not
 have prepared statement support (e.g. -
 http://www.cerias.purdue.edu/site/blog/post/beware_sql_injections_due_to_missing_prepared_statement_support/)

the comment at the bottom of that post ultimately references a psycopg2
message from 2007 so you'd need to ask the psycopg2 folks for updated
information.  However psycopg2 can do an executemany with great
efficiency, using methodologies about which you'd have to ask them,
so if they don't use PG's actual prepared statement mechanism, it's probably
unnecessary.  psycopg2 is an extremely mature and high performing product.


 - can I plug another engine into sqlalchemy that does?

there's the pg8000 engine, which may or may not do this.  But it's written
in pure python, is not as fast as psycopg2, and is very new and not widely
used, since its author doesn't seem to promote it very much (but it is a
very well written library).



 If I can't do any of the above and just need to prepare the statements
 manually, is there at least a method in sqlalchemy to properly quote
 my data before sending it to postgres?

Despite some of the FUD-like links mentioned on that blog, SQLAlchemy, as
it says on the website since the day we launched 5 years ago, always uses
bind parameters, in all cases, for all literal values, everywhere.  We do
not and have never quoted anything within SQLA as that is left up to the
services provided by the DBAPI.   The DBAPI does not have a prepared statement
API.  It has executemany(), for which the underlying implementation may
or may not use prepared statements + server-level bind processing as an
implementation detail.  Psycopg2 handles the quoting in this case.
cx_oracle, OTOH, uses Oracle's native data binding facilities provided by
OCI.  The DBAPI abstracts this detail away.




 Thanks,
 Phil




[sqlalchemy] Table as dictionary?

2010-01-15 Thread Nelson
Hello SQLAlchemy experts,

I'd like to view the contents of a table object as a dictionary.
Example:

s = Table('sparrow', Column('type', String(50)) , Column('weight',
Integer), ... etc)
s.type = 'African'
s.weight = 32

Then I want to see / get a dictionary:

{'type': 'African', 'weight': 32, ... etc}

Easy way to do that I'm missing?

Thanks




[sqlalchemy] Re: Prepared Statements in Postgresql

2010-01-15 Thread mozillalives
Thanks for your quick response Michael.

To answer your question, this is how I was issuing the queries

conn.execute("PREPARE insert_statement(text) AS ...")
conn.execute("EXECUTE insert_statement('%s')" % val)

And I'm sorry if it seemed that I was attacking sqlalchemy, I just
wasn't sure what it did and how it works with psycopg2. From what you
wrote it seems that my question is more for the psycopg2 group than
here.

Thanks for helping me out.

Phil

On Jan 15, 12:16 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 mozillalives wrote:
  Hello Everyone,

  I am new to both sqlalchemy and elixir, but I have been using them for
  the past couple of weeks and I really like them. But I have a question
  about prepared statements for Postgresql.

  For one specific application, I am doing a bunch of inserts
  (200,000+). From what I can tell, it looks like these are not prepared
  statements. I rewrote the code to issue prepared statements and this
  cuts the insertion time in half, but the code is crude. My question's
  are:

 how did you use prepared statements in Python if you don't know that
 psycoopg2 uses prepared statements ?  was this in another language or did
 you implement a raw socket connection to your database ?



  Is there a way to tell sqlalchemy or the engine (which would be
  psycopg2, correct?) to use prepared statements?

 to efficiently execute the same statement many times, use the
 executemany style of execution - the tutorial describes this 
 athttp://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multip...
 .   I don't think that psycopg2 actually uses prepared statements for
 this purpose but I am not sure.  The DBAPI executemany() method is used.



  I've noticed some opinions online indicating that psycopg2 does not
  have prepared statement support (e.g. -
 http://www.cerias.purdue.edu/site/blog/post/beware_sql_injections_due...)

 the comment at the bottom of that post ultimately references a psycopg2
 message from 2007 so you'd need to ask the psycopg2 folks for updated
 information.  However psycopg2 can do an executemany with great
 efficiency as it is using methodologies for which you'd have to ask them,
 so if they don't use PG's actual prepared mechanism, its probably
 unnecessary.  psycopg2 is an extremely mature and high performing product.

  - can I plug another engine into sqlalchemy that does?

 there's the pg8000 engine which may or may not do this.  But its written
 in pure python, is not as fast as psycopg2, and is very new and not widely
 used since its author doesn't seem to promote it very much (but it is a
 very well written library).



  If I can't do any of the above and just need to prepare the statements
  manually, is there at least a method in sqlalchemy to properly quote
  my data before sending it to postgres?

 Despite some of the fud-like links mentioned on that blog, SQLAlchemy, as
 it says on the website since the day we launched 5 years ago, always uses
 bind parameters, in all cases, for all literal values, everywhere.  We do
 not and have never quoted anything within SQLA as that is left up to the
 services provided by the DBAPI.   DBAPI does not have prepared statement
 API.  It has executemany(), for which the underlying implementation may
 or may not use prepared statements + server-level bind processing as an
 implementation detail.  Psycopg2 handles the quoting in this case.
 cx_oracle, OTOH, uses Oracle's native data binding facilities provided by
 OCI.  DBAPI abstracts this detail away.





  Thanks,
  Phil




Re: [sqlalchemy] Re: Prepared Statements in Postgresql

2010-01-15 Thread Michael Bayer
mozillalives wrote:
 Thanks for your quick response Michael.

 To answer your question, this is how I was issuing the queries

 conn.execute("PREPARE insert_statement(text) AS ...")
 conn.execute("EXECUTE insert_statement('%s')" % val)

 And I'm sorry if it seemed that I was attacking sqlalchemy, I just
 wasn't sure what it did and how it works with psycopg2. From what you
 wrote it seems that my question is more for the psycopg2 group than
 here.

 Thanks for helping me out.

Oh I wasn't upset with you, I was annoyed at the somewhat confused nature
of the comments in that blog post.   Prepared statements are fine but they
are not necessary in order to use bind parameters.   I think this
confusion arises based on people's experience with JDBC which does have
this requirement.



 Phil

 On Jan 15, 12:16 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 mozillalives wrote:
  Hello Everyone,

  I am new to both sqlalchemy and elixir, but I have been using them for
  the past couple of weeks and I really like them. But I have a question
  about prepared statements for Postgresql.

  For one specific application, I am doing a bunch of inserts
  (200,000+). From what I can tell, it looks like these are not prepared
  statements. I rewrote the code to issue prepared statements and this
  cuts the insertion time in half, but the code is crude. My question's
  are:

 how did you use prepared statements in Python if you don't know that
 psycoopg2 uses prepared statements ?  was this in another language or
 did
 you implement a raw socket connection to your database ?



  Is there a way to tell sqlalchemy or the engine (which would be
  psycopg2, correct?) to use prepared statements?

 to efficiently execute the same statement many times, use the
 executemany style of execution - the tutorial describes this
 athttp://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multip...
 .   I don't think that psycopg2 actually uses prepared statements for
 this purpose but I am not sure.  The DBAPI executemany() method is used.



  I've noticed some opinions online indicating that psycopg2 does not
  have prepared statement support (e.g. -
 http://www.cerias.purdue.edu/site/blog/post/beware_sql_injections_due...)

 the comment at the bottom of that post ultimately references a psycopg2
 message from 2007 so you'd need to ask the psycopg2 folks for updated
 information.  However psycopg2 can do an executemany with great
 efficiency as it is using methodologies for which you'd have to ask
 them,
 so if they don't use PG's actual prepared mechanism, its probably
 unnecessary.  psycopg2 is an extremely mature and high performing
 product.

  - can I plug another engine into sqlalchemy that does?

 there's the pg8000 engine which may or may not do this.  But its written
 in pure python, is not as fast as psycopg2, and is very new and not
 widely
 used since its author doesn't seem to promote it very much (but it is a
 very well written library).



  If I can't do any of the above and just need to prepare the statements
  manually, is there at least a method in sqlalchemy to properly quote
  my data before sending it to postgres?

 Despite some of the fud-like links mentioned on that blog, SQLAlchemy,
 as
 it says on the website since the day we launched 5 years ago, always
 uses
 bind parameters, in all cases, for all literal values, everywhere.  We
 do
 not and have never quoted anything within SQLA as that is left up to
 the
 services provided by the DBAPI.   DBAPI does not have prepared
 statement
 API.  It has executemany(), for which the underlying implementation
 may
 or may not use prepared statements + server-level bind processing as an
 implementation detail.  Psycopg2 handles the quoting in this case.
 cx_oracle, OTOH, uses Oracle's native data binding facilities provided
 by
 OCI.  DBAPI abstracts this detail away.





  Thanks,
  Phil




Re: [sqlalchemy] Table as dictionary?

2010-01-15 Thread Kevin Horn
On Fri, Jan 15, 2010 at 11:28 AM, Nelson nelsonp...@comcast.net wrote:

 Hello SQLAlchemy experts,

 I'd like to view the contents of a table object as a dictionary.
 Example:

 s = Table('sparrow', Column('type', String(50)) , Column('weight',
 Integer), ... etc)
 s.type = 'African'
 s.weight = 32

 Then I want to see / get a dictionary:

 {'type': 'African', 'weight': 32, ... etc}

 Easy way to do that I'm missing?

 Thanks



You could probably set up a mapper, and then make your mapped object behave
like a dictionary by using __getitem__(), __setitem__(), etc.

I don't know enough about SQLAlchemy internals to know whether that would
interfere with anything SQLAlchemy itself is doing though.

Kevin Horn



Re: [sqlalchemy] Table as dictionary?

2010-01-15 Thread Antoine Pitrou
On Friday, 15 January 2010 at 09:28 -0800, Nelson wrote:
 Hello SQLAlchemy experts,
 
 I'd like to view the contents of a table object as a dictionary.
 Example:
 
 s = Table('sparrow', Column('type', String(50)) , Column('weight',
 Integer), ... etc)
 s.type = 'African'
 s.weight = 32
 
 Then I want to see / get a dictionary:
 
  {'type': 'African', 'weight': 32, ... etc}
 
 Easy way to do that I'm missing?

s.__dict__ ?
You'll have to filter out any private attributes set by SQLAlchemy.
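
For instance (a small sketch; adjust the filter if you map
underscore-prefixed names yourself):

    def as_dict(obj):
        # drop SQLAlchemy's instrumentation attributes, which start with '_'
        return dict((k, v) for k, v in vars(obj).items()
                    if not k.startswith('_'))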






[sqlalchemy] Mixing matching connectors and dialects in 0.6

2010-01-15 Thread Bo Shi
Hi All,

I'm attempting to get rudimentary support for a Vertica deployment
using an ODBC connector.  According to their docs, their dialect is
mostly compatible with Oracle and SQLServer dialects.  create_engine()
using 'mssql+pyodbc' seems to work but upon attempting to execute a
simple select statement, I get a programming error indicating the
following failed to run:

'SELECT user_name() as user_name;'


So it seems the dialect is getting some additional state under the
hood.  lib/sqlalchemy/dialects/mssql/pyodbc.py appears to be hooking
PyODBCConnector and MSDialect together.  Is it possible to specify
another dialect here?  If so, is there any documentation on how to do
so?




[sqlalchemy] Re: Table as dictionary?

2010-01-15 Thread Nelson

On Jan 15, 11:15 am, Antoine Pitrou solip...@pitrou.net wrote:
 On Friday, 15 January 2010 at 09:28 -0800, Nelson wrote:

  Hello SQLAlchemy experts,

  I'd like to view the contents of a table object as a dictionary.

 s.__dict__ ?
 You'll have to filter out any private attributes set by SQLAlchemy.

Thanks, that did work. Filtering out the private attributes is
unfortunate, but can be done of course...

Actually, what I'm really trying to do here is to use the ORM to
UPDATE this table, and am playing games to get around this
confounding issue: if I create my table as above and try to merge it into
an ORM session, it will do a SELECT on that row. I don't want the
SELECT; I'm going to change every column of data. The merge has a
dont_load option, but that does not seem to work.

By doing the above, what I was trying to do is build my table using that
style of code and then create an Update() object to execute in the
session, thus skipping the unnecessary SELECT. Getting the dict allows
me to pass that into update.values().
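
Roughly, the intended pattern looks like this (a sketch; the table, key
column and variable names are assumptions):

    values = dict((k, v) for k, v in vars(s).items()
                  if not k.startswith('_'))
    stmt = (sparrow_table.update()
            .where(sparrow_table.c.id == some_id)
            .values(**values))
    session.execute(stmt)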

Perhaps I'm even farther off the mark than it seemed earlier?!

- Nelson




[sqlalchemy] how to do mysql insert delayed in sqlalchemy

2010-01-15 Thread Stephen
I didn't see anything in the docs or in this group so please forgive
me if this has been asked already.

If I'm using sqlalchemy with an underlying mysql db, how can I achieve
INSERT DELAYED?

I saw this changeset on the wiki:
http://www.sqlalchemy.org/trac/changeset/4236

But I don't understand how to take advantage of this through the
sqlalchemy api. Is the only way to get this through:
scoped_session(..).connection(...).execute("sql command")

Thanks,
Stephen




Re: [sqlalchemy] Mixing matching connectors and dialects in 0.6

2010-01-15 Thread Michael Bayer
Bo Shi wrote:
 Hi All,

 I'm attempting to get rudimentary support for a Vertica deployment
 using an ODBC connector.  According to their docs, their dialect is
 mostly compatible with Oracle and SQLServer dialects.

That's funny because Oracle and SQL server are utterly, totally different
from a SQL quirks perspective.   If I were to pick two dialects in SQLA
that were *most* different from each other and also non-standard, those
would be the two.


  create_engine()
 using 'mssql+pyodbc' seems to work but upon attempting to execute a
 simple select statement, I get a programming error indicating the
 following failed to run:

 'SELECT user_name() as user_name;'


 So it seems the dialect is getting some additional state under the
 hood.  lib/sqlalchemy/dialects/mssql/pyodbc.py appears to be hooking
 PyODBCConnector and MSDialect together.  Is it possible to specify
 another dialect here?  If so, is there any documentation on how to do
 so?

you'd want to make yourself a vertica dialect module that imports the
PyODBCConnector and uses it as a mixin.   I'd suggest copying one of the
existing dialects, and probably not the SQL server one unless you know
that vertica has a lot of the transact-SQL lineage that SQL server does
(the PG and SQLite dialects are the most barebones).    To run it, add a
setup.py which configures your library as a setuptools entry point, in
this case the name would be vertica+pyodbc:

from setuptools import setup

setup(name="SQLAVertica",
      description="...",
      entry_points={
          'sqlalchemy:plugins':
              ['vertica+pyodbc = mypackage.base:VerticaDialect']
      }
)


then using create_engine('vertica+pyodbc://user:p...@host/dbname') will load
in your dialect.
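
A rough skeleton of what mypackage/base.py might contain (names follow the
hypothetical entry point above; a real dialect still needs type maps,
version detection and so on):

    from sqlalchemy.connectors.pyodbc import PyODBCConnector
    from sqlalchemy.engine import default

    class VerticaDialect(PyODBCConnector, default.DefaultDialect):
        name = 'vertica'
        # override statement compilation, type mappings, etc. here as
        # Vertica's behavior diverges from the defaults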




Re: [sqlalchemy] how to do mysql insert delayed in sqlalchemy

2010-01-15 Thread Michael Bayer
Stephen wrote:
 I didn't see anything in the docs or in this group so please forgive
 me if this has been asked already.

 If I'm using sqlalchemy with an underlying mysql db, how can I achieve
 INSERT DELAYED?

 I saw this changeset on the wiki:
 http://www.sqlalchemy.org/trac/changeset/4236

 But I don't understand how to take advantage of this through the
 sqlalchemy api. Is the only way to get this through:
 scoped_session(..).connection(...).execute(sql command)

myinsert = table.insert().prefix_with("DELAYED")

session.execute(myinsert)







Re: [sqlalchemy] Mixing matching connectors and dialects in 0.6

2010-01-15 Thread Bo Shi
 That's funny because Oracle and SQL server are utterly, totally different
 from a SQL quirks perspective.   If I were to pick two dialects in SQLA
 that were *most* different from each other and also non-standard, those
 would be the two.

I was a bit puzzled by this also (granted this was from some early
press release I dredged up* from google).  I'm still working through
their documentation and haven't run across any configuration that
might enable a compatibility mode yet.


* improved compatibility with Oracle and SQLServer SQL dialects
  
http://www.vertica.com/company/news_and_events/Vertica-Analytics-Database-2.5-The-Most-Cost-Effective-Way-To-Modernize-Data-Warehousing

  create_engine()
 using 'mssql+pyodbc' seems to work but upon attempting to execute a
 simple select statement, I get a programming error indicating the
 following failed to run:

     'SELECT user_name() as user_name;'


 So it seems the dialect is getting some additional state under the
 hood.  lib/sqlalchemy/dialects/mssql/pyodbc.py appears to be hooking
 PyODBCConnector and MSDialect together.  Is it possible to specify
 another dialect here?  If so, is there any documentation on how to do
 so?

 you'd want to make yourself a vertica dialect module that imports the
 PyODBCConnector and uses it as a mixin.   I'd suggest copying one of the
 existing dialects, and probably not the SQL server one unless you know
 that vertica has a lot of the transact-SQL lineage that SQL server does
 (the PG and SQLite dialects are the most barebones).    To run it, add a
 setup.py which configures your library as a setuptools entry point, in
 this case the name would be vertica+pyodbc:

 from setuptools import setup

 setup(name=SQLAVertica,
      description=...,
      entry_points={
         'sqlalchemy:plugins':
            ['vertica+pyodbc = mypackage.base:VerticaDialect']
      }


 then using create_engine('vertica+pyodbc://user:p...@host/dbname') will load
 in your dialect.





Re: [sqlalchemy] how to do mysql insert delayed in sqlalchemy

2010-01-15 Thread Stephen Mullins
Cool thank you. I take it that means there's no way to do this if I'm just
using a Base model and a session.

I had been just doing:
session.add(myobj)
session.commit()

where myobj is an instance of a model class that inherits from Base, I
hadn't been creating Table objects directly.

On Fri, Jan 15, 2010 at 11:38 AM, Michael Bayer mike...@zzzcomputing.comwrote:

 Stephen wrote:
  I didn't see anything in the docs or in this group so please forgive
  me if this has been asked already.
 
  If I'm using sqlalchemy with an underlying mysql db, how can I achieve
  INSERT DELAYED?
 
  I saw this changeset on the wiki:
  http://www.sqlalchemy.org/trac/changeset/4236
 
  But I don't understand how to take advantage of this through the
  sqlalchemy api. Is the only way to get this through:
  scoped_session(..).connection(...).execute(sql command)

  myinsert = table.insert().prefix_with("DELAYED")

 session.execute(myinsert)







Re: [sqlalchemy] how to do mysql insert delayed in sqlalchemy

2010-01-15 Thread Stephen Mullins
Oh ok that makes sense and I can easily use this. Thank you!

On Fri, Jan 15, 2010 at 2:03 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 Stephen Mullins wrote:
  Cool thank you. I take it that means there's no way to do this if I'm
 just
  using a Base model and a session.
 
  I had been just doing:
  session.add(myobj)
  session.commit()
 
  where myobj is an instance of a model class that inherits from Base, I
  hadn't been creating Table objects directly.

 The mechanics here are that the Table object, whether you create it
 yourself or declarative does, is used by the Mapper to generate an insert()
 construct during a flush.

 There's no hook to receive and modify this insert() construct within the
 mapper's procedure, but if you wanted this Table to use that syntax for
 all INSERT statements, a recipe I just gave someone on this list a couple
 of weeks ago is at

 http://groups.google.com/group/sqlalchemy/browse_thread/thread/a7e05537ae504d61/bf2cc95ac2ae5cd1?lnk=gstq=prefix_with#bf2cc95ac2ae5cd1

 When using declarative, the attributes on the Table are still easy to
 specify (via __table_args__) and modify (via cls.__table__) so that
 doesn't get in the way here.
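
 So with declarative, a minimal sketch (MyModel and its columns are
 hypothetical, not taken from the linked recipe) would be:

    ins = MyModel.__table__.insert().prefix_with("DELAYED")
    session.execute(ins, {"name": "example"})
    session.commit()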


 
  On Fri, Jan 15, 2010 at 11:38 AM, Michael Bayer
  mike...@zzzcomputing.comwrote:
 
  Stephen wrote:
   I didn't see anything in the docs or in this group so please forgive
   me if this has been asked already.
  
   If I'm using sqlalchemy with an underlying mysql db, how can I achieve
   INSERT DELAYED?
  
   I saw this changeset on the wiki:
   http://www.sqlalchemy.org/trac/changeset/4236
  
   But I don't understand how to take advantage of this through the
   sqlalchemy api. Is the only way to get this through:
   scoped_session(..).connection(...).execute(sql command)
 
   myinsert = table.insert().prefix_with("DELAYED")
 
  session.execute(myinsert)
 
 
 
 







[sqlalchemy] expire_on_commit=False running many load balanced instances of app

2010-01-15 Thread Ergo
I wanted to confirm that if we run multiple instances of a web
application, then sessions in those applications are not aware of
commits that are issued by other instances' sessions, right?

So expire_on_commit=True does not ensure coherency of data and just
adds overhead, am I correct?




Re: [sqlalchemy] expire_on_commit=False running many load balanced instances of app

2010-01-15 Thread Michael Bayer

On Jan 15, 2010, at 9:16 PM, Ergo wrote:

 I wanted to conftrm that if we run multiple instances of web
 application. then sessions in those applications are not aware of
 commits that are issued by other instances sessions, right?
 
 So expire_on_commit=True does not ensure coherency of data and just
 adds overhead, am i correct ?

Assuming these multiple instances are all talking to the same database, then 
they certainly are aware of commits, not in the sense that they receive an 
event, but when process A commits data, process B will then see that data as 
it enters a new transaction subsequent to that of A.

expire_on_commit is essential to this process for an application that calls 
commit() on a single session and then continues to use that same session without 
closing or otherwise expiring it, as it allows that session to re-fetch data 
from the database which may have been changed subsequent to its commit.
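
For reference, the flag lives on the session factory (a generic sketch, not
the poster's configuration):

    from sqlalchemy.orm import sessionmaker

    # default is True: attributes are expired at commit() so the next access
    # re-fetches current rows, picking up data committed by other processes
    Session = sessionmaker(bind=engine, expire_on_commit=True)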




