[sqlalchemy] Re: Python 2.6 hash behavior change

2008-03-03 Thread Patrick Hartling

On Mar 3, 2008, at 11:23 AM, Michael Bayer wrote:




On Mar 3, 2008, at 11:56 AM, Patrick Hartling wrote:



hash(Works())
hash(Works2())

# Raises TypeError with Python 2.6 because Fails is deemed unhashable.
hash(Fails())
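The class definitions were elided above; a minimal reconstruction (hypothetical, not the original code) that exhibits the same behavior under Python 3, which kept the rule the 2.6 alphas introduced:

```python
# Hypothetical reconstruction: defining __eq__ without __hash__ makes
# instances unhashable on Python 3 (and the 2.6 alphas under discussion),
# whereas 2.5 only applied this rule to __cmp__.

class Works(object):
    # no __eq__, so the default identity-based __hash__ survives
    pass

class Works2(object):
    def __eq__(self, other):
        return isinstance(other, Works2)
    def __hash__(self):
        # defining __hash__ alongside __eq__ keeps instances hashable
        return id(self)

class Fails(object):
    def __eq__(self, other):
        return isinstance(other, Fails)
    # no __hash__: the interpreter sets __hash__ = None for this class

hash(Works())
hash(Works2())

unhashable = False
try:
    hash(Fails())
except TypeError:
    unhashable = True
```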

Does anyone know of a workaround for this issue? So far,
sqlalchemy.schema.PrimaryKeyConstraint and sqlalchemy.schema.Column
are the two classes that have tripped me up. I have added __hash__()
methods to both, but I cannot vouch for the correctness of the
implementations.



hmm, subtle difference between the 2.5 docs:

If a class does not define a __cmp__() method it should not define a
__hash__() operation either; if it defines __cmp__() or __eq__() but
not __hash__(), its instances will not be usable as dictionary keys.
If a class defines mutable objects and implements a __cmp__() or
__eq__() method, it should not implement __hash__(), since the
dictionary implementation requires that a key's hash value is
immutable (if the object's hash value changes, it will be in the wrong
hash bucket).

and the 2.6 docs:

If a class does not define a __cmp__() or __eq__() method it should
not define a __hash__() operation either; if it defines __cmp__() or
__eq__() but not __hash__(), its instances will not be usable as
dictionary keys. If a class defines mutable objects and implements a
__cmp__() or __eq__() method, it should not implement __hash__(),
since the dictionary implementation requires that a key’s hash value
is immutable (if the object’s hash value changes, it will be in the
wrong hash bucket).

both claim that if we define __eq__() but not __hash__(), it won't be
usable as a dictionary key.  but in the case of 2.5 this seems to be
incorrect, or at least not enforced.  The only difference here is that
2.6 says if we don't define __cmp__() or __eq__(), then we shouldn't
define __hash__() either, whereas 2.5 only mentions __cmp__() in that
regard.


Subtle indeed.


We define __eq__() all over the place so that would be a lot of
__hash__() methods to add, all of which return id(self).  I wonder if
we shouldn't just make a util.Mixin called Hashable so that we can
centralize the idea.
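A sketch of what such a mixin might look like (the Hashable name comes from the message above; the Column subclass here is only illustrative, not SA's real class). One wrinkle worth noting: under the new rule, defining __eq__ nulls __hash__ on that class specifically, so a subclass has to restore the inherited hash explicitly:

```python
class Hashable(object):
    """Sketch of the proposed util mixin: hash by object identity."""
    def __hash__(self):
        return id(self)

class Column(Hashable):
    """Illustrative stand-in for a class that overloads ==."""
    def __eq__(self, other):
        return "binary expression"   # SA's == builds SQL, not a bool
    # Defining __eq__ sets __hash__ to None on this class (Python 3 /
    # the 2.6 alphas), so the mixin's hash must be restored explicitly:
    __hash__ = Hashable.__hash__

c = Column()
d = {c: "ok"}   # usable as a dictionary key again
```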



That sounds like a reasonable approach. I have not looked much at the  
SQLAlchemy internals before today, but if I follow your idea  
correctly, I could apply that solution wherever I run into problems  
during testing.


 -Patrick


--
Patrick L. Hartling
Senior Software Engineer, Priority 5
http://www.priority5.com/





[sqlalchemy] Re: Integrating the ORM with Trellis

2008-03-03 Thread Phillip J. Eby
At 04:04 PM 2/27/2008 -0500, Michael Bayer wrote:
do you have any interest in committing changes to the branch
yourself?  as long as the unit tests keep running, whatever you'd want
is most likely fine with me ... otherwise I will at least experiment
with doing away with __mro__ searching and possibly doing away with
pre_instrument_attribute.

Okay, I've run into an interesting hitch on the branch -- I think 
it's a test coverage issue.  The definition of 
'ClassState.is_instrumented' is different from that of 
_ClassStateAdapter, in a way that appears to *matter*.  That is, if 
you change ClassState.is_instrumented to work in roughly the same way 
as _ClassStateAdapter.is_instrumented, the standard tests 
fail.  Specifically, with this rather unhelpful error...  :)

Traceback (most recent call last):
  File "test/orm/attributes.py", line 397, in test_collectionclasses
    assert False
AssertionError

This would appear to indicate that there is some kind of hidden 
dependency between collections support, and the presence of SA 
descriptors on a class.

Further investigation reveals that the problem comes from 
register_attribute() checking to see if an attribute is already 
instrumented, and if so, returns immediately.  It would appear that 
this really wants to know if there is actually an instrumented 
descriptor, rather than whether one has merely been registered with 
the class state, per these comments::

# this currently only occurs if two primary mappers are made for the same class.
# TODO: possibly have InstrumentedAttribute check entity_name when searching for impl.
# raise an error if two attrs attached simultaneously otherwise

I'm pretty much stumped at this point.  Essentially, even though the 
tests don't cover this point for custom instrumentation, it means 
that this problem *will* occur with custom instrumentation, even 
without my changes.

Now, the *good* news is, the other 1190 tests all pass with my changes.  :)

I've attached my current diff against the user_defined_state 
branch.  Note that this bit is the part that makes the one test break:

     def is_instrumented(self, key, search=False):
         if search:
-            return hasattr(self.class_, key) and isinstance(getattr(self.class_, key), InstrumentedAttribute)
+            return key in self
         else:
-            return key in self.class_.__dict__ and isinstance(self.class_.__dict__[key], InstrumentedAttribute)
+            return key in self.local_attrs

In other words, it's only the change to ClassState.is_instrumented 
that breaks the test, showing that there's a meaningful difference 
here between what gets registered in the state, and what's actually 
on the class.  Which is going to be a problem for anybody rerouting 
the descriptors (the way the Trellis library will be).
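The gap being described can be illustrated with plain Python (names here are illustrative stand-ins, not SA's actual classes): an attribute can be registered in the bookkeeping state without a descriptor ever landing on the class itself, which is exactly what happens when something like Trellis reroutes the descriptors.

```python
class InstrumentedAttribute(object):
    """Stand-in for SA's descriptor class."""

class ClassState(object):
    """Toy sketch of the bookkeeping object, not SA's real code."""
    def __init__(self, class_):
        self.class_ = class_
        self.local_attrs = {}

    def register(self, key, attr, install=True):
        # registration always records the attribute in the state...
        self.local_attrs[key] = attr
        if install:
            # ...but a rerouting system (like Trellis) might never put
            # the descriptor on the class itself
            setattr(self.class_, key, attr)

    def is_instrumented_by_class_dict(self, key):
        # the check that happens to pass the test suite: inspect the class
        return isinstance(self.class_.__dict__.get(key), InstrumentedAttribute)

    def is_instrumented_by_state(self, key):
        # the check that breaks it: inspect the registry
        return key in self.local_attrs

class Foo(object):
    pass

state = ClassState(Foo)
state.register("a", InstrumentedAttribute())
state.register("b", InstrumentedAttribute(), install=False)
# The two checks now disagree for "b" -- the hidden dependency.
```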

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



classstate_dict.patch
Description: Binary data


[sqlalchemy] Re: Integrating the ORM with Trellis

2008-03-03 Thread Michael Bayer

the bug is that unregister_attribute() is not working, which the test  
suite is using to remove and re-register new instrumentation:

class Foo(object):
    pass

attributes.register_attribute(Foo, 'collection', uselist=True, typecallable=set, useobject=True)
assert attributes.get_class_state(Foo).is_instrumented('collection')
attributes.unregister_attribute(Foo, 'collection')
assert not attributes.get_class_state(Foo).is_instrumented('collection')

seems like unregister_attribute() is still doing the old thing of just  
del class.attr, so we'd just need to stick a hook similar to  
instrument_attribute for this on ClassState which will take over the  
job.
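The suggested fix might look roughly like this (a toy sketch with hypothetical method bodies, not SA's actual code): give ClassState a counterpart hook so removal goes through the state object rather than a bare delattr.

```python
class ClassState(object):
    """Toy sketch of the proposed hook pair."""
    def __init__(self, class_):
        self.class_ = class_
        self.local_attrs = {}

    def instrument_attribute(self, key, attr):
        self.local_attrs[key] = attr
        setattr(self.class_, key, attr)

    def uninstrument_attribute(self, key):
        # the counterpart hook: instead of unregister_attribute() doing
        # "del class.attr" itself, the state keeps its registry and the
        # class in sync, so rerouted instrumentation can override it
        del self.local_attrs[key]
        delattr(self.class_, key)

class Foo(object):
    pass

state = ClassState(Foo)
state.instrument_attribute("collection", object())
state.uninstrument_attribute("collection")
```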

looks good !





[sqlalchemy] sqlachemy, mod_wsgi, and apache2

2008-03-03 Thread Lukasz Szybalski
What is the proper connection string for SQLAlchemy to support pool_recycle?
Is there a setting in mod_wsgi to support this?

sqlalchemy.dburi=mysql://username:[EMAIL PROTECTED]:3306/databasename ??

I found the following :

http://www.sqlalchemy.org/trac/wiki/FAQ#MySQLserverhasgoneawaypsycopg.InterfaceError:connectionalreadyclosed
MySQL server has gone away / psycopg.InterfaceError: connection
already closed

This is usually symptomatic of the database closing connections which
have been idle for some period of time. On MySQL, this defaults to
eight hours.

Use the pool_recycle setting on the create_engine() call, which is a
fixed number of seconds for which an underlying pooled connection will
be closed and then re-opened. Note that this recycling only occurs at
the point at which the connection is checked out from the pool;
meaning if you hold a connection checked out from the pool for a
duration greater than the timeout value, the timeout will not work.
(it should be noted that it is generally bad practice for a web
application to hold a single connection open globally; Connection
objects should be obtained via connect() and closed/removed from scope
as needed).

Note: With MySQLdb specifically, this error was also occurring due to
sloppy re-use of connections within SA's connection pool. As of SA
0.3.3 these issues have been fixed.



I'm trying to fix the following:
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/pool.py, line 128, in
get
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115] return
self.do_get()
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/pool.py, line 362, in
do_get
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115] return
self.create_connection()
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/pool.py, line 111, in
create_connection
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115] return
_ConnectionRecord(self)
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/pool.py, line 149, in
__init__
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]
self.connection = self.__connect()
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/pool.py, line 174, in
__connect
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]
connection = self.__pool._creator()
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115]   File
/usr/lib/python2.4/site-packages/sqlalchemy/engine/strategies.py,
line 57, in connect
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115] raise
exceptions.DBAPIError(Connection failed, e)
[Mon Mar 03 17:34:01 2008] [error] [client 69.210.248.115] DBAPIError:
(Connection failed) (OperationalError) (2002, Can't connect to local
MySQL server through socket '/var/run/mysqld/mysqld.sock' (2))


Ideas?
Lucas




[sqlalchemy] Re: sqlachemy, mod_wsgi, and apache2

2008-03-03 Thread Michael Bayer


On Mar 3, 2008, at 9:17 PM, Lukasz Szybalski wrote:

 What is the proper connection string for sqlalchemy to support  
 pool_recycle?
 Is there a setting in mod_Wsgi to support this?

it's a create_engine option, so it's a keyword argument - i.e.
create_engine(url, pool_recycle=3600).

I didn't think mod_wsgi had any kind of database integration, but your
slice of code looked like you had a config file going on.  If the
config system is using SA's engine_from_config() function, you'd add
sqlalchemy.pool_recycle to your config file.
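Both spellings can be sketched like this (the sqlite:// URL is a stand-in so the snippet runs without a MySQL server; in production it would be something like mysql://username:password@host:3306/dbname):

```python
from sqlalchemy import create_engine, engine_from_config

# pool_recycle as a direct create_engine keyword argument:
engine = create_engine("sqlite://", pool_recycle=3600)

# Under a config-driven setup that calls engine_from_config(), the same
# option takes the "sqlalchemy." prefix, as string values from the file:
engine2 = engine_from_config({
    "sqlalchemy.url": "sqlite://",
    "sqlalchemy.pool_recycle": "3600",
})
```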








[sqlalchemy] Re: sqlachemy, mod_wsgi, and apache2

2008-03-03 Thread Lukasz Szybalski

On Mon, Mar 3, 2008 at 8:26 PM, Michael Bayer [EMAIL PROTECTED] wrote:


  On Mar 3, 2008, at 9:17 PM, Lukasz Szybalski wrote:

   What is the proper connection string for sqlalchemy to support
   pool_recycle?
   Is there a setting in mod_Wsgi to support this?

  it's a create_engine option, so it's a keyword argument - i.e.
  create_engine(url, pool_recycle=3600).

  I didn't think mod_wsgi had any kind of database integration, but your
  slice of code looked like you had a config file going on.  If the
  config system is using SA's engine_from_config() function, you'd add
  sqlalchemy.pool_recycle to your config file.



I'll add
sqlalchemy.pool_recycle = 3600 to my prod.cfg (turbogears)

How is pool_recycle going to affect performance? It seems to me that
if 1000 people hit my website it will open 1000 connections to MySQL.
Assuming a limited number of connections, my website will shut down.
How does that change with pool_recycle?


What happens to SQLAlchemy's MySQL connections if I set:
server.thread_pool = 30

Lucas




[sqlalchemy] Re: sqlachemy, mod_wsgi, and apache2

2008-03-03 Thread Michael Bayer


On Mar 3, 2008, at 9:36 PM, Lukasz Szybalski wrote:

 I'll add
 sqlalchemy.pool_recycle = 3600 to my prod.cfg (turbogears)

 How is the pool recycle going to affect performance?

used properly, pool_recycle has a negligible effect on performance.
every X seconds (which you should configure to be, say, an hour), a
connection in the pool will be closed and a new one opened on the next
checkout.

 It seems to me
 that if 1000 people hit my website it will open  1000 connection to
 mysql.

only if your webserver could handle 1000 simultaneous threads (it
can't unless you have a hundred gigs of RAM)... but even in that case
the pool throttles at 15 connections by default, so simultaneous
requests over that number would have to wait to get a DB connection.
You'd want to set pool_size and max_overflow to adjust this.

 Assuming limiter number of connections my website will shut
 down.  How does that change with pool_recycle?

pool_recycle has nothing to do with how many simultaneous requests can  
be handled, it just controls the lifecycle of connections that are  
pooled.


 What happens to SQLAlchemy's MySQL connections if I set:
 server.thread_pool = 30

assuming that determines how many simultaneous threads handle
requests, the number of connections opened would most likely max out
at 30, if you set pool_size/max_overflow to allow it and you only used
one connection per request (i.e. didn't open a second one to write to
a different transaction or something).
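The throttling knobs described above can be sketched as follows (QueuePool is forced explicitly here so the in-memory sqlite URL, a stand-in for a real MySQL URL, exercises the same pool type MySQL would get by default):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    "sqlite://",
    poolclass=QueuePool,
    pool_size=30,       # connections held persistently in the pool
    max_overflow=10,    # temporary extras allowed under burst load
)
# At most pool_size + max_overflow (here 40) connections can be checked
# out at once; further checkouts wait until a connection is returned.
```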

- mike
