[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
from beaker import cache

from sqlalchemy import *
from sqlalchemy.orm import mapper, sessionmaker, scoped_session, relationship
from sqlalchemy.types import *

# from examples/beaker_caching
from eps.model import caching_query


### INIT

cache_manager = cache.CacheManager()
metadata = MetaData()
engine = create_engine('postgresql+psycopg2://LOGIN:passw...@127.0.0.1:5432/DBNAME',
                       echo=False)
Session = scoped_session(sessionmaker(autoflush=True,
                                      autocommit=False,
                                      query_cls=caching_query.query_callable(cache_manager),
                                      bind=engine))

cache_manager.regions['default'] = {
    'type': 'memory',
    'lock_dir': '/tmp',
}

### END INIT


### TABLES

# groups

groups_table = Table('groups', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(255), unique=True, nullable=False)
)

class Group(object):
    def __init__(self, name):
        self.name = name

mapper(Group, groups_table)

# users

users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('username', String(255), unique=True, nullable=False),
    Column('first_name', Unicode(255), nullable=False),
    Column('last_name', Unicode(255), nullable=False),
    Column('middle_name', Unicode(255), nullable=False)
)


# users_groups

users_groups_table = Table('users_groups', metadata,
    Column('user_id', Integer, ForeignKey('users.id')),
    Column('group_id', Integer, ForeignKey('groups.id'))
)

class User(object):
    def __init__(self, username, first_name, last_name, middle_name):
        self.username = username
        self.first_name = first_name
        self.last_name = last_name
        self.middle_name = middle_name

mapper(
    User,
    users_table,
    properties={
        'groups': relationship(Group, lazy=True,
                               secondary=users_groups_table, backref='users')
    }
)

cache_user_relationships = caching_query.RelationshipCache('default', 'by_id',
                                                           User.groups)

### END TABLES


### HELPERS

def get_user(username):
    return Session.query(User).\
        options(cache_user_relationships).\
        options(caching_query.FromCache('default', 'by_username')).\
        filter_by(username=username).one()

def print_groups(user):
    for g in user.groups:
        print g.name

### END HELPERS


### CREATE

metadata.create_all(engine)

### END CREATE


### POPULATE

u1 = User('sector119', u'A', u'B', u'C')
u1.groups = [Group('G1')]
u2 = User('sector120', u'D', u'E', u'F')
u2.groups = [Group('G2')]
Session.add_all([u1, u2])

Session.commit()

### END POPULATE


### TEST ...

u = get_user('sector119')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

u = get_user('sector120')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
% python2.6 sacache.py
1. sector119 groups:
G1
2. sector119 groups:
G1
1. sector120 groups:
G1
2. sector120 groups:
G1




[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
The same from dpaste.com: http://dpaste.com/289387/

It works with revision 6944:

% pwd
/home/eps/devel/src/sqlalchemy.6944

% python2.6 setup.py develop
running develop
...
Finished processing dependencies for SQLAlchemy==0.6.6dev

% hg log|head -n1
changeset:   6944:b29164cca942

% python2.6 sacache.py
1. sector119 groups:
G1
2. sector119 groups:
G1
1. sector120 groups:
G2
2. sector120 groups:
G2


Now I'll try to move from 6944 to 7195 (current) changeset by
changeset and see where it breaks.
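
Mercurial can drive that changeset-by-changeset search automatically with its
bisect command; a sketch (only the revision numbers are from this thread):

% hg bisect --reset
% hg bisect --good 6944
% hg bisect --bad 7195
% python2.6 setup.py develop && python2.6 sacache.py
% hg bisect --good    # or --bad, depending on the script's output; repeat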




[sqlalchemy] Adding order_by, offset and limit support to Delete

2010-12-22 Thread Tarek Ziadé
Hello,

I need to add order_by, limit and offset support to Delete objects for
MySQL. Here's what I've done so far, which works. See the long paste
below, I use delete() on my tables.

I would like to know if this is the right way to do things, or if I am
missing something. I am currently using 0.6.x

Thanks !

Tarek


from sqlalchemy.sql.expression import _generative, Delete, _clone, ClauseList
from sqlalchemy import util
from sqlalchemy.sql.compiler import SQLCompiler

class CustomCompiler(SQLCompiler):

    def visit_delete(self, delete_stmt):
        self.stack.append({'from': set([delete_stmt.table])})
        self.isdelete = True

        text = "DELETE FROM " + self.preparer.format_table(delete_stmt.table)

        if delete_stmt._returning:
            self.returning = delete_stmt._returning
            if self.returning_precedes_values:
                text += " " + self.returning_clause(delete_stmt,
                                                    delete_stmt._returning)

        if delete_stmt._whereclause is not None:
            text += " WHERE " + self.process(delete_stmt._whereclause)

        if len(delete_stmt._order_by_clause) > 0:
            text += " ORDER BY " + self.process(delete_stmt._order_by_clause)

        if delete_stmt._limit is not None or delete_stmt._offset is not None:
            text += self.limit_clause(delete_stmt)

        if self.returning and not self.returning_precedes_values:
            text += " " + self.returning_clause(delete_stmt,
                                                delete_stmt._returning)

        self.stack.pop(-1)

        return text


class DeleteOrderBy(Delete):

    def __init__(self, table, whereclause, bind=None, returning=None,
                 order_by=None, limit=None, offset=None, **kwargs):
        Delete.__init__(self, table, whereclause, bind, returning, **kwargs)
        self._order_by_clause = ClauseList(*util.to_list(order_by) or [])
        self._limit = limit
        self._offset = offset

    @_generative
    def order_by(self, *clauses):
        self.append_order_by(*clauses)

    def append_order_by(self, *clauses):
        if len(clauses) == 1 and clauses[0] is None:
            self._order_by_clause = ClauseList()
        else:
            if getattr(self, '_order_by_clause', None) is not None:
                clauses = list(self._order_by_clause) + list(clauses)
            self._order_by_clause = ClauseList(*clauses)

    @_generative
    def limit(self, limit):
        self._limit = limit

    @_generative
    def offset(self, offset):
        self._offset = offset

    def _copy_internals(self, clone=_clone):
        self._whereclause = clone(self._whereclause)
        for attr in ('_order_by_clause',):
            if getattr(self, attr) is not None:
                setattr(self, attr, clone(getattr(self, attr)))

    def get_children(self, column_collections=True, **kwargs):
        children = Delete.get_children(self, column_collections, **kwargs)
        return children + [self._order_by_clause]

    def _compiler(self, dialect, **kw):
        return CustomCompiler(dialect, self, **kw)


def delete(table, whereclause=None, **kwargs):
    return DeleteOrderBy(table, whereclause, **kwargs)



-- 
Tarek Ziadé | http://ziade.org




[sqlalchemy] Re: versioning when updating a relationship

2010-12-22 Thread Andronikos Nedos

 The versioning example appears to be cautious about determining if there was 
 a net change present.    The good news is that we don't actually need the 
 new value of customer_ref when we create the historical entry, we need the 
 current entry, which should be present in the object's dict.   Within 
 create_version(), if we see no net change on column-based attributes, we 
 assume nothing changed.   All we really need to do is check all the 
 relationship() based attributes as well - I'll commit the attached patch 
 which adds:

     if not obj_changed:
         # not changed, but we have relationships.  OK
         # check those too
         for prop in obj_mapper.iterate_properties:
             if isinstance(prop, RelationshipProperty) and \
                 attributes.get_history(obj, prop.key).has_changes():
                 obj_changed = True
                 break


Great, that will work well.
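
A self-contained sketch of what that get_history() check sees on a pending
relationship change (Parent and Child are hypothetical models, not from the
versioning example itself):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import attributes, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

p = Parent()
p.children.append(Child())        # a pending, un-flushed relationship change
hist = attributes.get_history(p, 'children')
```

The append shows up in hist.added before any flush, which is exactly the
signal the patch uses to decide that a version row is needed.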

Many thanks,
A.




Re: [sqlalchemy] Adding order_by, offset and limit support to Delete

2010-12-22 Thread Michael Bayer
The general idea is great, though you probably want to use the @compiles 
decorator to link the compile function to the expression element, since that's 
the API's point of extension - that would remove the need for the _compiler() 
call:

http://www.sqlalchemy.org/docs/core/compiler.html?highlight=compiles

You probably don't need the _copy_internals() call either, since 
insert/update/delete expressions aren't generally used with clause adaptation 
(for a description of what that's all about, see 
http://techspot.zzzeek.org/2008/01/23/expression-transformations/ ).
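
A sketch of that @compiles approach, written against a current SQLAlchemy;
LimitedDelete and its _my_limit attribute are hypothetical names, not part of
any API:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Delete

class LimitedDelete(Delete):
    inherit_cache = False  # carries custom state, so opt out of statement caching

    def __init__(self, table, limit=None):
        super(LimitedDelete, self).__init__(table)
        self._my_limit = limit

@compiles(LimitedDelete)
def _visit_limited_delete(element, compiler, **kw):
    # reuse the stock DELETE compilation, then append the LIMIT clause
    text = compiler.visit_delete(element, **kw)
    if element._my_limit is not None:
        text += " LIMIT %d" % element._my_limit
    return text

t = Table('t', MetaData(), Column('id', Integer))
stmt = LimitedDelete(t, limit=10).where(t.c.id > 7)
sql = str(stmt.compile(dialect=mysql.dialect()))
```

No _compiler() override is needed: the decorator registers the function as
the compilation handler for the class, and the stock compiler is still
available through the compiler argument.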



On Dec 22, 2010, at 8:51 AM, Tarek Ziadé wrote:

 Hello,
 
 I need to add order_by, limit and offset support to Delete objects for
 MySQL. Here's what I've done so far, which works.
 
 [code quoted in full above]



Re: [sqlalchemy] SQLAlchemy-Future

2010-12-22 Thread Michael Bayer
yeah we can go with sqlalchemy.contrib, just need guidance on the correct way 
to do it based on the somewhat conflicting links I posted.


On Dec 21, 2010, at 3:04 PM, Hong Minhee wrote:

 Thanks for your fine answer.
 
 Is there any plan to define a sqlalchemy.contrib namespace package in the 
 current 0.7 branch or a 0.6.x release? I'd like it to be settled.
 



Re: [sqlalchemy] Questions about sessions & connections

2010-12-22 Thread Michael Bayer

On Dec 21, 2010, at 5:03 AM, Franck Vonbau wrote:

 Dear all,
 
 I'm starting a web project based on Python, and I've decided to rely on 
 web.py and SQLAlchemy.
 My DB is for the moment a SQLite database, and I'm trying to figure out how 
 the framework deals with sessions / transactions. Your help would be greatly 
 appreciated !
 
 Here's how I declare my global Session :
 
 engine = create_engine('sqlite:///.', echo = True, 
 listeners=[MyListener()])
 Session = scoped_session(sessionmaker(bind=engine))
 
 Q1) The documentation recommends to use that kind of scoped_session for a web 
 project : why, actually ? What if I use regular sessions ? What would 
 happen in that case if different users of my site would get their own 
 sessions ?

the scoped_session is a regular session, it just places them into a thread-local 
registry.  This is so that distinct web requests running in separate threads each 
get a Session object dedicated to their local scope of work, while allowing a 
global Session object that can be accessed from everywhere without the need to 
explicitly pass it to all functions and methods.
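
A concrete illustration of that thread-local behavior, using a throwaway
in-memory SQLite engine (the names here are illustrative, not from the thread):

```python
import threading

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')
Session = scoped_session(sessionmaker(bind=engine))

this_threads_session = Session()   # the registry returns this thread's Session

seen = {}

def worker():
    # a different thread asking the same global registry gets its own Session
    seen['session'] = Session()
    Session.remove()

t = threading.Thread(target=worker)
t.start()
t.join()

other_threads_session = seen['session']
Session.remove()
```

Within one thread, repeated Session() calls return the same object; a second
thread gets a distinct one.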

 
 I've made a simple test : I declare 2 actions, 'add' and 'commit'.
 'add' deliberately doesn't commit the Session because I want to understand 
 the behavior of the framework.
 
 class Add:
 
     def GET(self, name=None):
 
         print Session.connection()
         print engine.pool.__dict__
         for conn in engine.pool._all_conns:
             print "[CONN] %s" % conn
 
         if name is not None:
             user = User(name)
             Session.add(user)
             # No commit here
 
 class Commit:
 
     def GET(self):
         Session.commit()
 
 
 I ran several tests : I triggered the 'Add' action from different  browsers, 
 adding 'Jack' then 'John'
 Sometimes 'Jack' appears in Session.new, sometimes not.
 Sometimes Session.new is empty, sometimes not.
 It seems to depend on the underlying connection of the session.

I would imagine that web.py is using a pool of threads.  As each request 
enters, a thread is chosen from the pool.  If the thread is the one that 
you happened to use previously for a particular operation, its previous 
state is still left around.  It's for this reason the documentation recommends 
closing out sessions at the end of each request, see 
http://www.sqlalchemy.org/docs/orm/session.html#lifespan-of-a-contextual-session
 .


 
 Q2) Why's that ? Shouldn't the session be somehow global ? Why doesn't it 
 encapsulate all my users ?

scoped_session provides a Session object that is local per thread.  You 
wouldn't want to use a single Session for all web requests, as the Session 
represents a single transaction, a collection of objects and state local to a 
particular operation - your web request.   The metaphor used by the tutorial is 
that having one session for all requests is like throwing a banquet and having 
all your guests eat from the same plate.

 
 Q3) Moreover, what happens to the connections I open when I add a user 
 without committing ?

if you leave the Session hanging open, it holds onto the connection/transaction 
resources it has established.

 Why are they recycled by the framework on the next call ? My listeners 
 indicates they're never checked-in (except on commit time)

you must explicitly close out the session using rollback(), commit(), or 
close(), or with scoped_session you can also use remove().   web.py should have 
some kind of hook so that this operation can be automated.
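
A sketch of that end-of-request cleanup; handle_request() is a hypothetical
stand-in for whatever after-request hook web.py exposes:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')
Session = scoped_session(sessionmaker(bind=engine))

def handle_request(work):
    try:
        work(Session)
        Session.commit()      # flush and commit whatever the request did
    except Exception:
        Session.rollback()    # release the transaction on failure
        raise
    finally:
        Session.remove()      # close the Session, discard thread-local state
```

After remove(), the thread's registry is empty, so a leftover session can
never leak its connection or pending state into the next request served by
the same thread.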

 
 Q4) When I run the commit action, it seems to randomly pick a connection 
 and flush the objects. Why's that ?

again there's nothing random going on in the Session, this is probably a 
function of your test which is choosing different sessions based on which 
thread web.py is picking.   Just a guess.


 
 Besides , I'm trying to restrict pool_size to 1 in the engine.
 
 Q5) It does not seem to change anything : more and more connections are open 
 when I run add multiple times. How could I actually restrict the pool size ?

the pool limits to 15 connections total by default, per engine.  If you are 
seeing more than 15 connections simultaneously, this would correspond to more 
than one Engine being created, or subprocesses being created perhaps.  I 
haven't used web.py so perhaps they have some guidelines on ORM integration.   
The configuration of the pool itself is controlled by the pool_size and 
max_overflow parameters, see 
http://www.sqlalchemy.org/docs/core/pooling.html#connection-pool-configuration .
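
The two knobs named above, made explicit; QueuePool is forced here because
SQLite engines otherwise pick a pool class of their own:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'sqlite://',
    poolclass=QueuePool,
    pool_size=1,      # connections held open persistently (default 5)
    max_overflow=0,   # extra connections allowed beyond pool_size (default 10)
)
```

With the defaults, 5 + 10 gives the 15-connection ceiling mentioned above;
setting both knobs small genuinely caps the pool for one engine.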

 
 Q6) What is the best strategy when using scoped_session in a web project ? 
 Forgetting a commit seems to have very dangerous consequences (delayed flush).

the guidelines at 
http://www.sqlalchemy.org/docs/orm/session.html#lifespan-of-a-contextual-session
 should be followed, so that the lifespan of the Session local to a thread is 
linked to that of the web request.


 
 Thanks very much for your 

Re: [sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread Michael Bayer
Thank you, please apply the change in r9327b3748997 to your 
_params_from_query() function.


On Dec 22, 2010, at 5:47 AM, sector119 wrote:

 from beaker import cache
 
 [the test script, quoted in full above]



Re: [sqlalchemy] SQLAlchemy-Future

2010-12-22 Thread Hong Minhee
In the case of Flask and Sphinx, they use namespace packages called 
flaskext and sphinxcontrib, respectively.

   - http://flask.pocoo.org/docs/extensiondev/
   - https://bitbucket.org/birkenfeld/sphinx-contrib

See the case of repoze.bfg also. It made repoze and repoze.bfg both 
namespace packages:

   - http://repoze.org/viewcvs/repoze.bfg/trunk/setup.py?rev=10644&view=markup
   - http://repoze.org/viewcvs/repoze.bfg/trunk/repoze/__init__.py?rev=1228&view=markup
   - http://repoze.org/viewcvs/repoze.bfg/trunk/repoze/bfg/__init__.py?rev=4409&view=markup




[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
Thank you, Michael!

On 22 Dec, 18:38, Michael Bayer mike...@zzzcomputing.com wrote:
 Thank you, please apply the change in r9327b3748997 to your 
 _params_from_query() function.

 On Dec 22, 2010, at 5:47 AM, sector119 wrote:
 
   [the test script, quoted in full above]



RE: [sqlalchemy] Questions about sessions & connections

2010-12-22 Thread Franck _

Thank you very much Michael ! It makes perfectly sense.

From: mike...@zzzcomputing.com
Subject: Re: [sqlalchemy] Questions about sessions & connections
Date: Wed, 22 Dec 2010 11:07:22 -0500
To: sqlalchemy@googlegroups.com




On Dec 21, 2010, at 5:03 AM, Franck Vonbau wrote:

[Michael's full reply, quoted above]