[sqlalchemy] Modeling a Tree-looking structure in SqlAlchemy.

2010-12-13 Thread Hector Blanco
Hello everyone!

I was wondering which is the best way to model a tree-looking
structure with SqlAlchemy.

Right now, I have these two classes:


#!/usr/bin/python2.6
class MyNode(object):
    def __init__(self):
        self.items = list()

    def addItem(self, item):
        if isinstance(item, MyNode) or isinstance(item, MyLeaf):
            self.items.append(item)

    def __str__(self):
        retval = str()
        for item in self.items:
            if isinstance(item, MyNode):
                retval = retval + "\nNode: " + str(item) + "\n"
            else:
                retval = "\nLeaf: " + str(item) + retval
        return retval

class MyLeaf(object):
    def __init__(self):
        self.content = "I'm a leaf"
    def __str__(self):
        return self.content


As you can see, a node can contain (in its .items list) other nodes,
other leaves, or a mix of nodes and leaves.

When I model that using SqlAlchemy, both MyNode and MyLeaf will be
stored in their own tables and will have a unique id field in the
database.

I was wondering whether the best thing to do is to use two relationships in
the MyNode class, or to try to do it with one (closer to how it is
now).

If I use two relationships, the first one (which I could call
.otherNodes, for instance) would give me the other instances of
MyNode, and the second one (.leaves, for instance) would give me the
instances of MyLeaf.

And well... I really don't know if it's doable to put both of them
together in a single relationship.
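
Roughly, the two-relationship version I have in mind would look like this
(an untested sketch using classical mappers; the table names and the split
of .items into two separate collections are just placeholders):

from sqlalchemy import Table, Column, Integer, String, ForeignKey, MetaData
from sqlalchemy.orm import mapper, relationship, backref

metadata = MetaData()

nodes = Table("nodes", metadata,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("nodes.id")))

leaves = Table("leaves", metadata,
    Column("id", Integer, primary_key=True),
    Column("node_id", Integer, ForeignKey("nodes.id")),
    Column("content", String(255)))

mapper(MyNode, nodes, properties={
    # child nodes of this node (self-referential adjacency list)
    "otherNodes": relationship(MyNode,
        backref=backref("parent", remote_side=[nodes.c.id])),
    # leaves attached directly to this node
    "leaves": relationship(MyLeaf, backref="node"),
})
mapper(MyLeaf, leaves)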

I know I'm going to need intermediate tables to model this structure,
but I don't know if it's doable (or a good idea) to make only one table
that might look like this:

nodes_towards_content = Table(
    "nodes_towards_content",
    metadata,
    Column("node_id", Integer, ForeignKey("nodes.id")),
    Column("other_node_id", Integer, ForeignKey("nodes.id")),
    Column("other_leaf_id", Integer, ForeignKey("leaves.id"))
)

So if I insert a node, the column nodes_towards_content.c.other_node_id
would take the id of that inserted node while
nodes_towards_content.c.other_leaf_id would be null, and if I insert a
leaf, the process would be the opposite.

If I use a single table, I am also a bit concerned about how to retrieve
the objects (what to tell the query, I mean). So far I have always had
relationships that returned one type of object (or objects that inherited
from a common base), but this one would be returning two totally
different objects... So if I want to retrieve all the items of a
MyNode instance, I don't know what to put in the relationship:

class MyNode:
    [ ... ]
    def getAllNodes(self):
        items = session.query(?).all()  # What to put in "?"... object, maybe?

Thank you in advance!




[sqlalchemy] Re: How to apply compiler extensions to implement ROW_NUMBER() OVER support for oracle

2010-12-13 Thread Ralph Heinkel
Hi Michael,

this drop-in is great, it does exactly what I need.  It just has one small
problem: the alias 'rownum' is not allowed in Oracle (presumably because it
collides with the built-in ROWNUM pseudocolumn), and it produces

DatabaseError: (DatabaseError) ORA-00923: FROM keyword not found where expected

Replacing 'rownum' with 'ora_rn' or something similar fixes this problem.

On Dec 9, 5:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:

 the recipe for built in ROW_NUMBER() for LIMIT/OFFSET should be a drop in and 
 is at:

 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowFunctionsByDef...

 applying to certain domain objects only would be a little more involved.   
 You'd probably want to place some kind of hint on the statement that the 
 custom  compiler picks up on, or perhaps it looks at the table being 
 selected to make this determination.

What would be the safest way to get access to the table/model for
which the query was created?
I found that element._froms[0] is a pointer to my table; is this a
good/stable/portable way to get hold of it? That way I could just
instrument my model (i.e. add the hint) to see whether I need the
ROW_NUMBER approach or not.

Ralph




[sqlalchemy] Inserting Inherited Entities doesn't insert into child table?

2010-12-13 Thread rzeigler
Hello,
I'm attempting to use the declarative joined-table inheritance setup
described in the documentation. My classes are as follows:

class Relatable(base.Base):
    PROFNOTE = 0

    __tablename__ = 'relatables'
    guid = Column(String(32), primary_key=True)
    rel_type = Column('rel_type', SmallInteger, nullable=False)
    __mapper_args__ = {'polymorphic_on': type}

    def __init__(self, id):
        self.guid = id

class ProfNote(relatable.Relatable):
    __tablename__ = 'profnotes'

    guid = Column('guid', String(32), ForeignKey('relatables.guid'),
                  nullable=False)
    author_id = Column(Integer, ForeignKey('credentials.user_id'),
                       nullable=False)
    title = Column(String(320), nullable=False)
    abstract = Column(Text, nullable=False)
    published = Column(Boolean, nullable=False)
    curriculum = Column(String(320), ForeignKey('curricula.name'),
                        nullable=False)

    __mapper_args__ = {'polymorphic_identity': relatable.Relatable.PROFNOTE}

    def __init__(self, guid, title, abstract):
        relatable.Relatable.__init__(self, guid)
        self.title = title
        self.abstract = abstract
        self.published = False

I'm trying to insert into the database using the following code:

profnote = ProfNote('...guid...', 'Fake Title', '')
session.add(profnote)
session.commit()

However, the insert only occurs into the relatables table; the profnotes
table remains empty. Why is the insert not occurring into the profnotes
table?

Regards,
Ryan




Re: [sqlalchemy] Inserting Inherited Entities doesn't insert into child table?

2010-12-13 Thread Michael Bayer
The profnotes table doesn't have a primary key.  SQLA doesn't emit an INSERT 
for any mapped table that doesn't have a PK.
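
A minimal sketch of that fix (not verbatim from the thread): give the child
table a primary key, which for joined-table inheritance is typically the same
column that is the foreign key up to the parent:

class ProfNote(relatable.Relatable):
    __tablename__ = 'profnotes'

    # primary key of the child table, doubling as the FK to the parent row
    guid = Column(String(32), ForeignKey('relatables.guid'), primary_key=True)

    # ... remaining columns and __mapper_args__ unchanged ...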

On Dec 13, 2010, at 1:47 PM, rzeigler wrote:

 Hello,
 I'm attempting to user the declarative joined-table inheritance setup
 described in the documentation. My classes are as follows
 
 class Relatable(base.Base):
PROFNOTE = 0
 
__tablename__='relatables'
guid = Column(String(32), primary_key=True)
rel_type = Column('rel_type', SmallInteger, nullable=False)
__mapper_args__ = {'polymorphic_on': type}
 
def __init__(self, id):
self.guid = id
 
 class ProfNote(relatable.Relatable):
__tablename__ = 'profnotes'
 
guid = Column('guid', String(32), ForeignKey('relatables.guid'),
 nullable=False)
author_id = Column(Integer, ForeignKey('credentials.user_id'),
 nullable=False)
title = Column(String(320), nullable=False)
abstract = Column(Text, nullable=False)
published = Column(Boolean,  nullable=False)
curriculum = Column(String(320), ForeignKey('curricula.name'),
 nullable=False)
 
__mapper_args__ = { 'polymorphic_identity' :
 relatable.Relatable.PROFNOTE }
 
def __init__(self, guid, title, abstract):
relatable.Relatable.__init__(self, guid)
self.title = title
self.abstract = abstract
self.published = False
 
 I'm trying to insert into the database using the following code:
 
 profnote = ProfNote('...guid...', 'Fake Title', '')
 session.add(profnote)
 session.commit()
 
 However, insert only occurs into the relatables table. The profnotes
 table remains empty. Why is the insert not occurring into the
 profnotes table?
 
 Regards,
 Ryan
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Modeling a Tree-looking structure in SqlAlchemy.

2010-12-13 Thread Russell Warren
 I was wondering which is the best way to model a tree-looking
 structure with SqlAlchemy.

I've spent a lot of time dealing with different types of hierarchies
in SQLAlchemy lately...

One way is to use the nested sets model.  This does a decent job at
modelling hierarchies, at least as long as you don't have the same
leaf in the tree at multiple locations.

Some links describing nested sets:
http://dev.mysql.com/tech-resources/articles/hierarchical-data.html
http://en.wikipedia.org/wiki/Nested_set_model

And here is a decent example of how to do this in sqlalchemy:
http://www.sqlalchemy.org/trac/browser/examples/adjacency_list/adjacency_list.py

The 'classic' way is the 'adjacency list' model.  Here is an
sqlalchemy example:
http://www.sqlalchemy.org/trac/browser/examples/adjacency_list/adjacency_list.py

You can also look up another pattern that seems to get labelled as
'materialized path'.  No clean sqlalchemy example I know of, but the
concept is easy:
http://www.google.ca/search?q=(hierarchical+OR+hierarchy)+materialized+path

Good luck!
Russ




[sqlalchemy] Re: Modeling a Tree-looking structure in SqlAlchemy.

2010-12-13 Thread Russell Warren
Sorry, I just saw I messed up the nested sets SQLA example link.  Here
is the right one:
http://www.sqlalchemy.org/trac/browser/examples/nested_sets/nested_sets.py




[sqlalchemy] Transaction isolation level support?

2010-12-13 Thread Russell Warren
I'm aware that create_engine() supports an 'isolation_level' argument
for setting the transaction isolation level.  However, searching the
online documentation for isolation_level only gets hits for the
PostgreSQL and SQLite dialects...

http://www.sqlalchemy.org/docs/search.html?q=isolation_level
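
For reference, this is the usage I mean (as far as I can tell it is only
accepted by those two dialects):

from sqlalchemy import create_engine

engine = create_engine('postgresql://localhost/test',
                       isolation_level='SERIALIZABLE')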

I've checked ahead to the 0.7 docs as well, but still only the two
hits.

Do other dialects support this, but just not indicate it in the
documentation?




Re: [sqlalchemy] Re: Modeling a Tree-looking structure in SqlAlchemy.

2010-12-13 Thread Laurent Rahuel
Hello,

You should also take a look at http://pypi.python.org/pypi/sqlamp/0.5.2, an 
implementation of Materialized Path for SQLAlchemy.

Regards,

Laurent

On Dec 13, 2010, at 11:30 PM, Russell Warren wrote:

 Sorry, I just saw I messed up the nested sets SQLA example link.  Here
 is the right one:
 http://www.sqlalchemy.org/trac/browser/examples/nested_sets/nested_sets.py
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: complex mapped collection

2010-12-13 Thread A.M.
On Sep 14, 2010, at 5:03 PM, A.M. wrote:

 Hello,
 
 I have a schema like this:
 
 Image
 *id
 *name
 
 Metadatakey
 *id
 *exifkey
 *name
 
 Metadata
 *id
 *image_id - Image
 *metadatakey_id - Metadatakey
 *value

Thanks to Michael Bayer, I was able to put together the image metadata model I 
posted about some months ago.

I added an association_proxy to make the db schema more transparent, which works 
well when reading (as per this example: 
http://stackoverflow.com/questions/780774/sqlalchemy-dictionary-of-tags), but 
when setting an attribute through the association proxy it seems like I would 
need access to the current session to determine whether or not the metadata 
key already exists (and to grab it if so).

Is it OK practice to use the session in the _metadataCreator method? Is there a 
better way to accomplish this? The last line of the source code below 
demonstrates what I wish to accomplish. Thanks.

-M

---

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import Integer, Unicode
from sqlalchemy.orm import relationship, scoped_session, backref
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.schema import Column, ForeignKey
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class Image(Base):
    __tablename__ = 'image'
    id = Column(Integer, primary_key=True)
    metadatadict = relationship('MetadataValue',
        collection_class=attribute_mapped_collection('key'),
        primaryjoin='Image.id==MetadataValue.image_id')

    def _metadataCreator(key, value):
        # --- get existing MetadataName for this key here, but how?
        return MetadataValue(namekey=key, value=value)

    metadatas = association_proxy('metadatadict', 'value',
                                  creator=_metadataCreator)

class MetadataName(Base):
    __tablename__ = 'metadataname'
    id = Column(Integer, primary_key=True)
    namekey = Column(Unicode, nullable=False, unique=True)

class MetadataValue(Base):
    __tablename__ = 'metadatavalue'
    id = Column(Integer, primary_key=True)
    image_id = Column(Integer, ForeignKey(Image.id), nullable=False)
    image = relationship(Image,
        primaryjoin='MetadataValue.image_id==Image.id')
    metadataname_id = Column(Integer, ForeignKey(MetadataName.id),
                             nullable=False)
    metadataname = relationship(MetadataName,
        primaryjoin='MetadataValue.metadataname_id==MetadataName.id')
    value = Column(Unicode, nullable=False)

    @property
    def key(self):
        return self.metadataname.namekey

engine = create_engine('postgresql://localhost/test', echo=True)
session = scoped_session(sessionmaker(autocommit=False, bind=engine))

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

metadataname1 = MetadataName(namekey=u'MIMEType')
metadataname2 = MetadataName(namekey=u'PixelWidth')
img1 = Image()
metadatavalue1 = MetadataValue(image=img1, value=u'image/png',
                               metadataname=metadataname1)

session.add_all([metadataname1, img1, metadatavalue1, metadataname2])

session.flush()

# BEGIN TEST AREA
print img1.metadatas['MIMEType']
img1.metadatas['PixelWidth'] = u'300'
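
For the record, the session-using version of the creator I have in mind would
look roughly like this (a sketch only, relying on the module-level scoped
session above; this is exactly the part I'm unsure is good practice):

    def _metadataCreator(key, value):
        # look up an existing MetadataName by key via the scoped session,
        # creating one on the fly if it doesn't exist yet
        name = session.query(MetadataName).filter_by(namekey=key).first()
        if name is None:
            name = MetadataName(namekey=key)
            session.add(name)
        return MetadataValue(metadataname=name, value=value)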





[sqlalchemy] SQLAlchemy / Memcache layer

2010-12-13 Thread Vince Ng
Hi,


I've been trying to add an abstraction layer of Memcache on top of the
current version of SQLAlchemy, but I've been running into a few different
issues and wanted to get a few thoughts on the best approach to loading
objects.


I've studied the examples at:

http://www.sqlalchemy.org/trac/browser/examples/beaker_caching

My issue with these methods is that they cache simple queries and relationships,
but in a fairly simple way that doesn't account for invalidation.  They also
seem to require more explicit query calls, versus the sophisticated
lazy-loading relationship mapping of SQLAlchemy.


My goal is to add Memcache at the object layer based on primary key (and to
control caching of specific object types, versus caching all queries
generated).  All ColumnProperty's of an object may be stored in cache, and
relationships and RelationshipProperty's are ignored for now.  The objects
that the relationships point to may be stored in cache on their primary
key.  If an object is updated/deleted, then the object in cache will be
invalidated.


Basic Example of What I'd Like to Accomplish:

# User has a property 'image' which has an 'image_id' foreign key relation
# to the Image object

# initial request
user = Session.query(User).get(1)
# checks memcache for user ID 1
# nothing in memcache, queries DB for user ID 1 (image_id for user 1 is 100)
# stores user 1 data in memcache
image = user.image
# checks memcache for image ID 100
# nothing in memcache, queries DB for image ID 100
# stores image 100 data in memcache

# separate subsequent request
user = Session.query(User).get(1)
# checks memcache for user ID 1, found it!
# populate User object with data from memcache
image = user.image
# checks memcache for image ID 100, found it!
# populate Image object with data from memcache
image.view_count = image.view_count + 1
Session.commit()
# invalidate image ID 100 in memcache


I suspect the best way to issue invalidate requests to Memcache will be to
create a MapperExtension and use before_update() and before_delete().


However, I'm having much more trouble figuring out where and how I should
store data in Memcache and when/how to load from it.  MapperExtensions only
offer hooks after data has been read from the DB.  I've looked into
subclassing the Query class and overriding the get() function (as well as a
few other functions), but the problem I run into is that while I can load the
regular column properties, lazy-loaded RelationshipProperties fail, mostly
with this error:

DetachedInstanceError: Parent instance <User at 0x7fc7503942d0> is not
bound to a Session; lazy load operation of attribute 'images' cannot
proceed


Any ideas or suggestions?


Thanks,

Vince




Re: [sqlalchemy] Re: How to apply compiler extensions to implement ROW_NUMBER() OVER support for oracle

2010-12-13 Thread Michael Bayer

On Dec 13, 2010, at 12:12 PM, Ralph Heinkel wrote:

 Hi Michael,
 
 this drop-in is great, it does exactly what I need.  It just has a
 small problem in that the variable 'rownum' is not allowed in oracle,
 it produces a
 DatabaseError: (DatabaseError) ORA-00923: FROM keyword not found where
 expected
 Replacing 'rownum' with 'ora_rn' or something similar fixes this
 problem.
 
 On Dec 9, 5:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 
 the recipe for built in ROW_NUMBER() for LIMIT/OFFSET should be a drop in 
 and is at:
 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowFunctionsByDef...
 
 applying to certain domain objects only would be a little more involved.   
 You'd probably want to place some kind of hint on the statement that the 
 custom  compiler picks up on, or perhaps it looks at the table being 
 selected to make this determination.
 
 What would be the safest way to get access to the table/model for
 which the query was created?
 I found that  element._froms[0] is a pointer to my table, is this a
 good/stable/portable way to get hold of it? This way I could just
 instrument my model (aka hint) so see whether I need the ROW_NUMBER
 approach or not.

There are different ways to locate tables in the selectable depending on what 
you want.   In this case it might be best for you to look at the 
_order_by_clause attribute to see which tables or selectables are being ordered 
upon, since those are the columns/tables that are impacted by the ROW_NUMBER 
operation.  select() itself has a public 'froms' member as well.   This member 
includes all FROM clauses, including those that might not render if the select 
is embedded within another one where correlation is taking place - if you 
want to filter more carefully than that, I'd emulate what 
compiler.py:SQLCompiler.visit_select() is doing.
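
For instance, something along these lines (an untested sketch, not part of the
recipe; the '_use_row_number' attribute is just an arbitrary marker you would
set on your own Table objects):

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Select

@compiles(Select, 'oracle')
def compile_select(element, compiler, **kw):
    # element.froms is the public collection of FROM clauses; check whether
    # any of them carries the marker
    hinted = [f for f in element.froms
              if getattr(f, '_use_row_number', False)]
    if hinted:
        # build and return the ROW_NUMBER() OVER (...) wrapping here,
        # as in the WindowFunctionsByDefault recipe
        pass
    return compiler.visit_select(element, **kw)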



 
 Ralph
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] SQLAlchemy / Memcache layer

2010-12-13 Thread Michael Bayer

On Dec 13, 2010, at 8:31 PM, Vince Ng wrote:

 Hi,
 
 I've been attempting to try to add an abstract layer of Memcache on top of 
 the current version of SQLAlchemy, but I've been running into a few different 
 issues and wanted to get a few thoughts on the best approach on loading 
 objects.
 
 I've studied the examples at:
 http://www.sqlalchemy.org/trac/browser/examples/beaker_caching
 My issue with these methods are they cache simple queries and relationships, 
 but in a fairly simple way and doesn't account for invalidation.  They also 
 seem to require more explicit query calls versus some of the sophisticated 
 lazy-loading property relationship mapping of SQLAlchemy.
 
 My goal is to add Memcache at the object layer based on primary key (and to 
 control caching of specific object types, versus caching all queries 
 generated).  All ColumnProperty's of an object may be stored in cache, and 
 relationships and RelationshipProperty's are ignored for now.  The objects 
 that the relationships point to may be stored in cache on their primary key.  
 If an object is updated/deleted, then the object in cache will be invalidated.
 
 Basic Example of What I'd Like to Accomplish:
   # User has property 'image' which has a 'image_id' foreign key relation 
 to Image object
 
   # initial request
   user = Session.query(User).get(1)
   # checks memcache for user ID 1
   # nothing in memcache, queries DB for user ID 1 (image_id for 
 user 1 is 100)
   # stores user 1 data in memcache
   image = user.image
   # checks memcache for image ID 1
   # nothing in memcache, queries DB for image ID 100
   # stores image 100 data in memcache
 
   # separate subsequent request
   user = Session.query(User).get(1)
   # checks memcache for user ID 1, found it!
   # populate User object with data from memcache
   image = user.image
   # checks memcache for image ID 100, found it!
   # populate Image object with data from memcache
   image.view_count = image.view_count + 1
   Session.commit()
   # invalidate image ID 100 in memcache
 
 I suspect the best way to issue invalidate requests to Memcache will be to 
 create a MapperExtension and use before_update() and before_delete().

I'd go with SessionExtension instead - you can gather up everything from the 
dirty list in one go and invalidate.
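
Roughly (an untested sketch, not from the original message; memcache_client
and cache_key_for are hypothetical helpers):

from sqlalchemy.orm.interfaces import SessionExtension

class InvalidatingExtension(SessionExtension):
    def after_flush(self, session, flush_context):
        # at this point session.dirty/.deleted still reflect the pre-flush
        # state, i.e. everything that was UPDATEd or DELETEd in this flush
        for obj in list(session.dirty) + list(session.deleted):
            memcache_client.delete(cache_key_for(obj))

# wired in via: sessionmaker(extension=InvalidatingExtension(), ...)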


 
 However, I'm having much more trouble figuring out where and how I should 
 store data in Memcache and when/how to load from it. 

if you're going for get(), subclass Query and override get() to check memcache 
first.   This is pretty much what LOLApps! talked about in their Pycon talk 
last year.   But this approach is not as comprehensive as that of the Beaker 
example, which places itself at the central point of __iter__() and can make 
decisions about all queries, including that it can cache results not just on 
primary key but on filter criterion too.
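
A bare-bones sketch of that get() override (not from the talk or the Beaker
example; memcache_client is a hypothetical helper, and _mapper_zero() is
SQLAlchemy-internal API):

from sqlalchemy.orm import Query

class CachingQuery(Query):
    def get(self, ident):
        mapper = self._mapper_zero()
        key = '%s:%s' % (mapper.class_.__name__, ident)
        cached = memcache_client.get(key)
        if cached is not None:
            # re-attach the detached cached object to this session
            return self.session.merge(cached, load=False)
        obj = super(CachingQuery, self).get(ident)
        if obj is not None:
            memcache_client.set(key, obj)
        return obj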

The loading of related many-to-ones like user.image uses part of the 
functionality of query.get(), but not the get() method itself, so again 
overriding __iter__() as stated in the example is the most focused place.  The 
Beaker example illustrates how to augment exactly the process of the 
many-to-one lazyload, using custom query options that are invoked at the point 
of load.

The RelationshipCache() represents an object that, when placed in the options 
collection of a Query, is invoked directly before the lazyload occurs.  It 
looks at the path that is to be loaded, of the form (mapper, attributename), 
such as (user_mapper, 'image'), then compares this path against its own list 
of paths to determine if the Query should be instructed to pull from cache.  
You can modify this to do something like 
user_mapper.get_property('image').direction is 
sqlalchemy.orm.interfaces.MANYTOONE to apply caching to all m2os.

Running the Beaker example for a few hours and using pdb to step through each 
line of code should make it clearer what's going on.


 MapperExtensions only offer hooks after data has been read from the DB.  I've 
 looked into subclassing the Query class and overriding the get() function (as 
 well as a few other functions), but the problem I run into is that I can load 
 the regular columns properties, but will run into issues with lazy-loaded 
 RelationshipProperties with mostly the error:
 DetachedInstanceError: Parent instance User at 0x7fc7503942d0 is not bound 
 to a Session; lazy load operation of attribute 'images' cannot proceed

You need your objects to be bound to Sessions when they load things - this is so 
they are aware of the database transaction they mean to participate in, as well 
as where they should be loading unloaded (or, in this case, cache-expired) 
data.  The Beaker example uses query.merge_result() for all loaded results to 
ensure that the objects returned participate in a Session.

[sqlalchemy] Re: Transaction isolation level support?

2010-12-13 Thread Russell Warren
 Do other dialects support this, but just not indicate it in the
 documentation?

On further investigation I see this old ticket:

http://www.sqlalchemy.org/trac/ticket/443

... which implies it is, in fact, only for PostgreSQL and Sqlite at
the moment.

I'm also quite interested in setting the isolation level on a per-
session/connection basis.  However, it seems like the isolation_level
argument to create_engine() sets the isolation level for all managed
connections.  I recognize that there may (at least) be some issues
with this depending on the connection pooling scheme in use, but is there
a way (even a hack) to do a one-time set of the isolation level to
SERIALIZABLE (for example) for a given session?

This thread makes it seem like per-session/connection isolation
setting is not as simple as it might seem at first...
http://groups.google.com/group/sqlalchemy-devel/browse_thread/thread/653b6b2f49a940b8

And this StackOverflow question makes it seem like it isn't easy to
set the isolation level per session...
http://bit.ly/eG5DUl
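
One hack I'm considering (PostgreSQL-specific, and I don't know yet how well
it plays with the connection pool) is to issue SET TRANSACTION myself at the
start of the session's transaction:

session = Session()
# must run before any other statement in the transaction
session.connection().execute(
    "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")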

Still digging...

Russ




[sqlalchemy] UserDefinedType for tuples

2010-12-13 Thread Enrico
I want to make a custom type and I'm not sure whether to use
UserDefinedType or TypeDecorator.

For a UserDefinedType it seems I must write bind_processor and
result_processor, while for a TypeDecorator it seems I must write
process_bind_param and process_result_value.

I think I need a UserDefinedType, as I'm combining two primitive types
rather than specialising one. I want a Vector2D(float, float) and an
ordered list of such vectors, and I want to pass these into the DB using
something as close as possible to Python, such as [(ax, ay), (bx, by),
(cx, cy)].

After reading some docs and posts I'm still unclear exactly what the
functions I have to write should do, as they often seem to do very
little. Really, I just want to insert tuples into columns.
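
Roughly what I'm picturing, for what it's worth (an untested sketch that just
round-trips the list of vectors through a text column):

from sqlalchemy.types import TypeDecorator, Text

class VectorList(TypeDecorator):
    """Stores an ordered list of (x, y) float pairs as text."""
    impl = Text

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return ';'.join('%r,%r' % (x, y) for x, y in value)

    def process_result_value(self, value, dialect):
        if value is None:
            return None
        return [tuple(float(part) for part in pair.split(','))
                for pair in value.split(';') if pair]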
