Re: [sqlalchemy] SQLAlchemy-Future

2010-12-30 Thread Ron DuPlain
On Wed, Dec 29, 2010 at 11:47 AM, Michael Bayer
mike...@zzzcomputing.com wrote:
 current questions:

 2. what are the pros/cons to the two approaches:  common package
 namespace named externally to the project, embedding a package namespace in
 the project itself.   Obviously the former seems better to me since the
 sqlalchemy. namespace remains unaffected.

AFAIK, you would have to forfeit sqlalchemy/__init__.py with the
latter, which would be very disruptive.  With namespace packages, you
have no guarantee about which package's __init__.py gets used.

Quoting http://packages.python.org/distribute/setuptools.html#namespace-packages

You must NOT include any other code and data in a namespace package’s
__init__.py. Even though it may appear to work during development, or
when projects are installed as .egg files, it will not work when the
projects are installed using “system” packaging tools – in such cases
the __init__.py files will not be installed, let alone executed.
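
For concreteness, here is roughly what the latter would require (a
sketch only; the add-on project name is made up):

# sqlalchemy/__init__.py -- in every project sharing the namespace,
# and nothing else may go in this file:
__import__('pkg_resources').declare_namespace(__name__)

# setup.py of a hypothetical add-on project:
from setuptools import setup, find_packages

setup(
    name='sqlalchemy-future',             # made-up project name
    version='0.1',
    packages=find_packages(),
    namespace_packages=['sqlalchemy'],    # this is what gives up control
                                          # of sqlalchemy/__init__.py
)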

-Ron


[sqlalchemy] Re: post_processors error during 0.3.10 to 0.4 migration (returning different object type based on db data)

2007-10-30 Thread Ron
 I can see... I wonder if someone has suggested that already.   That's a
 good way to provide this hook for you in just the right place (oh, you
 know what, I think Hibernate allows something like this).  Not sure what
 you mean by base_class unless you just mean the existing inherits
 argument to mapper().

base_class is probably not necessary.  I just stuck it in there in case
the mapper needed to know the base class of all the other
classes.

 However, didn't you say that your class
 attributes come from a different table?  In that case this is still
 not going to work... if you're relying upon eager loading of related,
 multiple sets of rows, that's not available until well after the
 polymorphic decisions have been made.  The most that polymorphic_func
 could get is the first row with the Thing's primary key in it.


That's a good point.  I suppose the function could use that primary
key to select stuff out of the Attributes table and then analyze those
rows to determine the proper class.  But that seems like an unhappy hack.
Why isn't there a hook into a post-populate part of the mapping?  Or
whatever the absolute last step of making an instance happens to
be.  Does such a thing exist and I just missed it?
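
Just to make the idea concrete, something like this is what I have in
mind (a sketch only: polymorphic_func isn't an existing SQLAlchemy hook,
and the connection/row arguments plus the table and DRIVERLIST names are
borrowed from the test case attached elsewhere in this thread):

def polymorphic_func(connection, row):
    # hypothetical hook: given the first row for a Thing, look up its
    # attributes and pick a class out of DRIVERLIST
    attrs = connection.execute(
        ATTR_TABLE.select(ATTR_TABLE.c.thing_name == row[THING_TABLE.c.name])
    ).fetchall()
    for attr in attrs:
        if attr['key'] == 'klass' and attr['value'] in DRIVERLIST:
            return DRIVERLIST[attr['value']]
    return Thing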

 As a temporary detour, I was curious if this little hack does it too
 - change your metaclass to not generate any new mappers.  Just have
 one Thing mapper, and instead of the metaclass making a new mapper
 when it sees Server (and all the other classes), do this:

 from sqlalchemy.orm import mapperlib

 thing_mapper = class_mapper(Thing)
 mapperlib.mapper_registry[mapperlib.ClassKey(Server, None)] = thing_mapper

 That will just make Server be linked to the same mapper as that of
 Thing.  Combine that with your populate_instance() extension as
 usual.   Just wondering if that works all the way through.  This
 wouldn't use any SA inheritance/polymorphism or anything like that.

OK, so that actually largely fixes the error.  I have a couple of other
errors I need to work out, probably related to sessions.

What exactly are those lines doing?  Getting the mapper of the Thing
class and assigning that same mapper to the Server class?  It just hit
me that that's probably the answer to the question I asked towards the
top of the post.

One thing I lose by doing it this way is having Server mapped to only its
subset of Things in the db.  I was using assign_mapper to accomplish
two different goals.  One, to populate instances of Things with their
attribute and row values.  Two, to add useful member functions
like .select() to the class so that I could more cleanly work on the
subset managed by the class.  I suppose I could write my own functions
to provide the latter (something like the sketch below), and that might
actually be the more correct way to expose such functionality.
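
For instance, a rough sketch of such a home-grown helper (not an
existing SA API; SESSION, THING_TABLE, ATTR_TABLE, and the 'klass'
attribute are borrowed from the test case attached earlier in this
thread):

from sqlalchemy import and_

class Server(Thing):
    someattrs = (('klass', 'server'),)

    @classmethod
    def select(cls):
        # return only the Things that carry this class's 'klass' attribute
        return SESSION.query(Thing).filter(
            and_(THING_TABLE.c.name == ATTR_TABLE.c.thing_name,
                 ATTR_TABLE.c.key == 'klass',
                 ATTR_TABLE.c.value == 'server')).all()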

I think this got me most of the way towards fixing my code.

I still like the idea of the polymorphic_on callable, but I didn't try
the patch.

Thanks,
-Ron


[sqlalchemy] post_processors error during 0.3.10 to 0.4 migration

2007-10-28 Thread Ron

I've been trying to migrate my code to 0.4 and I'm getting stuck on
this error.  I haven't been able to narrow down what property of my
schema or code triggers this, but I thought I'd ask the group in case
there is an easy answer.

Here Thing is a class that is mapped to a table with a single column.
It has a relation to an attribute table with (thing_id, key, value)
columns.  I have a subclass of Thing called Server that, instead of
mapping directly to the table, maps to a select on the thing table
where the thing has certain attributes from the attribute table.  If I
create a Server, add attributes to it, and then flush the data, I get no
errors.  But if I query for a Server to which I added
attributes, I get the attached error.  Adding attributes straight to
Things, or querying for Servers that I didn't add attributes to, does
not produce the error.
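
Roughly, that mapping looks something like this (a simplified sketch;
the table and session names match the test case attached elsewhere in
this thread):

from sqlalchemy import select, and_

server_select = select(
    [THING_TABLE],
    and_(ATTR_TABLE.c.thing_name == THING_TABLE.c.name,
         ATTR_TABLE.c.key == 'klass',
         ATTR_TABLE.c.value == 'server')).alias('server_things')

SESSION.mapper(Server, server_select)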

Not sure if any of that was clear, but it's a start.  Any ideas?

-Ron

return Thing.query.filter(Thing.c.name == name).one()
  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 605, in one
    ret = list(self[0:2])
  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 619, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 624, in _execute_and_instances
    return iter(self.instances(result, querycontext=querycontext))
  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/query.py", line 685, in instances
    context.attributes.get(('populating_mapper', instance), object_mapper(instance))._post_instance(context, instance)
  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/orm/mapper.py", line 1534, in _post_instance
    post_processors = selectcontext.attributes[('post_processors', self, None)]
KeyError: ('post_processors', <sqlalchemy.orm.mapper.Mapper object at 0xb78a18cc>, None)


[sqlalchemy] Re: post_processors error during 0.3.10 to 0.4 migration (returning different object type based on db data)

2007-10-28 Thread Ron

OK, I've figured out the problem, but I'm not really sure what the proper
solution is.

Basically, I have Thing objects that can have attributes associated
with them.  I have other classes that are subclasses of the Thing
object.  These classes can provide more specific functionality based
on the type of Thing it is.  Since Thing and its subclasses all share
the same table, I need a way to get the correct class based on what
type of Thing it is.  I do this by examining the Attributes associated
with a Thing.  The different subclasses of Thing match different
attributes.  In 0.3 I did this by calling an instance._setProperClass()
method in the populate_instance method of a MapperExtension.  This
seems to make 0.4 angry.  If I call the same _setProperClass() after I
get the object normally, everything seems to work fine.

I've attached a simplified version of what I do in my code to
illustrate the problem.


What I did was kind of a hack in 0.3, so I'm not that surprised that it
doesn't work in 0.4, but I'm not sure how else to achieve the
functionality I'm looking for.  Is there a better way to get
SQLAlchemy to return objects of different types based on the data they
happen to contain?

-Ron

#!/usr/bin/env python

from sqlalchemy import *

from sqlalchemy.ext.sessioncontext import SessionContext
from sqlalchemy.ext.assignmapper import assign_mapper

from sqlalchemy.orm import * #Mapper, MapperExtension
from sqlalchemy.orm.mapper import Mapper

#from clusto.sqlalchemyhelpers import ClustoMapperExtension

import sys
# session context


METADATA = MetaData()

SESSION = scoped_session(sessionmaker(autoflush=True,
transactional=True))

THING_TABLE = Table('things', METADATA,
                    Column('name', String(128), primary_key=True),
                    #Column('thingtype', String(128)),
                    mysql_engine='InnoDB'
                    )

ATTR_TABLE = Table('thing_attrs', METADATA,
                   Column('attr_id', Integer, primary_key=True),
                   Column('thing_name', String(128),
                          ForeignKey('things.name',
                                     ondelete="CASCADE",
                                     onupdate="CASCADE")),
                   Column('key', String(1024)),
                   Column('value', String),
                   mysql_engine='InnoDB'
                   )



class CustomMapperExtension(MapperExtension):

    def populate_instance(self, mapper, selectcontext, row, instance, **flags):

        Mapper.populate_instance(mapper, selectcontext, instance, row, **flags)

        ## Causes problems if run here!
        instance._setProperClass()
        return EXT_CONTINUE






class Attribute(object):
    """Attribute class holds key/value pair backed by DB"""

    def __init__(self, key, value, thing_name=None):
        self.key = key
        self.value = value

        if thing_name:
            self.thing_name = thing_name

    def __repr__(self):
        return "thingname: %s, keyname: %s, value: %s" % (self.thing_name,
                                                          self.key,
                                                          self.value)

    def delete(self):
        SESSION.delete(self)


SESSION.mapper(Attribute, ATTR_TABLE)


DRIVERLIST = {}

class Thing(object):
    """Anything"""

    someattrs = (('klass', 'server'),)

    def __init__(self, name, *args, **kwargs):

        self.name = name

        for attr in self.someattrs:
            self.addAttr(*attr)

    def _setProperClass(self):
        """Set the class for the proper object to the best suited driver"""

        if self.hasAttr('klass'):
            klass = self.getAttr('klass')

            self.__class__ = DRIVERLIST[klass]

    def getAttr(self, key, justone=True):
        """returns the first value of a given key.

        if justone is False then return all values for the given key.
        """

        attrlist = filter(lambda x: x.key == key, self._attrs)

        if not attrlist:
            raise KeyError(key)

        return justone and attrlist[0].value or [a.value for a in attrlist]

    def hasAttr(self, key, value=None):

        if value:
            attrlist = filter(lambda x: x.key == key and x.value == value,
                              self._attrs)
        else:
            attrlist = filter(lambda x: x.key == key, self._attrs)

        return attrlist and True or False

    def addAttr(self, key, value):
        """Add an attribute (key/value pair) to this Thing.

        Attribute keys can have multiple values.
        """
        self._attrs.append(Attribute(key, value))


SESSION.mapper(Thing, THING_TABLE,
               properties={'_attrs': relation(Attribute, lazy=False,
                                              cascade='all, delete-orphan'),
                           },
               extension=CustomMapperExtension())

DRIVERLIST['thing'] = Thing


class Server(Thing):
    someattrs = (('klass', 'server'),)


DRIVERLIST['server'] = Server


[sqlalchemy] Re: post_processors error during 0.3.10 to 0.4 migration (returning different object type based on db data)

2007-10-28 Thread Ron

So, the code I posted is a much simplified version of what I'm trying
to accomplish, used only to illustrate the error I was getting.  What I
actually want to do is select the appropriate class based on any
number of Attributes a Thing might have.  I have a metaclass that is
applied to Thing and all its subclasses.  This metaclass does the
actual call to mapper, creates the select query to map against for the
various subclasses, and builds a DRIVERLIST dictionary with data that
can be used by _setProperClass.  In other words, the type of a given
Thing is determined by its attributes at runtime, not when the Thing
is created.

I didn't run into any functional problems doing it this way in 0.3, so
I'm not sure what you mean by "wrong mapper" (I used assign_mapper, if
that makes any difference).  The reason I did the _setProperClass at
the end of the populate_instance function is that I wanted to make
use of the attrs that the mapper would populate.  That seemed like the
easiest way to accomplish the goal at the time.

I've read the Mapping Class Inheritance Hierarchies section in the
documentation and it looks like those approaches won't quite do what
I'm trying to accomplish.  Maybe if I explain my app architecture a
little more you can clarify the solution a bit (sorry, this ended up
being more verbose than I intended):

I have a datastore that consists of 3 tables:
 1. Thing table (just a primary-key name column)
 2. Attr table (key/value columns with an id and a foreign key to the Thing table)
 3. Thing-to-Thing relation table (Things can be 'connected' to each other; roughly sketched below)
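
For the third table, I mean something along these lines.  It isn't in
the test code I posted earlier, so the column names here are just
illustrative:

THING_THING_TABLE = Table('thing_things', METADATA,
                          Column('relation_id', Integer, primary_key=True),
                          Column('thing_name', String(128),
                                 ForeignKey('things.name')),
                          Column('connected_thing_name', String(128),
                                 ForeignKey('things.name')),
                          mysql_engine='InnoDB'
                          )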

The idea for that schema is to maximize the flexibility of what one
can store.  In this vein I created a Thing class.  This class has many
methods for managing attributes, connections between Things,
searching, matching, a clever __init__, __str__, __eq__, etc.  The
design is such that subclasses of Thing only need to set class
variables to achieve certain functionality.  For example, there is a
meta_attrs list that will pre-fill the attributes for an object, and
there is also a required_attrs var that will let you define required
arguments to __init__.  My goal was to make subclasses or 'drivers' as
simple as possible.

To expand on the Server example from my test code, say I had this
class:

class Server(Thing):
    meta_attrs = [('type', 'server')]

    def ssh(self):
        # start an ssh session to this server
        somemagic()

People use that class and do things like:

someserver.addAttr('manufacturer', 'sun')

adding lots of data to the db.  Then later someone decides that Sun
servers have some special functionality that should be exposed, say CD
ejection.  They create a new class:

class SunServer(Server):
    meta_attrs = [('manufacturer', 'sun')]

    def ejectCD(self):
        # eject the cd
        somemagic()
It should be that easy.  Now, some things I didn't mention earlier:
the meta_attrs of all parent classes also get applied to new
objects.  So any  s = SunServer()  will have both ('type', 'server') and
('manufacturer', 'sun') attributes.  Also, now that I have this new
SunServer class, any time I select something from the database that
matches all its meta_attrs it should return a SunServer object where
it used to return a regular Server object.

For example:

t1 = Thing.query.filter(Thing.c.name == 'someOldSunServer').one()
isinstance(t1, SunServer) == True

This is without updating the database, or making any other change
aside from adding that new SunServer class.  So, where does SQLAlchemy
fit into all this?  People developing drivers shouldn't ever have to
know about SA (they, of course, could make use of it if they want).
The Thing class has a __metaclass__ that takes care of all the SA
magic so every subclass of Thing is taken care of.  However, I'm not
sure how to get the polymorphic stuff in the SA mapper to match against
arbitrary attributes of classes (specifically those named in the
meta_attrs).  Should I not rely on SA to return any specific type at
all (just always return Thing) and make my own query/select functions
that call _setProperClass on their own?
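
In other words, something like this sketch (just to show what I mean;
Thing, SESSION, and THING_TABLE are the names from my test code):

def get_thing(name):
    t = SESSION.query(Thing).filter(THING_TABLE.c.name == name).one()
    t._setProperClass()   # swap __class__ based on the loaded attributes
    return t

# then get_thing('someOldSunServer') would come back as a SunServer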

I'm trying to take advantage of as much of the SA magic as possible,
but I'm unsure when I'm going beyond its scope, and some of the more
advanced topics are not documented in enough detail for me to fully
understand how to use them.

Thanks for the help,
-Ron



[sqlalchemy] best practices for catching sqlerrors

2007-09-24 Thread Ron

In my application I'd like db-related errors that are encountered when
using mapped classes to raise better exceptions that make more sense in
the context of my program.  Where is the best place to put this
stuff?

For example, say I have a table with a single primary_key column and I
have a class MyData mapped to that table.  If someone were to try to
make and flush a MyData(someDuplicateEntry) instance, they'd get an
ugly SQLAlchemy exception.  I'd rather they get, say, an
AlreadyExistsException() or something to that effect.  Now, I can wrap
each flush with a try/except block, or have the __init__ method of the
class do a check during instantiation, but I'm wondering if there is
some better way to do it.

Perhaps there is a way to map SqlErrorA to raise ExceptionA and
SqlErrorB to raise ExceptionB.
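
Something like this wrapper is the kind of thing I mean (a rough sketch:
the exception name and the duplicate-detection test are made up, and in
newer SQLAlchemy the low-level error classes live in sqlalchemy.exc):

from sqlalchemy import exceptions as sa_exc

class AlreadyExistsException(Exception):
    pass

def safe_flush(session):
    try:
        session.flush()
    except sa_exc.SQLError, e:
        # crude translation: duplicate-key failures become AlreadyExistsException
        if 'duplicate' in str(e).lower() or 'unique' in str(e).lower():
            raise AlreadyExistsException(str(e))
        raise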

Thanks,
Ron


[sqlalchemy] Re: dictionaries collection_class

2007-06-18 Thread Ron

OK, I applied that change to the version I'm using (0.3.8).  That
fixed it.

But I think I stumbled on another bug/inconsistency.

If I tried to str/repr obj.attr it failed.

I looked around the code and this fix seemed to work, but I'm not sure
if it fits in with how the class is intended to function.

In /sqlalchemy/ext/associationproxy.py

in the class _AssociationDict(object)

change

    def items(self):
        return [(k, self._get(self.col[k])) for k in self]

to

    def items(self):
        return [(k, self._get(self.col[k])) for k in self.keys()]

Does that make sense?  Or is the problem deeper in the code?  It seems
to go along with what you said earlier about how the iteration was
made to work like an InstrumentedList.

-Ron


On Jun 16, 3:07 pm, jason kirtland [EMAIL PROTECTED] wrote:
 Ron wrote:
  Now I'm having trouble with updating values in AttributeDict:

  so

  obj.attrs['key'] = 'somevalue'   (works)
  obj.attrs['key'] = 'newvalue'(second one doesn't work)

  I get this error:

File /usr/local/lib/python2.4/site-packages/SQLAlchemy-0.3.8-
  py2.4.egg/sqlalchemy/orm/mapper.py, line 679, in init
  raise e
  TypeError: lambda() takes exactly 2 arguments (3 given)

  Any more ideas?

  -Ron

 Fixed in 2739.  (a silly typo)

 -jek


[sqlalchemy] Re: dictionaries collection_class

2007-06-16 Thread Ron

Now I'm having trouble with updating values in AttributeDict:

so

obj.attrs['key'] = 'somevalue'   (works)
obj.attrs['key'] = 'newvalue'    (second one doesn't work)

I get this error:

  File "/usr/local/lib/python2.4/site-packages/SQLAlchemy-0.3.8-py2.4.egg/sqlalchemy/orm/mapper.py", line 679, in init
    raise e
TypeError: lambda() takes exactly 2 arguments (3 given)

Any more ideas?

-Ron


On Jun 12, 2:52 pm, jason kirtland [EMAIL PROTECTED] wrote:
 Ron wrote:

  Nevermind.  I see what I was doing.  In much of my code it worked
  fine, just like a dictionary.

  But in another part of my code I do:

  for i in self.attrs:
 ...blah

  which works like a list.  I remember when I saw the __iter__
  implementation requirement noting that that may be confusing.

 That's a side-effect of the way dict collections are managed by the
 InstrumentedList.  The iterator behavior will be over keys as you'd
 expect in 0.4, both on dict collection classes and in dict-based
 association proxies.

 I think the 0.3 association proxy does that only to be consistent
 with the underlying attribute's iterator, but I don't see any
 strong reason to keep that behavior in 0.3, especially as
 association dict support is pretty much brand new.  Making the
 assoc proxy's iterator truly dict-like can go in 0.3.9 if there's
 call.

 Cheers,
 Jason


[sqlalchemy] Re: dictionaries collection_class

2007-06-12 Thread Ron


 Using the association_proxy extension in combination with your
 dictionary collection class is an easy way to get this kind of
 simplified access.  Assuming your Attribute's value is in a
 property called 'value', you can set up simple dict access like so:

 class Obj(object):
     attrs = association_proxy('_attrs', 'value')

 mapper(Obj, ..., properties={
     '_attrs': relation(Attribute, collection_class=your_dict_class)
     })

 -jek

When I try the above I get this error at flush time:

InvalidRequestError: Class 'str' entity name 'None' has no mapper
associated with it

Here is my dictionary collection_class:

class AttributeDictNEW(dict):
    """My Attribute Dict"""

    def append(self, item):
        super(AttributeDictNEW, self).__setitem__(item.name, item)

    def test__iter__(self):
        return iter(self.values())

    def test__getitem__(self, name):
        return super(AttributeDictNEW, self).__getitem__(name).value

    def __setitem__(self, name, value):
        if not isinstance(value, Attribute):
            newattr = Attribute(name, str(value))
            self.append(newattr)
        else:
            self.append(value)

The test__ functions are named that way to stay out of the way of the
parent class's methods while testing.

I may be misunderstanding how an association_proxy works.

-Ron


[sqlalchemy] Re: dictionaries collection_class

2007-06-12 Thread Ron


 The association proxy will take care of Attribute construction for
 you, so you can get away with just:

 class AttributeDictNEW(dict):
     def append(self, item):
         self[item.key] = item
     def __iter__(self):
         return self.itervalues()


So now if I try to get something from the dict:

obj.attr['foo']

I get this error:

KeyError: <schema.Attribute object at 0xb78a8e0c>

OK, so it looks like something is turning 'foo' into an Attribute.
Fine, so I add this to the AttributeDictNEW:

    def __getitem__(self, item):
        return super(AttributeDictNEW, self).__getitem__(item.name)

But I get:

    return super(AttributeDictNEW, self).__getitem__(item.name)
AttributeError: 'str' object has no attribute 'name'

If I do a sys.stderr.write(str(type(item))) before the return, it
outputs this:

<class 'schema.Attribute'>

item is an Attribute, but the error indicates it's a str that doesn't
have a 'name' member variable.  So I'm totally confused.
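
For what it's worth, a defensive version like this (a sketch based
purely on the symptoms above, not on how 0.3.8 actually dispatches the
lookups) would accept either a plain string key or an Attribute:

    def __getitem__(self, item):
        # the dict apparently gets indexed with both strings and Attributes
        if isinstance(item, Attribute):
            item = item.name
        return super(AttributeDictNEW, self).__getitem__(item)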

-Ron

