[sqlalchemy] Re: UniqueObject recipe and turbogears

2007-06-19 Thread King Simon-NFHD78

The way to hook most parts of the ORM is by creating a MapperExtension.
Basically, define a subclass of MapperExtension, overriding
whichever method you are interested in (possibly append_result or
create_instance in your case), and pass an instance of this class as the
'extension' parameter to the mapper.

However, I think it would be worthwhile trying to understand exactly why
the current recipe isn't working for you. (I see someone has updated it
to put the session in the hashkey - I'm not sure I would have been brave
enough to do that until at least one other person had said it was the
right thing to do. I don't really know what I'm talking about ;-) )

If you have any places in your code where you create UniqueName objects
without explicitly saying UniqueName(), then they won't pass
through the metaclass __call__ method, and therefore won't get added to
the cache. This also includes situations where the object is loaded from
a relation on another class.

But I don't understand why that would be a problem, as that would only
affect instances that already exist in the database, and because of the
identity_map magic, you will only get one object in the session
representing that row anyway.
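For readers without the recipe to hand, here is a minimal stdlib sketch of the mechanism being discussed (names are made up; the real recipe keys the cache on a hashkey and an SA session). It shows why anything constructed without going through the class call bypasses the cache:

```python
import weakref

class UniqueMeta(type):
    """Cache instances created through the normal constructor call."""
    _cache = weakref.WeakValueDictionary()

    def __call__(cls, name):
        key = (cls, name)
        obj = cls._cache.get(key)
        if obj is None:
            obj = super().__call__(name)
            cls._cache[key] = obj
        return obj

class UniqueName(metaclass=UniqueMeta):
    def __init__(self, name):
        self.name = name

a = UniqueName('x')
b = UniqueName('x')
assert a is b                        # went through the metaclass: cached

# an ORM loader typically builds objects without calling the class at all
c = UniqueName.__new__(UniqueName)
c.name = 'x'
assert c is not a                    # bypassed __call__: never cached
```

The last two lines are the situation described above: an object loaded by the ORM itself never passes through the metaclass, so the cache never sees it.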

I think a testcase is probably needed - you could try and make a single
script that starts two threads and performs operations on UniqueName
objects, and see if you can get the same failure.

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of kris
Sent: 18 June 2007 20:48
To: sqlalchemy
Subject: [sqlalchemy] Re: UniqueObject recipe and turbogears


I agree with your summary. I also noted that sqlalchemy doesn't really
like to have objects that are linked together and in different sessions.

I tried using hash_key = (session.context, name), but this failed in
the same way.
Running the code with log statements, I note that some objects are
loaded not by the UniqueName loader, but probably by sqlalchemy itself.

I believe it is those objects that are creating the conflict.

Is there a way to hook the sqlalchemy loading system, so I can place
the object
in the cache?





--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: UniqueObject recipe and turbogears

2007-06-18 Thread King Simon-NFHD78

I'm not using the recipe, so I could be completely wrong about this, but
here goes:

The EntitySingleton metaclass stores already-constructed instances in a
class-level WeakValueDictionary attribute. This will be shared between
threads. When you create your UniqueName object, it looks in that
WeakValueDictionary, finds the existing instance and returns it, even
though it may have been created in a different thread and attached to a
different session. I don't think the EntitySingleton is thread-safe.
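The sharing is plain Python class semantics, nothing SQLAlchemy-specific; a class-level attribute is a single object visible to every thread (the class name below is a stand-in):

```python
import threading

class Cache:
    registry = {}  # class attribute: one dict, shared by every thread

def worker(n):
    # each thread writes into the same class-level dictionary
    Cache.registry[n] = threading.current_thread().name

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every worker hit the very same dict, so all four keys are present
assert sorted(Cache.registry) == [0, 1, 2, 3]
```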

I don't know how best to fix it. You really want the instances to be
associated with their session. You could try replacing the first line of
__call__ with:

  hashkey = (ctx.current, name)

...but I don't know whether that could have any undesired side-effects.
Note that your UniqueName objects will only be unique within the current
session, but that may be OK for your purposes.
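A rough stdlib sketch of that session-scoped cache idea (the session class here is a stand-in for whatever `ctx.current` returns; the helper name is made up):

```python
import weakref

_cache = weakref.WeakValueDictionary()

class UniqueName:
    def __init__(self, name):
        self.name = name

def unique_name(session, name):
    # key on the session as well as the name, so uniqueness is per-session
    key = (id(session), name)
    obj = _cache.get(key)
    if obj is None:
        obj = UniqueName(name)
        _cache[key] = obj
    return obj

class FakeSession:
    pass

s1, s2 = FakeSession(), FakeSession()
a = unique_name(s1, 'bob')
b = unique_name(s2, 'bob')
assert unique_name(s1, 'bob') is a   # unique within one session
assert b is not a                    # but distinct across sessions
```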

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of kris
Sent: 15 June 2007 21:29
To: sqlalchemy
Subject: [sqlalchemy] UniqueObject recipe and turbogears



I have been trying to use the unique object recipe
   ( http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject )
with turbogears, but have run in to trouble.

In a nutshell, sqlalchemy complains that a UniqueName object is
attached to multiple sessions and throws an exception.

I am not sure if the recipe plays nicely with assign_mapper, or the
other extensions.
I am not using activemapper, though TG does bring it into the picture.

For reference I have posted the error I am seeing. It comes in many
forms, but is generally the same. The key lines below show the creation
of the unique object during the creation of another database object.

2007-06-15 12:04:26,849 cherrypy.msg INFO HTTP: Page handler: >
Traceback (most recent call last):
  File "/var/lib/python-support/python2.4/cherrypy/_cphttptools.py",
line 105, in _run
self.main()
  File "/var/lib/python-support/python2.4/cherrypy/_cphttptools.py",
line 254, in main
body = page_handler(*virtual_path, **self.params)
  File "", line 3, in default
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 334, in expose
output = database.run_with_transaction(
  File "", line 5, in run_with_transaction
  File "/var/lib/python-support/python2.4/turbogears/database.py",
line 302, in so_rwt
retval = func(*args, **kw)
  File "", line 5, in _expose
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 351, in 
mapping, fragment, args, kw)))
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 378, in _execute_func
output = errorhandling.try_call(func, *args, **kw)
  File "/var/lib/python-support/python2.4/turbogears/
errorhandling.py", line 73, in try_call
return func(self, *args, **kw)
  File "/home/kgk/work/bisquik/development/TG/bisquik/resource/
resource.py", line 152, in default
response = method(resource, doc=xmldoc, **kw)
  File "", line 3, in modify
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 330, in expose
output = func._expose(func, accept, func._allow_json,
  File "", line 5, in _expose
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 351, in 
mapping, fragment, args, kw)))
  File "/var/lib/python-support/python2.4/turbogears/controllers.py",
line 378, in _execute_func
output = errorhandling.try_call(func, *args, **kw)
  File "/var/lib/python-support/python2.4/turbogears/
errorhandling.py", line 73, in try_call
return func(self, *args, **kw)
  File "/home/kgk/work/bisquik/development/TG/bisquik/resource/
gobject_resource.py", line 144, in modify
request = tagparser.parseDoc (txt)
  File "/home/kgk/work/bisquik/development/TG/bisquik/resource/
tagparser.py", line 169, in parseDoc
parseString (doc, handler)
  File "/usr/lib/python2.4/site-packages/_xmlplus/sax/__init__.py",
line 47, in parseString
parser.parse(inpsrc)
  File "/usr/lib/python2.4/site-packages/_xmlplus/sax/expatreader.py",
line 109, in parse
xmlreader.IncrementalParser.parse(self, source)
  File "/usr/lib/python2.4/site-packages/_xmlplus/sax/xmlreader.py",
line 123, in parse
self.feed(buffer)
  File "/usr/lib/python2.4/site-packages/_xmlplus/sax/expatreader.py",
line 216, in feed
self._parser.Parse(data, isFinal)
  File "/usr/lib/python2.4/site-packages/_xmlplus/sax/expatreader.py",
line 312, in start_element
self._cont_handler.startElement(name, AttributesImpl(attrs))
  File "/home/kgk/work/bisquik/development/TG/bisquik/resource/
tagparser.py", line 80, in startElement
node = GObject()
  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/mapper.py",
line 672, in init
oldinit(self, *args, **kwargs)
  File "/home/kgk/work/bisquik/development/TG/bisquik/model/
tag_model.py", line 480, in __init__
self.table = 'gobjects'
  File "/home/kgk/work/bisquik/development/TG/bisquik/

[sqlalchemy] Autoloading tables on old versions of MySQL

2007-06-14 Thread King Simon-NFHD78
Hi,

I just noticed that autoloading of tables on old versions of MySQL is
broken (I'm using 3.23.58, but I believe the problem will occur up to
4.1.1).

The problem is that reflecttable tries to use the
'character_set_results' system variable, which was only introduced in
4.1.1. The attached patch prevents the exception that occurs if it is
not present, but I don't know if that is the correct way to fix it
(should it be decoded using some default encoding perhaps?)
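The patch itself isn't reproduced here, but the guard presumably amounts to something like the following (the helper name and the fallback encoding are assumptions, not taken from the patch):

```python
def result_encoding(show_variables):
    """Pick the encoding used to decode reflected table definitions.

    show_variables: a dict built from MySQL's SHOW VARIABLES output.
    Servers older than 4.1.1 simply don't have character_set_results,
    so return a fallback instead of raising KeyError.
    """
    return show_variables.get('character_set_results') or 'latin1'

assert result_encoding({'character_set_results': 'utf8'}) == 'utf8'
assert result_encoding({}) == 'latin1'  # pre-4.1.1 server
```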

I don't know how concerned you are about supporting such an old version
of MySQL, and I also realise that table reflection is a nice-to-have
feature rather than a necessity, but it's just so convenient :-)

Cheers,

Simon




mysql.patch
Description: mysql.patch


[sqlalchemy] Re: how to retrieve/update data from/on multiple databases

2007-05-30 Thread King Simon-NFHD78

I think the answer to this partly depends on which parts of SA you are
using, and how your databases are set up. If all the databases have
different schemas, and you are using the low-level SA API (or your ORM
objects are each defined against a single database), I would probably
use a metadata/engine instance per database. You can then define your
Table objects against the appropriate metadata instances.

If you need to persist the same ORM class to multiple databases, then
you could look into the 'bind_to' parameter to create_session
(mentioned here:
http://www.sqlalchemy.org/docs/unitofwork.html#unitofwork_api_bind).
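As a rough Python-level illustration of the one-engine-per-database idea, using the stdlib sqlite3 module in place of SA engines (table names and data are made up):

```python
import sqlite3

# one independent connection per database, analogous to keeping one
# engine/metadata pair per database
users_db = sqlite3.connect(':memory:')
orders_db = sqlite3.connect(':memory:')

users_db.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
orders_db.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, user_name TEXT)')

users_db.execute("INSERT INTO users (name) VALUES ('ann')")
orders_db.execute("INSERT INTO orders (user_name) VALUES ('ann')")
orders_db.execute("INSERT INTO orders (user_name) VALUES ('bob')")

# query each database separately, then combine the results in Python
names = {row[0] for row in users_db.execute('SELECT name FROM users')}
matched = [row[0] for row in orders_db.execute('SELECT user_name FROM orders')
           if row[0] in names]
assert matched == ['ann']

# writes likewise go through whichever connection owns the table
users_db.commit()
orders_db.commit()
```

Cross-database joins have to happen in Python this way, which is the main cost of the approach.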

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Alchemist
Sent: 30 May 2007 14:19
To: sqlalchemy
Subject: [sqlalchemy] how to retrieve/update data from/on multiple
databases


Working with:
Python 2.4
SQLAlchemy 0.3.7
Postgresql 8.2 database servers

I am working in a multidatabase environment.  I need to perform a
query over multiple databases, then manipulate the obtained results
and commit my changes in different tables in different databases.

How can I query from multiple databases?
How can INSERTs/UPDATEs be performed on different tables in different
databases?

Thank you.







[sqlalchemy] Re: auto-load ForeignKey references?

2007-05-10 Thread King Simon-NFHD78

Max Ischenko wrote:
> 
> On May 10, 4:38 pm, "King Simon-NFHD78" <[EMAIL PROTECTED]>
> wrote:
> > You're halfway there with your 'posts' relation. I think if you pass
> > backref='author' in your relation, then WordpressPost 
> objects will get
> > an 'author' property which points back to the WordpressUser.
> 
> Nope, it doesn't work. At least, I can't get it to work.
> 
> If I use backref='author' new attribute 'author' appears but equals
> None even though the author_id is something like 123.
> 

You're not getting caught by this, are you:

http://www.sqlalchemy.org/trac/wiki/WhyDontForeignKeysLoadData

Basically, setting author_id to a number won't automatically cause the
author to be loaded.

If that's not the case in your situation, I'm out of ideas. Do you have
a test case?

Simon




[sqlalchemy] Re: auto-load ForeignKey references?

2007-05-10 Thread King Simon-NFHD78

Max Ischenko wrote:
> 
> Hi,
> 
> If I have two tables related via foreign key how can I tell SA that
> accessing foreign key should fetch related object automatically? By
> default it simply gives me the FK as integer which is not what I want.
> 
> Here are my mappers:
> 
> wp_users_tbl = Table('wp_users', meta, autoload=True)
> wp_posts_tbl = Table('wp_posts', meta,
> Column('post_author', Integer,
> ForeignKey(wp_users_tbl.c.ID)),
> autoload=True)
> 
> mapper(WordpressPost, wp_posts_tbl)
> mapper(WordpressUser, wp_users_tbl, properties={
> 'posts' : relation(WordpressPost),
>})
> 
> >>> post = db.select_one(...)
> >>> print post.post_author # prints 123 instead of 
> WordpressUser instance
> 
> Thanks,
> Max.

You're halfway there with your 'posts' relation. I think if you pass
backref='author' in your relation, then WordpressPost objects will get
an 'author' property which points back to the WordpressUser. Relations
are described more fully here:

http://www.sqlalchemy.org/docs/datamapping.html#datamapping_relations

The property pointing to the WordpressUser instance is separate from the
property containing the foreign key value. If you want the property to
be named 'post_author', you'll need to rename the foreign key
relationship to prevent them clashing. You can either rename your
ForeignKey column, or override the column name as described here:

http://www.sqlalchemy.org/docs/adv_datamapping.html#advdatamapping_properties_colname

Hope that helps,

Simon




[sqlalchemy] Re: elixir, sqlalchemy & myqldb: Error connectiong to local socket

2007-05-09 Thread King Simon-NFHD78

Pirkka wrote:
> 
> I started migrating from ActiveMapper to Elixir, and have been
> wrestling with this for hours:
> 
> DBAPIError: (Connection failed) (OperationalError) (2005, "Unknown
> MySQL server host '/Applications/MAMP/tmp/mysql/mysql.sock' (1)")
> 
> Here are the command parameters that sqlalchemy is using to open the
> connection:
> 
> {'passwd': 'root', 'host': '/Applications/MAMP/tmp/mysql/mysql.sock',
> 'db': 'django', 'user': 'root', 'client_flag': 2}
> 
> I have no problem connecting to the above sock file using Django or
> mysql command line client with -S parameter. Any ideas what could
> cause the connection to fail?
> 
> Yours,
> Pirkka

I think your 'host' parameter should be 'localhost', and you should pass
an extra parameter 'unix_socket' that contains the path to the socket
file.

If you are specifying it as a URI, it should look something like this:

mysql://root:[EMAIL PROTECTED]/django?unix_socket=/Applications/MAMP/tmp/mysql/mysql.sock

Hope that helps,

Simon




[sqlalchemy] Re: 'MapperExtension' object has no attribute 'translate_row'

2007-05-04 Thread King Simon-NFHD78

Hi,

Your TaskExtension class doesn't have a translate_row method, which
apparently is part of the MapperExtension interface (although it doesn't
appear in the docs). From the source code (in orm/mapper.py):

def translate_row(self, mapper, context, row):
"""Perform pre-processing on the given result row and return a
new row instance.

This is called as the very first step in the ``_instance()``
method.
"""

return EXT_PASS

You could make life much easier for yourself by making TaskExtension
inherit from MapperExtension. In that way you would only need to
implement the methods that you actually want to override, and you won't
have to update your extension class every time new extension methods are
added.
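The inheritance point is general Python, not specific to SQLAlchemy: a base class that supplies a default for every hook means subclasses override only what they need. A library-free sketch (names and the sentinel are stand-ins for MapperExtension and EXT_PASS):

```python
EXT_PASS = object()  # stand-in sentinel, playing the role of SA's EXT_PASS

class BaseExtension:
    """Supplies a default no-op for every hook, like MapperExtension."""
    def translate_row(self, row):
        return EXT_PASS
    def create_instance(self, row):
        return EXT_PASS

class TaskExtension(BaseExtension):
    # override only the hook we care about; the rest are inherited, so
    # new hooks added to the base class never break this subclass
    def create_instance(self, row):
        return dict(row)

ext = TaskExtension()
assert ext.translate_row({'id': 1}) is EXT_PASS   # inherited default
assert ext.create_instance({'id': 1}) == {'id': 1}
```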

Hope that helps,

Simon


> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Sanjay
> Sent: 04 May 2007 14:37
> To: sqlalchemy
> Subject: [sqlalchemy] Re: 'MapperExtension' object has no 
> attribute 'translate_row'
> 
> 
> > testcase please
> 
> Here is the sample code which works perfectly till 0.3.4 and produces
> error in newer versions in my system:
> 
> from sqlalchemy import *
> from sqlalchemy.ext.assignmapper import assign_mapper
> from sqlalchemy.ext.sessioncontext import SessionContext
> 
> context = SessionContext(create_session)
> session = context.current
> 
> metadata = BoundMetaData('sqlite:///satest', echo=False)
> 
> task_tbl = Table('task', metadata,
> Column("task_id", Integer, primary_key=True, autoincrement=True),
> Column("descr", Unicode(30), nullable=False))
> 
> metadata.drop_all()
> metadata.create_all()
> 
> class Task(object):
> pass
> 
> from sqlalchemy import EXT_PASS
> 
> class TaskExtension(object):
> 
> def get_session(self):
> return EXT_PASS
> def select_by(self, query, *args, **kwargs):
> return EXT_PASS
> def select(self, query, *args, **kwargs):
> return EXT_PASS
> def get_by(self, query, *args, **kwargs):
> return EXT_PASS
> def get(self, query, *args, **kwargs):
> return EXT_PASS
> def create_instance(self, mapper, selectcontext, row, class_):
> return EXT_PASS
> def append_result(self, mapper, selectcontext, row, instance,
> identitykey, result, isnew):
> return EXT_PASS
> def populate_instance(self, mapper, selectcontext, row, instance,
> identitykey, isnew):
> mapper.populate_instance(selectcontext, instance, row,
> identitykey, isnew)
> def before_insert(self, mapper, connection, instance):
>  return EXT_PASS
> def before_update(self, mapper, connection, instance):
> return EXT_PASS
> def after_update(self, mapper, connection, instance):
> return EXT_PASS
> def after_insert(self, mapper, connection, instance):
> return EXT_PASS
> def before_delete(self, mapper, connection, instance):
> return EXT_PASS
> def after_delete(self, mapper, connection, instance):
> return EXT_PASS
> 
> assign_mapper(context, Task, task_tbl, extension=TaskExtension())
> 
> t = Task(descr='xyz')
> t.flush()
> session.clear()
> t = Task.get(1) # produces exception




[sqlalchemy] Re: SELECT LIKE

2007-04-13 Thread King Simon-NFHD78

Disrupt07 wrote
> 
> @Simon
> Thanks. But what is ?  Is it SQLAlchemy or pure SQL?
> 

It is a Query object, as described here:

  http://www.sqlalchemy.org/docs/datamapping.html

If you haven't read them yet, I'd recommend working through a tutorial -
I found this one really helpful:

  http://www.rmunn.com/sqlalchemy-tutorial/tutorial.html

There's also the 'official' one:

  http://www.sqlalchemy.org/docs/tutorial.html

Simon




[sqlalchemy] Re: SELECT LIKE

2007-04-13 Thread King Simon-NFHD78

Disrupt07 wrote:
> 
> I have a table storing users' info.
> table: userinfo
> columns: name, surname, age, location, ...
> 
> I need to query this table using SQLAlchemy's ORM methods (e.g.
> select(), select_by(), get_by()).  The query should be like
>SELECT * FROM userinfo WHERE name LIKE 'Ben%' ORDER BY name, age
> 
> Which SQLAlchemy method should I use?  Can some provide me with an
> example to solve my problem?
> 
> Thanks

Columns have a 'like' method, as well as startswith and endswith, so for
your particular example you could use something like this:

  .select(userinfo.c.name.startswith('Ben'))

Or:

  .select(userinfo.c.name.like('Ben%'))
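For comparison, the raw SQL the question asked for, run against a throwaway stdlib sqlite3 table (the data is made up); this is what the .like()/.startswith() forms compile down to:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE userinfo (name TEXT, surname TEXT, age INTEGER)')
conn.executemany('INSERT INTO userinfo VALUES (?, ?, ?)',
                 [('Ben', 'A', 30), ('Benny', 'B', 25), ('Carl', 'C', 40)])

# LIKE 'Ben%' matches any name starting with 'Ben'
rows = conn.execute(
    "SELECT name, age FROM userinfo "
    "WHERE name LIKE 'Ben%' ORDER BY name, age").fetchall()
assert rows == [('Ben', 30), ('Benny', 25)]
```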

Hope that helps,

Simon




[sqlalchemy] Re: sqlalchemy.orm.attributes.InstrumentedList

2007-04-12 Thread King Simon-NFHD78

Disrupt07 wrote 
> 
> What is sqlalchemy.orm.attributes.InstrumentedList?
> 
> I need to use the sqlalchemy.orm.attributes.InstrumentedList type in
> my Python controller methods and need to check if the type of another
> object is of type sqlalchemy.orm.attributes.InstrumentedList.  How can
> I do this? (e.g. using the type() function and what shall I import in
> my controller file?)
> 
> Thanks.
> 

Well, you could import sqlalchemy.orm.attributes, and then when you want
to check the type of an object you would say 'if isinstance(your_object,
sqlalchemy.orm.attributes.InstrumentedList):'

Simon




[sqlalchemy] Re: 'PropertyLoader' object has no attribute 'strategy'

2007-04-11 Thread King Simon-NFHD78

 
Roger Demetrescu wrote:
> 
> On 4/11/07, King Simon-NFHD78 <[EMAIL PROTECTED]> wrote:
> >
> > I've got no idea about the source of the problem, but it 
> would probably
> > be helpful if you could provide stack traces from the exceptions, if
> > that's possible.
> 
> 
> Do you mean using the traceback module ? I've just searched about it
> and I guess I should use this:
> 
> try:
> # do my stuff
> except:
> traceback.print_exc(file=sys.stdout)
> raise
> 
> 
> Is this the best way to show the stack trace ?
> 

That would be one way to do it. If you are using the python logging
module, another way would be to use the logger.exception method, which
automatically adds the exception info, including the traceback, into the
log message.

I.e.

try:
   # do your stuff
except Exception:
   logging.exception('Oops, an exception occurred')
   # or logger.exception, if you are using a named logger
   raise
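A self-contained demonstration that the traceback really is appended automatically (the logger name and buffer are just for the example):

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger('demo')
logger.addHandler(logging.StreamHandler(buf))

try:
    1 / 0
except ZeroDivisionError:
    # logs at ERROR level and attaches the current exception's traceback
    logger.exception('Oops, an exception occurred')

out = buf.getvalue()
assert 'Oops, an exception occurred' in out
assert 'ZeroDivisionError' in out   # the traceback comes along for free
```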

> 
> 
> 
> > Other than that, I would have thought you should be able to 
> track down
> > the source of the 'global name' exception by grepping your 
> source code
> > for uses of 'anxnews_urllocal' without a '.' in front of it. Is it
> > possible that you are doing something like this, for example:
> >
> >   query.select_by(anxnews_urllocal > 5)
> >
> > When you probably mean:
> >
> >   query.select_by(table.c.anxnews_urllocal > 5)
> 
> Man, you got it !!   :)
> 
> The only line of my simple code that have a "anxnews_urllocal" without
> a preceding "." is here:
> 
> 
> if anexo.anxnews_tipof == Anexo.IMAGEM:
> cronometro.start()
> renamed, size = _gerar_thumbs(local, 
> SIZE_WEB, SIZE_THUMB)
> if renamed:
> anexo.anxnews_urllocal =
> posixpath.splitext(anxnews_urllocal)[0] + ".jpg"
>  
> 
> This "renamed" condition is very rare... It only occurs when I'm
> dealing with a BMP image...
> And my log file confirms that.. I was manipulating a BMP image...  :D
> 
> So I'm changing "splitext(anxnews_urllocal)" to
> "splitext(anexo.anxnews_urllocal)"
> 
> 
> > Hope that helps,
> 
> It sure helped...
> 
> 
> Thanks !
> 
> Roger
> 

Well, it certainly makes a change - normally I'm the one asking the
questions ;-)




[sqlalchemy] Re: 'PropertyLoader' object has no attribute 'strategy'

2007-04-11 Thread King Simon-NFHD78

I've got no idea about the source of the problem, but it would probably
be helpful if you could provide stack traces from the exceptions, if
that's possible.

Other than that, I would have thought you should be able to track down
the source of the 'global name' exception by grepping your source code
for uses of 'anxnews_urllocal' without a '.' in front of it. Is it
possible that you are doing something like this, for example:

  query.select_by(anxnews_urllocal > 5)

When you probably mean:

  query.select_by(table.c.anxnews_urllocal > 5)

Note that this is purely a Python exception - I can't think of any way
SQLAlchemy can try and use one of your column names as a global
variable.

Hope that helps,

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Roger Demetrescu
> Sent: 11 April 2007 06:57
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: 'PropertyLoader' object has no 
> attribute 'strategy'
> 
> 
> Some details I forgot to mention:
> 
> I'm using:
> 
>  * SQLAlchemy 0.3.6
>  * Postgresql 7.3.4
>  * Linux RedHat   kernel 2.4.20-8
> 
> 
> Other important detail: looking at my log files, I noticed 
> that the message:
> global name 'anxnews_urllocal' is not defined
> 
> appears several hours after the message:
> 'PropertyLoader' object has no attribute 'strategy'
> 
> 
> Thanks
> 
> Roger
> 
> 
> On 4/11/07, Roger Demetrescu <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> > I have a daemon with 2 threads to control upload / download of some
> > files (they use SQLAlchemy to find out which files must be worked).
> >
> > Once a week, my daemon's logging system sends me an email 
> with this message:
> >
> > 'PropertyLoader' object has no attribute 'strategy'
> >
> >
> >
> > After that, I receive another email with this message:
> >
> > global name 'anxnews_urllocal' is not defined
> >
> > where 'anxnews_urllocal' is a field from a table.
> >
> >
> >
> > I usually don't need to touch this daemon... it still works 
> fine even
> > after this alert.
> >
> > Any hints about what could be causing this exception ?
> >
> > Please feel free to ask for more details...
> >
> >
> > TIA
> >
> > Roger
> >




[sqlalchemy] Re: offset limit on query with relations

2007-04-05 Thread King Simon-NFHD78


Huy wrote:
> 
> Hi,
> 
> When using the generative limit() offset() or order_by calls 
> on mapper 
> query, the sql generated looks weird.
> 
> I get something like
> 
> select table1.* table2.*
> from (select table1a.id from table1a limit 20 offset 0 order by 
> table1.col) as table_row, table1 join table2 (on...)
> 
> 
> Notice how the limit and offset is in that subselect ? Is 
> this by design.
> The query results are not what I would expect either because the 
> subselect doesn't join to the main table (table1).
> 
> Hope what I'm describing makes sense.
> 

If the outer query involved eager loading of many-to-many properties,
the number of rows returned would not necessarily be the same as the
number of entities being loaded. By doing the limit and offset in the
inner query, it guarantees that you will get exactly the expected number
of entities.
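The row-multiplication effect is easy to see with a stdlib sqlite3 sketch (tables and data are made up): applying LIMIT to the joined query counts rows, not entities, while limiting in an inner query first returns exactly the requested number of entities.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO parent VALUES (1), (2), (3);
    INSERT INTO child VALUES (1, 1), (2, 1), (3, 2);
""")

# naive: LIMIT 2 on the join counts rows, so parent 1 eats both slots
naive = conn.execute("""
    SELECT parent.id FROM parent
    LEFT JOIN child ON child.parent_id = parent.id
    ORDER BY parent.id LIMIT 2""").fetchall()
assert naive == [(1,), (1,)]

# limiting in an inner query first yields exactly two distinct parents
inner = conn.execute("""
    SELECT parent.id FROM
        (SELECT id FROM parent ORDER BY id LIMIT 2) AS lim
    JOIN parent ON parent.id = lim.id
    LEFT JOIN child ON child.parent_id = parent.id""").fetchall()
assert sorted(set(inner)) == [(1,), (2,)]
```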

At least, that's my understanding. Hope that helps,

Simon




[sqlalchemy] Re: how to do many-to-many on the same table?

2007-04-03 Thread King Simon-NFHD78

Tml wrote:
> 
> Hi,
> 
> I want to make a relation such that.. i have users post some
> articles.. and can link them as relatd to other articles.
> 
> I have the Article table like this:
> articles = Table('articles', metadata,
> Column('id', Integer, primary_key=True),
> Column('topic', Unicode(256), nullable=False),
> Column('rank', Integer),
> Column('content', Unicode, nullable=False),
> mysql_engine='InnoDB'
> )
> 
> And I have a related articles association table like this:
> 
> articles_related_articles = Table('articles_related_articles', metadata,
>Column('article_id', Integer, ForeignKey('articles.id'),
> index=True, nullable=False),
> Column('related_id', Integer, nullable=False),
> Column('bias', SmallInteger, default=0),
> mysql_engine='InnoDB'
> )
> 


You probably want to add ForeignKey('articles.id') to the related_id
column as well, and add primary_key=True to both article_id and
related_id since they identify the row.


> 
> So, if article 1 is related to article 2.. then there should be two
> rows in the articles_related_articles table like:
> > 1 2 0
> > 2 1 0
> 
> This is how I make the relation mapping in SA:
> 
> mapper(ArticleRelatedArticle, articles_realted_articles,
>primary_key=[articles_realted_articles.c.article_id,
> articles_related_articles.c.related_id],
>properties={'article': relation(Article),}
> )
> 
> assign_mapper(context, Article, articles,
>  'related': relation(ArticlesRelatedArticles, 
> cascade="all, delete-
> orphan", lazy=True)
> )
> 

You're using assign_mapper for one class, and plain mapper for another.
I don't know whether that might be part of the confusion.

I think I would set up the ArticleRelatedArticle mapper something like
this (untested, probably contains mistakes):

assign_mapper(context, ArticleRelatedArticle, articles_related_articles,
    properties={
        'article': relation(Article,
            primaryjoin=articles_related_articles.c.article_id==articles.c.id,
            backref='related'),
        'related': relation(Article,
            primaryjoin=articles_related_articles.c.related_id==articles.c.id,
            backref='original')})


This would add two properties to your Article class, 'related' and
'original'. Both lists would contain ArticleRelatedArticle instances,
which have 'article' , 'related' and 'bias' properties.

You might also want to look into the association proxy plugin
(http://www.sqlalchemy.org/docs/plugins.html#plugins_associationproxy)
which would make this less verbose when you aren't using the 'bias'
property.


> 
> I am doing this in the shell, but it's not working as expected. I don't
> see those two rows created, and related_id is always getting
> overridden to NULL, even though I have it specified while creating the
> object.
> 
> In [1]: a = Article.select()[0]
> In [3]: a.id
> Out[3]: 1L
> In [4]: b = Article.select()[1]
> In [5]: c = ArticlesRelatedArticles(article_id=a.id, related_id=b.id)
> In [6]: a.related.append(c)
> In [7]: session.flush()
> /usr/lib/python2.4/site-packages/SQLAlchemy-0.3.4-py2.4.egg/sq
> lalchemy/
> databases/mysql.py:313: Warning: Field 'related_id' doesn't have a
> default value

I think this might be because you aren't using assign_mapper for the
ArticleRelatedArticle class, so it doesn't get the magic 'assign
constructor parameters to attributes' behaviour

Hope that helps,

Simon




[sqlalchemy] Re: select() got multiple values for keyword argument 'from_obj'

2007-04-02 Thread King Simon-NFHD78

Shouldn't acl.cod_ruolo be inside the [] - part of the first parameter
to 'select'?

The parameters to select are 'columns=None, whereclause=None,
from_obj=[], **kwargs', so your 'and_' part is going in as the from_obj
parameter, and then you are supplying another from_obj, hence the error
message.
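You can reproduce the error without SQLAlchemy at all, using a stand-in function with the same parameter order (purely illustrative, not the real select()):

```python
def select(columns=None, whereclause=None, from_obj=[], **kwargs):
    """Stand-in with the same parameter order as 0.3-era select()."""
    return columns, whereclause, from_obj

# The three positional arguments bind to columns, whereclause and
# from_obj -- so the explicit from_obj= keyword is a duplicate.
try:
    select(['operatore.id'], 'acl.cod_ruolo', 'the and_() clause',
           from_obj=['the join'])
except TypeError as e:
    print(e)  # the same "multiple values for ... 'from_obj'" error
```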

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Jose Soares
Sent: 02 April 2007 15:01
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] select() got multiple values for keyword argument
'from_obj'


Hi all,

I'm trying to create the following query using SA:

SELECT DISTINCT operatore.id, anagrafica.nome, acl.cod_ruolo
FROM operatore JOIN anagrafica
ON operatore.id_anagrafica = anagrafica.id
LEFT OUTER JOIN acl ON acl.id_operatore = operatore.id
LEFT OUTER JOIN ruolo_permesso ON ruolo_permesso.cod_ruolo =
acl.cod_ruolo
WHERE (ruolo_permesso.cod_permesso = 'CTR'
AND acl.id_asl IS NOT NULL AND operatore.data_fine_attivita IS NULL)

-

select([Operatore.c.id, Anagrafica.c.nome], Acl.c.cod_ruolo,
       and_(RuoloPermesso.c.cod_permesso=='CTR',
            Acl.c.id_asl<>None,
            Operatore.c.data_fine_attivita==None),
       from_obj=[Operatore.mapper.mapped_table.join(
                     Anagrafica.mapper.mapped_table,
                     Operatore.c.id_anagrafica == Anagrafica.c.id
                 ).outerjoin(
                     Acl.mapper.mapped_table,
                     Acl.c.id_operatore == Operatore.c.id
                 ).outerjoin(
                     RuoloPermesso.mapper.mapped_table,
                     RuoloPermesso.c.cod_ruolo == Acl.c.cod_ruolo)],
       distinct=True)
-

...but it gives me this error:

*exceptions.TypeError: ("select() got multiple values for keyword
argument 'from_obj'")*

any ideas?
jo







[sqlalchemy] Re: Confused by foreign_keys argument

2007-03-28 Thread King Simon-NFHD78
That compiles and appears to run in the small test program attached, but
if you look at the query generated when accessing the 'cs' property, it
doesn't actually use the join condition:
 
SELECT c.id AS c_id, c.name AS c_name
FROM c, a_b, b_c
WHERE ? = a_b.a_id AND b_c.c_id = c.id ORDER BY a_b.oid
 
i.e. the a_b.b_id = b_c.b_id clause is missing.
 
If you aren't keen on the 'viewonly' pattern, how would you recommend
doing this? Just by adding a normal python property and doing a query?
The main reason I like setting it up as a relation is for the potential
of making it eager-loading just by changing a single flag.
 
Thanks,
 
Simon



From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Michael Bayer
Sent: 28 March 2007 17:19
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Re: Confused by foreign_keys argument


what it can't locate are foreign keys between the parent and child
tables, "a" and "c", because there aren't any.  when you have a
many-to-many, the rules for figuring out the relationship change, and it
knows to do that by the presence of the "secondary" argument.

so if you can manufacture a "secondary" table you can do this:

secondary = a_b_table.join(b_c_table,
onclause=a_b_table.c.b_id==b_c_table.c.b_id)
mapper(
  A, a_table,
  properties={'cs': relation(C, secondary=secondary,
primaryjoin=a_table.c.id==secondary.c.a_b_a_id,
secondaryjoin=secondary.c.b_c_c_id==c_table.c.id,
 viewonly=True,
)
 }
   )

I'm not totally sure the lazy clause is going to work but try it out.

this goes back to my general dislike of "viewonly" and how I can't
generally support it, because as the rules for relationships get more
strict and accurate, cases like these become harder to model.



On Mar 28, 2007, at 10:39 AM, King Simon-NFHD78 wrote:


a_table = Table('a', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(16)),
)

b_table = Table('b', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(16)),
)

c_table = Table('c', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(16)),
)

a_b_table = Table('a_b', metadata,
    Column('a_id', Integer, ForeignKey('a.id'), primary_key=True),
    Column('b_id', Integer, ForeignKey('b.id'), primary_key=True),
)

b_c_table = Table('b_c', metadata,
    Column('b_id', Integer, ForeignKey('b.id'), primary_key=True),
    Column('c_id', Integer, ForeignKey('c.id'), primary_key=True),
)

class A(object):
    pass

class B(object):
    pass

class C(object):
    pass

mapper(B, b_table)
mapper(C, c_table)

#
# How can I create a mapper on A with a property that gives
# all the 'C' objects?
#
# This doesn't work - it requires the foreign_keys parameter
# to be passed, but I don't know what to pass.
mapper(
    A, a_table,
    properties={'cs': relation(primaryjoin=and_(a_table.c.id == a_b_table.c.a_id,
                                                a_b_table.c.b_id == b_c_table.c.b_id,
                                                c_table.c.id == b_c_table.c.c_id),
                               viewonly=True,
                               )
                }
)




join2.py
Description: join2.py


[sqlalchemy] Confused by foreign_keys argument

2007-03-28 Thread King Simon-NFHD78

Hi,

I don't think I understand exactly what I'm supposed to pass to the
foreign_keys parameter to relation.

I'm trying to set up a 1-to-many viewonly relation, with no backref,
where there are two intermediate tables between the parent table and the
child. To complicate things further, I need to restrict the rows by
joining against a couple of other tables as well.

I was previously running on a fairly old version of SA (revision 2283),
and the relation seemed to work correctly when I ANDed all of my
conditions together and passed them to relation as the primaryjoin.

However, I just tried 0.3.6 and got "ArgumentError: Can't locate any
foreign key columns ... Specify foreign_keys argument ..."

(Interestingly, if I try 0.3.5, the relation compiles but when I try to
access it I get a different exception, InvalidRequestError, about a
column not being available because it conflicts with another one)

For example, if these are my tables:
#--
# Three entities, 'a', 'b' and 'c'.
# 'a' and 'b' have a many-to-many relationship
# 'b' and 'c' have a many-to-many relationship
a_table = Table('a', metadata, 
Column('id', Integer, primary_key=True),
Column('name', String(16)),
)

b_table = Table('b', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(16)),
)

c_table = Table('c', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(16)),
)

a_b_table = Table('a_b', metadata,
  Column('a_id', Integer, ForeignKey('a.id'),
 primary_key=True),
  Column('b_id', Integer, ForeignKey('b.id'),
 primary_key=True),
  )
b_c_table = Table('b_c', metadata,
  Column('b_id', Integer, ForeignKey('b.id'),
 primary_key=True),
  Column('c_id', Integer, ForeignKey('c.id'),
 primary_key=True)
  )

class A(object):
pass

class B(object):
pass

class C(object):
pass

mapper(B, b_table)
mapper(C, c_table)

#
# How can I create a mapper on A with a property that gives
# all the 'C' objects?
#
# This doesn't work - it requires the foreign_keys parameter
# to be passed, but I don't know what to pass.
mapper(
  A, a_table,
  properties={'cs': relation(primaryjoin=and_(a_table.c.id ==
a_b_table.c.a_id,
  a_b_table.c.b_id ==
b_c_table.c.b_id,
  c_table.c.id ==
b_c_table.c.c_id),
 viewonly=True,
 )
 }
   )

#--

I probably haven't explained myself very well. I've tried lots of ways
of expressing the relation, including passing a select as an association
table, but I can't seem to get it to work. Does anyone have any ideas?

Thanks a lot,

Simon




[sqlalchemy] Re: Using mapper with custom select creates unneeded subquery

2007-03-22 Thread King Simon-NFHD78

This caught me out a couple of weeks ago, and I've seen a couple of
other similar questions as well. You need to add 'correlate=False' to
the nested select.

I wonder if this should be added to the FAQ?
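The underlying SQL behaviour can be demonstrated with the standard sqlite3 module (the table names echo this thread's example, but the schema here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock (id INTEGER, quantity INTEGER);
    CREATE TABLE request (id_item INTEGER, quantity INTEGER);
    INSERT INTO stock VALUES (1, 10);
    INSERT INTO request VALUES (1, 3);
""")

# Correlated form: 'stock' is omitted from the subquery's own FROM
# list, which is what SQLAlchemy emits when it auto-correlates a
# nested select against an enclosing query.  Standing alone, as a
# mapper's selectable must, it has nothing to correlate against.
correlated = """
    SELECT stock.quantity - sum(request.quantity)
    FROM request WHERE request.id_item = stock.id
"""

# Uncorrelated form (the effect of correlate=False): 'stock' stays in
# the subquery's FROM list, so the statement is self-contained.
uncorrelated = """
    SELECT stock.quantity - sum(request.quantity)
    FROM request, stock WHERE request.id_item = stock.id
"""

try:
    conn.execute(correlated)
except sqlite3.OperationalError as e:
    print("correlated form alone fails:", e)

print(conn.execute(uncorrelated).fetchone())  # (7,)
```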

Hope that helps,

Simon

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of Koen Bok
Sent: 22 March 2007 10:47
To: sqlalchemy
Subject: [sqlalchemy] Re: Using mapper with custom select creates
unneeded subquery


Let me post some sample code with that:

mapper(Request, request_table, properties={
    'children': relation(Request,
        primaryjoin=request_table.c.id_parent==request_table.c.id,
        backref=backref("parent", remote_side=[request_table.c.id])),
    'i': relation(Item,
        primaryjoin=item_table.c.id==request_table.c.id_item,
        backref='requests', lazy=True),
    [SOME MORE STUFF]
    'stock': relation(Stock,
        primaryjoin=and_(
            request_table.c.id_item==stock_table.c.id_product,
            request_table.c.id_location==stock_table.c.id_location,
            request_table.c.id_stocktype==stock_table.c.id_stocktype),
        foreign_keys=[stock_table.c.id_product,
                      stock_table.c.id_location,
                      stock_table.c.id_stocktype])})

stock_request = select(
    [c for c in stock_table.c] +
    [stock_table.c.quantity.op('-')
        (func.sum(request_table.c.quantity)).label('unordered')] +
    [stock_table.c.quantity.op('-')
        (func.sum(request_table.c.allocation)).label('unallocated')],
    and_(
        request_table.c.id_item==stock_table.c.id_product,
        request_table.c.id_location==stock_table.c.id_location,
        request_table.c.id_stocktype==stock_table.c.id_stocktype),
    group_by=[c for c in stock_table.c]).alias('stock_request')

mapper(Stock, stock_request, properties={
    'product': relation(Item,
        primaryjoin=item_table.c.id==stock_table.c.id_product,
        backref='_stock'),
    'location': relation(Item,
        primaryjoin=item_table.c.id==stock_table.c.id_location),
    'stocktype': relation(StockType)})

If you need more, just let me know!

Koen

On Mar 22, 11:42 am, "Koen Bok" <[EMAIL PROTECTED]> wrote:
> Thanks for the reply! If the performance is about equal, that's fine!
>
> But I think I might have found a bug.
>
> When I make a selection it generates the following (faulty) SQL query:
>
> SELECT
> stock_request.id_stocktype AS stock_request_id_stocktype,
> stock_request.unordered AS stock_request_unordered,
> stock_request.id_location AS stock_request_id_location,
> stock_request.id_product AS stock_request_id_product,
> stock_request.unallocated AS stock_request_unallocated,
> stock_request.quantity AS stock_request_quantity,
> stock_request.id AS stock_request_id FROM
> (
> SELECT
> stock.id AS id,
> stock.id_stocktype AS id_stocktype,
> stock.id_product AS id_product,
> stock.id_location AS id_location,
> stock.quantity AS quantity,
> (stock.quantity - sum(request.quantity)) AS unordered,
> (stock.quantity - sum(request.allocation)) AS
unallocated
> FROM request
> WHERE
> request.id_item = stock.id_product
> AND
> request.id_location = stock.id_location
> AND
> request.id_stocktype = stock.id_stocktype
> GROUP BY
> stock.id,
> stock.id_stocktype,
> stock.id_product,
> stock.id_location,
> stock.quantity,
> stock.quantity
> ) AS stock_request, stock
> WHERE
> stock.id_product = 5
> AND
> stock.id_location = 7
> AND
> stock.id_stocktype = 1
> ORDER BY
> stock_request.id
> LIMIT 1
>
> The FROM in the subquery should be: FROM request, stock
>
> The strange thing is that whenever I print the subquery's sql, it has 
> stock in the FROM and therefore is correct.
>
> Or am I not understanding it right?
>
> Koen
>
> On Mar 22, 2:58 am, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> > when you pass a selectable to the mapper, the mapper considers that 
> > selectable to be encapsulated, in the same way as a table is.  the 
> > Query cannot add any extra criterion to that selectable directly 
> > since it would modify the results and corrupt the meaning, if not 
> > the actual syntax, of the selectable itself.  therefore the mapper 
> > is always going to select * from (your selectable) - its the only 
> > way to guarantee the correct results.
>
> > the queries it generates, i.e. select * from (select * from ...)) 
> > will be optimized by the database's optimizer in most cases and 
> > should not add any overhead to your application.
>
> > On Mar 21, 2007, at 8:08 PM, Koen Bok wrote:
>
> > > My mapper looks li

[sqlalchemy] Re: Inconsistent results in session.flush()

2007-03-19 Thread King Simon-NFHD78

Michael Bayer wrote:
> 
> On Mar 17, 2007, at 11:39 AM, Simon King wrote:
> 
> >
> > I had assumed that post_update was only necessary when you are 
> > inserting two rows that are mutually dependent, but in this case I 
> > wasn't inserting either a ReleaseLine or a Label. I suppose 
> > post_update can actually have a knock-on effect across the whole 
> > dependency graph.
> 
> Yes, you are correct.  I did the requisite three hours of 
> staring and uncovered the actual condition that led to this 
> problem, which is definitely a bug.
> 
> Because the second flush() did not require any update 
> operation to either the ReleaseLine or the Label object, the 
> "circular" dependency graph that would be generated for them 
> just gets thrown away, but that also threw away a processor 
> thats supposed to be setting the "release_line_id" column on 
> your Branch table.  so the fix is, if the "circular" 
> dependency graph doesnt get generated for a particular cycle 
> between mappers, or if a particular mapper within the cycle 
> isnt included in the "circular" dependency graph, make sure 
> the flush task for that mapper still gets tacked on to the 
> returned structure so that it can operate on other things 
> external to the cycle, in this case your Branch.  the Branch 
> is related to the ReleaseLine/Label via a many-to-one, which 
> was not as stressed in the unit tests so its harder for this 
> issue to pop up (also requires the two separate flushes...so 
> very hard to anticipate this problem).
> 
> So you can take the post_update out and update to rev 2424.
> 

Yep. I can definitely say that I wouldn't have figured that one out.
Thanks again for all your help,

Cheers,

Simon




[sqlalchemy] Re: fetchall returns bad list

2007-03-19 Thread King Simon-NFHD78

tml wrote:
> I have this code, but the returning "syms" doesn't work for 
> TurboGears' jsonify. Returning "newSyms" works
> 
> >
> > chem_data = Table('chem_data', metadata,
> > Column('id', Integer, primary_key=True),
> > Column('symbol', String(8), nullable=False, index=True),
> > Column('name', Unicode(64), nullable=False, index=True),
> > Column('location', Unicode(32)),
> > Column('description', Unicode),
> > mysql_engine='InnoDB'
> > )
> >
> > def searchData(filter, limit=10):
> > cols = [chem_data.c.name, chem_data.c.symbol]
> > symList = select(cols, or_(chem_data.c.symbol.like(filter),
> >  chem_data.c.name.like(filter)), 
> limit=limit).execute()
> > if symList.rowcount:
> > syms = symList.fetchall()
> > print "X", syms, type(syms)
> > newSyms = [(t.name, t.symbol) for t in syms]
> > print "Y", newSyms, type(newSyms)
> > return newSyms # This works
> > return syms # This won't work.
> > else:
> > return []
> >
> > The print returns:
> > X [(u'Oxygen', 'O')]  Y  [(u'Oxygen', 
> 'O')]  > 'list'>
> >
> > Does fetchall() return some kind of weird list? or might be 
> a bug in 
> > SA...
> >
> > thanks.
> >
> > -tml
> 

They are normal lists, but each element is an instance of
sqlalchemy.engine.base.RowProxy. Try adding 'print type(syms[0])' to see
that.

RowProxy is handy because you can access the values in a list-like way
(row[0]), a dictionary-like way (row['symbol']), or even in an
'object-like' way (row.name). You can even use your column objects to
access the values (row[chem_data.c.location])

jsonify doesn't know how to convert RowProxy objects, but it is easy to
add a rule to convert them. Add something like this to your json.py
(untested):

#---
import sqlalchemy

@jsonify.when("isinstance(obj, sqlalchemy.engine.base.RowProxy)")
def jsonify_row(obj):
return tuple(obj)
# or you could use dict(obj) if you want to use dictionary-style
access

#---

You may even be able to add a rule to convert the ResultProxy object as
well (the object returned by 'execute()' (also untested):

@jsonify.when("isinstance(obj, sqlalchemy.engine.base.ResultProxy)")
def jsonify_results(obj):
return obj.fetchall()

Then your searchData function can just return symList directly.

Hope that helps,

Simon




[sqlalchemy] Re: Inconsistent results in session.flush()

2007-03-16 Thread King Simon-NFHD78
I've just run the attached script about thirty times, and it succeeded 5
times and failed the rest. I've cut out a lot of unnecessary stuff, but
it's still a bit long I'm afraid. I'll cut it down some more, but since
you seemed so eager to see it ;-) I thought I'd send it along as is.

On a bad run, the dependency tuples look like this:

DEBUG:sqlalchemy.orm.unitofwork.UOWTransaction.0x..50:Dependency sort:
Mapper|User|user
  Mapper|Component|component
Mapper|ChangeOrigin|change_origin
  Mapper|Label|label (cycles: [Mapper|Label|label,
Mapper|ReleaseLine|release_line])
Mapper|Counter|counter
Mapper|Branch|branch
  

And on a good run they look like this:

DEBUG:sqlalchemy.orm.unitofwork.UOWTransaction.0x..10:Dependency sort:
Mapper|User|user
  Mapper|Component|component
Mapper|ChangeOrigin|change_origin
  Mapper|ReleaseLine|release_line (cycles:
[Mapper|ReleaseLine|release_line,  Mapper|Label|label])
Mapper|Counter|counter
Mapper|Branch|branch
  

Thanks a lot for looking at this,

Simon

Michael Bayer wrote:
> 
> I can actually read a fair degree from these dumps, i need 
> mostly to know what the actual dependencies are (i.e. which 
> classes are dependent on what, whats the error).  also when 
> you do the full debug echoing the UOW should illustrate a 
> series of "dependency tuples"  
> which will show what pairs of classes the UOW perceives as 
> "dependent" on each other.
> 
> On Mar 16, 2007, at 6:59 AM, King Simon-NFHD78 wrote:
> 
> >
> > Hi,
> >
> > I'm having a problem where the results of session.flush() vary from 
> > one run to another of my test suite. The unit of work 
> transaction dump 
> > is significantly different from one run to the next, similar to the 
> > issue in ticket 461. I haven't managed to make a test case small 
> > enough to post to the list yet, and I think I need to delve 
> a little 
> > further into the code to find out why it's failing.
> >
> > (This is with both 0.3.5 and rev2416)
> >
> > The logs from the UOWTransaction on a failing run and a passing run 
> > are below. As well as the ordering being different, there 
> is at least 
> > one class (ReleaseLine) that doesn't appear in the bad run.
> >
> > Unfortunately I don't know how to go about debugging this. 
> I think I 
> > need to see exactly what is going on in the dependency sort. Do you 
> > have any suggestions for suitable places to add some extra logging?
> >
> > This is a failing run:
> > INFO:sqlalchemy.orm.unitofwork.UOWTransaction.0x..30:Task dump:
> >
> >  UOWTask(0x184b2b0, Component/component/None) (save/update phase)
> >|
> >|- UOWTask(0x184bb50, User/user/None) (save/update phase)
> >|   |- Save User(0x1851870)
> >|   |   |- Process User(0x1851870).branches
> >|   |   |- Process User(0x1851870).reviews
> >|   |   |- Process User(0x1851870).labels
> >|   |   |- Process Branch(0x17ee310).user
> >|   |   |- Process Branch(0x184b190).user
> >|   |
> >|   |- UOWTask(0x184bb70, ChangeOrigin/change_origin/None)
> > (save/update phase)
> >|   |   |   |- Process Branch(0x17ee310).change_origin
> >|   |   |   |- Process Branch(0x184b190).change_origin
> >|   |   |
> >|   |   |- UOWTask(0x184b590, Label/label/None) 
> (save/update phase)
> >|   |   |   |
> >|   |   |   |- UOWTask(0x184b1b0, Branch/branch/None) 
> (save/update
> > phase)
> >|   |   |   |   |- Save Branch(0x17ee310)
> >|   |   |   |   |- Save Branch(0x184b190)
> >|   |   |   |   |   |- Process Branch(0x17ee310).review
> >|   |   |   |   |   |- Process Branch(0x184b190).review
> >|   |   |   |   |
> >|   |   |   |   |- UOWTask(0x183f470, Review/review/None)
> > (save/update phase)
> >
> >|   |   |   |   |   |
> >|   |   |   |   |
> >|   |   |   |   |
> >|   |   |   |   |- UOWTask(0x184bb30,
> > ) 
> > (save/update phase)
> >|   |   |   |   |   |   |- Process Branch(0x17ee310).label
> >|   |   |   |   |   |   |- Process Branch(0x184b190).label
> >|   |   |   |   |   |
> >|   |   |   |   |
> >|   |   |   |   |
> >|   |   |   |
> >|   |   |   |
> >|   |   |   |- UOWTask(0x184bfb0, Counter/counter/None) (save/ 
> > update
> > phase)
> >|   |   |   |   |- Save Counter(0x184b0f0)
> >|   |   |   |   |
> >|   |   |   |
> >|   |   |   |---

[sqlalchemy] Inconsistent results in session.flush()

2007-03-16 Thread King Simon-NFHD78

Hi,

I'm having a problem where the results of session.flush() vary from one
run to another of my test suite. The unit of work transaction dump is
significantly different from one run to the next, similar to the issue
in ticket 461. I haven't managed to make a test case small enough to
post to the list yet, and I think I need to delve a little further into
the code to find out why it's failing.

(This is with both 0.3.5 and rev2416)

The logs from the UOWTransaction on a failing run and a passing run are
below. As well as the ordering being different, there is at least one
class (ReleaseLine) that doesn't appear in the bad run.

Unfortunately I don't know how to go about debugging this. I think I
need to see exactly what is going on in the dependency sort. Do you have
any suggestions for suitable places to add some extra logging?

This is a failing run:
INFO:sqlalchemy.orm.unitofwork.UOWTransaction.0x..30:Task dump:

 UOWTask(0x184b2b0, Component/component/None) (save/update phase)
   |
   |- UOWTask(0x184bb50, User/user/None) (save/update phase)
   |   |- Save User(0x1851870)
   |   |   |- Process User(0x1851870).branches
   |   |   |- Process User(0x1851870).reviews
   |   |   |- Process User(0x1851870).labels
   |   |   |- Process Branch(0x17ee310).user
   |   |   |- Process Branch(0x184b190).user
   |   |
   |   |- UOWTask(0x184bb70, ChangeOrigin/change_origin/None)
(save/update phase)
   |   |   |   |- Process Branch(0x17ee310).change_origin
   |   |   |   |- Process Branch(0x184b190).change_origin
   |   |   |
   |   |   |- UOWTask(0x184b590, Label/label/None) (save/update phase)
   |   |   |   |
   |   |   |   |- UOWTask(0x184b1b0, Branch/branch/None) (save/update
phase)
   |   |   |   |   |- Save Branch(0x17ee310)
   |   |   |   |   |- Save Branch(0x184b190)
   |   |   |   |   |   |- Process Branch(0x17ee310).review
   |   |   |   |   |   |- Process Branch(0x184b190).review
   |   |   |   |   |
   |   |   |   |   |- UOWTask(0x183f470, Review/review/None)
(save/update phase)

   |   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |- UOWTask(0x184bb30,
)
(save/update phase)
   |   |   |   |   |   |   |- Process Branch(0x17ee310).label
   |   |   |   |   |   |   |- Process Branch(0x184b190).label
   |   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |
   |   |   |   |
   |   |   |   |- UOWTask(0x184bfb0, Counter/counter/None) (save/update
phase)
   |   |   |   |   |- Save Counter(0x184b0f0)
   |   |   |   |   |
   |   |   |   |
   |   |   |   |
   |   |   |
   |   |   |
   |   |
   |   |
   |
   |
   |- UOWTask(0x184bb50, User/user/None) (delete phase)
   |   |
   |   |- UOWTask(0x184bb70, ChangeOrigin/change_origin/None) (delete
phase)
   |   |   |
   |   |   |- UOWTask(0x184b590, Label/label/None) (delete phase)
   |   |   |   |
   |   |   |   |- UOWTask(0x184b1b0, Branch/branch/None) (delete phase)
   |   |   |   |   |
   |   |   |   |   |- UOWTask(0x183f470, Review/review/None) (delete
phase)
   |   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |- UOWTask(0x184bb30,
) (delete
phase)
   |   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |   |
   |   |   |   |
   |   |   |   |
   |   |   |   |- UOWTask(0x184bfb0, Counter/counter/None) (delete
phase)
   |   |   |   |   |
   |   |   |   |
   |   |   |   |
   |   |   |
   |   |   |
   |   |
   |   |
   |
   |

And on a good run looks like this:

INFO:sqlalchemy.orm.unitofwork.UOWTransaction.0x..f0:Task dump:

 UOWTask(0x17f2610, User/user/None) (save/update phase)
   |- Save User(0x17fe9f0)
   |   |- Process User(0x17fe9f0).labels
   |   |- Process Branch(0x17caa10).user
   |   |- Process Branch(0x17f2470).user
   |   |- Process User(0x17fe9f0).reviews
   |   |- Process User(0x17fe9f0).branches
   |
   |- UOWTask(0x17f2a50, ChangeOrigin/change_origin/None) (save/update
phase)
   |   |   |- Process Branch(0x17caa10).change_origin
   |   |   |- Process Branch(0x17f2470).change_origin
   |   |
   |   |- UOWTask(0x17f29f0, Component/component/None) (save/update
phase)
   |   |   |
   |   |   |- UOWTask(0x17f2ad0, ReleaseLine/release_line/None)
(save/update phase)
   |   |   |   |   |- Process Branch(0x17caa10).release_line
   |   |   |   |   |- Process Branch(0x17f2470).release_line
   |   |   |   |   |- Process Counter(0x17fed10).release_line
   |   |   |   |
   |   |   |   |- UOWTask(0x17f2430, Counter/counter/None) (save/update
phase)
   |   |   |   |   |- Save Counter(0x17fed10)
   |   |   |   |   |
   |   |   |   |
   |   |   |   |
   |   |   |   |- UOWTask(0x17f2450, Branch/branch/None) (save/update
phase)
   |   |   |   |   |- Save Branch(0x17caa10)
   |   |   |   |   |- Save Branch(0x17f2470)
   |   |   |   |   |   |- Process Branch(0x17caa10).review
   |   |   |   |   |   |- Process Branch(0x17f2470).review
   |   |   |   |   |
   |   |   |   |   |- UOWTask(0x17faf70, Review/review/None

[sqlalchemy] Re: table name bind param

2007-03-15 Thread King Simon-NFHD78

tml wrote:
> 
> also to clarify, the text actually has :table_name used in many other
> places:
> 
> t = metadata.engine.text("LOCK TABLE :table_name WRITE; "
>"UPDATE :table_name SET rgt=rgt + 
> 2 WHERE rgt > :insert_node_val and parent_id = :parent_id;"
>"UPDATE :table_name SET lft=lft + 
> 2 WHERE lft > :insert_node_val and parent_id = :parent_id;"
>..
> 
> so if i had it as %s, i would have to repeat the same name 
> multiple times in % (name, name, name). I'm ok with this, 
> just curious if there is a better way.
> 

I don't think bind parameters can be used for table names, so you are
stuck with python format strings, but you can use a dictionary instead
of a tuple when formatting strings. Instead of using %s, you use
%(name)s, and instead of the tuple (name, name, name) you pass a single
dictionary {'name': }

http://docs.python.org/lib/typesseq-strings.html
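For example (plain Python string formatting; the table name and SQL here are made up):

```python
table_name = 'my_tree'

# %(name)s may appear any number of times in the template;
# the dictionary only needs to be passed once.
sql = ("LOCK TABLE %(name)s WRITE; "
       "UPDATE %(name)s SET rgt=rgt + 2 WHERE rgt > :insert_node_val; "
       "UPDATE %(name)s SET lft=lft + 2 WHERE lft > :insert_node_val;")

formatted = sql % {'name': table_name}
print(formatted)
```

The :insert_node_val bind parameters survive the formatting step and can still be filled in by the database driver at execute time.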

Hope that helps,

Simon




[sqlalchemy] Re: Table being removed from nested query

2007-03-06 Thread King Simon-NFHD78

That did it - thanks a lot

Simon 

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Michael Bayer
> Sent: 06 March 2007 14:38
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: Table being removed from nested query
> 
> 
> try putting "correlate=False" in the nested select.
> 
> On Mar 6, 2007, at 6:29 AM, King Simon-NFHD78 wrote:
> 
> > Hi,
> >
> > I have a problem in which a table is being removed from the FROM 
> > clause of a nested query. The attached file should show the 
> problem, 
> > which I've tested on 0.3.5 and rev 2383.
> >
> > In the example, there are two tables, department and employee, such 
> > that one department has many employees. The inner query 
> joins the two 
> > tables and returns department IDs:
> >
> >   inner = select([departments.c.department_id],
> >  employees.c.department_id ==
> > departments.c.department_id)
> >   inner = inner.alias('filtered_departments')
> >
> > The SQL looks like:
> >
> >  SELECT departments.department_id
> >  FROM departments, employees
> >  WHERE employees.department_id = departments.department_id
> >
> > I then join this query back to the department table:
> >
> >  join = inner.join(departments,
> >
> > onclause=inner.c.department_id==departments.c.department_id)
> >
> > SQL for the join condition looks like:
> >
> >  (SELECT departments.department_id
> >   FROM departments, employees
> >   WHERE employees.department_id = departments.department_id)
> >   AS filtered_departments
> >   JOIN departments ON filtered_departments.department_id = 
> > departments.department_id
> >
> > This still looks correct to me. However, I then base a query on this
> > join:
> >
> >   outer = select([departments.c.name],
> >  from_obj=[join],
> >  use_labels=True)
> >
> > At this point, the 'departments' table is no longer part of 
> the inner 
> > query. The SQL looks like:
> >
> >  SELECT departments.name
> >  FROM (SELECT departments.department_id AS department_id
> >FROM employees
> >WHERE employees.department_id = departments.department_id)
> > AS filtered_departments
> >   JOIN departments ON filtered_departments.department_id = 
> > departments.department_id
> >
> > ...and the query doesn't run.
> >
> > I think I can work around it by putting the join condition in the 
> > whereclause of the select, instead of from_obj, but is 
> there a reason 
> > why the join version doesn't work?
> >
> > Thanks,
> >
> > Simon
> >
> > >
> > 
> 
> 
> > 
> 




[sqlalchemy] Table being removed from nested query

2007-03-06 Thread King Simon-NFHD78
Hi,

I have a problem in which a table is being removed from the FROM clause
of a nested query. The attached file should show the problem, which I've
tested on 0.3.5 and rev 2383.

In the example, there are two tables, department and employee, such that
one department has many employees. The inner query joins the two tables
and returns department IDs:

  inner = select([departments.c.department_id],
 employees.c.department_id ==
departments.c.department_id)
  inner = inner.alias('filtered_departments')

The SQL looks like:

 SELECT departments.department_id
 FROM departments, employees
 WHERE employees.department_id = departments.department_id

I then join this query back to the department table:

 join = inner.join(departments,
 
onclause=inner.c.department_id==departments.c.department_id)

SQL for the join condition looks like:

 (SELECT departments.department_id
  FROM departments, employees
  WHERE employees.department_id = departments.department_id)
  AS filtered_departments
  JOIN departments ON filtered_departments.department_id =
departments.department_id

This still looks correct to me. However, I then base a query on this
join:

  outer = select([departments.c.name],
 from_obj=[join],
 use_labels=True)

At this point, the 'departments' table is no longer part of the inner
query. The SQL looks like:

 SELECT departments.name
 FROM (SELECT departments.department_id AS department_id
   FROM employees
   WHERE employees.department_id = departments.department_id)
AS filtered_departments
  JOIN departments ON filtered_departments.department_id =
departments.department_id

...and the query doesn't run.

I think I can work around it by putting the join condition in the
whereclause of the select, instead of from_obj, but is there a reason
why the join version doesn't work?
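
The whereclause workaround described above can be sketched in plain SQL against an in-memory SQLite database (stdlib only; the table contents here are invented for illustration). Moving the join condition into the WHERE clause keeps 'departments' inside the inner query's FROM list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (department_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (employee_id INTEGER PRIMARY KEY,
                            department_id INTEGER, name TEXT);
    INSERT INTO departments VALUES (1, 'Engineering'), (2, 'Sales');
    INSERT INTO employees VALUES (1, 1, 'Alice');
""")

# Join condition expressed in the WHERE clause rather than as a JOIN in
# from_obj, so the inner SELECT keeps both tables in its FROM list:
rows = conn.execute("""
    SELECT departments.name
    FROM (SELECT departments.department_id AS department_id
          FROM departments, employees
          WHERE employees.department_id = departments.department_id
         ) AS filtered_departments,
         departments
    WHERE filtered_departments.department_id = departments.department_id
""").fetchall()
print(rows)  # [('Engineering',)]
```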

Thanks,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



inner_query_test.py
Description: inner_query_test.py


[sqlalchemy] Re: Polymorphic collections / ticket #500

2007-03-06 Thread King Simon-NFHD78
I wanted to do something like this in the past, and in the end, rather
than using polymorphic mappers it made more sense to create a
MapperExtension which overrides create_instance. In create_instance you
can examine your 'typ' column to decide what class to create, selecting
one of your Manager/Demigod classes if necessary, or falling back to the
Person class otherwise.
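
The dispatch such a create_instance override would perform can be sketched in pure Python (this is not SQLAlchemy API; the class names and 'typ' values are invented stand-ins):

```python
# Pick a subclass based on a row's 'typ' column, falling back to the base
# class for unknown types -- the core idea of the create_instance override.
class Person: pass
class Manager(Person): pass
class Demigod(Person): pass

TYPE_MAP = {"manager": Manager, "demigod": Demigod}

def create_instance(row):
    # Unknown or missing 'typ' values fall back to Person
    cls = TYPE_MAP.get(row.get("typ"), Person)
    return cls.__new__(cls)  # the mapper would populate attributes afterwards

print(type(create_instance({"typ": "manager"})).__name__)  # Manager
print(type(create_instance({"typ": "weird"})).__name__)    # Person
```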
 
Hope that helps,
 
Simon




From: sqlalchemy@googlegroups.com
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Morrison
Sent: 06 March 2007 01:12
To: sqlalchemy
Subject: [sqlalchemy] Polymorphic collections / ticket #500


The fix for ticket #500 breaks a pattern I've been using.

It's most likely an anti-pattern, but I don't see a way to get
what I want in SA otherwise.

I've got a series of "entities"

class Person():
   pass

class Manager(Person):
    def __init__(self):
        # do manager stuff
        pass

class Demigod(Person):
    def __init__(self):
        # do demigod stuff
        pass

etc.

there are mappers for each of these entities that inherit from
Person(), so all of the normal Person() properties exist, but Person()
itself is not polymorphic. That's on purpose, and because the class
hierarchy of Manager(), etc, is not exhaustive, and I occasionally  want
to save instances of Person() directly.  
If I make the Person() class polymorphic on a column of say
"typ", then SA clears whatever "typ" I may have tried to set directly,
and seems to make me specify an exhaustive list of sub-types. 

And so I leave Person() as non-polymorphic. I also have a
collection of Person() objects on a different mapper, which can load
entity objects of any type. 

Before rev #2382, I could put a Manager() in a Person()
collection, and  it would flush OK. Now it bitches that it wants a real
polymorphic mapper. I don't want to use a polymorphic mapper, because I
don't want to specify an exhaustive list of every class that I'm ever
going to use. 

What to do?

Thanks,
Rick








[sqlalchemy] Re: Associations in a cleaner way

2007-03-05 Thread King Simon-NFHD78

 
Knut Aksel Røysland wrote:
> 
> [snip]
> 
> However, an instance of D also needs a reference to an instance of C.
> If the appropriate instance of C exists in the database (or 
> is pending to go into it), I want to pick this one, or 
> otherwise create a new instance of C.
> 
> What I am looking for is the most clean way to achieve this. 
> I have run into trouble trying to use session.get(C, c_id) to 
> lookup instances of C that have not been flushed yet. (I 
> guess this might have something to do with primary keys not 
> working before instances have become persistent?)
> 
> Furthermore, I want the constructor of D to be where I lookup 
> or create the appropriate instance of P, which seems to 
> require that I pass the session object to the constructor so 
> it can use session.get to look for an existing instance of P. 
> I feel this passing of the session object around, is going to 
> clutter the code, so I am looking for a cleaner way.
> 
> [snip]

You may find the UniqueObject recipe useful:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject

It should help with the 'use-existing-or-create-new' part of your problem. I 
had a few difficulties when I tried to use it, but on reflection I think that 
was because I held references to objects after I had cleared the session. It 
also makes it difficult when you really do want to check whether an object 
exists in the DB without creating it, but it shouldn't be too difficult to 
adapt.
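
The core of the recipe is a metaclass whose __call__ returns a cached instance per key, which is also why instances created any other way (e.g. loaded via a relation) bypass the cache. A minimal sketch of just that idea, with an invented UniqueName stand-in (the real recipe also keys on the session/hashkey):

```python
# Metaclass-based instance cache: UniqueName("spam") always returns the
# same object for the same key.
class UniqueMeta(type):
    def __call__(cls, name):
        if "_cache" not in cls.__dict__:
            cls._cache = {}
        if name not in cls._cache:
            # Only construction through cls(...) goes via this path and
            # gets cached; objects built any other way are invisible here.
            cls._cache[name] = super().__call__(name)
        return cls._cache[name]

class UniqueName(metaclass=UniqueMeta):
    def __init__(self, name):
        self.name = name

a = UniqueName("spam")
b = UniqueName("spam")
print(a is b)  # True
```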

Instead of passing the session object around, you might be able to use the 
object_session function, which returns the session which the object is 
associated with, so in your constructor you could try 'session = 
object_session(self)'

Hope that helps,

Simon




[sqlalchemy] Re: SQA failing on table creation

2007-03-02 Thread King Simon-NFHD78

percious wrote:
> Here is the dump:
> ..
> sqlalchemy.exceptions.SQLError: (OperationalError) (1071, 
> 'Specified key was too long; max key length is 999 bytes') 
> '\nCREATE TABLE `Album` (\n\tid INTEGER NOT NULL 
> AUTO_INCREMENT, \n\tname VARCHAR(128), \n\tdirectory 
> VARCHAR(512), \n\t`imageOrder` VARCHAR(512), \n\t`coverImage` 
> INTEGER, \n\tPRIMARY KEY (id), \n\t UNIQUE (directory), \n\t 
> FOREIGN KEY(`coverImage`) REFERENCES `Image` (id)\n)\n\n' ()
> 
> Here is the table code:
> AlbumTable = Table('Album', metadata,
> Column('id', Integer, primary_key=True),
> Column('name', Unicode(128)),
> Column('directory', Unicode(512), unique=True),
> Column('imageOrder', Unicode(512)),
> Column('coverImage', Integer, ForeignKey('Image.id')),
> )
> Mysql version 5.0.27
> 
> TIA
> -chris

I think this is because of your 'unique=True' on your Unicode directory
column. MySQL is building a unique index on that column, and the number
of bytes that it uses per character varies depending on the encoding. If
it is UTF-16, for example, it will use 2 bytes per character, so your
VARCHAR(512) column would be 1024 bytes, and as the error message says,
the max key length is 999 bytes. This is a MySQL problem, not
SQLAlchemy.

I don't know what the solution is - you may need to play with MySQL's
character encoding.
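
The arithmetic behind the error can be checked directly (the 999-byte limit comes from the error message above; MySQL's utf8 uses up to 3 bytes per character, ucs2 exactly 2, latin1 exactly 1):

```python
# A VARCHAR(512) unique index only fits under the 999-byte key limit when
# the character set is single-byte.
MAX_KEY_BYTES = 999
for charset, bytes_per_char in [("latin1", 1), ("ucs2", 2), ("utf8", 3)]:
    key_bytes = 512 * bytes_per_char
    print(charset, key_bytes, "fits" if key_bytes <= MAX_KEY_BYTES else "too long")
```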

See for example things like:

http://www.xaprb.com/blog/2006/04/17/max-key-length-in-mysql/

Hope that helps,

Simon




[sqlalchemy] Re: how to get column names in result

2007-02-26 Thread King Simon-NFHD78

Also, the rows that you get back from a 'select' are instances of
RowProxy, so you can use row.keys() to get the names even after column
labels have been applied.

http://www.sqlalchemy.org/docs/docstrings.myt#docstrings_sqlalchemy.engine_RowProxy
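
The standard library's sqlite3.Row behaves analogously (this is not SQLAlchemy's RowProxy, just a stdlib illustration of column names travelling with each result row):

```python
import sqlite3

# With row_factory set to sqlite3.Row, each row carries its column names,
# even after aliases/labels have been applied in the SELECT.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT 1 AS id, 'bob' AS logname").fetchone()
print(row.keys())      # ['id', 'logname']
print(row["logname"])  # bob
```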

Simon

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of Jonathan Ellis
> Sent: 26 February 2007 17:51
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: how to get column names in result
> 
> 
> You can see what columns are part of a table (or a select!) with
> .columns.keys() or .c.keys().
> 
> On 2/24/07, vkuznet <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> > a very simple question which I cannot find in documentation. How to 
> > get column names together with result. This is useful for web 
> > presentation of results using templates. I used use_labels=True, 
> > indeed it construct query with names, but my final result contains 
> > only column values. What I wanted is to get as a first row 
> the column 
> > names.
> >
> > Thanks,
> > Valentin.
> >
> >




[sqlalchemy] Re: How do I tell if an object has been INSERTed or UPDATEd?

2007-02-26 Thread King Simon-NFHD78

Marco Mariani wrote:
> 
> Simon Willison wrote:
> > I've got a bit of code that looks like this:
> >
> > session = get_session()
> > session.save(obj)
> > session.flush()
> >   
> You can see what's going to be inserted/updated/deleted by 
> accessing session.new, session.dirty, session.deleted
> 
> http://www.sqlalchemy.org/docs/unitofwork.myt
> 
> > What's the best way of telling if obj has been newly 
> created (INSERT) 
> > or merely updated (UPDATE)? I tried just checking for 
> "obj.id is None"
> > but I can't garauntee that my primary key is called 'id'.
> I would hope so! :-))
> 

If you are wanting to know _after_ the session.flush(), I don't think
session.new/dirty/deleted will help you. Also, your primary key will be
read back from the database immediately after INSERT, so it won't be
None. Between the save and the flush, "obj in session.new" should do
the job.

Simon




[sqlalchemy] Re: Lazy loading advantages and disadvantages

2007-02-22 Thread King Simon-NFHD78

Adam M Peacock wrote:
> Is there a difference in the SQL executed when using lazy vs eager  
> loading?  Specifically, if I use eager loading will everything be  
> queried at once with a more efficient join, or will it still use the  
> lazy style (as far as I understand it) of generating a ton extra  
> queries as it loads each relation separately?  If it is the former,  
> more efficient case (an eager relation uses a join) is it possible to

> override the loader type at query time, such as being lazy by default

> but being nice to the database when I know I'm going to need all the  
> data from the relation (especially if I'm calling a  couple thousand  
> rows for a report)? 

Eager loads are performed using a join, so only a single query is
issued. You can change the eager/lazy behaviour at query time by using
the 'options' method on the query object.

See the data mapping documentation (the sections on relation loading
options) for more.

Hope that helps,

Simon





[sqlalchemy] Re: Tracking changes on mapped entities

2007-02-12 Thread King Simon-NFHD78

Allen Bierbaum wrote:
> 
> Is it possible to examine the session and get a list of all 
> mapped instances that have been changed?
> 

This information is available on the 'new', 'dirty' and 'deleted'
properties of the session:

http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_changed

Hope that helps,

Simon




[sqlalchemy] Re: SelectResults and GROUP_BY or DISTINCT?

2007-02-08 Thread King Simon-NFHD78

I don't use postgres, so I don't know for certain, but that error
message looks like the one described in this message:

http://groups.google.com/group/sqlalchemy/msg/f4f47510b76720c9

If it is that case, I think it was fixed in rev 2301, and 2302 added a
more convenient way to use DISTINCT with SelectResults.

http://www.sqlalchemy.org/trac/changeset/2301
http://www.sqlalchemy.org/trac/changeset/2302

Hope that helps,

Simon 

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of isaac
> Sent: 08 February 2007 00:32
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] SelectResults and GROUP_BY or DISTINCT?
> 
> 
> Hi,
> 
> I couldn't find any mention of this in the docs or by googling...
> 
> I'm using SelectResults to build somewhat complex queries 
> that have a string of joins. It seems that order_by doesn't 
> work in this situation. Postgres says:
> 
> ERROR:  column "people.name_last" must appear in the GROUP BY 
> clause or be used in an aggregate function
> 
> Any ideas or a quick example of adding a group_by to a 
> SelectResults?  
> According to some discussion in the Postgres docs, adding 
> DISTINCT might allow ORDER_BY to work (don't see a way to do 
> that either).  
> Either way is fine w/me. :)
> 
> (I would have just given up and used another method, but I'm 
> using the @paginate decorator in TurboGears, which currently requires
> SelectResults)
> 
> Thanks.
> --i




[sqlalchemy] Re: SelectResults, counts and one-to-many relationships

2007-02-06 Thread King Simon-NFHD78

Michael Bayer wrote:
> 
> I added distinct() to selectresults as a method and made the 
> unit test a little clearer (since i dont like relying on the 
> selectresults "mod")...
> 
> q = sess.query(Department)
> d = SelectResults(q)
> d =
> d.join_to('employees').filter(Employee.c.name.startswith('J'))
> d = d.distinct()
> d = d.order_by([desc(Department.c.name)])
> 
> 

...and...

> 
> for the order by getting removed during the select, that 
> seemed to be an optimization that got stuck in there and 
> since this is a really fringe use case its never come up, so 
> i removed it and added your test case (only with 
> distinct=True) in rev 2301.
> 

I think you're slipping - I had to wait a whole three and a half hours
for this fix ;-) Seriously, thanks again,

Simon




[sqlalchemy] Re: new setuptools vs local SA copy

2007-02-06 Thread King Simon-NFHD78

Rick Morrison wrote:
> 
> I keep two versions of SA installed here, one is a stable 
> version installed in the Python site-packages folder, and one 
> is current trunk with some local patches for testing.
> 
> I used to be able to run tests and programs using the local 
> version by just inserting the local directory into the Python 
> path, and imports would then use that.
> 
> I've recently upgraded to setuptools 0.6c5 and that doesn't 
> seem to work anymore -- I now always get the version from the 
> site-packages folder.
> 
> Anyone running this kind of configuration out there run into 
> something like this?
> 

The way I've done this is to run 'python setup.py develop' in the SVN
checkout. This puts the path to the checkout in easy-install.pth, and it
also creates an SQLAlchemy.egg-link file with the same path - I don't
know what this is used for.

To go back to the stable version I run 'easy_install -U SQLAlchemy'.
This seems to work on both Windows and Linux, but I am only on
setuptools 0.6c3.

This is probably more complicated than it needs to be - I would have
thought you can switch just by editing the easy-install.pth file.

The correct way is probably to use setuptools' --multi-version switch,
and put pkg_resources.require() somewhere in your application, but I've
not used that yet.

Another thing that I've found very useful (on Linux) is this:

 
http://peak.telecommunity.com/DevCenter/EasyInstall#creating-a-virtual-python

Particularly with fast-moving projects like SQLAlchemy and TurboGears,
trying to share a single copy of a library between multiple applications
without breaking them every time I upgraded the library was getting
tricky. There's also working-env:

 http://blog.ianbicking.org/workingenv-update.html

which I haven't tried yet, but has the advantage of working on Windows
(apparently).

Hope that helps,

Simon




[sqlalchemy] SelectResults, counts and one-to-many relationships

2007-02-05 Thread King Simon-NFHD78
Hi again,

I'm using the SelectResults mod, and I think there may be a bug with the
way one-to-many relationships are handled. The attached file should show
the problem.

In the example, I have two mapped classes, Department and Employee, such
that a department has many employees. In the 'check' function I am
trying to select all the departments which have an employee whose name
starts with 'J'. I am also ordering the results by Department name,
descending.

If I pass distinct=False, then the result of the count() is wrong - I
assume that is the expected behaviour, and not a bug. However, if I pass
distinct=True, the order_by does not appear in the nested
('tbl_row_count') query, so the wrong row is returned.

In case it makes a difference, I've tested this with revisions 2283 and
2300 of the trunk.

Cheers,

Simon




one_to_many_test.py
Description: one_to_many_test.py


[sqlalchemy] Re: Deferred column loading and inheritance

2007-02-02 Thread King Simon-NFHD78

Michael Bayer wrote:
> your extension needs to translate the row being passed to the 
> descendant mapper, i.e. inside of populate_instance.  any 
> dictionary which has Column instances for keys will work.  such as:
> 
> def populate_instance(self, mapper, selectcontext,
>   row, instance, identitykey, isnew):
> newmapper = class_mapper(type(instance))
> newrow = dict((k,row[k]) for k in list(employees.c))
> newrow[managers.c.manager_id] = row[employees.c.person_id]
> newrow[engineers.c.engineer_id] = row[employees.c.person_id]
> newmapper.populate_instance(selectcontext, instance, newrow,
>  identitykey, isnew)
> return True
> 

That works perfectly - thanks again! I did just notice a tiny mistake in
the docs for MapperExtension - In the populate_instance example it
passes a frommapper argument to the other mapper, but this parameter
doesn't exist.

Thanks again for all your help,

Simon




[sqlalchemy] Deferred column loading and inheritance

2007-02-01 Thread King Simon-NFHD78
Hi again,

I have a situation where I'd like to be able to load entities from a
single table, but that have attributes defined in other tables,
depending on their type (basically the same as the multiple table
inheritance example in the docs), and I'd like all of the attributes
that are defined in other tables to be lazily loaded, so my initial
selection only queries the first table - no polymorphic unions or
anything.

I've attached my initial attempt, which works for the simple case.
However, I think the only reason it works is because the primary key
columns on each table are named the same. If you rename
engineers.person_id to engineers.engineer_id, for example, it fails when
populating the instance because it can't find a value to populate the
engineer_id property with. Even if the column is called person_id, the
load fails if it is part of a bigger query (eg. an eager load from
another object), because column aliases also break it.

I played around with making the primary_key deferred as well, but that
stops the deferred loading from working completely (probably not
surprising really)

I suppose a different way of doing this would be to map the attributes
to a completely separate class, and then proxy them somehow on the main
class, but I thought I would ask the question first, to see if anyone
has any ideas.

Thanks a lot,

Simon




deferred_test.py
Description: deferred_test.py


[sqlalchemy] Re: iteration over mapper

2007-01-31 Thread King Simon-NFHD78

Jose Soares wrote:
> Hi all,
> 
> Probably this is a stupid question,  :-[ but I don't 
> understand how to iterate an object mapper to get fields value.
> ---
> 
> user = session.query(User).select(id=1)
> 
> for j in user.c:
>print j.name
> 
> logname
> id
> password
> 
> 
> 
> for j in user.c:
>print j.value
> 
> 'Column' object has no attribute 'value'
> 

The fields are attributes of the 'user' object itself, so the values are
at user.logname, user.id and user.password. To get an attribute whose
name is stored in a variable, you can use 'getattr':

for col in user.c:
  value = getattr(user, col.name)
  print col.name, value

Hope that helps,

Simon




[sqlalchemy] Re: Using .label() on boolean expressions

2007-01-19 Thread King Simon-NFHD78


Simon King wrote:


I don't know if this is valid SQL, but MySQL seems to accept 
it... I'd like to write a query that looks like:


  SELECT s.result LIKE 'Pass%' AS pass
  ...

Which would return 1 or 0 for each row depending on whether 
the result column begins with Pass.




Another way I tried to do this was to use the SQL:

SELECT IF(s.result LIKE 'Pass%', 'Pass', 'Fail') AS pass
...

because that would be a function rather than a boolean expression, and
functions can be labelled. I knew I couldn't call 'sa.func.if', but I
thought it would be nice if you could use sa.func.if_  - the
_FunctionGateway object could strip the trailing underscore from the
name. It took me a while to realise I could use sa.func.IF, but the
capital letters look ugly :-). Alternatively, _FunctionGateway could be
given a __call__ method which would take the name as a parameter, so you
could use 'sa.func("if")'.

Just an idea.
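
A toy illustration of the trailing-underscore idea (this is a stand-in object, not SQLAlchemy's actual func gateway): 'func.if' is a Python syntax error because 'if' is a keyword, but 'func.if_' can be made to work by stripping the underscore in __getattr__:

```python
# Stand-in for a func gateway: attribute access builds an SQL function
# string, with a trailing underscore stripped so keywords like 'if' work.
class FuncGateway:
    def __getattr__(self, name):
        sql_name = name.rstrip("_").upper()
        return lambda *args: "%s(%s)" % (sql_name, ", ".join(args))

func = FuncGateway()
print(func.if_("s.result LIKE 'Pass%'", "'Pass'", "'Fail'"))
# IF(s.result LIKE 'Pass%', 'Pass', 'Fail')
```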

Simon




[sqlalchemy] Using .label() on boolean expressions

2007-01-19 Thread King Simon-NFHD78


Hi,

I don't know if this is valid SQL, but MySQL seems to accept it... I'd
like to write a query that looks like:

 SELECT s.result LIKE 'Pass%' AS pass
 ...

Which would return 1 or 0 for each row depending on whether the result
column begins with Pass. In SQLAlchemy this would become:

 sa.select([s.c.result.startswith('Pass').label('pass')] ...)

Without the .label(), this works, but I can't label it because
BooleanExpressions don't have a label method.

Is there another way to do this?

Thanks,

Simon




[sqlalchemy] Re: Questions about polymorphic mappers

2007-01-18 Thread King Simon-NFHD78


Michael Bayer wrote:

Simon King wrote:
> [requirements for instances returned from
>  MapperExtension.create_instance]

at this point the entity_name should get set after your 
custom create_instance is called (at least thats in the 
trunk).  init_attr is not required, it pre-sets attributes on 
the object that are otherwise auto-created later (but the 
autocreation step throws a single AttributeError per 
attribute, which hits performance a little bit).




Thanks a lot for explaining that. It looks to me like I would be better
off simply using this method to load my class hierarchy, rather than
trying to twist polymorphic_identity into something that it was never
meant to do. Also, adding get_polymorphic_identity as a MapperExtension
method would add an overhead for every single object load for what is
probably a very infrequently used feature - I'd hate to be responsible
for that! Yet again, SQLAlchemy is already able to do exactly what I
want - sorry it's taken a while for me to realise it.

Cheers,

Simon




[sqlalchemy] Re: Questions about polymorphic mappers

2007-01-15 Thread King Simon-NFHD78


Michael Bayer wrote:


you can still override create_instance() as well and try to 
spit out subclasses that are otherwise not mapped.




This was something I looked at a while ago as well, and I wasn't sure
what the requirements on objects returned from create_instance were. If
it is not overridden, the mapper calls _create_instance, which sets
_entity_name and calls attribute_manager.init_attr. How important are
these things to the rest of the library?

Simon




[sqlalchemy] Re: Questions about polymorphic mappers

2007-01-15 Thread King Simon-NFHD78


Ah, I see what you mean. Yep, that would work perfectly for me. It would
mean that I would need to set up a mapper for every subclass, but that
is no great hardship (I was half-imagining just mapping the base class,
but that probably has other implications)

Cheers,

Simon

Michael Bayer wrote: 
im thinking it would just return the string (or whatever) 
value that would match the key inside your polymorphic_map.  
so the polymorphic map would have a 1->1 key->class mapping, 
but this function would allow you to do translations from 
whatever is in the result set (like regexps or whatever).


On Jan 15, 2007, at 5:56 AM, King Simon-NFHD78 wrote:

>
> Michael Bayer wrote:
>> id rather just add another plugin point on MapperExtension 
for this, 
>> which takes place before the "polymorphic" decision stage at
>> the top of the _instance method, like 
get_polymorphic_identity().   
>> that way you could do all of this stuff cleanly in an 
extension (and 
>> id do that instead of making polymorphic_identity into a 
list).  hows 
>> that sound?

>
> That would be ideal for me, and would seem to be the most flexible 
> solution as well - it leaves the decision for which class 
to use up to 
> the application. What would it actually return, though? An instance 
> ready to be populated?

>
> Thanks a lot,
>
> Simon
>






[sqlalchemy] Re: Questions about polymorphic mappers

2007-01-15 Thread King Simon-NFHD78


Michael Bayer wrote:


id rather just add another plugin point on MapperExtension 
for this, which takes place before the "polymorphic" decision 
stage at the top of the _instance method, like 
get_polymorphic_identity().  that way you could do all of 
this stuff cleanly in an extension (and id do that instead of 
making polymorphic_identity into a list).  hows that sound?




That would be ideal for me, and would seem to be the most flexible
solution as well - it leaves the decision for which class to use up to
the application. What would it actually return, though? An instance
ready to be populated?

Thanks a lot,

Simon




[sqlalchemy] Re: Questions about polymorphic mappers

2007-01-12 Thread King Simon-NFHD78

Michael Bayer wrote:
> 
> I think using the polymorphic_map is OK.  I downplayed its 
> existence since I felt it was confusing to people, which is 
> also the reason I made the _polymorphic_map argument to 
> mapper "private"; it was originally public.  But it seemed 
> like it was producing two ways of doing the same thing, so I 
> made it private.

OK - I'll carry on using that then.

>
> 
>

Ah - that's what I was missing. I hadn't seen the class_mapper function.
Thanks for that.

> 
> As far as having multiple "polymorphic_identity" values map 
> to the same class, I would think we could just have 
> polymorphic_identity be a list instead of a scalar.  Right 
> now, if you just inserted multiple values for the same class 
> in polymorphic_map, it would *almost* work, except that the 
> save() process is hardwiring the polymorphic_on column to the 
> single polymorphic_identity value no matter what it's set to.
> 
> So attached is an untested patch which accepts either a 
> scalar or a list value for polymorphic_identity, and if it's a 
> list then instances need their "polymorphic_on" attribute set 
> to a valid entry before flushing.  Try this out and see if it 
> does what you need, and I can easily enough add this to the 
> trunk to be available in the next release (though I'd need to 
> write some tests also).
> 

I think this would definitely be a useful feature, and in fact I was
originally going to attempt (or at least suggest!) something like that
myself. I'll try the patch and let you know how well it works.
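
The scalar-or-list idea can be illustrated in pure Python (register and
polymorphic_map here are illustrative names for this sketch, not the
actual patch):

```python
# Illustration only: several discriminator values resolving to the same
# class, accepting either a scalar or a list, as the proposed patch
# would.  This is not the actual SQLAlchemy patch.

polymorphic_map = {}

def register(cls, identities):
    # normalize a scalar identity to a one-element list
    if not isinstance(identities, (list, tuple)):
        identities = [identities]
    for ident in identities:
        polymorphic_map[ident] = cls

class Employee(object):
    pass

class Manager(Employee):
    pass

register(Employee, 'employee')
register(Manager, ['manager', 'director'])

print(polymorphic_map['director'].__name__)  # Manager
```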

However, I still have a situation where I would like to be able to use a
default class for unknown types. I don't want to hard-code all the
possible options up-front - only the ones that I actually want to treat
specially. I've been playing around with some different options, and
this is what I've ended up with:

class EmployeeMeta(type):
    def __call__(cls, kind, _fix_class=True, **kwargs):
        if not _fix_class:
            return type.__call__(cls, kind=kind, **kwargs)
        cls = get_employee_class(kind)
        return cls(kind=kind, _fix_class=False, **kwargs)

def get_employee_class(kind):
    if kind == 'manager':
        return Manager
    else:
        return Employee

class Employee(object):
    __metaclass__ = EmployeeMeta

class Manager(Employee):
    pass


class EmployeeMapperExtension(sa.MapperExtension):
    def create_instance(self, mapper, selectcontext, row, class_):
        cls = get_employee_class(row[employee_table.c.kind])
        if class_ != cls:
            return sa.class_mapper(cls)._instance(selectcontext, row)
        return sa.EXT_PASS

assign_mapper(ctx,
              Employee, employee_table,
              extension=EmployeeMapperExtension())
assign_mapper(ctx,
              Manager,
              inherits=Employee.mapper)


This seems to do the right thing - Manager instances get created for
managers, but any other row becomes an Employee. To add a subclass for
another row type, I just need to adapt the get_employee_class function
and add another call to assign_mapper. With a bit more work in the
metaclass, it could all be done with a special attribute in the
subclass.
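
For reference, the metaclass trick on its own, stripped of the
SQLAlchemy parts, behaves like this (a modern-Python rendering of the
same idea, runnable standalone):

```python
# Calling Employee(kind) transparently returns an instance of the
# subclass registered for that kind; unknown kinds fall back to
# Employee itself.  Simplified from the mailing-list code: the class is
# picked once in the metaclass, then constructed directly.

class EmployeeMeta(type):
    def __call__(cls, kind, _fix_class=True, **kwargs):
        if _fix_class:
            cls = get_employee_class(kind)
        # type.__call__ runs cls.__new__/__init__ without re-entering
        # this metaclass hook
        return type.__call__(cls, kind=kind, **kwargs)

def get_employee_class(kind):
    return Manager if kind == 'manager' else Employee

class Employee(metaclass=EmployeeMeta):
    def __init__(self, kind, **kwargs):
        self.kind = kind

class Manager(Employee):
    pass

e = Employee('manager')
print(type(e).__name__)  # Manager
```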

The only thing I'm not sure about is the mapper extension - is it OK to
call the mapper._instance method, or is there a better way to do this?

Thanks again,

Simon




inheritance_test.py
Description: inheritance_test.py

