[sqlalchemy] MySQL deadlocks and retry
My application uses MySQL with InnoDB tables and replication. We're starting to encounter an issue in a few particular parts of the application where we're getting tracebacks like this:

    sqlalchemy.exc.OperationalError: (OperationalError) (1213, 'Deadlock
    found when trying to get lock; try restarting transaction')

According to the MySQL documentation, there is a large variety of circumstances in which this can happen, mostly innocuous in nature. The officially recommended solution to the problem is: retry the transaction. Does SQLAlchemy offer some method for me to catch this exception and then retry the transaction? Thanks in advance!

-- Jonathan LaCour http://cleverdevil.org

You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: MySQL deadlocks and retry
Michael Bayer wrote:
> > Does SQLAlchemy offer some method for me to catch this exception and then retry the transaction?
> there's nothing offered beyond the usual notion of catching an exception and running the function again. It also depends very much upon the construction of your application, whether you're looking to do this in an ORM context, etc.

Yeah, we sort of figured. Our application uses WSGI middleware to wrap particular requests in transactions. We ended up writing something in there to catch this particular exception and re-run the request up to three times. It's not quite as pretty as I'd like, but it works :)

-- Jonathan LaCour http://cleverdevil.org
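The catch-and-retry approach described above can be sketched as a small decorator. This is a minimal sketch, not the actual middleware from the post: `DeadlockError` is a stand-in for `sqlalchemy.exc.OperationalError` with MySQL error 1213, and the retry count and back-off delay are illustrative choices.

```python
import functools
import time

class DeadlockError(Exception):
    """Stand-in for sqlalchemy.exc.OperationalError (MySQL error 1213)."""

def retry_on_deadlock(retries=3, delay=0.0):
    """Re-run the wrapped transactional function when a deadlock is raised."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except DeadlockError:
                    if attempt == retries - 1:
                        raise  # exhausted; re-raise to the caller
                    time.sleep(delay)  # optional back-off between attempts
        return wrapper
    return decorator

# Demonstration: a "transaction" that deadlocks twice before succeeding.
attempts = []

@retry_on_deadlock(retries=3)
def do_transaction():
    attempts.append(1)
    if len(attempts) < 3:
        raise DeadlockError("1213, Deadlock found when trying to get lock")
    return "committed"

result = do_transaction()
```

In a real application the wrapped function would begin a fresh session/transaction on each attempt, since the one that deadlocked has been rolled back by the server.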
[sqlalchemy] Re: Creating a custom type
Kless wrote:
> I just see that will be by a problem of change in API: AttributeError: 'str' object has no attribute '_descriptor'

As I indicated to you over on the Elixir list, you seem to have used the Elixir encryption extension as a starting point. The _descriptor and the concept of an entity are Elixir concepts, not SQLAlchemy concepts. You need to base your work on other mapper extensions, not on the Elixir extension, which is designed to work only with Elixir models, not plain-SQLAlchemy models. I'd suggest starting over, using the following documentation references:

    Mapper Extension API reference: http://tinyurl.com/8wfkgl
    Applying Mapper Extensions: http://tinyurl.com/7a3b5c

Good luck.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: sharding database elixir metadata.drop_all and metadata.create_all problem
Michael Bayer wrote:
> im not familiar with what __metada__ is and this seems to be an elixir specific issue. metadata.drop_all()/create_all() always do what they say.

Yeah, the metadata objects he's creating are empty, so they aren't creating anything. He needs to associate the metadata with some tables. I've linked to (and quoted from) the relevant documentation over on the Elixir list, where the original poster initially raised the question. Sorry for the noise.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: reading from one database and writing to another
Michael Bayer wrote:
> > Thanks, I'll take a look. I left out what I think is an important part of this scenario (or maybe it's trivial - I don't have a good perspective on this yet). In any case, I would like to use the ORM component of sqlalchemy and completely hide the fact that the read/write connections are possibly different. (They might become the same connection if the local database becomes inaccessible and/or is too far behind the master.)
> that is going to be very hard to accomplish as the Session does not have a clustering rules engine built into it in order to determine read/write locations, nor is that within its scope. It can only handle "class X talks to engine Y."

Just an FYI: I am actually splitting reads and writes in a fairly straightforward way in my web application by inferring the intent of the request from its method. We developed the application so that all requests that write to the database use POST; everything else uses other methods (primarily GET). We use a scoped session, ensuring that each request gets its own session, and then wrote some WSGI middleware which automatically binds the session to the correct database instance (one of the masters, or the correct slave) based upon the request method. We also automatically wrap POSTs in a transaction, and roll back upon errors. FWIW, this middleware is about 20 lines of code. If your app is not web-based you might have trouble getting away with something like this :)

-- Jonathan LaCour http://cleverdevil.org
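The method-based routing described above might look roughly like the following as WSGI middleware. This is a sketch under stated assumptions, not the actual 20-line middleware from the post: `master` and `slave` are placeholders for SQLAlchemy engine objects, and a real version would rebind a scoped session (and wrap POSTs in a transaction) rather than stash the chosen bind in the environ.

```python
class SessionRoutingMiddleware:
    """Route each request's database bind to a master or a slave based on
    the HTTP method: POSTs (writes) go to the master, everything else
    (reads) goes to a slave."""

    def __init__(self, app, master, slave):
        self.app = app
        self.master = master
        self.slave = slave

    def __call__(self, environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            environ["db.bind"] = self.master  # writes go to the master
        else:
            environ["db.bind"] = self.slave   # reads go to a slave
        # A real version would call Session.configure(bind=...) here,
        # and wrap the POST branch in begin/commit/rollback.
        return self.app(environ, start_response)

# Minimal demonstration with a fake downstream app and string "engines".
seen = []

def app(environ, start_response):
    seen.append(environ["db.bind"])
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

wrapped = SessionRoutingMiddleware(app, master="master-engine", slave="slave-engine")
wrapped({"REQUEST_METHOD": "GET"}, lambda status, headers: None)
wrapped({"REQUEST_METHOD": "POST"}, lambda status, headers: None)
```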
[sqlalchemy] Re: Getting All Columns From A Join
Alex Ezell wrote:
> This query works fine, but the issue is that I am only returned the columns in the Fieldmap table. I would like for the result to have all the columns from both the Fieldmap and the MemberField tables.

You need to tell the ORM to return both entities. By default, the ORM returns only the entity you are querying for, not additional entities, regardless of the query. You can fetch the additional entity by using the Query object's `add_entity` method. The SQLAlchemy documentation has several useful examples of how to use `add_entity`.

-- Jonathan LaCour http://cleverdevil.org
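A sketch of `add_entity` in use. The `Fieldmap` and `MemberField` names come from the question, but the columns and data here are invented for illustration, and the declarative style assumes a reasonably recent SQLAlchemy (1.4+):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Fieldmap(Base):
    __tablename__ = "fieldmap"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class MemberField(Base):
    __tablename__ = "member_field"
    id = Column(Integer, primary_key=True)
    fieldmap_id = Column(Integer, ForeignKey("fieldmap.id"))
    value = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Fieldmap(id=1, name="email"))
session.add(MemberField(id=1, fieldmap_id=1, value="a@example.com"))
session.commit()

# add_entity() tells the Query to return both entities for each row,
# so each result is a (Fieldmap, MemberField) pair.
rows = (
    session.query(Fieldmap)
    .add_entity(MemberField)
    .filter(Fieldmap.id == MemberField.fieldmap_id)
    .all()
)
fieldmap, member_field = rows[0]
```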
[sqlalchemy] Re: Merging Objects Issue
Sorry I missed this thread up until now. I wasn't paying close attention! Let's see if I can offer some insight.

> The general idea of what I'd like to do is create the object from the Elixir model, modify some of its attributes, pickle it in the HTTP session. In a subsequent HTTP request, deserialize it, modify more of the attributes and then save it to the DB. There are other ways to do this in a more brute-force way, but I'd like to use the tools that sqlalchemy provides.

I think that the problem may very well be Elixir-related, as the mapper and table are attached to the class once they are set up by Elixir. This could explain why pickle is attempting to pickle the mapper and the table. If this is the case, we might be able to get around it by putting a custom `__getstate__` and `__setstate__` on the `Entity` base class, but this seems kinda crazy to me.

To be entirely honest with you, I think you'd be better off not attempting to pickle objects into your session, which could get out of hand relatively quickly. It sounds like you are building up an object across multiple HTTP requests, and I'd suggest that you consider not creating the object and persisting it to the database until all of that data is available. In cases like these, I tend to use AJAX and push the state to the client side, where it won't cause as many scaling problems for me later.

Alternatively, if you're really bound to a session-backed method for doing this, you could store a simpler representation of state (like a dictionary) in the session, which is pretty simple, especially when you can construct your model object like this on the last request:

    state = session.get('object_state')
    instance = MyEntity(**state)
    instance.flush()

Good luck, and if you have any further questions, don't hesitate to ask over on the Elixir mailing list. I'll do my best to answer followups over there!
-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: Merging Objects Issue
Michael Bayer wrote:
> I think if Elixir doesn't support mapped entities being pickled, or requires end-user __getstate__/__setstate__ (i think the latter is the more reasonable requirement), it should be explicit about this. pickling/unpickling is a basic necessity particularly for people who are using file/memcached-based caching strategies.

You bet. If this is indeed the problem, then we need to file a ticket and fix it. I was arguing against monkeypatching the Entity class with the custom __getstate__ and __setstate__, but didn't make that clear.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: Merging Objects Issue
Jonathan LaCour wrote:
> You bet. If this is indeed the problem, then we need to file a ticket and fix it. I was arguing against monkeypatching the Entity class with the custom __getstate__ and __setstate__, but didn't make that clear.

My trivial testing seems to indicate that pickle works fine. Granted, this is an extremely trivial example, but it seems to work for me:

    from elixir import *
    from pickle import loads, dumps

    class Person(Entity):
        name = Field(String(64))

    setup_all()
    metadata.bind = 'sqlite:///'
    metadata.create_all()

    p = Person(name='Jonathan')
    session.flush(); session.clear()

    p = Person.get(1)
    data = dumps(p)
    session.clear()

    p = loads(data)
    p.name = 'New Name'
    session.merge(p)
    session.flush(); session.clear()

    p = Person.get(1)
    assert p.name == 'New Name'

And I forgot that I have actually used Elixir with several caching mechanisms that rely on pickle without any issues at all...

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] deferral pattern idea
I've had a few situations recently where I've been optimizing some queries that I make through the ORM (via Elixir), and I've wanted to take advantage of deferred columns. However, I've found that defining which columns get deferred is often more appropriate to do at *query* time, not at the time of table/mapper definition. I know about deferring groups of columns, and the defer option, but I really wish I could do something like this:

    from sqlalchemy.orm import load_only

    results = MyMappedClass.query.options(
        load_only('column_one', 'column_two', 'column_three')
    ).all()

Rather than what I am having to do in many cases where I have 20+ columns on a mapped object:

    desired_columns = ['column_one', 'column_two']
    query = MyMappedClass.query
    for column in MyMappedClass.table.c:
        if column.name not in desired_columns:
            query = query.options(defer(column.name))

(Note that `options` returns a new Query, so the result has to be reassigned.) Does this seem desirable to anyone else, or am I just crazy? :)

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: deferral pattern idea
Michael Bayer wrote:
> I'll give you the private way to do it if you'd like to play with it

Cool. This will be helpful to play with for now...

> since the query already can do this, it seems harmless enough to create a load_only() MapperOption. However, it seems in a way to be fundamentally different than defer(). With defer you can say defer('foo.bar.data') to defer the loading of some column multiple levels along the relation() chain. It makes less sense with a load_only() option, unless we said something like load_only([x, y, z], path=foo.bar), which seems entirely weird (or maybe not). So load_only() might be made to just apply to the mapper zero position in the Query to start with (im sure that's all you or most people would need it for anyway).

Makes good sense to me, as it would cover my primary use case. I would be curious to see if this pattern could be extended somehow to be a bit more like `defer`, but they are in a way inherently different. Thanks, let me know what you decide.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: version_id_col question
Michael Bayer wrote:
> its managed entirely by SQLAlchemy at the moment, starts at 1 and increments automatically, and actually doesnt have any connection to class-based attributes so its a little insular. I would think that elixir could just move its own management of the version over to this mechanism but we can also look into opening up its behavior if that suits your functionality better.

I am actually considering just adding an additional `concurrent_version` column so that the two don't clobber one another... I might take you up on that, though, depending on how that goes...

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: a way to share Session.mapper(SomeObject) across two scoped sessions?
Gaetan de Menten wrote:
> The only thing is that we still provide a default session, which is based on Session.mapper, for convenience and backward compatibility. Maybe we should state more prominently in the Elixir doc that this is only a default session and that you can use any session you like.

FYI, I think this should go in the advanced tutorial I referred to this morning on the Elixir list...

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: doubly-linked list
Jonathan LaCour wrote:
> I am attempting to model a doubly-linked list, as follows: [...] seems to do the trick. I had tried using backrefs earlier, but it was failing because I was specifying a remote_side keyword argument to the backref(), which was making it blow up with cycle-detection exceptions for some reason.

Oops, spoke too soon! Here is a test case which shows something quite odd. I create some elements, link them together, and then walk the relations forward and backward, printing out the results. All seems fine. Then I update the order of the linked list; printing it out forward works okay, but when I print things out in reverse order, it's all screwy. Any ideas?

    from sqlalchemy import *
    from sqlalchemy.orm import *

    engine = create_engine('sqlite:///')
    metadata = MetaData(engine)
    Session = scoped_session(
        sessionmaker(bind=engine, autoflush=True, transactional=True)
    )

    task_table = Table('task', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode),
        Column('next_task_id', Integer, ForeignKey('task.id')),
        Column('previous_task_id', Integer, ForeignKey('task.id'))
    )

    class Task(object):
        def __init__(self, **kw):
            for key, value in kw.items():
                setattr(self, key, value)
        def __repr__(self):
            return 'Task :: %s' % self.name

    Session.mapper(Task, task_table, properties={
        'next_task': relation(
            Task,
            primaryjoin=task_table.c.next_task_id==task_table.c.id,
            uselist=False,
            remote_side=task_table.c.id,
            backref=backref(
                'previous_task',
                primaryjoin=task_table.c.previous_task_id==task_table.c.id,
                uselist=False
            )
        ),
    })

    if __name__ == '__main__':
        metadata.create_all()

        t1 = Task(name=u'Item One')
        t2 = Task(name=u'Item Two')
        t3 = Task(name=u'Item Three')
        t4 = Task(name=u'Item Four')
        t5 = Task(name=u'Item Five')
        t6 = Task(name=u'Item Six')

        t1.next_task = t2
        t2.next_task = t3
        t3.next_task = t4
        t4.next_task = t5
        t5.next_task = t6
        Session.commit()
        Session.clear()

        print '-' * 80
        task = Task.query.filter_by(name=u'Item One').one()
        while task is not None:
            print task
            task = task.next_task
        print '-' * 80

        print '-' * 80
        task = Task.query.filter_by(name=u'Item Six').one()
        while task is not None:
            print task
            task = task.previous_task
        print '-' * 80

        Session.clear()
        t1 = Task.query.filter_by(name=u'Item One').one()
        t2 = Task.query.filter_by(name=u'Item Two').one()
        t3 = Task.query.filter_by(name=u'Item Three').one()
        t4 = Task.query.filter_by(name=u'Item Four').one()
        t5 = Task.query.filter_by(name=u'Item Five').one()
        t6 = Task.query.filter_by(name=u'Item Six').one()

        t1.next_task = t5
        t5.next_task = t2
        t4.next_task = t6
        Session.commit()
        Session.clear()

        print '-' * 80
        task = Task.query.filter_by(name=u'Item One').one()
        while task is not None:
            print task
            task = task.next_task
        print '-' * 80

        print '-' * 80
        task = Task.query.filter_by(name=u'Item Six').one()
        while task is not None:
            print task
            task = task.previous_task
        print '-' * 80

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] doubly-linked list
All, I am attempting to model a doubly-linked list, as follows:

    task_table = Table('task', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode),
        Column('next_task_id', Integer, ForeignKey('task.id')),
        Column('previous_task_id', Integer, ForeignKey('task.id'))
    )

    class Task(object):
        pass

    mapper(Task, task_table, properties={
        'next_task': relation(
            Task,
            primaryjoin=task_table.c.next_task_id==task_table.c.id,
            uselist=False
        ),
        'previous_task': relation(
            Task,
            primaryjoin=task_table.c.previous_task_id==task_table.c.id,
            uselist=False
        )
    })

Now, this works, but only in one direction. If I create a list structure with a bunch of instances using `next_task`, then that direction of the relation works fine, but the reverse side doesn't seem to get managed automatically. Same in the other direction. Is there a way to get SQLAlchemy to understand that these relations are the inverse of one another, and manage both sides for me?

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: doubly-linked list
Jonathan LaCour wrote:
> I am attempting to model a doubly-linked list, as follows:

Replying to myself:

    task_table = Table('task', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode),
        Column('next_task_id', Integer, ForeignKey('task.id')),
        Column('previous_task_id', Integer, ForeignKey('task.id'))
    )

    class Task(object):
        def __init__(self, **kw):
            for key, value in kw.items():
                setattr(self, key, value)
        def __repr__(self):
            return 'Task :: %s' % self.name

    mapper(Task, task_table, properties={
        'next_task': relation(
            Task,
            primaryjoin=task_table.c.next_task_id==task_table.c.id,
            uselist=False,
            remote_side=task_table.c.id,
            backref=backref(
                'previous_task',
                primaryjoin=task_table.c.previous_task_id==task_table.c.id,
                uselist=False
            )
        ),
    })

... seems to do the trick. I had tried using backrefs earlier, but it was failing because I was specifying a remote_side keyword argument to the backref(), which was making it blow up with cycle-detection exceptions for some reason.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: doubly-linked list
Michael Bayer wrote:
> All of the crazy mappings today are blowing my mind, so I'll point you to an old unit test with a doubly linked list:

Yeah, trust me, it was blowing my mind as well, so I elected not to go this direction anyway. You're also correct that there isn't _really_ a need to maintain the previous pointer, since it can be implied from the next pointer a heck of a lot more easily. The simplest solution is usually the best, and this one definitely wasn't very simple! On to other ideas...

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] quick question...
I have been banging my head against the wall for a little while attempting to translate this SQL:

    SELECT max(value) FROM (
        SELECT max(sequence)+100 AS value FROM task
        UNION
        SELECT 100.0 AS value
    )

into a SQLAlchemy expression that I can embed into an INSERT. Should I just go ahead and use text() rather than bother with attempting to construct this as a SQLAlchemy expression? (Yes, I know that this is gross...)

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: quick question...
Michael Bayer wrote:
> sure... what do your SQL logs say?

    2008-01-08 17:41:21,832 INFO sqlalchemy.engine.base.Engine.0x..50
    INSERT INTO task (sequence, subject, description, due_date, reminder,
        private, company_id, assigned_to_id, job_id)
    VALUES ((SELECT max(task.sequence) + ? AS value FROM task
             UNION SELECT ? AS value), ?, ?, ?, ?, ?, ?, ?, ?)
    2008-01-08 17:41:21,833 INFO sqlalchemy.engine.base.Engine.0x..50
    [100, '100.0', 'Item One', None, None, None, None, 1, None, None]

-- Jonathan LaCour http://cleverdevil.org
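For reference, here is a sketch of how that INSERT-embedded UNION subquery might be built with a later SQLAlchemy expression API. This is not the poster's actual code: the table is reduced to just an `id` and `sequence` column, and SQLite stands in for MySQL for demonstration.

```python
from sqlalchemy import (
    Column, Float, Integer, MetaData, Table, create_engine,
    func, literal, select, union,
)

metadata = MetaData()
task = Table(
    "task", metadata,
    Column("id", Integer, primary_key=True),
    Column("sequence", Float),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

# SELECT max(value) FROM (
#     SELECT max(sequence)+100 AS value FROM task
#     UNION SELECT 100.0 AS value)
inner = union(
    select((func.max(task.c.sequence) + 100).label("value")),
    select(literal(100.0).label("value")),
).subquery("v")
next_sequence = select(func.max(inner.c.value)).scalar_subquery()

with engine.begin() as conn:
    # First insert: max(sequence) is NULL, so the UNION's 100.0 wins.
    conn.execute(task.insert().values(sequence=next_sequence))
    # Second insert: max(sequence)+100 = 200.0 wins.
    conn.execute(task.insert().values(sequence=next_sequence))
    rows = conn.execute(select(task.c.sequence).order_by(task.c.id)).fetchall()
```

Note that MySQL itself restricts selecting from the same table you are inserting into, which is part of why the thread's generated SQL needed care on that backend.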
[sqlalchemy] Re: before_update mapper extension behavior?
Michael Bayer wrote:
> however, the column-based attributes present on the instance itself have not yet been inserted/updated into the DB, and the fact that the instance is being sent to before_update() indicates that it has in fact already been marked as dirty and is to be updated. So whatever column-attribute changes you make within before_update to the local instance will be reflected in the immediately following SQL statement.

Is there any way to determine why an object has been marked as dirty by asking the session? The reason I ask is that sometimes objects which have had none of their column-based attributes modified are marked as dirty as the result of something that has happened on a relation. It would be nice to know in my before_update() *why* the object is dirty without having to query the database for the current column values and do comparisons. Just curious if this is possible...

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: before_update mapper extension behavior?
Michael Bayer wrote:
> i just added it to before_update's docstring today:
>
>     session.is_modified(instance, include_collections=False)
>
> and you can get the session using object_session(instance). I know this is relevant to your versioning extension since Ben and I had a day of it last week :)

Excellent :) Yes, Ben and I talked about it, and I am really glad to see that there is a nice way to deal with this now! Thanks!

-- Jonathan LaCour http://cleverdevil.org
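A minimal sketch of `is_modified` together with `object_session`. The `Item` mapping is invented for illustration, and the declarative style assumes a reasonably recent SQLAlchemy (1.4+); the same `is_modified(instance, include_collections=False)` call is what you would make inside a before_update() hook.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, object_session, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

item = Item(name="original")
session.add(item)
session.commit()

# Freshly committed: no column attributes have pending changes.
unchanged = object_session(item).is_modified(item, include_collections=False)

item.name = "changed"
# Now a column attribute has real history, so is_modified reports True.
changed = object_session(item).is_modified(item, include_collections=False)
```

Passing `include_collections=False` is what lets you distinguish "dirty because a column changed" from "dirty only because a related collection changed", which is exactly the question raised above.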
[sqlalchemy] Re: multiple mapper extensions
jason kirtland wrote:
> > i think a name change is probably in order at the very least.
> r3130 in the trunk implements a name change: EXT_CONTINUE will propagate the hook to the next extension or back to the base implementation. EXT_STOP will halt propagation. it's only a name and doc change:
>
>     EXT_CONTINUE = EXT_PASS = object()
>     EXT_STOP = object()
>
> EXT_STOP is just a feel-good value. the general rule of "halt on any return value but EXT_CONTINUE" is unchanged.

Perfect. This now makes total sense, thanks for doing this!

-- Jonathan LaCour http://cleverdevil.org
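The propagation rule described above (halt on any return value but EXT_CONTINUE) can be illustrated with a small stand-alone event chain. This is plain Python mimicking the mapper-extension mechanism, not SQLAlchemy's actual implementation; the extension class names are invented.

```python
# Sentinel return values, mirroring SQLAlchemy's EXT_CONTINUE / EXT_STOP.
EXT_CONTINUE = object()
EXT_STOP = object()

def run_hook(extensions, hook_name, *args):
    """Call hook_name on each extension until one returns something other
    than EXT_CONTINUE; that value consumes the event and halts the chain."""
    for ext in extensions:
        result = getattr(ext, hook_name)(*args)
        if result is not EXT_CONTINUE:
            return result
    return EXT_CONTINUE  # fell through to the base implementation

calls = []

class Audit:
    def before_update(self, instance):
        calls.append("audit")
        return EXT_CONTINUE  # let later extensions run

class Veto:
    def before_update(self, instance):
        calls.append("veto")
        return EXT_STOP  # consume the event; later extensions are skipped

class NeverReached:
    def before_update(self, instance):
        calls.append("never")
        return EXT_CONTINUE

outcome = run_hook([Audit(), Veto(), NeverReached()], "before_update", object())
```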
[sqlalchemy] Re: multiple mapper extensions
Michael Bayer wrote:
> both of your before_update() functions return EXT_PASS. theyll both be called.

Well, yeah. I just found it a little bit odd that I had to do this. When I saw EXT_PASS, I didn't think it meant "continue along with other mapper extensions." It seems like a bad idea for one mapper extension to be able to inject itself into the front of the extensions list and then prevent all other mapper extensions from executing by returning something other than EXT_PASS. So, I guess it was less of a problem, and more me raising a point of confusion and wondering if that part of the API couldn't be smoothed out a bit :)

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: multiple mapper extensions
Michael Bayer wrote:
> its a model taken from the way event loops usually work; any consumer along the event chain is allowed to say "i've consumed the event" and stop further handlers from dealing with it. we can certainly change the names around into something less ridiculous. unfortunately, changing it so that no return value, or None, does *not* short-circuit the chain runs a slight risk that someone is actually using it that way. So we might need to change it such that if your before_insert returns None, an error is raised, and you're forced to return a specific value indicating the next activity... otherwise someone's upgrade might silently fail.

Fair enough, I suppose. I think I can get over it, for the most part. It might just be an issue of cognitive dissonance because of the naming convention or how it's described in the documentation.

-- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] MySQL specific error in subselect
I am having an issue at work with a query that is passing in all of our unit tests against SQLite and PostgreSQL, but is failing in MySQL. Here is a short test case that will show the problem. Any help would be much appreciated! #-# from sqlalchemy import * metadata = MetaData() # # tables # p_table = Table('p', metadata, Column('id', Integer, primary_key=True), Column('foo', Unicode) ) v_table = Table('v', metadata, Column('id', Integer, primary_key=True), Column('p_id', Integer, ForeignKey('p.id')), Column('a_id', Integer, ForeignKey('a.id')), Column('bar', Unicode) ) a_table = Table('a', metadata, Column('id', Integer, primary_key=True), Column('baz', Unicode) ) # # test case # if __name__ == '__main__': metadata.connect('mysql://user:[EMAIL PROTECTED]/some_test_database') metadata.create_all() p_table.insert().execute(foo='foo1') a_table.insert().execute(baz='baz1') v_table.insert().execute(bar='bar1', p_id=1, a_id=1) p_table.insert().execute(foo='foo2') a_table.insert().execute(baz='baz2') v_table.insert().execute(bar='bar2', p_id=2, a_id=2) p_table.insert().execute(foo='foo3') a_table.insert().execute(baz='baz3') v_table.insert().execute(bar='bar3', p_id=3, a_id=3) query = select( [p_table.c.foo, v_table.c.bar, a_table.c.baz], and_( p_table.c.id==v_table.c.p_id, v_table.c.a_id==a_table.c.id, not_(p_table.c.id.in_( select( [p_table.c.id], and_( p_table.c.id==v_table.c.p_id, v_table.c.a_id==a_table.c.id, or_( p_table.c.foo=='foo2', v_table.c.bar=='bar3' ) ) ) )) ) ) try: for result in query.execute(): print result finally: metadata.drop_all() #-# It appears to me that MySQL doesn't like it when you don't specify a FROM in the subselect, whereas PostgreSQL and SQLite don't care. -- Jonathan LaCour http://cleverdevil.org --~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. 
[sqlalchemy] Re: MySQL specific error in subselect
Jonathan LaCour wrote: It appears to me that MySQL doesn't like it when you don't specify a FROM in the subselect, whereas PostgreSQL and SQLite don't care. Responding to myself: I have solved the problem. As it turns out, you can force SQLAlchemy to generate a FROM clause for the subselect by passing the `correlate=False` keyword argument to the subselect. Is there any reason that `correlate` defaults to `True`? Should it be automatically flipped to `False` for subselects? -- Jonathan LaCour http://cleverdevil.org
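For readers hitting the same error: a correlated subquery omits its own FROM clause and borrows the enclosing query's tables, and MySQL rejects a subselect with no FROM of its own in this position. Roughly, as a sketch of the shape of the generated SQL (not a verbatim capture from any particular SQLAlchemy version):

```sql
-- Correlated form (correlate=True, the default): the subquery has no
-- FROM of its own; MySQL rejects this, PostgreSQL and SQLite accept it.
... AND p.id NOT IN (SELECT p.id WHERE p.id = v.p_id AND ...)

-- Uncorrelated form (correlate=False): the subquery names its own
-- tables, which all three databases accept.
... AND p.id NOT IN (SELECT p.id FROM p, v WHERE p.id = v.p_id AND ...)
```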
[sqlalchemy] Re: polymorphic inheritance -- stumped!
Murphy's law strikes again. As soon as I sent the email, I finally noticed, after staring at it for an hour, that I wasn't passing my table into my outputs_mapper... Sorry for the spam! -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] handing SELECT ... FOR UPDATE
So, at work, we have a particular use case where we need to do a `SELECT ... FOR UPDATE` in order to lock some rows. In this particular project we will be using the ORM package and some advanced data mapping (including multiple-table polymorphic inheritance). What would be the best way to approach using `SELECT ... FOR UPDATE`? We were thinking about creating a MapperExtension to make this happen, but aren't really sure how to go about implementing it, and any direction would be useful. Thanks! -- Jonathan LaCour http://cleverdevil.org
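For context, the SQL being targeted looks like this (a generic sketch; the table and column names are placeholders, and if memory serves, the SQL-layer `select()` of that era accepted a `for_update=True` keyword, though the ORM/MapperExtension integration asked about here is a separate question):

```sql
-- Inside a transaction: lock the matching rows until COMMIT/ROLLBACK,
-- so concurrent transactions block instead of reading soon-stale data.
BEGIN;
SELECT id, balance FROM accounts WHERE id = 42 FOR UPDATE;
-- ... read, compute, then write ...
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;
```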
[sqlalchemy] Re: DynamicMetaData question
Michael Bayer wrote: OK, let me tell you what just happened the other day. I'm dealing with a Pylons application, and Pylons provides the SA engine by binding it to the Session. But the application also had a DynamicMetaData stuck in there, and at some point they were creating their own engine and connecting it to the DMD. Needless to say, I quickly got uber-confused, as the app was running with *two* engines, which happened to point to the same database, but still completely weird. So I fixed it. But by changing the DynamicMetaData to just plain MetaData, I was then *sure* that no other part of the app was trying to sneak a connect() on there. Whereas if we only had one kind of MetaData, I could not rely upon that. So you are saying you got uber-confused because of DynamicMetaData? That's even more reason not to use it :) If it confused you, it's sure to confuse me (as it already has before)! Just joking around... Not sure if that justifies the existence of DMD; since it's Python, things are dynamically typed, and there's an endless number of operations that you can't really guard against. But it was just a moment when I felt thankful that there *were* two versions of MetaData. I see your point. I don't care what you do with DynamicMetaData, as long as I can do this one day: metadata = MetaData(); engine = create_engine(...); metadata.connect(engine) ... preferably soon ;) -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: DynamicMetaData question
Michael Bayer wrote: My controller actions don't worry about transactions or connections at all; they just execute insert/update/delete/select actions, and if an exception is raised, their work is rolled back automatically. Well, that's exactly the pattern provided by strategy='threadlocal', and it's clear that you understand how it works, so there is no problem at all doing that. It's merely a pattern that allows a little less explicitness (i.e. more magic). Ah, okay, so I am not missing anything here, just appreciating this particular piece of magic :) People were quite confused by it when it was the built-in behavior, which is because a lot of people don't really understand what threadlocal means. So I made it all optional. No need to deprecate it, though; it's still useful stuff. I am surprised that people don't know what threadlocal means, since a threaded application with a database pool is probably one of the most familiar ways to do things. But, still, if people were emailing the list in confusion, that's got to mean something :) I must still not understand the appropriate usage pattern for DynamicMetaData. I couldn't use BoundMetaData because I don't know the connection parameters until well after import time, so I am using the only other option I know of, which is DynamicMetaData. The fact that it provides threadlocal behavior only caused me a headache, because I would get errors unless it was disabled. The main use case for DynamicMetaData is a single application, usually a web application, that is actually many instances of the same application running at once and talking to a different database for each instance. So on each request, the DynamicMetaData gets pointed to the official database connection for that request, within that thread. Wow, does this ever actually happen? It seems like a very obscure use case to me. You are free to use it the way you're using it too. If I were writing the app that didn't know the connection parameters until later, I might just put my entire "create the Tables and mappers" logic within a function call that gets called when we do know the actual connection string... ... eek! But then again, using DMD with threadlocal turned off is, yeah, a lot easier and cleaner, since you can keep your declarations at the module level. Yes, this is a lot cleaner; however, it's confusing. Personally, I think that there should be one and only one MetaData class that can be told how to act. As far as I can tell, you could replace the existing options in SQLAlchemy (BoundMetaData, DynamicMetaData) and simplify things a bunch in the process: metadata = MetaData(); engine = create_engine('connection_string'); metadata.connect(engine). This would let you get rid of BoundMetaData and DynamicMetaData and just have one simple, easy way to do things. Plus, it would give you the ability to do what I am using DynamicMetaData for without having to pass in a weird threadlocal=False kwarg. For people who want the insane threadlocal behavior of DynamicMetaData, you could have the MetaData object take the same kwarg, defaulted to the saner False option (unlike the current DynamicMetaData): metadata = MetaData(threadlocal=True) ... I think that this would be a lot less confusing for users. I would be more than willing to work on a patch to make this happen if you like the idea. If not, I will at least supply a patch for yet another MetaData subclass called DelayedMetaData that gives you the behavior of DynamicMetaData without the threadlocal insanity :) What do you think? -- Jonathan LaCour http://cleverdevil.org
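The proposed unification can be sketched as a toy class. This is purely illustrative, not SQLAlchemy code; `UnifiedMetaData` is an invented name, and `connect` here just records the engine rather than doing anything real:

```python
# Toy sketch of the proposed unified MetaData: declare at import time,
# bind an engine later. Not SQLAlchemy's actual implementation.
class UnifiedMetaData(object):
    def __init__(self, threadlocal=False):
        self.threadlocal = threadlocal  # opt in to threadlocal behavior
        self.engine = None              # unbound until connect() is called

    def connect(self, engine):
        self.engine = engine            # late binding, after config is read

metadata = UnifiedMetaData()        # module level, no connection known yet
# ... later, once configuration has been loaded:
metadata.connect('postgres://example/app_db')
```

The point of the design is that module-level table declarations never need a connection; binding is a single explicit call once configuration is known.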
[sqlalchemy] DynamicMetaData question
Random question for the list, and an idea. I have an application I am working on that needs to dynamically bind its metadata to an engine based upon configuration. Logically, it seems I should use `DynamicMetaData` and just call metadata.connect(engine) after I have loaded my configuration. However, I had written all of my code depending upon a threadlocal strategy, as defined by using `strategy='threadlocal'` in my `create_engine` call. It turns out that DynamicMetaData has threadlocal behavior by default as well, and somehow these two things conflict. My problem was solved by passing `threadlocal=False` to my DynamicMetaData constructor. Now, here is my question: why does DynamicMetaData have any threadlocal behavior at all? It seems like the primary reason one would use a DynamicMetaData is to delay the binding of your engine to your metadata. The fact that it's threadlocal is easy to miss, and I don't see why it has any threadlocal behavior at all. Am I missing something? Wouldn't it be better to have two separate MetaData types, one for dynamically binding your engine, and another for threadlocal metadata? -- Jonathan LaCour http://cleverdevil.org
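As background for the "threadlocal" terminology in this thread: a thread-local object holds a separate value per thread, which is exactly why a threadlocal metadata bound in one thread can appear unbound in another. A minimal demonstration with the standard library (nothing SQLAlchemy-specific):

```python
# Minimal demonstration of thread-local state: each thread sees its own
# copy of `state.engine`, independent of the main thread's binding.
import threading

state = threading.local()
state.engine = 'main-engine'   # binding visible only in the main thread

seen = []

def worker():
    # this thread never bound state.engine, so the attribute is absent here
    seen.append(hasattr(state, 'engine'))

t = threading.Thread(target=worker)
t.start()
t.join()
# seen is [False], while the main thread still sees its own binding
```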
[sqlalchemy] Re: Announcing Elixir!
Jonathan Ellis wrote: Is there a "what's new and improved in Elixir" document anywhere? Well, it's pretty much entirely new and improved over both TurboEntity and ActiveMapper, in that it provides a totally different way of doing things. The extensive documentation and examples on the website should give a reasonable idea of how Elixir differs from both ActiveMapper and TurboEntity. -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: Announcing Elixir!
Jonathan Ellis wrote: For instance, I remember reading somewhere that AM wasn't very good at playing well with the rest of SA when AM wasn't enough, so I never bothered looking at AM very hard. I don't see anything on the Elixir site about this issue, but maybe I am looking in the wrong place. I know for sure that Elixir plays a bit better with the rest of SA than ActiveMapper ever did, but there might still be problems. The key thing is that there are three of us maintaining Elixir now, and any issues that people find when mixing Elixir and traditional SQLAlchemy techniques are bugs that should be fixed! With ActiveMapper, the approach I took had some issues, and overcoming them required more time than I had to give. With Elixir, having a team of people committed to helping out makes all the difference in the world. -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: Announcing Elixir!
I am moving this thread over to [EMAIL PROTECTED] to continue the discussion, so please reply over there :) Ed Singleton wrote: As someone who was resisting moving to SQLAlchemy due to the apparent complexity, I've had a look through the introduction and the sample TG app, and this looks wonderful. I'm now eager to move to this, as the syntax now looks preferable to SQLObject (and writing my model is 90% of what I do with the ORM). Fantastic! I am very pleased to hear that you like the direction we are going. A few points that came up as I was going through the docs and example: - You asked whether people preferred the with_fields or has_field style. I much prefer the with_fields style, as it makes the code much more 'scannable'. Yes, we had this same discussion internally, and I think we were split on what to do. Personally, I think they both have their merits, and I have really tried to use both of them in practice to see if one comes out a clear leader in my mind. I used to be 100% behind the with_fields syntax, but have come around a bit to the has_field syntax, which I find more readable. Anyway, we might just end up keeping both... - The has_many statement immediately makes sense, but the belongs_to and has_and_belongs_to_many statements don't make obvious sense. It took me a while to get my head around them. I think in general trying to use natural language for things like this is a nice, friendly thing for extreme newcomers, but it tends to prevent them from understanding what is really going on. Also, unless the metaphors used are very accurate, they can actually be misleading. Movies don't belong to a Genre; one just happens to point to the other. Maybe an optional set of statements with clearer meanings would be a useful addition? We originally had such aliases, but then you end up having to document the five or six synonyms for each relationship type, so we took them out. It is very easy to create your own aliases if you want: from elixir import has_many as owns_a_bunch_of or, even: from elixir import has_many; owns_a_bunch_of = has_many. This should allow you to pick whatever descriptive term you like for your models, and people can easily scan through your imports to figure out which relationship type they are dealing with. - Because the has_many, belongs_to and has_and_belongs_to_many statements are all quite different lengths, it is harder to scan through them. If they were all similar lengths, it would be more readable. (Particularly as, when formatted according to PEP 8, this syntax has very little whitespace.) Yes, I totally understand this complaint. We all had similar thoughts, but couldn't come up with anything better. Keep in mind that there is prior art here with Rails' ActiveRecord, which uses the same relationship names as we do. It seems they also couldn't find anything better :) Of course, we are always open to suggestions. As it is, though, this is wonderful. Thank you. I'm going to start using it very soon, and I guess I can override the statements easily enough (that won't break anything, will it?). Great, and no, overriding statements shouldn't break a thing. Watch my blog today as well, because I am going to be posting a quick tutorial on creating your *own* DSL statements for your model objects. -- Jonathan LaCour http://cleverdevil.org
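The aliasing trick above is ordinary Python name binding, so it works for any statement. A toy stand-in makes that concrete; the `has_many` here is invented for illustration and is not Elixir's real implementation:

```python
# Toy stand-in for an Elixir DSL statement, to show that aliasing is
# plain name binding. This is NOT Elixir's actual has_many.
def has_many(name, of_kind=None):
    return ('has_many', name, of_kind)

# either alias form from the post yields the same callable:
owns_a_bunch_of = has_many

# both names invoke the identical function
assert owns_a_bunch_of('movies', of_kind='Movie') == \
       has_many('movies', of_kind='Movie')
```

Because the alias is just another reference to the same function, overriding the name cannot change or break the underlying behavior, which is why it is safe.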
[sqlalchemy] Announcing Elixir!
Today, we are pleased to announce the release of Elixir (http://elixir.ematia.de), a declarative mapper for SQLAlchemy. Elixir is the successor to ActiveMapper and TurboEntity, and is a collaboration between Daniel Haus, Jonathan LaCour and Gaëtan de Menten. Elixir's website provides installation instructions, a tutorial, extensive documentation, and more. The eventual goal of Elixir is to become an official SQLAlchemy extension after some time soliciting feedback, bug reports, and testing from users. Daniel Haus http://www.danielhaus.de Gaëtan de Menten http://openhex.com Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: ActiveMapper supports overriding of autoloaded columns?
Michael Bayer wrote: Is it possible to specify that an ActiveMapper-derived class should autoload its columns but override some of those columns (for example, to set a primary key that wasn't defined in the database)? My attempts so far haven't found any hybrid between fully automatic and fully manual. This would be exactly the use case for a separate Table and Mapper definition, and is the entire reason SA takes the approach that it does. Your tables are not your classes! Nope, you can't do that. This use case definitely isn't one that the ActiveMapper extension is designed to handle. Listen to Mike, and move over to full-fledged SQLAlchemy. -- Jonathan LaCour http://cleverdevil.org
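For reference, the separate Table-and-Mapper route did support this hybrid at the plain-SQLAlchemy level: columns listed explicitly alongside `autoload=True` override what reflection would produce. A sketch from memory of the 0.3-era API (untested here; `users` and `User` are placeholder names):

```python
# 0.3-era sketch (untested): explicit columns override reflection
users = Table('users', metadata,
    Column('id', Integer, primary_key=True),  # force a primary key
    autoload=True)                            # reflect the remaining columns
mapper(User, users)
```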
[sqlalchemy] Re: Activemapper and selection
percious wrote: So, two questions. 1) Can we add a FROM clause to the ActiveMapper? 2) Are there plans to alleviate the need for '.' replacement in the future? Oops, ignore part of my last reply; somehow my procmail filter dumped your message into a different folder than my SQLAlchemy lists folder :) Either way, my advice still applies. It would be great if you could illustrate, with your example, the current SQL being generated, along with some improved SQL that would be faster. -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: TurboEntity announcement
Michael Bayer wrote: Wow, this looks very nice. My impression is that it's a lot more reflective of SQLObject's specific API; is this accurate? Also, FYI, Daniel and I are already talking a bit about working with each other. ActiveMapper and TurboEntity are very close in spirit, and even share some code. It would be nice if the two could have a shared future; I have even suggested replacing ActiveMapper with TurboEntity at some point down the road. TurboEntity certainly brings more documentation and a new contributor into the mix. I am very excited about the potential ahead! -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: ObjectAlchemy - ODMG compatible layer for SQLAlchemy
Ilias Lazaridis wrote: I think it's clear that I'm neither looking for book tips nor for an academic discussion. I think what Michael is trying to convey is that the simple statement you are looking for confirmation on carries a lot of highly academic baggage relating to what programmers call design patterns. In order to understand and interpret the simple statement, you really need to understand the concept of design patterns, and very specifically the Active Record and Data Mapper design patterns. I'm just looking for: b) a simple confirmation of my conclusion: SQLAlchemy (DataMapper) can implement SQLObject (Active Record); SQLObject (Active Record) cannot implement SQLAlchemy (DataMapper). Your conclusion is misguided because you don't have any understanding of the underlying concepts presented. Michael's suggestion to read the Fowler book is one way for you to learn the concepts, then understand the simple statement, and finally draw a conclusion. If you aren't interested in buying the book, I hear that Google is a great way to learn. If you aren't interested in learning, then I suggest you stop trying to interpret people's words. If you want a simple statement about SQLObject vs. SQLAlchemy that you don't need to study to understand, here is one for you: SQLAlchemy is more flexible than SQLObject. Anything much deeper than that is going to require you to brush up a bit on some fairly high-level concepts, like design patterns. Best of luck! -- Jonathan LaCour http://cleverdevil.org
[sqlalchemy] Re: ObjectAlchemy - ODMG compatible layer for SQLAlchemy
Ilias Lazaridis wrote: peoples words: SQLAlchemy implements the Data Mapper pattern, of which the Active Record pattern (which SQLObject implements) is a subset. please notice: subset. My conclusion is of course correct, and is based on the meaning of the term subset. The direct quote of your conclusion was this: SQLAlchemy (DataMapper) can implement SQLObject (Active Record); SQLObject (Active Record) cannot implement SQLAlchemy (DataMapper). Which is not "of course correct," because the statement mixes terms a bit and isn't really very accurate. Let me see if I can help you out on those terms. Libraries cannot implement other libraries. Libraries _can_ implement design patterns, though. SQLAlchemy and SQLObject are libraries. Data Mapper and Active Record are design patterns. Using this as a basis, here is a more correct conclusion that you could draw from the original statement: SQLAlchemy roughly implements the Data Mapper design pattern. SQLObject roughly implements the Active Record design pattern. The Active Record design pattern can be considered a subset of the Data Mapper design pattern. As a result, it is possible to implement the Active Record design pattern in SQLAlchemy, which is what the ActiveMapper extension to SQLAlchemy provides. It might be possible for someone to implement an SQLObject compatibility module for SQLAlchemy, but it might be difficult to provide 100% compatibility with the SQLObject API. I hope this makes a bit more sense to you, and I again encourage you to read up on design patterns a bit so that you can have a better understanding of the subject you are discussing. So, possibly peoples words were wrong. Please don't take this the wrong way, but you clearly aren't armed with the knowledge necessary to come to that conclusion. The original quote is correct, and your interpretation of it is not. It's not a big deal, but it's kind of irritating when people make blanket statements from a position of ignorance. Also, if you don't want people to conclude that you are a troll, it would probably be a good idea to take the advice of the creator of the project, rather than ignoring it. Sadly, I currently don't have the time to look further at the persistency case. Well, good luck anyhow. Just wondering more and more about "A Dynamic Language, Without a Dynamic ORM (Python)". It seems like Zope DB and Durus are the only dynamic solutions for Python; the ORM league has (till now) failed to produce a dynamic OO layer on top of relational databases. I am not going to comment on these statements, because I really don't think they make any sense at all. Good luck! -- Jonathan LaCour http://cleverdevil.org