[sqlalchemy] odd question
This may or may not be Elixir-specific... If I have an auto-generated mapping table for a many-to-many relationship, is there a sensible way to add another column to it that also has a foreign key relationship to a third table? Like, if I had this:

    Products:     id int, name varchar
    ProductTypes: id int, name varchar
    Groups:       id int, name varchar

and then I defined a many-to-many between Products and Groups to get

    products_groups: product_id, group_id

and I wanted to add producttype_id to that ... ?

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
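In plain SQL terms, the table being asked about is just an association table that carries a third foreign key. A minimal runnable sketch (sqlite3 is used only so the DDL executes; the table and column names come from the question, the key layout is an assumption about what "add producttype_id" means):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products      (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE product_types (id INTEGER PRIMARY KEY, name VARCHAR);
CREATE TABLE groups        (id INTEGER PRIMARY KEY, name VARCHAR);

-- the many-to-many mapping table, with the extra foreign key column
CREATE TABLE products_groups (
    product_id     INTEGER REFERENCES products(id),
    group_id       INTEGER REFERENCES groups(id),
    producttype_id INTEGER REFERENCES product_types(id),
    PRIMARY KEY (product_id, group_id)
);
""")
conn.execute("INSERT INTO products VALUES (1, 'widget')")
conn.execute("INSERT INTO product_types VALUES (1, 'gadget')")
conn.execute("INSERT INTO groups VALUES (1, 'hardware')")
conn.execute("INSERT INTO products_groups VALUES (1, 1, 1)")
```

In SQLAlchemy terms, once the association table carries data of its own it usually stops being a plain "secondary" table and gets mapped as an association object in its own right, rather than being hidden behind the many-to-many relation.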
[sqlalchemy] Re: vote for Python - PLEASE!
Daniel Haus wrote:
> Looks like you're frighteningly successful. You're right, python could
> use much more love, but look at this! Obviously the poll is not
> representative anymore, is it...

Yeah - a little skewed there. On the other hand, the poll wasn't exactly very scientific in the first place. Maybe it'll still trick someone into at least making the Python links on the MySQL website actually work. :)
[sqlalchemy] vote for Python - PLEASE!
Hey everybody,

MySQL has put up a poll on http://dev.mysql.com asking what your primary programming language is. Even if you don't use MySQL - please go stick in a vote for Python. I'm constantly telling folks that Python needs more love, but PHP and Java are kicking our butts...

(I know the world would be a better place if the poll were honest, but I'd rather that people got the message that they should do more Python development work!)

Thanks!
Monty
[sqlalchemy] Re: [MySQL] Checking if commit() is available
Michael Bayer wrote:
> On Jun 8, 2007, at 2:17 PM, Andreas Jung wrote:
>> --On 8. Juni 2007 14:05:39 -0400 Rick Morrison <[EMAIL PROTECTED]> wrote:
>>>     try:
>>>         t.commit()
>>>     except:
>>>         print 'Holy cow, this database is lame'
>>>
>> This code is also lame :-) The code should work for arbitrary DSNs,
>> and swallowing an exception while committing is evil, evil, evil.
>
> with a pre-5 version of mysql, it is the lesser evil

Um - transactions have happily been there since 3.23.15. :)

So the "best" way to do this is to check the db version and see if it's greater than 3.23.15 or not. "show variables like 'version'" will do the trick to get you the version. If it's later than 3.23.15 you should be fine.

Monty
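The version check Monty describes can be sketched as a small helper. Only the 3.23.15 threshold and the SHOW VARIABLES query come from the thread; the function name and the parsing strategy are illustrative assumptions:

```python
def supports_transactions(version_string):
    """Return True if a MySQL version string is at least 3.23.15,
    the threshold mentioned in the thread.

    Hypothetical helper -- version_string is what
    "SHOW VARIABLES LIKE 'version'" reports, e.g. '5.0.45-community'.
    """
    # Strip any build suffix like '-community' or '-log', then compare
    # the numeric components as a tuple.
    numeric = version_string.split("-")[0]
    parts = tuple(int(p) for p in numeric.split("."))
    return parts >= (3, 23, 15)
```

A caller would fetch the version string once per connection (e.g. via `cursor.execute("SHOW VARIABLES LIKE 'version'")`) and only attempt `commit()` when the check passes, instead of swallowing exceptions.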
[sqlalchemy] Re: UpdateClause
Michael Bayer wrote:
> On Apr 24, 2007, at 8:36 PM, Monty Taylor wrote:
>> When I get into visit_update and look into the update_stmt.whereclause,
>> I have a CompoundClause, which contains a single clause, and an AND
>> condition, which is cool. But when I look at the clause, I have a
>> column and a parameter. If I print column.id and parameter.__dict__,
>> I get:
>>
>>     id {'shortname': 'testproject_id', 'unique': True, 'type': Integer(),
>>     'value': None, 'key': 'testproject_id'}
>>
>> Why is the value None? Shouldn't we know that project1 has an id of 291?
>> Do I need to do something in a visitor to fill in the value here that
>> I'm not doing? Is it expected that the ExecutionContext provides this
>> value from the object in some way?
>
> the "value" field inside of _BindParamClause, which is what youre
> looking at there, is optional. its usually set when you do things like:
>
>     mytable.c.somecolumn == 5
>
> because we have a "5" there, it becomes literal(5) which is just
> _BindParamClause('literal', 5, unique=True). the "5" is bound to the
> bind param.
>
> but bind params can just be names, and the values come in with the
> execute() arguments. you can also mix both approaches in which case
> the params sent to execute() override the bind param values.
>
> so im guessing youre looking at the Updates created inside of
> mapper.save_obj(), which look like, in a simplified way:
>
>     u = table.update(col == sql.bindparam('pk_column'))
>
> i.e. bind param with no value. and then it executes via
>
>     u.execute({'pk_column': 291, 'col_1': ..., 'col_2': ...})

Well, that explains everything. Thanks! I was about to say I didn't know how to accomplish that... but I think that, in fact, while writing that I figured it out. :)

Monty
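The behaviour Michael describes - the statement only names its parameters, and the values arrive with the execute() arguments - is the same pattern as plain DB-API named parameters. A minimal sketch with sqlite3 (not SQLAlchemy itself; the table name and id 291 are borrowed from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testproject (id INTEGER PRIMARY KEY, name VARCHAR(32))")
conn.execute("INSERT INTO testproject VALUES (291, 'name1')")

# The statement carries only named placeholders -- no values yet. This
# mirrors why parameter 'value' is None when the clause is inspected at
# compile time: the actual value is supplied at execute() time.
stmt = "UPDATE testproject SET name = :name WHERE id = :pk_column"
conn.execute(stmt, {"name": "2name2", "pk_column": 291})

updated = conn.execute("SELECT name FROM testproject WHERE id = 291").fetchone()[0]
```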
[sqlalchemy] UpdateClause
I was wondering, if I've got this:

    projects = Table('testproject', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(32)))

    class Project(object):
        pass

    mapper(Project, projects)

    session = create_session()
    project1 = Project()
    project1.name = "name1"
    project2 = Project()
    project2.name = "name2"
    session.save(project1)
    session.save(project2)
    session.flush()

    print project1.id, project2.id, project1, project2

    project1.name = "2name2"
    session.save(project1)
    session.flush()

The first print gives me:

    291 292 <__main__.Project object at 0x8318e34> <__main__.Project object at 0x8318dfc>

When I get into visit_update and look into the update_stmt.whereclause, I have a CompoundClause, which contains a single clause, and an AND condition, which is cool. But when I look at the clause, I have a column and a parameter. If I print column.id and parameter.__dict__, I get:

    id {'shortname': 'testproject_id', 'unique': True, 'type': Integer(), 'value': None, 'key': 'testproject_id'}

Why is the value None? Shouldn't we know that project1 has an id of 291? Do I need to do something in a visitor to fill in the value here that I'm not doing? Is it expected that the ExecutionContext provides this value from the object in some way?

Help?
[sqlalchemy] TPC benchmarks
So I'm working on porting a version of the TPC benchmarks to MySQL. I was porting some queries by hand when it hit me - SQLAlchemy has already done this work. So I thought, why not implement the TPC benchmarks in SQLAlchemy? Then you would have a benchmark that could be run on all the supported databases from the same code base without porting - and anything that SA couldn't do from the benchmark's perspective could be a potential point of SA improvement, from either a features or a query-generation perspective.

Does this sound interesting to anyone? I don't know if I'll get approval to hack on this for this project, but I think that in the end it's worth working on anyway. If anyone wants to play, I'll put up code as soon as I have it.

Monty
[sqlalchemy] Re: [PATCH] Using entry points to load database dialects
Michael Bayer wrote:
> ok anyway, im behind on patches (which i like to test and stuff) so
> ive added ticket 521 to my "in queue" list.
>
> if youd like to add a short unit test script that would be handy
> (otherwise we might not have test coverage for the setuptools portion
> of the feature).

Ok. I'll put that in my queue. I hate queues.
[sqlalchemy] Re: [PATCH] Using entry points to load database dialects
Michael Bayer wrote:
> dialects can be used on their own without the engine being present
> (such as, to generate SQL), also you can construct an engine passing
> in your own module object which might have been procured from
> somewhere else (or could be a mock object, for example).

Ok, yes indeed. Those are good reasons. :)
[sqlalchemy] Re: [PATCH] Using entry points to load database dialects
Always one in every bunch. :)

I hear what you're saying about the import errors. But does it really help to allow work to get done before throwing the error? I would think you'd want to know right up front if you don't have a driver loaded, rather than letting a program actually get started up and think it can write data (think fat client app), only to get a connection exception.

But I, of course, could be very wrong about this. I am about many things...

Monty

Michael Bayer wrote:
> yeah i dont like setup.py develop either :) but anyway, patch is
> good. one thing i have to nail down though is ticket #480. the
> main point of that ticket is to cleanly isolate ImportErrors of
> actual DBAPI modules apart from the containing dialect module
> itself. the dialects are catching all the DBAPI-related
> ImportErrors though so its not necessarily blocking this patch (its
> just they cant report them nicely).
[sqlalchemy] [PATCH] Using entry points to load database dialects
Michael Bayer wrote:
> i think using entry points to load in external database dialects is a
> great idea.
>
> though the current six core dialects i think i still want to load via
> __import__ though since im a big fan of running SA straight out of
> the source directory (and therefore thered be no entry points for
> those in that case).
>
> so probably a check via __import__('sqlalchemy.databases') first,
> then an entry point lookup. does that work ?

Here is a patch that implements use of entry points to load dialects. The largest change is actually adding a get_dialect to replace the functionality of get_module, since entry points really want to return classes, and we only ever use the dialect class from the returned module anyway...

This does not break code that I have that loads the mysql dialect, and it does work with my new code that adds a new dialect - although I suppose it's possible it could have broken something I didn't find.

As a side note, I agree with Gaetan - you can run entry points and stuff out of the current directory, especially if you use setup.py develop ... but this code does the entry points second, after a check for the module the old way.

Monty

=== modified file 'lib/sqlalchemy/engine/strategies.py'
--- lib/sqlalchemy/engine/strategies.py 2007-02-25 22:44:52 +0000
+++ lib/sqlalchemy/engine/strategies.py 2007-03-26 17:03:13 +0000
@@ -42,16 +42,16 @@
         u = url.make_url(name_or_url)

         # get module from sqlalchemy.databases
-        module = u.get_module()
+        dialect_cls = u.get_dialect()

         dialect_args = {}
         # consume dialect arguments from kwargs
-        for k in util.get_cls_kwargs(module.dialect):
+        for k in util.get_cls_kwargs(dialect_cls):
             if k in kwargs:
                 dialect_args[k] = kwargs.pop(k)

         # create dialect
-        dialect = module.dialect(**dialect_args)
+        dialect = dialect_cls(**dialect_args)

         # assemble connection arguments
         (cargs, cparams) = dialect.create_connect_args(u)
@@ -71,7 +71,7 @@
                     raise exceptions.DBAPIError("Connection failed", e)
             creator = kwargs.pop('creator', connect)

-            poolclass = kwargs.pop('poolclass', getattr(module, 'poolclass', poollib.QueuePool))
+            poolclass = kwargs.pop('poolclass', getattr(dialect_cls, 'poolclass', poollib.QueuePool))
             pool_args = {}
             # consume pool arguments from kwargs, translating a few of the arguments
             for k in util.get_cls_kwargs(poolclass):

=== modified file 'lib/sqlalchemy/engine/url.py'
--- lib/sqlalchemy/engine/url.py 2007-03-18 22:35:19 +0000
+++ lib/sqlalchemy/engine/url.py 2007-03-26 16:47:01 +0000
@@ -2,6 +2,7 @@
 import cgi
 import sys
 import urllib
+import pkg_resources
 from sqlalchemy import exceptions

 """Provide the URL object as well as the make_url parsing function."""
@@ -69,6 +70,23 @@
             s += '?' + "&".join(["%s=%s" % (k, self.query[k]) for k in keys])
         return s

+    def get_dialect(self):
+        """Return the SQLAlchemy database dialect class corresponding to this URL's driver name."""
+        dialect = None
+        try:
+            module = getattr(__import__('sqlalchemy.databases.%s' % self.drivername).databases, self.drivername)
+            dialect = module.dialect
+        except ImportError:
+            if sys.exc_info()[2].tb_next is None:
+                for res in pkg_resources.iter_entry_points('sqlalchemy.databases'):
+                    if res.name == self.drivername:
+                        dialect = res.load()
+            else:
+                raise
+        if dialect is not None:
+            return dialect
+        raise exceptions.ArgumentError('unknown database %r' % self.drivername)
+
     def get_module(self):
         """Return the SQLAlchemy database module corresponding to this URL's driver name."""
         try:

=== modified file 'setup.py'
--- setup.py 2007-03-23 21:33:24 +0000
+++ setup.py 2007-03-26 17:01:51 +0000
@@ -10,6 +10,10 @@
     url = "http://www.sqlalchemy.org",
     packages = find_packages('lib'),
     package_dir = {'':'lib'},
+    entry_points = {
+        'sqlalchemy.databases': [
+            '%s = sqlalchemy.databases.%s:dialect' % (f, f) for f in
+            ['sqlite', 'postgres', 'mysql', 'oracle', 'mssql', 'firebird']]},
     license = "MIT License",
     long_description = """\
 SQLAlchemy is:
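The fallback path the patch adds with pkg_resources.iter_entry_points can be sketched with the modern stdlib equivalent, importlib.metadata. The entry-point group name 'sqlalchemy.databases' comes from the patch; the helper name is illustrative:

```python
from importlib.metadata import entry_points

def load_dialect_from_entry_points(drivername, group="sqlalchemy.databases"):
    """Find a dialect class registered under an entry-point group.

    A sketch of the patch's plugin lookup using importlib.metadata
    (the stdlib stand-in for pkg_resources.iter_entry_points).
    """
    eps = entry_points()
    # Python 3.10+ exposes .select(); 3.8/3.9 return a dict of groups.
    candidates = eps.select(group=group) if hasattr(eps, "select") else eps.get(group, [])
    for ep in candidates:
        if ep.name == drivername:
            return ep.load()
    raise LookupError("unknown database %r" % drivername)
```

This mirrors the patch's behaviour: a matching entry point is loaded and returned, and an unknown driver name raises an error (ArgumentError in the patch; a plain LookupError here).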
[sqlalchemy] Re: Possible use of pkg_resources plugins?
Michael Bayer wrote:
> i think using entry points to load in external database dialects is a
> great idea.
>
> though the current six core dialects i think i still want to load via
> __import__ though since im a big fan of running SA straight out of
> the source directory (and therefore thered be no entry points for
> those in that case).
>
> so probably a check via __import__('sqlalchemy.databases') first,
> then an entry point lookup. does that work ?

Yes. And I think that's the simplest case anyway - no need to load the pkg_resources stuff if you don't need it. I'll see if I can hack that together today.

Thanks!
Monty
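The ordering agreed on above - built-in package first, entry-point lookup only when the import fails - can be sketched like this (names are illustrative; `entry_point_lookup` stands in for whatever plugin search is used):

```python
import importlib

def get_dialect(drivername, entry_point_lookup):
    """Try the built-in sqlalchemy.databases package first, and fall
    back to a plugin lookup only when the built-in import fails.

    A sketch of the lookup order from the thread; entry_point_lookup is
    a stand-in callable for the pkg_resources/entry-point search.
    """
    try:
        module = importlib.import_module("sqlalchemy.databases.%s" % drivername)
        return module.dialect
    except ImportError:
        # Not a built-in dialect -- consult registered plugins instead.
        return entry_point_lookup(drivername)
```

This keeps the common case cheap: running SA straight out of a source checkout never touches the plugin machinery at all.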
[sqlalchemy] Possible use of pkg_resources plugins?
Hey all,

I wanted to check and see if a patch would be considered (before I spend any time on it) to replace this:

    return getattr(__import__('sqlalchemy.databases.%s' % self.drivername).databases, self.drivername)

from sqlalchemy.engine.url with something using the pkg_resources plugin stuff from setuptools?

I ask because I'm trying to write a new database engine that's a fairly heavy write. (This is the NDB API thing that doesn't use SQL.) I'm not touching any code so far that isn't in a single file in the databases dir, but there are a couple of us who are trying to work on the project together. I'd really like to just version control that one file so we don't have to branch the whole sqlalchemy source. I also think it might be nice to be able to distribute a sqlalchemy database engine without having to get it committed to the trunk.

HOWEVER - I recognize that no one else might care about either of these things. I don't think it will be a hard patch or one that will be disruptive to the current way of doing things, but I wanted to check if it would be rejected out of hand before I bothered?

Thanks!
Monty
[sqlalchemy] Re: don't automatically stringify compiled statements - patch to base.py
YES. This is good stuff. Thanks again.

On 3/16/07, Monty Taylor <[EMAIL PROTECTED]> wrote:
> On 3/15/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
> >
> >     from sqlalchemy.engine.strategies import DefaultEngineStrategy
> >
> >     class NDBAPIEngineStrategy(DefaultEngineStrategy):
> >         def __init__(self):
> >             DefaultEngineStrategy.__init__(self, 'ndbapi')
> >
> >         def get_engine_cls(self):
> >             return NDBAPIEngine
> >
> >     # register the strategy
> >     NDBAPIEngineStrategy()
> >
> > now you connect via:
> >
> >     create_engine(url, strategy='ndbapi')
> >
> > if you want to go one level lower, which i think you do because you
> > dont really want pooling or any of that either, you dont even need to
> > have connection pooling or anything like that... you can totally
> > override what create_engine() does, have different connection
> > parameters, whatever. just subclass EngineStrategy directly:
> >
> >     class NDBAPIEngineStrategy(EngineStrategy):
> >         def __init__(self):
> >             EngineStrategy.__init__(self, 'ndbapi')
> >         def create(self, *args, **kwargs):
> >             # this is some arbitrary set of arguments
> >             return NDBAPIEngine(kwargs.get('connect_string'),
> >                 kwargs.get('some_other_argument'),
> >                 NDBAPIDialect(*args), etc etc)
> >     # register
> >     NDBAPIEngineStrategy()
> >
> > then you just say:
> >
> >     create_engine(connect_string='someconnectstring',
> >         some_other_argument='somethingelse', strategy='ndbapi')
> >
> > i.e. whatever you want. create_engine() just passes *args/**kwargs
> > through to create() after pulling out the "strategy" keyword.
> >
> > if you dont like having to send over "strategy" i can add a hook in
> > there to look it up on the dialect, so it could be more like
> > create_engine('ndbapi://whatever'). but anyway this method would
> > mean we wouldnt have to rewrite half of Engine's internals.
> >
> > On Mar 14, 2007, at 7:28 PM, Monty Taylor wrote:
> > > Hi - I'd attach this as a patch file, but it's just too darned small...
> > >
> > > I would _love_ it if we didn't automatically stringify compiled
> > > statements here in base.py, because there is no way to override this
> > > behavior in a dialect. I'm working on an NDBAPI dialect to support
> > > direct access to MySQL Cluster storage, and so I never actually have a
> > > string representation of the query. Most
[sqlalchemy] Re: don't automatically stringify compiled statements - patch to base.py
On 3/15/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> well, at least make a full-blown patch that doesn't break all the other
> DBs. notice that an Engine doesn't just execute Compiled objects; it
> can execute straight strings as well. that's why the dialect's
> do_execute() and do_executemany() take strings - they are assumed to
> go straight to a DBAPI representation. to take the "stringness" out
> of Engine would be a large rework of not just Engine but all the
> dialects.
>
> i'm surprised the execute_compiled() method works for you at all, as
> it's creating a cursor, calling result-set metadata off the cursor,
> etc. - all these DBAPI things which you aren't supporting. it seems
> like it would be cleaner for you if you weren't even going through
> that implementation of it.

Well, I was trying my best to be a good citizen, so I made some classes
that implement all of the methods that the pieces of execute_compiled
seemed to want, faking it for now when I didn't need it. The semantics
of most of the DBAPI map to the NDBAPI fairly well (it's still all the
same underlying db ideas - just no SQL strings).

BUT...

> the theme here is that the base Engine is assuming a DBAPI
> underneath. if you want an Engine that does not assume string
> statements and DBAPIs, it might be easier for you to just provide a
> subclass of Engine instead (or go even lower level and subclass
> sqlalchemy.sql.Executor). either way you can change what
> create_engine() returns by using a new "strategy" with
> create_engine(), which is actually a pluggable API. e.g.

This seems like what I really want to try, because you are right:
trying to get this to pretend to be totally DBAPI is not going to be
totally fun. I'll see what trouble I get myself into this way... or
I'll send a patch that changes all the internals. :)

Thanks!
> from sqlalchemy.engine.strategies import DefaultEngineStrategy
>
> class NDBAPIEngineStrategy(DefaultEngineStrategy):
>     def __init__(self):
>         DefaultEngineStrategy.__init__(self, 'ndbapi')
>
>     def get_engine_cls(self):
>         return NDBAPIEngine
>
> # register the strategy
> NDBAPIEngineStrategy()
>
> now you connect via:
>
> create_engine(url, strategy='ndbapi')
>
> if you want to go one level lower, which i think you do because you
> don't really want pooling or any of that either, you don't even need
> to have connection pooling or anything like that... you can totally
> override what create_engine() does, have different connection
> parameters, whatever. just subclass EngineStrategy directly:
>
> class NDBAPIEngineStrategy(EngineStrategy):
>     def __init__(self):
>         EngineStrategy.__init__(self, 'ndbapi')
>
>     def create(self, *args, **kwargs):
>         # this is some arbitrary set of arguments
>         return NDBAPIEngine(kwargs.get('connect_string'),
>                             kwargs.get('some_other_argument'),
>                             NDBAPIDialect(*args), etc etc)
>
> # register
> NDBAPIEngineStrategy()
>
> then you just say:
>
> create_engine(connect_string='someconnectstring',
>               some_other_argument='somethingelse', strategy='ndbapi')
>
> i.e. whatever you want. create_engine() just passes *args/**kwargs
> through to create() after pulling out the "strategy" keyword.
>
> if you don't like having to send over "strategy", i can add a hook in
> there to look it up on the dialect, so it could be more like
> create_engine('ndbapi://whatever'). but anyway this method would
> mean we wouldn't have to rewrite half of Engine's internals.
>
> On Mar 14, 2007, at 7:28 PM, Monty Taylor wrote:
>
> > Hi - I'd attach this as a patch file, but it's just too darned
> > small...
> >
> > I would _love_ it if we didn't automatically stringify compiled
> > statements here in base.py, because there is no way to override this
> > behavior in a dialect.
> > I'm working on an NDBAPI dialect to support direct access to MySQL
> > Cluster storage, and so I never actually have a string
> > representation of the query. Most of the time this is fine, but to
> > make it work, I had to have pre_exec do the actual execution,
> > because by the time I got to do_execute, I didn't have my real
> > object anymore.
> >
> > I know this would require some other code changes to actually get
> > applied - namely, I'm sure there are other places now where the
> > statement should be str()'d to make sense.
> >
> > Alternately, we could add a method to Compiled (I know there is
> > already get_str()) like "get_final_query()" [...]
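The pluggable-strategy mechanism Michael sketches above can be illustrated in isolation. The following is a self-contained pure-Python mock of the pattern - the registry, the base class, and all of the NDBAPI names are illustrative stand-ins, not the actual `sqlalchemy.engine.strategies` implementation (which has changed across releases):

```python
# Sketch of a pluggable engine-strategy registry. Hypothetical names
# throughout; this only demonstrates the registration/lookup pattern.

_strategies = {}  # name -> strategy instance, populated on construction


class EngineStrategy:
    """Base class: constructing a subclass registers it under a name."""
    def __init__(self, name):
        _strategies[name] = self

    def create(self, *args, **kwargs):
        raise NotImplementedError


class NDBAPIEngine:
    """Hypothetical engine that would execute Compiled objects directly."""
    def __init__(self, connect_string, some_other_argument=None):
        self.connect_string = connect_string
        self.some_other_argument = some_other_argument


class NDBAPIEngineStrategy(EngineStrategy):
    def __init__(self):
        EngineStrategy.__init__(self, 'ndbapi')

    def create(self, *args, **kwargs):
        # an arbitrary set of keyword arguments, just as in the thread
        return NDBAPIEngine(kwargs.get('connect_string'),
                            kwargs.get('some_other_argument'))


# register the strategy once at import time
NDBAPIEngineStrategy()


def create_engine(*args, **kwargs):
    # pull out the "strategy" keyword and pass everything else through
    # (no 'default' strategy is registered in this sketch)
    strategy = _strategies[kwargs.pop('strategy')]
    return strategy.create(*args, **kwargs)


engine = create_engine(connect_string='ndb://cluster', strategy='ndbapi')
print(type(engine).__name__)  # NDBAPIEngine
```

The key design point is that `create_engine()` only interprets the `strategy` keyword itself; everything else is opaque to it, so a strategy is free to accept whatever connection parameters it wants.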
[sqlalchemy] don't automatically stringify compiled statements - patch to base.py
Hi - I'd attach this as a patch file, but it's just too darned small...

I would _love_ it if we didn't automatically stringify compiled
statements here in base.py, because there is no way to override this
behavior in a dialect. I'm working on an NDBAPI dialect to support
direct access to MySQL Cluster storage, and so I never actually have a
string representation of the query. Most of the time this is fine, but
to make it work, I had to have pre_exec do the actual execution,
because by the time I got to do_execute, I didn't have my real object
anymore.

I know this would require some other code changes to actually get
applied - namely, I'm sure there are other places now where the
statement should be str()'d to make sense.

Alternately, we could add a method to Compiled (I know there is already
get_str()), like "get_final_query()", that gets called in this context
instead. Or even - although it makes my personal code less readable -
just call compiled.get_str() here, which is the least invasive, but
requires non-string queries to override a method called get_str() to
achieve a purpose that is not stringification.

Other than this, so far I've actually got the darned thing inserting
records, so it's going pretty well... apart from a whole bunch of test
code I put in to find out why it wasn't inserting, when the problem was
that I was checking the wrong table... *doh*

Thanks!
Monty

=== modified file 'lib/sqlalchemy/engine/base.py'
--- lib/sqlalchemy/engine/base.py	2007-02-13 22:53:05 +
+++ lib/sqlalchemy/engine/base.py	2007-03-14 23:17:40 +
@@ -312,7 +312,7 @@
             return cursor
         context = self.__engine.dialect.create_execution_context()
         context.pre_exec(self.__engine, proxy, compiled, parameters)
-        proxy(str(compiled), parameters)
+        proxy(compiled, parameters)
         context.post_exec(self.__engine, proxy, compiled, parameters)
         rpargs = self.__engine.dialect.create_result_proxy_args(self, cursor)
         return ResultProxy(self.__engine, self, cursor, context, typemap=compiled.typemap, columns=compiled.columns, **rpargs)
@@ -342,7 +342,7 @@
         if cursor is None:
             cursor = self.__engine.dialect.create_cursor(self.connection)
         try:
-            self.__engine.logger.info(statement)
+            self.__engine.logger.info(str(statement))
             self.__engine.logger.info(repr(parameters))
             if parameters is not None and isinstance(parameters, list) and len(parameters) > 0 and (isinstance(parameters[0], list) or isinstance(parameters[0], dict)):
                 self._executemany(cursor, statement, parameters, context=context)

You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
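The effect of the one-line change in the patch can be shown with a toy model: if the engine stringifies before calling the proxy, a non-SQL dialect never sees the compiled object; if the compiled object is passed through unchanged, a string-based dialect can still call str() itself at the last moment. Everything below is a hypothetical mock for illustration, not SQLAlchemy code:

```python
class Compiled:
    """Toy stand-in for a compiled statement: carries a structured form
    plus a string rendering."""
    def __init__(self, tree, sql):
        self.tree = tree
        self.sql = sql

    def __str__(self):
        return self.sql


def dbapi_proxy(statement, parameters):
    # A string-based dialect stringifies the statement itself.
    return ('executed-sql', str(statement))


def ndbapi_proxy(statement, parameters):
    # A non-SQL dialect needs the real object, not its string form.
    assert isinstance(statement, Compiled), "lost the compiled object!"
    return ('executed-tree', statement.tree)


def execute_compiled(proxy, compiled, parameters=None):
    # Patched behavior: pass the Compiled object through unchanged and
    # let each proxy decide whether a string rendering is needed.
    return proxy(compiled, parameters)


c = Compiled(tree=('insert', 'users'), sql='INSERT INTO users ...')
print(execute_compiled(dbapi_proxy, c))   # ('executed-sql', 'INSERT INTO users ...')
print(execute_compiled(ndbapi_proxy, c))  # ('executed-tree', ('insert', 'users'))
```

With the pre-patch behavior (`proxy(str(compiled), ...)`), the second call would receive only the string and the assertion in `ndbapi_proxy` would fail - which is exactly the problem the patch addresses.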
[sqlalchemy] Re: MySQL Has Gone Away
You can control the amount of time before your connection is terminated
with the MySQL parameter wait_timeout - although the fundamental problem
of the app handling dropped connections should obviously still be
addressed. In higher-load environments with MySQL it may actually
behoove you to reap and recycle connections more frequently rather than
hold thousands of them open, as you will typically be able to scale
better and handle higher concurrency that way.

You also want your app to be able to handle dropped connections in case
you ever want to build an HA database system behind the app, since
connections through a load balancer or similar may need to be reaped
more rapidly than the app would expect.

But as a stop-gap measure (and if you aren't expecting data rates in
the 1000s per second), you can set wait_timeout (in seconds) to
something hideously high.

Monty

On 10/18/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> pool_recycle is an integer number of seconds to wait before reopening
> a connection. However, setting it to "True" is equivalent to 1, which
> means it will reopen connections constantly.
>
> check out the FAQ entry on this:
>
> http://www.sqlalchemy.org/trac/wiki/FAQ#MySQLserverhasgoneaway/psycopg.InterfaceError:connectionalreadyclosed
>
> note that the recycle only occurs on the connect() operation.
>
> also turn on "echo_pool=True" so that you can see the connections
> being recycled in the standard output
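The pool_recycle behavior Michael describes - discard any connection that has been open longer than N seconds and reconnect, with the check happening only at connect()/checkout time - can be sketched like this. This is a simplified illustration of the idea, not SQLAlchemy's actual pool implementation (the injectable clock is just to make the example deterministic):

```python
import itertools
import time


class RecyclingPool:
    """Toy pool: one connection, recycled if older than `recycle` seconds.
    The age check happens only inside connect(), matching the note above
    that recycling occurs only on the connect() operation."""

    def __init__(self, creator, recycle=3600, clock=time.monotonic):
        self._creator = creator
        self._recycle = recycle
        self._clock = clock        # injectable so the sketch is testable
        self._conn = None
        self._opened_at = None

    def connect(self):
        stale = (self._conn is not None
                 and self._clock() - self._opened_at > self._recycle)
        if self._conn is None or stale:
            self._conn = self._creator()   # reopen; old one is dropped
            self._opened_at = self._clock()
        return self._conn


counter = itertools.count(1)          # fake "connections" are just ints
fake_now = [0.0]
pool = RecyclingPool(creator=lambda: next(counter), recycle=10,
                     clock=lambda: fake_now[0])

first = pool.connect()        # opens connection 1
fake_now[0] = 5.0
same = pool.connect()         # still fresh: same connection handed back
fake_now[0] = 20.0
recycled = pool.connect()     # older than 10s: reopened as connection 2
print(first, same, recycled)  # 1 1 2
```

The interaction with wait_timeout is then straightforward: if recycle is comfortably below the server's wait_timeout, the pool throws connections away before MySQL ever gets the chance to kill them.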
[sqlalchemy] Re: non-sql storage
Thanks - this is exactly the information I was looking for!

On 10/21/06, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> MapperExtension allows you to override the select() and select_by()
> methods of Query, and in the trunk it also supports get_by(), get(),
> and load(). this would be a place to put a very coarse-grained bypass
> of querying in. this is something you could do pretty quickly without
> getting too deeply into things.

That sounds like it might be a good step one.

> if you want your instances to be more integrated with the unit of
> work, that would be much more involved, and i'm not sure if anything
> would really be left of the ORM once you completed it, particularly
> if they truly have no correlation to a table at all.

Well, I definitely want to be more integrated. The thing I'm trying to
integrate still is a database and has tables and transactions - I'm
just trying to bypass SQL itself. The MySQL Cluster product (NDB) has
a direct C++ API that I've been wrapping with Python. I'm working on
this theory that the direct interface actually maps to ORM concepts
better than generating SQL does (which in small tests it does), but I
thought it would be better to work from an already existing ORM than to
write my own. We need another ORM like we need a hole in the head.

The downside is what you mention, though. Most ORMs aren't just
designed to provide a relational persistence layer for objects; they
are also designed around SQL. And why shouldn't they be? What other
readily available mechanism do we have for RDBMS interface? So it may
be that I've got to rip out too much and write my classes too far up
the chain. Still may be good, though. I'll post what I get working
when I do.

Thanks,
Monty

> MapperExtension has a lot of methods that fire off upon insert,
> update, delete, etc. that would probably be involved. *maybe* a
> careful usage of those could accomplish most of the task.
> You'd probably want to get fairly familiar with the internals of how
> mapped objects are set up. this would mostly involve carefully
> inspecting the __dict__ of instances as they move through various
> persistence states, familiarizing yourself with the instances() method
> on Query (it's been recently moved from Mapper to Query) as well as
> the _instance(), save_obj() and delete_obj() methods on mapper, and
> getting a general idea of the purpose of the attributes.py module,
> which is something you can play with all by itself (see the
> attributes.py test suite in the test/ directory for examples).
>
> I would say it's worth putting some thought into the motivations for
> putting file-based persistence behind SQLAlchemy's API, since
> SQLAlchemy itself is designed to be explicitly revealing of database
> concepts, intending to be placed behind coarser-grained persistence
> layers for applications that want to conceal the details of SQL
> database interaction. such a layer would probably be a better place
> to stick an abstraction between SQL and file-based persistence.
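The coarse-grained bypass Michael suggests - an extension whose query hooks get first crack and can short-circuit SQL generation entirely - can be modeled in a few lines. This is a generic illustration of the hook pattern with made-up names, not the actual MapperExtension API:

```python
# A sentinel lets a hook distinguish "I handled it" from "fall through".
CONTINUE = object()


class QueryExtension:
    """Hypothetical extension base: return CONTINUE to fall through to
    the normal SQL path, or return a result to bypass it entirely."""
    def select(self, query, criterion):
        return CONTINUE


class NDBBypassExtension(QueryExtension):
    """Serves selects straight out of a native (non-SQL) store."""
    def __init__(self, native_store):
        self.native_store = native_store

    def select(self, query, criterion):
        return [row for row in self.native_store if criterion(row)]


class Query:
    def __init__(self, extension=None):
        self.extension = extension or QueryExtension()

    def select(self, criterion):
        result = self.extension.select(self, criterion)
        if result is not CONTINUE:
            return result          # extension short-circuited the SQL path
        raise RuntimeError('would compile and execute SQL here')


store = [{'id': 1, 'name': 'ndb'}, {'id': 2, 'name': 'myisam'}]
q = Query(NDBBypassExtension(store))
print(q.select(lambda row: row['id'] == 1))  # [{'id': 1, 'name': 'ndb'}]
```

This is "coarse-grained" in exactly the sense of the thread: the hook replaces whole query methods, without touching the unit of work, instance bookkeeping, or flush machinery underneath.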
[sqlalchemy] non-sql storage
So this may sound like a crazy question... If I wanted to implement an
interface to a non-SQL-based backend (or I wanted to use a direct API
to the db, bypassing the SQL layer), is there an appropriate place to
plug such a thing in, and if so, where?

For it to make sense, I would want to put such code in a place where it
could take advantage of the meta-information defined for a class and
bypass the SQL generation step, but still expose the right interface to
calling code.

Any thoughts?

Monty