[sqlalchemy] Re: Is there a way to globally set DateTime string formatting?
On Sep 24, 7:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> If you have a stringified date format that you'd like to pass to SQL
> statements as values, such that the string is parsed into a datetime and
> then passed to the DBAPI, just use TypeDecorator around DateTime:
> http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorato...

Yeah, but I can't make reflection use the TypeDecorator subclass instead of DATETIME, can I? I can think of two options:

1. Go through each reflected table and replace each DATETIME type with a TypeDecorator subclass.
2. Find all of the DATETIME columns and implement a preprocessing step for them, so that whenever I encounter a DATETIME value, I convert it from a string into a Python datetime before passing it to insert/update.

I don't particularly like either of these options, as I was hoping my code wouldn't have to deal with these kinds of typing issues, but I might be out of luck.

One other idea, though: is there any way to use the compiler extension to do this? For instance, could I add a @compiles function that basically says "if this is a datetime column, do this; otherwise do whatever is done by default"? The type compilation capability almost does what I want, except that it's for DDL rather than DML.

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
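The TypeDecorator approach described above can be sketched roughly like this (a minimal sketch, assuming a modern SQLAlchemy; the class name and the fixed "%Y-%m-%d %H:%M:%S" format are illustrative, not from the thread):

```python
from datetime import datetime

from sqlalchemy.types import TypeDecorator, DateTime


class StringDateTime(TypeDecorator):
    """Accept either datetime objects or formatted strings as bind values.

    Strings are parsed into datetime before being handed to the DBAPI.
    """

    impl = DateTime
    cache_ok = True  # required for statement caching on SQLAlchemy 1.4+

    def process_bind_param(self, value, dialect):
        if isinstance(value, str):
            value = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
        return value
```

The bind-side conversion can be exercised directly, without an engine: `StringDateTime().process_bind_param("2010-09-24 19:09:00", None)` yields a `datetime` object.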
[sqlalchemy] Re: Why isn't this Json column type working?
On Sep 18, 9:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> You might after that also throw a pdb into the process_bind_param()
> method and ensure that it's being called

Sorry, I should have been more clear about this. If I add a print statement in process_bind_param, it doesn't get executed.

> and as the ultimate sanity check info default.py do_execute().

Could you clarify what you mean by this? Perhaps I'm being a bit dense. :-/

> I would also ensure you're on the latest production release of MySQLdb,
> as this error does seem faintly familiar, like there might have been
> issues with the consumption of bind parameters on the MySQLdb side, or
> test with a different DBAPI driver.

Yes, I'm presently running 1.2.3, which is the latest.
[sqlalchemy] Why isn't this Json column type working?
So I have a JSON column type that I wrote. As it turns out, it's pretty much the same as the one here [1]:

    from json import dumps, loads

    class Json(types.TypeDecorator, types.MutableType):
        impl = types.Text

        def process_bind_param(self, value, dialect):
            return dumps(value)

        def process_result_value(self, value, dialect):
            return loads(value)

[1] http://groups.google.ca/group/sqlalchemy/browse_thread/thread/81168f812c22f3f6

I put this into the following (declarative) class:

    Base = declarative_base()

    class SessionMixin(object):
        """This adds a property that allows accessing the session in an object."""
        _session = None

        @classmethod
        def set_session(cls, session):
            cls._session = session

        @property
        def session(self):
            if not self._session:
                self.__class__._session = object_session(self)
            return self._session

    class Content(Base, SessionMixin):
        __tablename__ = 'content'
        id = Column(Integer, primary_key=True)
        entity = Column(String)
        handle_value = Column(String)
        cms_guid = Column(String, default=uuid4)
        last_change_guid = Column(String, ForeignKey('changes.change_guid'))
        data = Column(Json)
        export_date = Column(BigInteger)
        last_change = relationship('Change')

I get the following error when I try to run it (with some sensitive data left out):

    Traceback (most recent call last):
      File "/Users/jason/.envs/main/bin/epfimport", line 8, in <module>
        load_entry_point('epf-import==0.1dev', 'console_scripts', 'epfimport')()
      File "/Users/jason/src/epf-import/src/epf_import/run.py", line 35, in main
        load_rows(fname)
      File "/Users/jason/src/epf-import/src/epf_import/run.py", line 71, in load_rows
        Session.commit()
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/scoping.py", line 132, in do
        return getattr(self.registry(), name)(*args, **kwargs)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 595, in commit
        self.transaction.commit()
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 367, in commit
        self._prepare_impl()
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 351, in _prepare_impl
        self.session.flush()
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 1359, in flush
        self._flush(objects)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py", line 1440, in _flush
        flush_context.execute()
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/unitofwork.py", line 299, in execute
        rec.execute(self)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/unitofwork.py", line 443, in execute
        uow
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/mapper.py", line 1863, in _save_obj
        execute(statement, params)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1194, in execute
        params)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1274, in _execute_clauseelement
        return self.__execute_context(context)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1305, in __execute_context
        context.parameters[0], context=context)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1404, in _cursor_execute
        context)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1397, in _cursor_execute
        context)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 299, in do_execute
        cursor.execute(statement, parameters)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute
        self.errorhandler(self, exc, value)
      File "/Users/jason/.envs/main/lib/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
        raise errorclass, errorvalue
    sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''download_size': '235905', 'application_id': '292412992', 'copyright': '200' at line 1")
    'INSERT INTO changes (change_guid, rev, op, entity, transaction_guid, cms_guid, data) VALUES (%s, %s, %s, %s, %s, %s, %s)'
    ('28e0b2703494457c8f4a09b082274cb6', None, 'INSERT', 'itunes_application', '34872a5ee7e5427199fa3a7efb026f97', '3df874eed48740ce9d54ec6181827956', {...})

It looks like the Json column type is getting bypassed and the dictionary is getting converted directly into a string. Can anyone shed some light on what I'm doing wrong (and, more importantly, how to fix it)? I'm using sqlalchemy 0.6.4 under Python 2.6.1 on OS X.
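One detail worth noticing in the traceback: the failing INSERT targets the changes table, not content, so the question is whether the data column on that model (not shown here) is actually declared with the Json type. A minimal self-contained check that the decorator does fire when the mapped column really uses it might look like this (a sketch assuming SQLAlchemy 1.4+ and an in-memory SQLite database; the Item class and table name are invented for illustration):

```python
import json

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.types import Text, TypeDecorator

Base = declarative_base()


class Json(TypeDecorator):
    """Serialize Python structures to a text column as JSON."""

    impl = Text
    cache_ok = True

    def process_bind_param(self, value, dialect):
        return json.dumps(value)

    def process_result_value(self, value, dialect):
        return json.loads(value)


class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    data = Column(Json)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Item(data={"a": 1}))
    session.commit()
    # The dict round-trips through dumps()/loads() rather than str().
    assert session.get(Item, 1).data == {"a": 1}
```

If this round-trip works but the real application still sends a raw dict, the column on the table being inserted into is most likely declared with a different type.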
[sqlalchemy] Re: Help with optimizing
On May 30, 8:39 pm, Michael Bayer mike...@zzzcomputing.com wrote:
> OK, well by rev 018792aea57 I've bumped the callcounts down a *little*,
> reducing compiler overhead within the flush(). Perhaps it will be a
> bigger difference for your app, which seems to be heavy on flush() calls.

As it turns out, that change didn't help much at all. :-/ However, performance might not be as big an issue as I suspected. It turns out the code I've been working with is now much faster since switching to SQLAlchemy. We are perhaps taking a bit more CPU time than we were before, but SQLAlchemy has given us much greater flexibility to write efficient queries.
Re: [sqlalchemy] Help with optimizing
On Fri, May 28, 2010 at 3:48 PM, Michael Bayer mike...@zzzcomputing.com wrote:
> This is the ORM affecting 124K statements, so it must be a very large
> data persist (it seems like a persist-heavy operation; I see 287K
> objects total treated as part of units of work). It seems like you are
> calling commit() a very large number of times. So the most obvious
> enhancement would be to call this a whole lot less - the commit
> necessitates a flush, and also by default fully expires the session
> (unless you turn off expire_on_commit), meaning all rows have to be
> fully reloaded, which is probably making the number of statements
> executed much larger than it needs to be.

Unfortunately, committing less isn't doable. :-( We need to have a transaction for each record we process; otherwise, we'd have to go back and clean up if something failed. I will try turning off expire_on_commit and see if that helps, though.

> A good deal of time is spent compiling constructs into SQL strings.
> There is a feature whereby this can be cut down dramatically for
> similar statements executed many times, which is the "compiled_cache"
> execution option. The ORM uses this a bit internally now, though it
> might be a nice feature for you to be able to switch it on for a given
> session and have all SQL statement compilation cached for the life of
> that session. This feature can be approximated right now, though I'd
> want to modify _save_obj to not overwrite the cache with its own, which
> defeats the usage of a session-spanning compilation cache.

That would be a *huge* help. Would this involve a custom Query subclass?

--
Jason Baker
Developer
ZeOmega
3010 Gaylord Parkway, Suite 210
Frisco, TX 75034
O: 214-618-9880 ext 8024
jba...@zeomega.com
www.ZeOmega.com
Proven. Progressive. Partner.
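Turning off expire_on_commit, as discussed above, is a one-line change on the session factory (a sketch; the factory name is illustrative):

```python
from sqlalchemy.orm import sessionmaker

# With expire_on_commit=False, commit() no longer expires every instance
# in the session, so attribute access after a commit does not trigger a
# reload SELECT for each row.
Session = sessionmaker(expire_on_commit=False)

session = Session()
```

For a workload that commits once per record, this avoids re-fetching each row that is touched again after its commit.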
Re: [sqlalchemy] Sql alchemy-Oracle Error
On Wed, May 5, 2010 at 7:42 AM, dhanil anupurath dhanilanupur...@gmail.com wrote:
> Hi,
>
> I am using sqlalchemy in my TurboGears application. Some of my classes
> have columns of PickleType; these get converted to the BLOB datatype in
> the database. I was using MySQL until recently and everything was
> working fine. Now I am shifting to Oracle. Tables are getting created
> properly (setup-app in the tg project ran successfully), but when I try
> to query a table having a BLOB column, I get the following error:
>
>     Exception in thread Thread-22:
>     Traceback (most recent call last):
>       File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
>         self.run()
>       File "/svnee/trunk/src/core/services/task_service.py", line 76, in check_calendar_tasks
>         for cal in conn.query(Task).\
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/orm/query.py", line 1361, in __iter__
>         return self._execute_and_instances(context)
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/orm/query.py", line 1364, in _execute_and_instances
>         result = self.session.execute(querycontext.statement, params=self._params, mapper=self._mapper_zero_or_none())
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/orm/session.py", line 754, in execute
>         return self.__connection(engine, close_with_result=True).execute(
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 824, in execute
>         return Connection.executors[c](self, object, multiparams, params)
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 874, in _execute_clauseelement
>         return self.__execute_context(context)
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 896, in __execute_context
>         self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 950, in _cursor_execute
>         self._handle_dbapi_exception(e, statement, parameters, cursor, context)
>       File "/root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-py2.4.egg/sqlalchemy/engine/base.py", line 931, in _handle_dbapi_exception
>         raise exc.DBAPIError.instance(statement, parameters, e, connection_invalidated=is_disconnect)
>     DatabaseError: (DatabaseError) ORA-01036: illegal variable name/number
>
> I did a normal select query from sqlplus:
>
>     select * from tasks;
>     ORA-00932: inconsistent datatypes: expected NUMBER got BLOB
>
> These are my class and table definitions:
>
>     class Task(DeclarativeBase):
>         task_id = Column(Integer, Sequence('id_seq'), primary_key=True)
>         task_type = Column(Unicode(50), default=to_unicode('Task'))
>         name = Column(Unicode(256))
>         entity_id = Column(Unicode(256))
>         entity_name = Column(Unicode(50))
>         context = Column(PickleType)
>         params = Column(PickleType)
>         kw_params = Column(PickleType)
>         processors = Column(ImmutablePickleType)
>         __tablename__ = 'tasks'
>
>     TASK_ID       Number
>     TASK_TYPE     Varchar2
>     NAME          Varchar2
>     ENTITY_ID     Varchar2
>     ENTITY_NAME   Varchar2
>     CONTEXT       Blob
>     PARAMS        Blob
>     KW_PARAMS     Blob
>     PROCESSORS    Blob
>     USER_NAME     Varchar2
>     SUBMITTED_ON  Date
>     REPEATING     Number

Any chance you could send the query that's generating the Python traceback? You should be able to see the generated queries if you pass echo=True in the create_engine arguments.
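Enabling statement echo is just a flag on create_engine; with it set, every generated statement and its bind parameters are written out through the "sqlalchemy.engine" logger (a sketch; the SQLite URL here stands in for the real Oracle connection URL):

```python
from sqlalchemy import create_engine

# echo=True routes all generated SQL and bind parameters to the
# "sqlalchemy.engine" logger, so you can see exactly what is being
# sent to the database before the DBAPI raises.
engine = create_engine("sqlite://", echo=True)
```

With the echoed SQL in hand, it becomes much easier to see which statement trips the ORA-01036 error.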
[sqlalchemy] Mapper table properties
Given an entity class (or entity instance), I'd like to get the table that is mapped to it. If I get the mapper using object_mapper/class_mapper, then I get a mapper with the following properties defined (among others): local_table, mapped_table, and tables. Can someone help me understand what the difference between each of these properties is (or point me towards documentation on them)?
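For the common case, the short answer is: local_table is the Table directly mapped by this mapper, the "mapped" selectable can be a join of several tables under joined-table inheritance, and tables is the collection of every Table the mapper touches; for a plain, non-inherited mapping all three point at the same Table. A quick check (a sketch assuming SQLAlchemy 1.4+, where mapped_table has been renamed persist_selectable; the Foo class is invented for illustration):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import class_mapper, declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)


m = class_mapper(Foo)

# For a simple single-table mapping, these all coincide:
assert m.local_table is Foo.__table__
assert m.local_table in m.tables
```

Under joined-table inheritance, a subclass mapper's local_table would be only the subclass's own table, while the persisted selectable spans the join to the base table.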
Re: [sqlalchemy] Can I log queries from where they're created?
On Mon, Apr 5, 2010 at 5:55 PM, Michael Bayer mike...@zzzcomputing.com wrote:
> I'm unclear on what usage you'd like to see.

What I'm trying to do would be roughly equivalent to this:

    def query_foo(self):
        logger = logging.getLogger(__name__ + '.' + self.__class__.__name__)
        query = self.session.query(Foo)
        logger.debug(query)
        logger.debug(query.statement.compile().params)
        return query.all()

However, this is pretty unmanageable. The above method could be reduced to one line of code if not for the logging boilerplate. So what I'd like is to be able to do the above, but without the logger boilerplate. The main reason for doing this is to have queries logged according to where they're used. This is really useful because I'm using nose to run tests, and it would be nice to be able to filter out all queries except the ones generated by a particular class or module.

> Are you looking for all queries to do this (use Session with query_cls)?

The only way to subclass Query to do what I'm looking for would be to override every method that actually executes the query, so that it logs the query and then executes it. Is there a better way to do it that I'm not thinking of?
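One low-boilerplate middle ground, without subclassing Query, is a small helper that compiles and logs any SELECT construct to a caller-chosen logger (a sketch; log_statement, the table, and the logger name are invented for illustration):

```python
import logging

from sqlalchemy import Column, Integer, MetaData, Table, select


def log_statement(stmt, logger_name):
    """Compile a SQL construct and log its text and params to a named logger."""
    logger = logging.getLogger(logger_name)
    compiled = stmt.compile()
    logger.debug("%s", compiled)
    logger.debug("params: %s", compiled.params)
    return stmt


metadata = MetaData()
foo = Table("foo", metadata, Column("id", Integer, primary_key=True))

# For an ORM Query, pass query.statement here instead of a Core select().
stmt = log_statement(select(foo).where(foo.c.id == 1), "some_module.Foo")
```

Because the helper returns the construct unchanged, it can be dropped inline into an existing expression without restructuring the method around it.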
[sqlalchemy] envbuilder setup for sqlalchemy
Hello,

Just in case anyone can use this, I've created a sample envbuilder file that will create a virtualenv with sqlalchemy installed from hg. The file is here:

http://github.com/jasonbaker/envbuilder/blob/master/examples/sqlalchemy/.env

To use it:

1. Install the development version of envbuilder (easy_install envbuilder==dev).
2. Put the above .env file in a directory somewhere.
3. Do "envb checkout" from the command line. If you don't have git installed, do "envb checkout -p sqlalchemy" instead.
4. Do "envb setup".

And you now have a virtualenv set up with the development version of sqlalchemy. A couple of other things you can do:

    run tests         - envb test
    update sqlalchemy - envb pull

I'm looking for ways to help make this fit into people's workflows, so if you have any suggestions or questions, let me know!
[sqlalchemy] Can I log queries from where they're created?
Here's what I'd like to do. Suppose I have some class Foo in some_module.py:

    class Foo(Entity):
        def all_foos(self):
            return self.session.query(Foo).all()

...sqlalchemy by default will log this to "sqlalchemy.engine". However, this makes it a bit difficult to narrow down which query is coming from where. What I would like to have happen is that this is logged to the name "some_module.Foo" (or "some_other_module.Bar" if it comes from a different class).

The best approach that I've come up with involves a query option and a decorator:

    class LoggerOption(MapperOption):
        """This is an option that may be passed in to sqlalchemy's
        Query.options.  This will make the query be logged to any
        arbitrary namespace.  Example::

            session.query(Mbr).options(LoggerOption('foo'))

        The above query will be logged on the logger returned by
        logging.getLogger('foo').
        """

        def __init__(self, name):
            self.name = name

        propagate_to_loaders = True

        def process_query(self, query):
            logger = logging.getLogger(self.name)
            statement = query.statement
            logger.debug(str(statement))
            logger.debug(str(statement.compile().params))

    def query(attr_name):
        def _query(func):
            def _run_query(self, *args, **kwargs):
                query = func(self, *args, **kwargs)
                query = query.options(LoggerOption(
                    self.__class__.__module__ + '.' + self.__class__.__name__))
                attr = getattr(query, attr_name)
                return attr()
            _run_query.__doc__ = func.__doc__
            return _run_query
        return _query

...so I could rewrite Foo like this:

    class Foo(Entity):
        @query('all')
        def all_foos(self):
            return self.session.query(Foo)

This works, but it feels like there should be a better way than having to attach a decorator to every method and having to return a query from that method. Is there any better way to do this?