[sqlalchemy] Re: How to specify NOLOCK queries in SA (mssql)
Hi Rick, This could involve quite a bit of investigation on our part, so it could take a while before we get to the bottom of it. We're running Windows 2003 as the OS, MS SQL Server 2005 via the SQL Server Native Driver, using pyodbc, Apache 2.2.6, mod_python 3.3.1. The app is based on Pylons, using SQLAlchemy-0.4.4dev_r3557-py2.5 and Elixir-0.5.0dev_r2495-py2.5. Now for what it's worth, we made the change I mentioned above to our db. Before the change, we were getting 36 maximum concurrent locks in a day; after the change, that figure went down to 400. It also reduced the load on our CPUs by about 10%. So it helped. We will be doing more examination of what's happening with cursors next week... I'll keep you posted. :)

On May 16, 2:20 am, Rick Morrison [EMAIL PROTECTED] wrote:

Hi Bruce, I'm considering a switch from pymssql to pyodbc myself in the not-too-distant future, and this thread has me a bit curious about what's going on. This is a subject that may affect SQLA more in the future as ODBC and JDBC drivers get more use. I think there are two distinct questions that need to be answered to get to the bottom of this. The first question is: why are these queries being issued at all, and from where? Like Mike says, SQLA is playing no part in constructing or issuing these queries. From the bit of googling that I've done so far, it seems that the FMTONLY queries are issued behind the scenes by the data connector to fetch metadata regarding the query. While there are a lot of reasons a data connector might need metadata, two seem especially likely when SQLA comes into play: a) There are un-typed bind parameters in the query, and the connector needs to know the data types for some reason. b) There is going to be a client-side cursor constructed, and result metadata is needed to allocate the cursor. From the description you give, I would bet that this is your main issue.
If the cause is (a), a fix might be problematic, as SQLA issues all of its queries using bind parameters, and I'm not sure if type information is used for each. But if you're using explicit bind parameters, you may want to specify the type on those. As for the more likely cause (b), I would think this could be gotten around by making sure you specify firehose (read-only, forward-only, non-scrollable) cursors for retrieval, but I'm not sure what the pyodbc settings for this might be. As a bonus, you'll probably see a bit of a performance boost using these types of cursors as well. The second question is more of a mystery to me: OK, so the data connector issues a FMTONLY query; if it's just fetching metadata, why would that cause database locks? This one I can't figure out. Unless you're calling stored procedures or UDFs that have locking side effects, it's got to be a bug in the data connector. From what I read, a FMTONLY query should be pretty fast (other than the round-trip network time), and should lock nothing. Are you running on Windows, or on Unix? What's your ODBC connector? Please post to the list as you work through this and let us know what you find... Rick

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
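[Editorial aside: for Rick's cause (a), explicit types can be declared on bind parameters on the SQLAlchemy side. A minimal sketch, using the modern text().bindparams() spelling for illustration (the 0.4-era API differed, and the table/column names here are made up):]

```python
from sqlalchemy import Integer, String, bindparam, text

# A textual SELECT whose bind parameters carry explicit types, so the
# driver layer is told what to send instead of probing the server for
# parameter metadata.
stmt = text("SELECT name FROM users WHERE id = :uid AND status = :st").bindparams(
    bindparam("uid", type_=Integer),
    bindparam("st", type_=String),
)
print(stmt)  # the SQL text is unchanged; only the bind metadata differs
```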
[sqlalchemy] Access to AS/400 data
Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim
[sqlalchemy] Opportunistic locking or edit sequence
Hello, new to SQLAlchemy. A question... Does SA implement what is sometimes referred to as opportunistic locking or sequence locking? This is intended to prevent near-simultaneous but incompatible UPDATEs of the same record in the database by two different clients. Some databases implement this by using a field like 'edit_sequence', an incrementing sequence where the value in the db must match that in the UPDATE. Does SA have such a feature or something like it? Thanks, Michael Hipp Heber Springs, Arkansas, USA
[sqlalchemy] Re: Opportunistic locking or edit sequence
On May 16, 2008, at 9:38 AM, Michael Hipp wrote: Hello, new to SQLAlchemy. A question... Does SA implement what is sometimes referred to as opportunistic locking or sequence locking? This is intended to prevent near-simultaneous but incompatible UPDATEs of the same record in the database by two different clients.

Yes, this feature is provided through an option on mapper() called version_id_col, described at: http://www.sqlalchemy.org/docs/04/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_modfunc_mapper (it's the last option in the list).
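[Editorial aside: version_id_col works by issuing a guarded UPDATE of exactly the kind Michael Hipp describes. A minimal sketch of the underlying pattern with a plain DBAPI connection; SQLite and the table/column names are illustrative, not from the thread:]

```python
import sqlite3

# In-memory table with an integer version column alongside the data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO item VALUES (1, 'widget', 1)")

def update_name(conn, item_id, new_name, expected_version):
    # The UPDATE only matches if nobody else bumped the version first;
    # a zero rowcount means our copy of the row was stale.
    cur = conn.execute(
        "UPDATE item SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, item_id, expected_version),
    )
    return cur.rowcount == 1

print(update_name(conn, 1, "gadget", 1))  # True: version matched, now 2
print(update_name(conn, 1, "gizmo", 1))   # False: concurrent edit detected
```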
[sqlalchemy] Re: Access to AS/400 data
On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim

I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem. -- Carlos Hanson
[sqlalchemy] Re: Access to AS/400 data
On May 16, 2008, at 10:55 AM, Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem.

Well, connecting is just the beginning. To take advantage of SQLA, you would also want an AS/400 dialect that knows how to render SQL in the way an AS/400 likes. I'm not familiar with anyone working on an AS/400 dialect at the moment. I only know of the DB2 dialect, which is a separate project (but maybe ask on their list, since they work for IBM).
[sqlalchemy] Re: Access to AS/400 data
Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem.

I use pyodbc now to connect to the 400 as well, but I haven't dived into SQLAlchemy yet, as I thought I read somewhere that access to the AS/400 was not supported. I currently use SQLObject for my data access for non-AS/400 work, but I don't believe there is any intention of supporting the 400 there. I guess I'm looking for a compelling reason to move in that direction, but I wanted to know that it works before I start trying it out. I'm curious to see whether or not it works for you. Thanks. -Jim
[sqlalchemy] Re: Access to AS/400 data
On Fri, May 16, 2008 at 8:14 AM, Michael Bayer [EMAIL PROTECTED] wrote: On May 16, 2008, at 10:55 AM, Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem. Well, connecting is just the beginning. To take advantage of SQLA, you would also want an AS/400 dialect that knows how to render SQL in the way an AS/400 likes. I'm not familiar with anyone working on an AS/400 dialect at the moment. I only know of the DB2 dialect, which is a separate project (but maybe ask on their list, since they work for IBM).

This is a good point. I have to create aliases to a file/member combination to select data. I guess I wouldn't expect SQLAlchemy to implement that by default, since most every other database uses tables. -- Carlos Hanson
[sqlalchemy] Re: Access to AS/400 data
On Fri, May 16, 2008 at 12:03 PM, Carlos Hanson [EMAIL PROTECTED] wrote: On Fri, May 16, 2008 at 8:14 AM, Michael Bayer [EMAIL PROTECTED] wrote: On May 16, 2008, at 10:55 AM, Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem. Well, connecting is just the beginning. To take advantage of SQLA, you would also want an AS/400 dialect that knows how to render SQL in the way an AS/400 likes. I'm not familiar with anyone working on an AS/400 dialect at the moment. I only know of the DB2 dialect, which is a separate project (but maybe ask on their list, since they work for IBM). This is a good point. I have to create aliases to a file/member combination to select data. I guess I wouldn't expect SQLAlchemy to implement that by default, since most every other database uses tables.

What is your connection string in pyodbc for the AS/400? When you create aliases, do select statements work as they should for you? Lucas
[sqlalchemy] Re: Using SQLAlchemy with Plone/Zope
On May 16, 11:06 am, Andreas Jung [EMAIL PROTECTED] wrote: --On 16. Mai 2008 08:58:38 -0700 Mike [EMAIL PROTECTED] wrote:

Hi, We are using SQLAlchemy 0.4.x in Zope using SQLAlchemyDA, found here: http://opensource.zopyx.com/projects/SQLAlchemyDA This uses SA 0.4.0. We are connecting to MS SQL 2000. While this works in Zope for normal SQL commands, if we try to execute a stored procedure, we get the following traceback:

Traceback (most recent call last):
  File "sqltest.py", line 14, in ?
    print cur.fetchall()
  File "/opt/Plone-2.5/Python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 1231, in __repr__
    return repr(tuple(self))
  File "/opt/Plone-2.5/Python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 1223, in __iter__
    yield self.__parent._get_col(self.__row, i)
  File "/opt/Plone-2.5/Python-2.4.3/lib/python2.4/site-packages/SQLAlchemy-0.4.0-py2.4.egg/sqlalchemy/engine/base.py", line 1468, in _get_col
    rec = self._key_cache[key]
AttributeError: 'ResultProxy' object has no attribute '_key_cache'

I'm not sure how to go about troubleshooting this.

You're basically on the wrong list. This issue belongs on the SQLAlchemy mailing list. There is no indication of anything related to Zope.

I'm confused. This IS the SQLAlchemy list... Would upgrading it to SQLAlchemy 0.4.5 fix it?

Try it out yourself and see what's happening. Nobody knows about your particular setup, code etc. - but you do. Andreas

We did end up trying it soon after posting, and it works now. Sorry to bother you. Mike
[sqlalchemy] Re: Using SQLAlchemy with Plone/Zope
On May 16, 2008, at 11:58 AM, Mike wrote: I'm not sure how to go about troubleshooting this. Would upgrading it to SQLAlchemy 0.4.5 fix it? Any hints would be appreciated.

Hi Mike - yeah, I see your stack trace is from 0.4.0, and I have a vague recollection of some ResultProxy bug of that nature back then, so an upgrade should fix it (the current version is 0.4.6). - mike
[sqlalchemy] Re: Access to AS/400 data
Carlos Hanson wrote: On Fri, May 16, 2008 at 8:14 AM, Michael Bayer [EMAIL PROTECTED] wrote: On May 16, 2008, at 10:55 AM, Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem. Well, connecting is just the beginning. To take advantage of SQLA, you would also want an AS/400 dialect that knows how to render SQL in the way an AS/400 likes. I'm not familiar with anyone working on an AS/400 dialect at the moment. I only know of the DB2 dialect, which is a separate project (but maybe ask on their list, since they work for IBM). This is a good point. I have to create aliases to a file/member combination to select data. I guess I wouldn't expect SQLAlchemy to implement that by default, since most every other database uses tables.

Are you saying that you had to create aliases to make them work with pyodbc? I don't have to create aliases, and it works fine for me. I do have to qualify with a library name, but that isn't too painful. -Jim
[sqlalchemy] SQLAlchemy experts wanted
Hi all, SQLAlchemy is a great project and a growing niche. As it becomes even more popular, there will be increasing demand for experts in the field. I am compiling a contact list of SQLAlchemy experts who may be interested in opportunities under the right circumstances. I am not a recruiter - I'm a regular developer who sometimes gets asked for referrals when I'm not personally available. If you'd like to be on my shortlist of go-to experts, please contact me off-list at: [EMAIL PROTECTED] Please be prepared to show your expertise by any of the following:

* Committer status or patches accepted
* Commit access to another open source project which uses SQLAlchemy
* Bugs reported which were either resolved or are still open (real bugs)
* Articles / blog entries written about SQLAlchemy concepts or development
* Speaking engagements or user groups at which you've presented
* Significant contributions to documentation
* Other? (I'm sure I didn't think of everything)

I'll be happy to answer any questions, and I look forward to hearing from you! -- Jim R. Wilson (jimbojw)
[sqlalchemy] Is inserting new data with column_mapped_collection inconsistent?
I have just started using column_mapped_collections (http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections_dictcollections). I must say, these are very powerful and extremely nice when reading data. But I have run into one thing that seems confusing when it comes to creating new objects in a session. Namely, it is possible to add data to the mapped dictionary in such a way that the data structure is inconsistent and not what it would be when reading the same data back. Using the example from the documentation as a start:

mapper(Item, items_table, properties={
    'notes': relation(Note,
        collection_class=column_mapped_collection(notes_table.c.keyword)),
})

# ...
item = Item()
item.notes['color'] = Note('color', 'blue')  # Set keyword attribute to 'color'
print item.notes['color']

Everything is good here, but what if I did it this way instead:

item.notes['not-color'] = Note('color', 'blue')

This last line is the problem, because it has inserted a link to a new Note that has a keyword of 'color' but is showing up in the dictionary as 'not-color'. If we flush all of this and reload from the database using a query, there will be no 'not-color' entry in the database. Anyway, I think this is a bit non-intuitive. What I propose instead is that SA could automatically set the 'keyword' attribute of the Note object as part of the process of assigning it to the mapped collection dictionary. This way the insert could look more like:

item.notes['not-color'] = Note(value='blue')

and behind the scenes SA would set the new Note's keyword to 'not-color'. Any thoughts on this? Has anyone tried this in the past? -Allen
[sqlalchemy] Re: Access to AS/400 data
On Fri, May 16, 2008 at 10:49 AM, Jim Steil [EMAIL PROTECTED] wrote: Carlos Hanson wrote: On Fri, May 16, 2008 at 8:14 AM, Michael Bayer [EMAIL PROTECTED] wrote: On May 16, 2008, at 10:55 AM, Carlos Hanson wrote: On Fri, May 16, 2008 at 6:13 AM, Jim Steil [EMAIL PROTECTED] wrote: Hi: Can anyone tell me if it is possible to access data on an AS/400 through SQLAlchemy? -Jim I'm connecting to an AS/400 using pyodbc, so I am sure that I can do it through SQLAlchemy. If I get a chance to test it, I'll post my success. But if you get an ODBC connection set up, there should be no problem. Well, connecting is just the beginning. To take advantage of SQLA, you would also want an AS/400 dialect that knows how to render SQL in the way an AS/400 likes. I'm not familiar with anyone working on an AS/400 dialect at the moment. I only know of the DB2 dialect, which is a separate project (but maybe ask on their list, since they work for IBM). This is a good point. I have to create aliases to a file/member combination to select data. I guess I wouldn't expect SQLAlchemy to implement that by default, since most every other database uses tables. Are you saying that you had to create aliases to make them work with pyodbc? I don't have to create aliases, and it works fine for me. I do have to qualify with a library name, but that isn't too painful. -Jim

If you only have one member in a file, or you are only interested in the first member, then you do not need an alias. The application running on our AS/400 was designed for multiple groups, each group with its own member of a file, so I need the alias. There may be some cases where I don't actually need the alias, but it is too much effort to make the determination. It's easy to create and drop aliases.
Here is a brief example of what I'm doing:

>>> alias_map = {'library': 'FASFILES', 'file': 'LFAS310C', 'member': 'TSD2300'}
>>> sql_alias = 'CREATE ALIAS X%(file)s FOR %(library)s.%(file)s (%(member)s)'
>>> alias = sql_alias % alias_map
>>> alias
'CREATE ALIAS XLFAS310C FOR FASFILES.LFAS310C (TSD2300)'
>>> drop_alias = 'DROP ALIAS X%(file)s' % alias_map
>>> drop_alias
'DROP ALIAS XLFAS310C'
>>> query = 'select FA# from X%(file)s where FA# = 37250' % alias_map
>>> query
'select FA# from XLFAS310C where FA# = 37250'
>>> import pyodbc
>>> connection = pyodbc.connect('DSN=as400_64;UID=username;PWD=password;')
>>> cursor = connection.cursor()
>>> cursor.execute(alias)
0
>>> cursor.execute(query)
<pyodbc.Cursor object at 0x2b5194aef690>
>>> for row in cursor:
...     print row
...
(Decimal(37250), )
>>> cursor.execute(drop_alias)
0
>>> cursor.close()
>>> connection.close()

-- Carlos Hanson
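[Editorial aside: the create/select/drop dance above could be packaged into a small context manager so the alias is always dropped, even if the query fails. This helper is a hypothetical convenience built around Carlos's SQL templates, not part of pyodbc:]

```python
from contextlib import contextmanager

# SQL templates mirroring the example above; the X-prefixed alias name
# is Carlos's convention, not an AS/400 requirement.
SQL_CREATE_ALIAS = 'CREATE ALIAS X%(file)s FOR %(library)s.%(file)s (%(member)s)'
SQL_DROP_ALIAS = 'DROP ALIAS X%(file)s'

@contextmanager
def member_alias(cursor, library, file, member):
    """Create an alias for a file/member pair, yield its name, then drop it."""
    params = {'library': library, 'file': file, 'member': member}
    cursor.execute(SQL_CREATE_ALIAS % params)
    try:
        yield 'X%(file)s' % params  # the alias name to select from
    finally:
        cursor.execute(SQL_DROP_ALIAS % params)
```

Usage would then look like: with member_alias(cursor, 'FASFILES', 'LFAS310C', 'TSD2300') as name: cursor.execute('select FA# from %s where FA# = 37250' % name).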
[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?
Allen Bierbaum wrote: I have just started using column_mapped_collections (http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections_dictcollections). I must say, these are very powerful and extremely nice when reading data. But I have run into one thing that seems confusing when it comes to creating new objects in a session. Namely, it is possible to add data to the mapped dictionary in such a way that the data structure is inconsistent and not what it would be when reading the same data back. Using the example from the documentation as a start:

mapper(Item, items_table, properties={
    'notes': relation(Note,
        collection_class=column_mapped_collection(notes_table.c.keyword)),
})

# ...
item = Item()
item.notes['color'] = Note('color', 'blue')  # Set keyword attribute to 'color'
print item.notes['color']

Everything is good here, but what if I did it this way instead:

item.notes['not-color'] = Note('color', 'blue')

This last line is the problem, because it has inserted a link to a new Note that has a keyword of 'color' but is showing up in the dictionary as 'not-color'. If we flush all of this and reload from the database using a query, there will be no 'not-color' entry in the database. Anyway, I think this is a bit non-intuitive. What I propose instead is that SA could automatically set the 'keyword' attribute of the Note object as part of the process of assigning it to the mapped collection dictionary. This way the insert could look more like:

item.notes['not-color'] = Note(value='blue')

and behind the scenes SA would set the new Note's keyword to 'not-color'. Any thoughts on this? Has anyone tried this in the past?

MappedCollection doesn't currently have a mismatch guard on __setitem__ (d[key] = val) or setdefault(), but it easily could. There *is* a guard protecting against item.notes = {'not-color': Note('color', 'blue')}, so that machinery is available, and applying it to the other setters is straightforward.
Automatically setting the value for the attribute_ and column_mapped dict collections would be pretty convenient and DRY. This is a great time to integrate that feature, if you want to try your hand at putting together a patch and tests. If it's not too disruptive to existing users, it could slide right in as a new feature of 0.5.
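[Editorial aside: outside of SA's collection machinery, the proposed auto-keying behavior can be prototyped as a plain dict subclass. The KeyingDict class and the stand-in Note class below are illustrative only, not SA API:]

```python
class KeyingDict(dict):
    """Dict that copies the assignment key onto a named attribute of the value."""

    def __init__(self, keyattr):
        super().__init__()
        self.keyattr = keyattr

    def __setitem__(self, key, value):
        # Keep the value consistent with its dictionary key, which is what
        # Allen proposes SA do behind the scenes on assignment.
        setattr(value, self.keyattr, key)
        super().__setitem__(key, value)

class Note:
    def __init__(self, keyword=None, value=None):
        self.keyword = keyword
        self.value = value

notes = KeyingDict('keyword')
notes['color'] = Note(value='blue')
print(notes['color'].keyword)  # -> 'color', set automatically from the key
```

A mismatch guard (raising instead of silently rewriting the attribute) would be the same few lines with setattr replaced by a comparison and a raise.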