Re: [sqlalchemy] 0.6b1 and pymssql
On February 4, 2010, Michael Bayer wrote:
> In 0.6 we can probably just shoot for supporting the new pymssql since it's supposed to be much better than the old.

I've seen pymssql 1.0.2 segfault the interpreter when under medium load. And yes, I had a core to prove that pymssql was at fault. I would probably consider pyodbc if I were stuck with MSSQL.

-- Yannick Gingras
http://ygingras.net
http://confoo.ca -- track coordinator
http://montrealpython.org -- lead organizer

signature.asc Description: This is a digitally signed message part.
Re: [sqlalchemy] 0.6b1 and pymssql
On February 4, 2010, Michael Bayer wrote:
> > I've seen pymssql 1.0.2 segfault the interpreter when under medium load. And yes, I had a core to prove that pymssql was at fault. I would probably consider pyodbc if I were stuck with MSSQL.
>
> did you contact the pymssql maintainer about that? Also, even if pymssql was at fault I've observed that sometimes native libs segfault due to specific usages, which the dialect can be made to ensure never occur.

If I recall correctly, we were still trying to come up with an easy-to-reproduce test case for that one when the project got canned and we moved to MySQL. For the record, the segfault occurred in PyTuple_SET_ITEM() on line 2186 of mssqldbmodule.c. I don't have an MSSQL instance handy right now so it's hard to give more details.

-- Yannick Gingras
http://ygingras.net
Re: [sqlalchemy] Working with temp tables on Postgres
On January 29, 2010, Michael Bayer wrote:
> I usually go with the IN clause but I wonder if its possible to write PG stored procedures that can get to xapian as well (since you can write them in python or any other language).

Besides the fact that you need admin rights to create the proc, the solution with PL/Python is very clean:

CREATE TYPE ft_res AS (
    org_id INTEGER,
    rank INTEGER
);

CREATE OR REPLACE FUNCTION org_ft_match (terms text)
RETURNS SETOF ft_res AS $$
    import xappy
    DB_PATH = '/.../data/org_ft/'
    conn = xappy.SearchConnection(DB_PATH)
    q = conn.query_field('name', terms)
    res = conn.search(q, 0, 500)
    for i, r in enumerate(res):
        yield (int(r.id), i)
$$ LANGUAGE plpythonu IMMUTABLE;

select name
from organization o
join org_ft_match('bob AND jane') on org_id = o.id
-- [...] more joins for other criteria
order by rank;

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Working with temp tables on Postgres
Greetings, Alchemists, what's the best way to work with temp tables on Postgres? It's fairly easy to have one created:

tmp_foo = Table('tmp_foo', metadata,
                Column('id', Integer, unique=True),
                Column('bar', Integer),
                prefixes=['TEMPORARY'])
tmp_foo.create()

The problem is that if I am not sure that the table was created, I can't use it. The following:

tmp_foo.create(checkfirst=True)

does not work. It issues the following SQL, which won't find a match for temp tables:

select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace
where n.nspname=current_schema() and lower(relname)=%(name)s

One workaround would be to use ON COMMIT DROP but I don't know how to do that since Table() has no `suffixes` parameter. Any help on that one?

While I'm at it, I might as well state the high level problem that pushed me to use temp tables. I'm using Xapian to do full text indexing. Xapian is good at giving me a list of document ids that I can then retrieve from the database, but if I want to apply additional criteria, I have to do the filtering on the database side. One way to do that is with a huge IN clause, the other is with a temp table. I like the temp table because I can also use it to order by Xapian ranking and do the paging on the alchemy side. I could also duplicate all the criteria on the Xapian side but I want to avoid that if possible. Any suggestions for either problem?

-- Yannick Gingras
http://ygingras.net
Re: [sqlalchemy] Working with temp tables on Postgres
On January 29, 2010, Michael Bayer wrote:
> > One workaround would be to use ON COMMIT DROP but I don't know how to do that since Table() has no `suffixes` parameter.
>
> from sqlalchemy.schema import CreateTable
> from sqlalchemy.ext.compiler import compiles
>
> @compiles(CreateTable)
> def check_temporary(create, compiler, **kw):
>     table = create.element
>     ret = compiler.visit_create_table(create)
>     if 'TEMPORARY' in table._prefixes:
>         ret += " ON COMMIT DROP"
>     return ret

Very nice, thanks!

> > One way to do that is with a huge IN clause, the other is with a temp table. I like the temp table because I can also use it to order by Xapian ranking and do the paging on the alchemy side.
>
> I usually go with the IN clause but I wonder if its possible to write PG stored procedures that can get to xapian as well (since you can write them in python or any other language).

It would not be too hard to make it run in PL/Python, but PL/Python is a non-safe language so it's a bit of a pain to have new versions of the proc deployed since you need to be admin in PG to update it. This is why I'd rather go with a solution on the client side.

When you go with the big IN, do you sort by full text ranking on the Python side? This forces you to fetch the full result set to have the desired page. My experience is that the IN solution gets unbearably slow quite fast. With 40k+ results from Xapian, it can take several seconds to get my results with IN. I doubt that any sane human will go through all those 40k results, so it's probably safe to only send the first few thousand full text ids to the database, but our requirements call for an accurate page count.

The more I think about it, the more it looks like the stored proc in PL/Python is the only sane way to do it. Thanks again for all the info.

-- Yannick Gingras
http://ygingras.net
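One practical middle ground for the big-IN approach discussed above, sketched as a hypothetical helper (`chunked` is not a SQLAlchemy API, just plain Python): split the Xapian ids into fixed-size batches, and feed each batch to its own IN clause or to an executemany() insert into the temp table.

```python
def chunked(ids, size=500):
    """Yield successive fixed-size batches of ids, e.g. so each batch
    fits one parameter-limited IN clause or one executemany() call."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

# 1200 hypothetical Xapian document ids -> 3 batches of at most 500
batches = list(chunked(list(range(1, 1201)), size=500))
```

Batching keeps each statement small, but note it does not solve the accurate-page-count requirement mentioned above; that still needs the full id set on the database side.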
[sqlalchemy] Looking for a schema / database migration tool
Greetings Alchemists, I'm looking for a schema migration tool, ideally one that works well with SQLAlchemy. I know about sqlalchemy-migrate but I find its monotone numbering scheme hard to reconcile with distributed development. More details on that specific problem on their mailing list: http://groups.google.com/group/migrate-users/browse_thread/thread/f95ac435aa27280c Are there any other tools that could do the job? How hard would it be to modify sqlalchemy-migrate to work with dependencies instead of sequential version numbers? Best regards,

-- Yannick Gingras
http://ygingras.net
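The dependency-based scheme asked about above can be sketched in a few lines of plain Python (all migration names here are hypothetical): treat migrations as a directed acyclic graph, each one naming the migrations it depends on, and derive an apply order with a topological sort instead of a monotone version counter.

```python
# Hypothetical migration graph: name -> list of prerequisite migrations.
deps = {
    'add_users': [],
    'add_orders': ['add_users'],
    'index_orders': ['add_orders'],
    'add_audit': ['add_users'],
}

def apply_order(deps):
    """Return a valid apply order: every migration after its dependencies."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps[name]:
            visit(dep)        # prerequisites first
        order.append(name)

    for name in sorted(deps):  # sorted() just makes the order deterministic
        visit(name)
    return order
```

Two developers can then add independent migrations on separate branches without fighting over the next sequence number; only genuinely dependent migrations need ordering.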
[sqlalchemy] Slow session.commit()?
Greetings Alchemists, maybe I'm doing something wrong but it seems to me that committing a session, especially one with many objects (say, 150k), requires a lot of processing power. In the following script, committing the session takes roughly three times longer than generating the objects, no matter what the database is: I get roughly the same ratio with Postgres, MySQL, and SQLite. That's to be expected since it's at commit time that we actually send the objects to the database. What puzzles me though is that `top` suggests that the process is CPU bound on the Python side, not io-bound on the database side. Am I doing something wrong? Besides the obvious raw SQL statements, is there a way to speed things up? Here is my example script:

--
from timeit import timeit
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import relation

engine = create_engine('sqlite:///tmp/foo.db', echo=False)
Base = declarative_base()

class Department(Base):
    __tablename__ = 'department'
    id = Column(Integer, primary_key=True)

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(String, primary_key=True)
    dept_id = Column(Integer, ForeignKey('department.id'))
    dept = relation(Department, backref='employees')

Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def insert():
    for i in range(5):
        dept = Department(id=i)
        session.add(dept)
        for j in range(3):
            emp = Employee(id='%d-%d' % (i, j))
            session.add(emp)
            dept.employees.append(emp)

print timeit(insert, number=1)
print timeit(session.commit, number=1)
--

-- Yannick Gingras
http://ygingras.net
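For comparison, here is what the same insert looks like with the ORM taken out of the picture. This is a minimal sketch using the plain DBAPI (stdlib sqlite3, not SQLAlchemy): when the per-object unit-of-work bookkeeping dominates, pre-building the rows and handing them to a single executemany() avoids that Python-side cost.

```python
import sqlite3

# In-memory database standing in for the script's sqlite file.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employee (id TEXT PRIMARY KEY, dept_id INTEGER)')

# Build all rows up front, then insert them in one executemany() call.
rows = [('%d-%d' % (i, j), i) for i in range(5) for j in range(3)]
conn.executemany('INSERT INTO employee (id, dept_id) VALUES (?, ?)', rows)
conn.commit()

count = conn.execute('SELECT COUNT(*) FROM employee').fetchone()[0]
```

This trades away the ORM's identity map and relationship handling, so it suits bulk loads rather than general application code.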
[sqlalchemy] License of the example files
Greetings Alchemists, I plan to base a chunk of my code on one of the examples in sqlalchemy/examples. I just want to make sure that those are free to use. SQLAlchemy is licensed under the X11 (MIT) license. Is that also the case for the examples?

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Duck-typing style of relations
Greetings Alchemists, this is more of a general data modeling question but maybe Alchemy has a neat trick to resolve this issue. It happens quite often that I want to use instances of a class as attributes of unrelated objects. One example is addresses. Both Companies and Persons have addresses and it would be somewhat awkward to derive both from a common ancestor, even though mixins would do the trick. However, the concept of mixins is not straightforward to transpose to data mapping. The Address example could be implemented as follows:

class Address(DeclarativeBase):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    city = Column(Unicode(255))
    postal_code = Column(Unicode(15))
    # ...

class Company(DeclarativeBase):
    __tablename__ = 'company'
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relation(Address)

class Person(DeclarativeBase):
    __tablename__ = 'person'
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relation(Address)

One problem with that is that it's hard to prevent orphan addresses. Another thing that may or may not be a problem is that you could have the same address being used for more than one company or person, which could lead to a funny situation if one of them moves. To allow for more than one address per entity, one could do:

class Company(DeclarativeBase):
    __tablename__ = 'company'
    shipping_address_id = Column(Integer, ForeignKey('address.id'))
    shipping_address = relation(Address, primaryjoin=...)
    billing_address_id = Column(Integer, ForeignKey('address.id'))
    billing_address = relation(Address, primaryjoin=...)

Similarly, an open-ended number of addresses can be implemented with a join table:

class CompanyAddress(DeclarativeBase):
    __tablename__ = 'company_address'
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relation(Address, primaryjoin=...)
    company_id = Column(Integer, ForeignKey('company.id'))
    company_address = relation(Company, primaryjoin=...)
class PersonAddress(DeclarativeBase):
    __tablename__ = 'person_address'
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relation(Address, primaryjoin=...)
    person_id = Column(Integer, ForeignKey('person.id'))
    person_address = relation(Person, primaryjoin=...)

But we still have the problem of orphan addresses and it gets somewhat tricky to prevent address re-use. It's easier to check for address re-use with a multi-slot join table:

class AddressMap(DeclarativeBase):
    __tablename__ = 'address_map'
    address_id = Column(Integer, ForeignKey('address.id'))
    address = relation(Address, primaryjoin=...)
    company_id = Column(Integer, ForeignKey('company.id'))
    company_address = relation(Company, primaryjoin=...)
    person_id = Column(Integer, ForeignKey('person.id'))
    person_address = relation(Person, primaryjoin=...)

However, it's hard to extend when you need a new kind of object that has addresses. Is there a better solution?

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Re: Duck-typing style of relations
On October 9, 2009, Conor wrote:
> I think your best solution is similar to your AddressMap idea above, but just make it part of Address instead, with check and unique constraints on your FK columns: [...] The check constraint above is a bit overkill for just 2 FK columns (you could just use (company_id IS NULL) <> (person_id IS NULL)), but it is easy to extend to more than 2 FK columns.

That's a very nice solution indeed. Thanks!

-- Yannick Gingras
http://ygingras.net
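Conor's suggestion can be sketched directly in DDL; here it is on a plain stdlib sqlite3 connection (an illustration, not the elided constraint from his post): the owner FK columns live on address itself, and a CHECK constraint insists that exactly one owner is set, which rules out both orphan addresses and shared addresses.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''
    CREATE TABLE address (
        id INTEGER PRIMARY KEY,
        company_id INTEGER,
        person_id INTEGER,
        -- exactly one owner: one side IS NULL, the other is not
        CHECK ((company_id IS NULL) <> (person_id IS NULL))
    )
''')

conn.execute('INSERT INTO address (company_id) VALUES (1)')  # accepted
try:
    # an orphan address (no owner at all) violates the CHECK
    conn.execute('INSERT INTO address (company_id, person_id) '
                 'VALUES (NULL, NULL)')
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False
```

Adding a new kind of owner later means adding one nullable FK column and extending the CHECK, rather than adding a whole new join table.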
[sqlalchemy] Re: Portable Enum Columns
On October 7, 2009, Michael Bayer wrote:
> the ticket is here: http://www.sqlalchemy.org/trac/ticket/1109 you'll see from the discussion the issue is not nearly so straightforward.

Right, looks like I should either go with PGEnum or stick to varchar FKs if I need something that is portable. Thanks for the info.

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Portable Enum Columns
Greetings Alchemists, I want to define a column that will only accept a handful of possible values. Sure enough, I can do that with a check constraint or with a lookup table and a foreign key. However, for some reason, I really like the semantics of an enum column. That is, I like to read a definition that looks like this:

class Order(DeclarativeBase):
    __tablename__ = 'order'
    id = Column(Integer, primary_key=True)
    code = Column(Enum('CMTP'), nullable=False, default='C')
    # ...

There is the low level MySQL enum type and there is a somewhat dated recipe on the wiki: http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Enum Is there another way to do it? Something that would be portable to both MySQL and Postgres would be great. Regards,

-- Yannick Gingras
http://ygingras.net
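One portable fallback (an assumption on my part, not one of the options named above) is the plain check-constraint route: a CHECK (code IN (...)) works on MySQL-with-InnoDB-caveats, Postgres, and SQLite alike. Sketched here on stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''
    CREATE TABLE "order" (
        id INTEGER PRIMARY KEY,
        code TEXT NOT NULL DEFAULT 'C'
            CHECK (code IN ('C', 'M', 'T', 'P'))
    )
''')

conn.execute('INSERT INTO "order" (code) VALUES (?)', ('M',))  # accepted
try:
    conn.execute('INSERT INTO "order" (code) VALUES (?)', ('X',))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # 'X' is outside the allowed set
```

You lose the enum-like column definition in the Python class, but the database-side guarantee is the same on every backend.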
[sqlalchemy] ConFoo.ca call for speakers
Greetings Alchemists, I'm coordinator for the Python track at the ConFoo.ca conference and I have an announcement to make that will certainly be of interest to many of you. PHP-Québec, Montréal-Python, Ruby Montréal, W3Qc, and OWASP Montréal are organizing the first edition of the ConFoo.ca conference, which will be held in Montréal on March 10th through 12th at the Hilton Bonaventure Hotel. With over 500 expected attendees, ConFoo.ca is the largest Web development conference in North America. We are looking for the best speakers willing to share their experience and skills with programmers, managers, marketers and decision makers. The conference is divided into two parts: a technical part, encompassing different aspects of Web development: PHP, Python, Ruby, security, project management, CMSs and frameworks, databases, systems administration, Web standards, accessibility and agile methods; and a decision-making part: referencing (SEO), Web marketing analysis, and social networking. Presenters can decide to present in English or French. Presentations are roughly one hour long and may be recorded for later broadcast in digital format. All relevant details concerning the conference are available on the call for speakers website [1]. Even though Python can be used for a wide range of programming tasks, the Python track at ConFoo.ca will focus on Web development with Python. For all the other fascinating aspects of Python, do not hesitate to submit a talk to PyCon [2], which is also running its call for speakers as I write this. Spread the word!

[1]: http://confoo.ca/en/cfp
[2]: http://us.pycon.org/2010/about/

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Re: Problems with pyodbc on RHEL 64
On Wednesday 13 May 2009 19:43:04 Rick Morrison wrote:
> > Are you using pymssql 0.8.0 or the 1.0.x branch? They somewhat changed the API in 1.0.x to make it more compliant with the python db api 2.0 and I assume that it has impacts on the Alchemy support.
>
> I'm using 0.8.0, the release notes for 1.0 implied it was more or less a total rewrite; I stayed away for the time being.

Last question: are you on 32 bit or on 64 bit? Our dev boxen are 32 bit and there's a lot of stuff that goes fine on them until we push the code to the staging server that runs on 64 bit.

-- Yannick Gingras
http://ygingras.net/

You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: Problems with pyodbc on RHEL 64
On Tuesday 12 May 2009 21:53:55 Rick Morrison wrote:
> AFAIK, there's nothing in SQLA that will address this -- the issue sounds new to me, and it seems to me that it's pretty clearly some kind of pyodbc/FreeTDS issue. Check your character encoding settings, there's quite a few reported issues with MSSQL + pyodbc + unicode statements. You may want to browse the FreeTDS lists as well. Please report back anything pertinent that you find.

How about the pymssql route? Have you guys tried it? I get weird formatting errors when I go with pymssql 1.0.2. It looks like it does not like stuff like:

conn.execute("select foo from bar where qux like '%quux%'")

I get errors regarding 0x27 (single quote) not being a proper formatting character. We could escape all the percent signs in our code but I'd rather avoid it, and pyodbc does not seem to have a problem with them, which is a big plus. We are mostly using Alchemy for the connection pooling; we have a few mapped objects but most of our queries are still hand written and we don't want to convert them all to the SQL abstraction layer provided by Alchemy right now. Any advice on using pymssql with Alchemy?

-- Yannick Gingras
http://ygingras.net/
[sqlalchemy] Re: Problems with pyodbc on RHEL 64
On May 13, 2009, Rick Morrison wrote:
> I personally use pymssql with the 0.4 series, I don't know if it's broken in 0.5x or not. It's not a panacea by any stretch: pymssql has a somewhat long list of caveats (30 char identifier limit, no unicode, no varchar(255), etc.) - see the wiki for some of the details. The issue with the quoted parameters is news to me - while I primarily use the SQLA orm and sql-api layers, we do have a handful of text-based SQL with quoted string-interpolated parameters and also with quoted-string constants that all seem to work fine.

Are you using pymssql 0.8.0 or the 1.0.x branch? They somewhat changed the API in 1.0.x to make it more compliant with the python db api 2.0 and I assume that it has impacts on the Alchemy support.

-- Yannick Gingras
http://ygingras.net
[sqlalchemy] Problems with pyodbc on RHEL 64
Hi, I also reported this problem on the pyodbc mailing list but maybe one of you knows a workaround. I'm trying to use pyodbc on RHEL 5.3 64 bit but all my strings are filled with garbage after position 1024. Here is an example:

import pyodbc
conn = pyodbc.connect('DRIVER={SQL Server};UID=foo;PWD=bar;DATABASE=qux;SERVER=quux;TDS_Version=8.0')
conn.execute("select %r" % ("=" * 1030))

This is what I get back:

[('===\x00\x01\x00i;S+', )]

This is the content of my /etc/odbcinst.ini:

--
[SQL Server]
Description = FreeTDS Driver
Driver = /usr/lib64/libtdsodbc.so.0
UsageCount = 1
--

When I try to do the same on Ubuntu 8.10, both 32 bit and 64 bit, I get the expected result, that is, a string of '=' 1030 characters long. On RHEL 5.3, unixodbc is 2.2.11-7.1, on Ubuntu it's 2.2.11-16build2, whatever that means. I'm running pyodbc 2.1.5 on Python 2.5. I get the same error with SQLAlchemy 0.5.3 with the following:

from sqlalchemy import create_engine
eng = create_engine("mssql://foo:b...@qux/quux?DRIVER={SQL Server}&TDS_Version=7.0")
conn = eng.connect()
conn.execute(...).fetchall()

Does anyone have an idea of what can cause this and how it can be solved? My idea was to fall back on pymssql but Alchemy 0.5.3 does not seem to like pymssql 1.0.2 and I find 0.8 has documented problems on 64 bit systems. What do you guys recommend? Running the experimental 0.6 Alchemy branch?

-- Yannick Gingras
http://ygingras.net/
[sqlalchemy] Comparable ColumnDefaults for shema diffing
Greetings Alchemists, in order to implement schema diffing, it would be nice if two similar ColumnDefault objects were comparable as such. I attach a patch to implement such a test. Would it make sense to add this support in Alchemy's core or should a schema diffing library add it through monkey patching?

-- Yannick Gingras

Index: lib/sqlalchemy/schema.py
===================================================================
--- lib/sqlalchemy/schema.py    (revision 4842)
+++ lib/sqlalchemy/schema.py    (working copy)
@@ -970,2 +970,2 @@
         return column_default
     __visit_name__ = property(_visit_name)
+    def __eq__(self, other):
+        if self.__class__ != other.__class__:
+            return NotImplemented
+        if callable(self.arg) and callable(other.arg):
+            return NotImplemented
+        return self.arg == other.arg
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
     def __repr__(self):
         return "ColumnDefault(%s)" % repr(self.arg)
+
 class Sequence(DefaultGenerator):
     """Represents a named database sequence."""
[sqlalchemy] Re: Comparable ColumnDefaults for shema diffing
Michael Bayer [EMAIL PROTECTED] writes:
> can't the schema diff utility include a function such as compare_defaults(a, b) ? a ColumnDefault isn't really like a SQL expression object so the __eq__()/__ne__() seems inappropriate (in general, overriding __eq__() is an endeavor to be taken on carefully, since it heavily changes the behavior of that object when used in lists and such).

You are right that defining __eq__() can have nasty side effects, but it seems strange to me that ColumnDefault(20) == ColumnDefault(20) is False. If you think that there might be other side effects that I didn't foresee, I will implement the comparator in the diffing library. In the same way, what do you think about __eq__() for types? This is False:

types.Integer(10) == types.Integer(10)

which was unexpected to say the least, but there might be a good reason for it.

-- Yannick Gingras
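Michael's compare_defaults(a, b) suggestion, sketched as a standalone helper living in the diffing library rather than as an __eq__() override (FakeDefault is a stand-in for ColumnDefault here, so the sketch runs without SQLAlchemy; the real helper would take ColumnDefault objects):

```python
def compare_defaults(a, b):
    """Compare two default objects by their .arg.

    Returns True/False when both args are comparable scalars, and None
    when the comparison is undecidable (mismatched classes, callables),
    mirroring the NotImplemented cases in the posted patch."""
    if a.__class__ is not b.__class__:
        return None
    if callable(a.arg) or callable(b.arg):
        return None
    return a.arg == b.arg

class FakeDefault:
    """Stand-in for ColumnDefault: just wraps an .arg."""
    def __init__(self, arg):
        self.arg = arg
```

Keeping it a plain function sidesteps the list/set behavior changes that worried Michael, while the three-valued result lets the differ flag "can't tell" separately from "changed".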
[sqlalchemy] Re: Schema and database migration: how to diff?
[EMAIL PROTECTED] writes:
> see dbcook.misc.metadata.diff.py as an attempt to do this over 2 metadatas. svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/metadata

It works pretty well. How about a small cleanup to make it truly general and a promotion to a package of its own? With the `changeset` parts of sqlalchemy-migrate, we could generate most of the upgrade script from the computed diff. Unless you already do that and I missed that part somehow.

-- Yannick Gingras
[sqlalchemy] Schema and database migration: how to diff?
Greetings alchemists, I'm trying to set up the migration strategy for our project and I'm looking for some kind of schema differ. I took a look at sqlalchemy-migrate: the changeset module provides interesting functionality to add columns, alter tables, and create constraints on an existing database, but the versioning parts seem clumsy and fragile: I have to perform all the schema upgrades by hand and I have to keep track of the upgrades with some kind of crude revision control system. There is that, and the fact that the doc is out of date, which convinced me that I have to find some other solution. All our table definitions are written in Python with the Alchemy ORM facility; nothing is inferred through autoload. I guess we could get a rough idea of the changes by walking the list of tables and comparing the column names in both versions. The diff for a new column would be easy enough to compute, and we could trigger an error if a column is changed so a human could write the appropriate alter statement. I'm sure I'm not the first to look for a solution to schema upgrades. Is there a package out there to compute the differences between two versions of a schema? If not, what gotchas should I know about before I try my hand at such a framework?

-- Yannick Gingras
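The rough first cut described above (walk the tables, compare column names in both versions) fits in a few lines; plain dicts stand in here for the two MetaData objects:

```python
def diff_schemas(old, new):
    """Compare two {table_name: [column_name, ...]} schema snapshots.

    Returns (added, removed): per-table columns present only in the
    new schema, and per-table columns present only in the old one.
    Changed column *types* are invisible to this naive name-only diff."""
    added, removed = {}, {}
    for table in set(old) | set(new):
        old_cols = set(old.get(table, ()))
        new_cols = set(new.get(table, ()))
        if new_cols - old_cols:
            added[table] = sorted(new_cols - old_cols)
        if old_cols - new_cols:
            removed[table] = sorted(old_cols - new_cols)
    return added, removed
```

A real differ over MetaData would also have to compare types, defaults, and constraints, which is exactly where the "trigger an error so a human writes the ALTER" escape hatch comes in.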
[sqlalchemy] Re: Pre-commit hooks
Michael Bayer [EMAIL PROTECTED] writes:
> easy enough to build yourself a generic MapperExtension that scans incoming objects for a _pre_commit() method.

Yeah indeed. I used this:

--
class HookExtension(MapperExtension):
    """Extension to add pre-commit hooks.

    Hooks will be called in mapped classes if they define any of
    these methods:
      * _pre_insert()
      * _pre_delete()
      * _pre_update()
    """
    def before_insert(self, mapper, connection, instance):
        if getattr(instance, '_pre_insert', None):
            instance._pre_insert()
        return EXT_CONTINUE

    def before_delete(self, mapper, connection, instance):
        if getattr(instance, '_pre_delete', None):
            instance._pre_delete()
        return EXT_CONTINUE

    def before_update(self, mapper, connection, instance):
        if getattr(instance, '_pre_update', None):
            instance._pre_update()
        return EXT_CONTINUE
--

It works great. Thanks for the pointers.

-- Yannick Gingras
[sqlalchemy] Re: Pre-commit hooks
[EMAIL PROTECTED] writes:
> speed wise, this is better: hasattr is implemented as getattr + try/except. i would do it even:
>
>     f = getattr(instance, '_pre_insert', None)
>     if f:
>         f()
>
> Thus the func name is spelled only once - avoids stupid mistakes.

Spelling it only once is the killer feature of your approach. I just refactored my implementation. Thanks!

-- Yannick Gingras
[sqlalchemy] Re: Connecting to an MS SQL server ?
TkNeo [EMAIL PROTECTED] writes:
> Hi,

Hello Tarun,

> This is my first encounter with sqlalchemy. I am trying to connect to an MS SQL server 2000 that is not on local host. I want to connect using Integrated Security and not use a specific username and password. Can anyone tell me the format of the connection string ?

I don't know about Integrated Security but we use alchemy to connect to an MSSQL from a GNU/Linux box and it works really well. We use unixODBC with TDS, with the DSN registered with the local ODBC. Take a look at http://www.lucasmanual.com/mywiki/TurboGears#head-4a47fe38beac67d9d03e49c4975cfc3dd165fa31

My odbc.ini looks like:

[JDED]
Driver = TDS
Trace = No
Server = 192.168.33.53
Port = 1433

and my alchemy connection string is:

mssql://user:pass@/?dsn=JDED&scope_identity=1

-- Yannick Gingras
[sqlalchemy] Re: Query filtering like isinstance()
Michael Bayer [EMAIL PROTECTED] writes:
> > If I want to query on containers and on area, I can simply do q = Container.query().filter(...) but, if I receive a query on Item and a base class, say either Item, Container or Area, how can I filter() my query to receive only the sub-items from this base class?
>
> filter on type_.in_(['area', 'container']) is one approach. Easier though is session.query(Container); it'll load from the join of items/containers so you wouldn't get any non-Container objects.

Sounds good. I didn't find how to get the polymorphic_identity of a mapped class. Is it possible to retrieve it if I have only the class object? This is not a big problem since I can use Item.__name__ as the polymorphic_identity.

-- Yannick Gingras
[sqlalchemy] Pre-commit hooks
Greetings Alchemists, is it possible to define a hook in a mapped class that will be called to test the sanity of an instance before it gets committed? As an example:

class Item(object):
    def _pre_commit(self):
        assert (self.dry_weight + self.fluids) < 50

mapper(Item, items_table)

I don't want to put the test in the mutators of dry_weight or fluids since it's OK to have a temporary inconsistent state as long as the state is consistent at commit time. I see that some of this functionality is covered by MapperExtension but since the test is only related to Item I'd rather put the test in it.

-- Yannick Gingras
[sqlalchemy] Re: Query filtering like isinstance()
Michael Bayer [EMAIL PROTECTED] writes:

>> Sounds good. I didn't find how to get the polymorphic_identity of a mapped
>> class. Is it possible to retrieve it if I have only the class object? This
>> is not a big problem since I can use Item.__name__ as the
>> polymorphic_identity.
>
> class_mapper(cls).polymorphic_identity should work

It does. Thanks!

-- Yannick Gingras
[sqlalchemy] Re: Query objects and empty slices
Michael Bayer [EMAIL PROTECTED] writes:

>> yes. but the question was, limit is ignored when offset is zero, which is
>> not the issue.
>
> LIMIT is ignored when it's zero, period (it evaluates to false). the
> decision to be made is, should Query circumvent querying altogether when
> limit is zero. this will be easy to fix in 0.5 since slices will no longer
> be generative, so we'll just return an empty iterator. ticket 1035 in trac.

Sounds good. Keep up the good work.

-- Yannick Gingras
[sqlalchemy] Query objects and empty slices
Greetings Alchemists,

I really like the transparent slicing of Query objects but there seems to be
a bug with empty slices. First of all, fire up your favorite Python shell and
convince yourself that all of those return empty lists:

  range(10)[:0]
  range(10)[0:0]
  range(10)[1:1]

Now, if you were to apply the same slices to a Query object, you would be
shocked to notice that only the last returns an empty list; the first two
return the whole database. I looked in orm.Query.__getitem__ and the handling
of slices seems to be correct. As an example, both q[:0]._limit and
q[1:1]._limit return 0. My guess is that the underlying Select object does
not take limit into account when offset is 0. You will all agree that this is
a bug and that it needs to be fixed. I don't mind digging into the sources to
fix it. Can anyone point me in the right direction?

-- Yannick Gingras
[sqlalchemy] Re: Query objects and empty slices
Michael Bayer [EMAIL PROTECTED] writes:

>> My guess is that the underlying Select object does not take limit into
>> account when offset is 0.
>
> you mean, a limit of zero itself is ignored. an offset of zero doesn't
> affect limit.

When _offset is 0 or None, _limit seems to be ignored. There might be
something else but I see:

  print q[:0]   # no LIMIT clause in the SQL
  print q[0:0]  # no LIMIT clause in the SQL
  print q[1:1]  # LIMIT clause is there

>> You will all agree that this is a bug and that it needs to be fixed. I
>> don't mind digging into the sources to fix it. Can anyone point me in the
>> right direction?
>
> if the limit is zero I'd imagine that no SQL would be issued at all. do you
> agree ?

That's one way to see it and it would make sense. However, issuing the query
anyway validates that, aside from returning nothing, all the selected tables
are there. My preference goes for returning an empty list without emitting
SQL.

-- Yannick Gingras
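The behaviour both sides of this thread converge on can be sketched as plain slice arithmetic: translate the Python slice into (limit, offset), then short-circuit to an empty result when the limit works out to zero. The function names below are illustrative, not SQLAlchemy's:

```python
def slice_to_limit_offset(start, stop):
    """Translate Python slice bounds into (limit, offset),
    the way q[start:stop] would have to."""
    offset = start or 0
    limit = None if stop is None else max(stop - offset, 0)
    return limit, offset

def sliced(rows, start=None, stop=None):
    limit, offset = slice_to_limit_offset(start, stop)
    if limit == 0:
        # Empty slice: return an empty list without touching the database.
        return []
    if limit is None:
        return rows[offset:]
    return rows[offset:offset + limit]
```

With this short-circuit, q[:0], q[0:0] and q[1:1] all behave like their list counterparts instead of silently dropping the LIMIT.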
[sqlalchemy] Re: Aliasing automatic joins with relation.any()
Michael Bayer [EMAIL PROTECTED] writes:

>> Although, I think it may be wise here if SQLA set the correlate value on
>> the expression returned by any() to prevent these errors from occurring at
>> all. Below is a patch that does it. It needs a little bit of tweaking to
>> work with inheritance though so i might add a ticket for this.
>
> OK, you'll have to wait for 0.5 for this feature :). I have it working over
> there along with some other refinements.

Is there a workaround in the meantime? Calling reset_joinpoint() after
filter_by() won't do it:

  Item.query().join('ref_ids', aliased=True).filter_by(ref_id='OP-10')\
      .reset_joinpoint()\
      .filter(not_(Item.ref_ids.any(ref_id='OP-10-47000')))\
      .all()

  <class 'sqlalchemy.exceptions.InvalidRequestError'>: Select statement
  'SELECT 1 FROM items, item_ids AS item_ids_1 WHERE items.id =
  item_ids_1.item_id AND item_ids_1.ref_id = :item_ids_ref_id_1' is
  overcorrelated; returned no 'from' clauses

Could I do it with a raw exists()?

-- Yannick Gingras
[sqlalchemy] Re: Aliasing automatic joins with relation.any()
Michael Bayer [EMAIL PROTECTED] writes:

>> Calling reset_joinpoint() after filter_by() won't do it
>
> that I'm not able to reproduce. If I create a similar situation which
> creates the same error, reset_joinpoint() turns off all the aliasing for
> subsequent filter() calls and then it works. It also works on 0.4.5. Can
> you make sure you're on 0.4.5 and then create a test case for me ?

I was on 0.4.4. It works perfectly fine with 0.4.5 and reset_joinpoint().
Thanks!

-- Yannick Gingras
[sqlalchemy] Aliasing automatic joins with relation.any()
Hi,

I have two classes, Item and ItemId, where one Item can have multiple ItemIds
accessible from its ref_ids relation. I can do:

  Item.query().filter(not_(Item.ref_ids.any(ref_id='OP-10-47000')))

if I want all the items except the ones with an ItemId with ref_id set to
'OP-10-47000', and I can do:

  Item.query().filter(not_(Item.ref_ids.any(ref_id='OP-10-47000')))\
      .join('ref_ids', aliased=True).filter_by(ref_id='OP-10')

and I will get all the Items with an ItemId of 'OP-10' except the ones with
'OP-10-47000'. This is great. However, if I flip the order and I do:

  Item.query().join('ref_ids', aliased=True).filter_by(ref_id='OP-10')\
      .filter(not_(Item.ref_ids.any(ref_id='OP-10-47000')))

I get the following error:

  <class 'sqlalchemy.exceptions.InvalidRequestError'>: Select statement
  'SELECT 1 FROM items, item_ids AS item_ids_1 WHERE items.id =
  item_ids_1.item_id AND item_ids_1.ref_id = :item_ids_ref_id_1' is
  overcorrelated; returned no 'from' clauses

I had the same error with the first query before I aliased it so I assume
that it's an aliasing problem. How can I alias the ref_ids.any() clause?

-- Yannick Gingras
[sqlalchemy] Re: ORM as a view
jason kirtland [EMAIL PROTECTED] writes:

> A couple approaches come to mind:
> - map Obj to a select() with the restrictions baked in

I adapted the example from the manual:

  http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_selects

It works great! The only thing that wasn't explicit from the example was that
the properties of the mapped object have to use the columns of the select
object instead of the table's ones:

  -_next_join = (po_details_table.c.status_code_next
  +_pod_select = select([po_details_table],
  +                     and_(po_details_table.c.line_type == 'J',
  +                          po_details_table.c.line_no > 0)
  +                     ).alias('pod_select')
  +
  +_next_join = (_pod_select.c.status_code_next
                 == order_rules_table.c.status_code)
  -_last_join = (po_details_table.c.status_code_last
  +_last_join = (_pod_select.c.status_code_last
                 == order_rules_table.c.status_code)
  -mapper(PODetail, po_details_table,
  +mapper(PODetail, _pod_select,
         properties=dict(status_rule_next=relation(OrderRule,
                                                   primaryjoin=_next_join),
                         status_rule_last=relation(OrderRule,

It would be nice if the example could illustrate this subtlety. Otherwise, I
love it. Thanks!

-- Yannick Gingras
[sqlalchemy] ORM as a view
Hi,

I use Alchemy to connect to a legacy system from which I pull data to be
inserted into a new system that is being built with Python. I'm only
interested in a tiny fraction of the legacy data and I'm wondering if it's
possible to specify constraints to the mapper so that Obj.query() would only
fetch rows with col_a == 'foo' and col_b == 'bar'. I know how to do that for
a field of the object with relation(..., primaryjoin=...). How would I do
that at the object level?

-- Yannick Gingras
[sqlalchemy] Read-only Sessions
Greetings Alchemists,

I'm using Alchemy for a Pylons application and I just migrated to 0.4. My
application has a read-only mode and even though I do all the relevant checks
all over the place, I know I'm going to forget a critical spot one of these
days, so I used to do:

  def abort_ro():
      log.error(...)
      abort(403)

  if c.site_readonly:
      model.ctx.current.flush = abort_ro

I'd like to do something like this with Alchemy 0.4 but I'm a bit lost. I use
a scoped session like this:

  db_sess = scoped_session(sessionmaker(autoflush=True,
                                        transactional=True,
                                        bind=config['pylons.g'].sa_engine))

So the session object is reused all over the place and overwriting one of its
methods is not a good idea. What would be my best option to implement
read-only mode?

-- Yannick Gingras
[sqlalchemy] Re: Read-only Sessions
Michael Bayer [EMAIL PROTECTED] writes:

> If you were setting ctx.current.flush to something for every new session,
> the approach here would be the same (Session().flush = abort_ro). if you
> were only setting ctx.current.flush to abort_ro() at the module level, then
> your 0.3 approach wasn't working either (since 'current' is a functional
> property) ;).

Oops! Yeah, I was setting it on a per-session basis. There is a global
read-only mode but there is also a per-request read-only mode. I use it to
implement soft-bans: a banned user can read the site but he can't make
changes. What do you recommend for a per-session read-only mode?

-- Yannick Gingras
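Stripped of the Pylons and SQLAlchemy specifics, the per-session guard discussed in this thread is just shadowing one instance's flush() with a function that aborts, leaving every other session untouched. A minimal sketch with a hypothetical stand-in session class (names are illustrative):

```python
class ReadOnlyError(RuntimeError):
    pass

class Session:
    """Stand-in for an ORM session; only flush() matters here."""
    def __init__(self):
        self.flushed = False

    def flush(self):
        self.flushed = True

def make_read_only(session):
    # Shadow the bound method on this one instance only;
    # other sessions keep their normal flush().
    def abort_ro(*args, **kwargs):
        raise ReadOnlyError("this session is read-only")
    session.flush = abort_ro

rw = Session()
ro = Session()
make_read_only(ro)
```

Because the override lives on the instance, a per-request soft-ban amounts to calling make_read_only() on that request's session and nothing else.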