[sqlalchemy] Re: Operational Error raised by except_
Thanks Michael. Just for following readers, I'll point out that the ORDER BY clause causing the OperationalError is the one coming *before* the EXCEPT, so I had to add .order_by(None) to the first query. Now it looks like:

    Session.query(model.Sensor) \
        .order_by(None) \
        .except_(
            Session.query(model.Sensor) \
                .filter(model.Sensor.id_cu==model.ViewOpt.id_cu) \
                .filter(model.Sensor.id_meas==model.ViewOpt.id_meas) \
                .filter(model.ViewOpt.id_view==1)
        )

and it works perfectly. Thanks again! Cheers, neurino

On Jan 12, 5:28 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jan 12, 2011, at 11:20 AM, neurino wrote: Well, as I wrote, the ordering involves everything, also form creation with FormAlchemy (making a select where all sensors are ordered that way, etc.); anyway, I understand your point of view. "quickest is a where sensor id not in (query), as a simple WHERE clause" The problem comes when the Sensor primary key is composite (id_cu + id_meas)... Good ol' Python comes in handy anyway:

    >>> all = Session.query(model.Sensor).all()
    >>> selected = Session.query(model.Sensor).filter(
    ...     model.Sensor.id_cu==model.ViewOpt.id_cu).filter(
    ...     model.Sensor.id_meas==model.ViewOpt.id_meas).filter(
    ...     model.ViewOpt.id_view==1).all()
    >>> diff = [sens for sens in all if sens not in selected]
    >>> len(all), len(selected), len(diff)
    (154, 6, 148)

We're talking of working on a max total of 200/300 sensors. The OR way did not filter anything (maybe I did something wrong). Oh, you know what, I completely forgot the best solution. It *is* documented on query.order_by() though, which is an argument for checking! Pass None to query.order_by(). That disables all order_by's for that query. So go back to your except_() and use except_(q.order_by(None)).
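[Editor's note] For readers who want to see the underlying SQL restriction outside the ORM: in a compound SELECT, ORDER BY may only appear once, after the final SELECT, which is why the ORDER BY emitted before the EXCEPT raised OperationalError. A minimal sketch with the stdlib sqlite3 module (table and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO sensors VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

# ORDER BY before EXCEPT is rejected: in a compound SELECT, ORDER BY
# may only appear after the last SELECT.
err = None
try:
    conn.execute(
        "SELECT id FROM sensors ORDER BY name "
        "EXCEPT SELECT id FROM sensors WHERE id = 2"
    )
except sqlite3.OperationalError as e:
    err = e
print("rejected:", err)

# Moving ORDER BY to the end (what .order_by(None) on the first query
# plus a final .order_by() achieves) works fine.
rows = conn.execute(
    "SELECT id FROM sensors "
    "EXCEPT SELECT id FROM sensors WHERE id = 2 "
    "ORDER BY id"
).fetchall()
print(rows)  # [(1,), (3,)]
```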
Greetings. On Jan 12, 4:04 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jan 12, 2011, at 8:46 AM, neurino wrote: I always need the same order_by throughout the app, and it could be subject to modification and/or integration in the near future, so what better place than the mapper to define it once, instead of every time I do a query? It sounds like the ordering here is for the purposes of view logic, so I'd have view logic that is factored down to receive Query objects that return Sensor rows; the view logic then applies the .order_by() to the Query. I.e. in a web app I use a Paginator object of some kind that does this, given a Query. This is probably a reason I don't like order_by to be within mapper(): it doesn't define persistence, but rather a view. Anyway, do you think there are alternate paths to get "all sensors but those already chosen" which are order_by compatible? Quickest is a WHERE sensor id NOT IN (query), as a simple WHERE clause, or use OR: query sensor WHERE sensor.cu != cu OR sensor.meas != meas OR sensor.view != view. EXCEPT is not as widely used and I think it's not even supported by all backends; even though it is a nice logical set operator, it's got annoying quirks like this one. Thanks for your support. On Jan 12, 2:38 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jan 12, 2011, at 7:28 AM, neurino wrote: I have this model: ``I organize views with many view_options, each one showing a sensor.
A sensor can appear just once per view.``

    sensors = Table('sensors', metadata,
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'),
               primary_key=True, autoincrement=False),
        Column('id_meas', Integer, primary_key=True, autoincrement=False),
        Column('id_elab', Integer, nullable=False),
        Column('name', Unicode(40), nullable=False),
        Column('desc', Unicode(80), nullable=True),
    )

    ctrl_units = Table('ctrl_units', metadata,
        Column('id', Integer, primary_key=True, autoincrement=False),
        Column('name', Unicode(40), nullable=False)
    )

    views = Table('views', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode(40), nullable=False),
        Column('desc', Unicode(80), nullable=True),
    )

    view_opts = Table('view_opts', metadata,
        Column('id', Integer, primary_key=True),
        Column('id_view', Integer, ForeignKey('views.id'), nullable=False),
        Column('id_cu', Integer, ForeignKey('ctrl_units.id'), nullable=False),
        Column('id_meas', Integer, nullable=False),
        Column('ord', Integer, nullable=False),
        ForeignKeyConstraint(('id_cu', 'id_meas'),
                             ('sensors.id_cu', 'sensors.id_meas')),
        # sensor can appear just once per view
        UniqueConstraint('id_view', 'id_cu', 'id_meas'),
    )

Now I let the user add view_options, letting him select the sensor. I'd like to show him only the sensors not already selected in other options of the same parent view, so I tried to use except_ this way:

    q = Session.query(model.Sensor) \
        .except_(
            Session.query(model.Sensor) \
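[Editor's note] On the composite-primary-key concern raised in this thread: SQLAlchemy's tuple_() construct (roughly tuple_(Sensor.id_cu, Sensor.id_meas).notin_(subquery)) can spell a multi-column NOT IN on backends that support row values; whether a given backend accepts it is something to verify. The SQL it would generate can be sketched with the stdlib sqlite3 module (data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensors (id_cu INTEGER, id_meas INTEGER, name TEXT,
                      PRIMARY KEY (id_cu, id_meas));
CREATE TABLE view_opts (id_view INTEGER, id_cu INTEGER, id_meas INTEGER);
INSERT INTO sensors VALUES (1, 1, 's11'), (1, 2, 's12'), (2, 1, 's21');
INSERT INTO view_opts VALUES (1, 1, 2);
""")

# All sensors not already chosen for view 1, via a composite-key NOT IN.
# ORDER BY is allowed here because no compound select is involved.
rows = conn.execute("""
    SELECT name FROM sensors
    WHERE (id_cu, id_meas) NOT IN
          (SELECT id_cu, id_meas FROM view_opts WHERE id_view = 1)
    ORDER BY name
""").fetchall()
print(rows)  # [('s11',), ('s21',)]
```

Row-value comparisons require a reasonably recent SQLite (3.15+); other backends vary.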
Re: [sqlalchemy] SQLAlchemy 0.6.6 Released
[Michael Bayer, 2011-01-09] The 0.6 series is, not surprisingly, our most successful series ever, with SQLA 0.6.5 racking up 36,000 downloads from PyPI in a period of 76 days, approximately 15K per month, plus about 2K a month from SourceForge. + at least 4241 installations via .deb files (note that only a tiny fraction of Debian machines send the popcon¹ data, so the number is probably a lot higher) [¹] http://qa.debian.org/popcon.php?package=sqlalchemy -- Piotr Ożarowski Debian GNU/Linux Developer www.ozarowski.pl www.griffith.cc www.debian.org GPG Fingerprint: 1D2F A898 58DA AF62 1786 2DF7 AEF6 F1A2 A745 7645 -- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
[sqlalchemy] Implementing a specific commit/refresh strategy for one table only
Dear all, I've decided to plug SQLAlchemy into my web framework in order to let SQLAlchemy handle the framework's user sessions. These sessions require a lot of SELECTs and UPDATEs all the time. Therefore, I'd like to toggle expire_on_commit and (possibly) autocommit *for the SESSIONS table only.* The strategy for other tables should not change. Is it possible? Here is how I define my scoped_session for the application:

    orm = scoped_session(sessionmaker(bind=engine))

Here's an example of use:

    now = datetime.datetime.now()
    s = Session.get(key)
    s.atime = now
    orm.commit()

Thanks a lot! Franck
[sqlalchemy] Several many to many table joins with sqlalchemy
Hi, I'm new to SQLAlchemy; here is my problem. My model is:

    user_group_association_table = Table('user_group_association', Base.metadata,
        Column('user_id', Integer, ForeignKey('user.id')),
        Column('group_id', Integer, ForeignKey('group.id'))
    )

    department_group_association_table = Table('department_group_association', Base.metadata,
        Column('department', Integer, ForeignKey('department.id')),
        Column('group_id', Integer, ForeignKey('group.id'))
    )

    class Department(Base):
        __tablename__ = 'department'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    class Group(Base):
        __tablename__ = 'group'
        id = Column(Integer, primary_key=True)
        name = Column(String)
        users = relationship('User', secondary=user_group_association_table,
                             backref='groups')
        departments = relationship('Department',
                                   secondary=department_group_association_table,
                                   backref='groups')

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        firstname = Column(String(50))
        surname = Column(String(50))

So, this code reflects the following relationships:

    | User | --- N:M --- | Group | --- N:M --- | Department |

I tried to work with joins but still haven't succeeded in doing the following: one SQLAlchemy request to get all the User instances while knowing a department name (let's say 'RD'). This should start with:

    session.query(User).join(...

or:

    session.query(User).options(joinedload(...

Anyone could help? Thanks for your time, Pierre
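[Editor's note] A hedged sketch of the usual answer: chain the two relationships, roughly session.query(User).join(User.groups).join(Group.departments).filter(Department.name == 'RD'). The SQL this would emit can be reproduced with the stdlib sqlite3 module (names match the model above; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY, firstname TEXT);
CREATE TABLE "group" (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE user_group_association (user_id INTEGER, group_id INTEGER);
CREATE TABLE department_group_association (department INTEGER, group_id INTEGER);
INSERT INTO user VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO "group" VALUES (1, 'researchers');
INSERT INTO department VALUES (1, 'RD'), (2, 'Sales');
INSERT INTO user_group_association VALUES (1, 1);
INSERT INTO department_group_association VALUES (1, 1);
""")

# Users reachable from department 'RD' via the two association tables.
rows = conn.execute("""
    SELECT DISTINCT user.firstname
    FROM user
    JOIN user_group_association uga ON uga.user_id = user.id
    JOIN department_group_association dga ON dga.group_id = uga.group_id
    JOIN department d ON d.id = dga.department
    WHERE d.name = 'RD'
""").fetchall()
print(rows)  # [('Ann',)]
```

DISTINCT matters here: a user in several groups of the same department would otherwise appear multiple times.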
Re: [sqlalchemy] can't build c extensions for 0.6.6 on CentOS 5.5
assuming this is also your ticket #2023 ? http://www.sqlalchemy.org/trac/ticket/2023

On Jan 12, 2011, at 11:40 AM, Jon Nelson wrote: I'm unable to build the C extensions for SQLAlchemy 0.6.6 on CentOS 5.5:

    gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall
        -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
        --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC
        -I/usr/include/python2.4 -c lib/sqlalchemy/cextension/resultproxy.c
        -o build/temp.linux-x86_64-2.4/lib/sqlalchemy/cextension/resultproxy.o
    lib/sqlalchemy/cextension/resultproxy.c: In function 'BaseRowProxy_getitem':
    lib/sqlalchemy/cextension/resultproxy.c:332: warning: implicit declaration of function 'PyInt_FromSsize_t'
    lib/sqlalchemy/cextension/resultproxy.c:332: warning: passing argument 2 of 'BaseRowProxy_subscript' makes pointer from integer without a cast
    lib/sqlalchemy/cextension/resultproxy.c: At top level:
    lib/sqlalchemy/cextension/resultproxy.c:515: error: 'ssizeargfunc' undeclared here (not in a function)
    lib/sqlalchemy/cextension/resultproxy.c:515: error: expected '}' before 'BaseRowProxy_getitem'

-- Jon
Re: [sqlalchemy] can't build c extensions for 0.6.6 on CentOS 5.5
On Thu, Jan 13, 2011 at 9:12 AM, Michael Bayer mike...@zzzcomputing.com wrote: assuming this is also your ticket #2023 ? http://www.sqlalchemy.org/trac/ticket/2023 That is not my ticket, but surely appears to be related! -- Jon
Re: [sqlalchemy] guard all session.query with try except?
I'd certainly never do that. No matter what type of application is running, it's always best to use a transaction per logical operation, which usually means transactions are short. If the transactions are necessarily long, a database failure in the middle means the operation just fails, but there was no option there, since you needed that long-running transaction. If the app is a console app that stays open for a long time in some kind of loop, use a new Session() for each iteration of the loop, or for each 5-minute operation it's doing, or whatever. So the answer is not to try/except everything; it's to frame the usage of a particular transaction around the logical span of the operations you're performing.

On Jan 13, 2011, at 2:47 AM, can xiang wrote: Hi, sqlalchemy is such a great piece of work, and I'm very happy with it, but I've run into a problem. I use SQLAlchemy 0.6.6 in a non-threaded app. The session is created at module level, once, and autocommit is set to False. The log shows:

    user = user_session.query(User).filter(User.hw_id == args['hw_id']).first()

with the following exception:

    InvalidRequestError: Can't reconnect until invalid transaction is rolled back

The MySQL server restarted at that time. I checked the docs; it says this may be caused by an error during a commit without a rollback for that session. But the above query is a single query, without a transaction and with no data committed. Do I have to wrap all query operations in try/except and roll back manually? That's tedious and not so nice; any advice?
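[Editor's note] The advice above (one short-lived session per loop iteration, one rollback point instead of a try/except around every query) can be sketched as a pattern. A real app would use sessionmaker(bind=engine) where make_session appears below; the FakeSession stand-in is an assumption that only exists to keep the sketch runnable without a database:

```python
class FakeSession:
    """Stand-in for sqlalchemy.orm.Session (illustrative only)."""
    def __init__(self):
        self.committed = False
        self.closed = False
    def commit(self):
        self.committed = True
    def rollback(self):
        self.committed = False
    def close(self):
        self.closed = True

make_session = FakeSession  # real code: sessionmaker(bind=engine)

def process_batch(session):
    # queries and updates for one logical operation go here
    pass

results = []
for _ in range(3):               # e.g. a polling loop in a console app
    session = make_session()
    try:
        process_batch(session)
        session.commit()         # transaction spans just this iteration
    except Exception:
        session.rollback()       # one rollback point for the whole unit
        raise
    finally:
        session.close()
    results.append((session.committed, session.closed))

print(results)  # [(True, True), (True, True), (True, True)]
```

If the database dies mid-iteration, only that iteration fails; the next iteration starts with a fresh session and a fresh transaction.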
Re: [sqlalchemy] Implementing a specific commit/refresh strategy for one table only
You'd implement the expire yourself using SessionExtension.after_commit().

On Jan 13, 2011, at 9:18 AM, Franck wrote: Dear all, I've decided to plug SQLAlchemy into my web framework in order to let SQLAlchemy handle the framework's user sessions. These sessions require a lot of SELECTs and UPDATEs all the time. Therefore, I'd like to toggle expire_on_commit and (possibly) autocommit for the SESSIONS table only. The strategy for other tables should not change. Is it possible? Here is how I define my scoped_session for the application:

    orm = scoped_session(sessionmaker(bind=engine))

Here's an example of use:

    now = datetime.datetime.now()
    s = Session.get(key)
    s.atime = now
    orm.commit()

Thanks a lot! Franck
[sqlalchemy] Re: Mapper.get_property question
On Wednesday, January 12, 2011, Michael Bayer mike...@zzzcomputing.com wrote: The name of the property from the mapper perspective is name. That's the contract of declarative:

    class MyClass(some_declarative_base):
        __tablename__ = 'j'
        x = Column(Integer, key='z')
        y = Column('p', Integer, key='w')

    ==

    t = Table('j', metadata,
        Column('x', Integer, key='z'),
        Column('p', Integer, key='w')
    )
    mapper(MyClass, t, properties={
        'x': t.c.z,
        'y': t.c.w
    })

If you were just using mapper(), then Column.key is what specifies the attribute names in the mapping. To go from name_key, you could say class_mapper(MyClass)._columntoproperty[t.c.name_key].

It's now clear. Thanks a lot. -- Eric Lemoine Camptocamp France SAS Savoie Technolac, BP 352 73377 Le Bourget du Lac, Cedex Tel : 00 33 4 79 44 44 96 Mail : eric.lemo...@camptocamp.com http://www.camptocamp.com
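[Editor's note] The name/key distinction discussed above can be shown in a couple of lines: a Column's .name is the database-side name, while its .key is how the column is addressed on table.c (and, via the mapper, the default attribute name). A small sketch, assuming sqlalchemy is installed:

```python
from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()
t = Table('j', metadata,
    Column('x', Integer, key='z'),   # DB name 'x', Python-side key 'z'
    Column('p', Integer, key='w'),   # DB name 'p', Python-side key 'w'
)

# t.c is keyed by .key, not by the database name.
print(t.c.z.name, t.c.w.name)  # x p
print('z' in t.c)              # True
```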
[sqlalchemy] Best way to reduce boilerplate?
I am finding myself doing a fair amount of copy-and-paste in the data model I'm currently working on, and I'd like to reduce that if possible. For example, I have several different types of objects that have names belonging to namespaces. So every such table gets boilerplate looking like this (using declarative):

    namespace_id = Column(Integer, ForeignKey(Namespace.id), nullable=False)
    namespace = relationship(Namespace, backref='widgets')
    name = Column(Unicode(256), nullable=False)
    __table_args__ = (
        UniqueConstraint(namespace_id, name),
        {})

In the above example, the only thing which changes from class to class is the backref='widgets' part. As another example, I would like to be able to attach notes to various kinds of objects, and since the objects are not related by inheritance, I think the best way is to create a separate notes table for each class of object. So for example a Widget class (mapped to widgets table) would lead to a WidgetNote class (mapped to widget_notes). (The alternative, having a single notes table, requires all notable objects to be related by polymorphic inheritance, which as I said isn't really a great idea in my case.) Ideally, I could just stick a decorator on the Widget class and have the notes handled automagically. I am thinking that metaclasses are the appropriate thing here. I have never delved deeply into metaclasses, though, and I know that SQLAlchemy is already doing a lot of metaclass magic, and I worry that I may screw something up. And I really need several different types of metaclasses that can be combined, so I need some kind of metaclass mix-in, and that all sounds very disaster-prone. Am I barking up the wrong tree here? Can anybody point me to some examples of metaclasses being used for something like this?
Re: [sqlalchemy] Best way to reduce boilerplate?
On Jan 13, 2011, at 12:11 PM, Randall Nortman wrote: I am finding myself doing a fair amount of copy-and-paste in the data model I'm currently working on, and I'd like to reduce that if possible. For example, I have several different types of objects that have names belonging to namespaces. So every such table gets boilerplate looking like this (using declarative): namespace_id = Column(Integer, ForeignKey(Namespace.id), nullable=False) namespace = relationship(Namespace, backref='widgets') name = Column(Unicode(256), nullable=False) __table_args__ = ( UniqueConstraint(namespace_id, name), {}) In the above example, the only thing which changes from class to class is the backref='widgets' part. As another example, I would like to be able to attach notes to various kinds of objects, and since the objects are not related by inheritance, I think the best way is to create a separate notes table for each class of object. So for example a Widget class (mapped to widgets table) would lead to a WidgetNote class (mapped to widget_notes). (The alternative, having a single notes table, requires all notable objects to be related by polymorphic inheritance, which as I said isn't really a great idea in my case.) Ideally, I could just stick a decorator on the Widget class and have the notes handled automagically. I am thinking that metaclasses are the appropriate thing here. I have never delved deeply into metaclasses, though, and I know that SqlAlchemy is already doing a lot of metaclass magic, and I worry that I may screw something up. And I really need several different types of metaclasses that can be combined, so I need some kind of metaclass mix-in, and that all sounds very disaster-prone. Am I barking up the wrong tree here? Can anybody point me to some examples of metaclasses being used for something like this? 
The metaclass approach is a total PITA, and we've abandoned it:

- Composition of metaclasses is an extremely tedious affair, since to compose metaclasses A and B means you have to make a new metaclass C. For an awkward example of that see http://www.sqlalchemy.org/trac/wiki/UsageRecipes/MultiKeyIndexesInMixins
- Constructs like relationship() and Column() need to be distinct for each parent object. The metaclass approach with declarative means an elaborate system of copying must be employed so that each new class gets its own version of these objects. None of this is straightforward.
- Explicit metaclasses necessarily have a lot of nuts and bolts and just aren't very declarative. The recipe at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DeclarativeMixins shows what's involved to make a metaclass that produces a declarative mixin approach.

From the last example, you can see that we've rolled all this functionality into declarative itself, and you now use mixins to eliminate boilerplate. I use them extensively in my current project to specify columns, relationships, and even full __table__ definitions that only vary slightly among a subset of classes, driving off of other class-bound elements to fill in the blanks.

Here's an example I'll probably add to the docs (hence longer than what I usually bother with in emails). You have a stock portfolio application, you have a table of "security" rows, and many other tables that need to store lists of those securities associated with a count of units. Below we create a Security class, a User class that illustrates one owner of securities, then a generic Asset mixin that stores units and relates a Security to something. Below that, UserAsset creates the linkage between User and Security in just three lines of code with no additional configuration.

    from sqlalchemy.orm.collections import attribute_mapped_collection
    from sqlalchemy.ext.declarative import declared_attr

    class Security(Base):
        """Represent a stock, bond, etc."""
        __tablename__ = 'security'
        id = Column(Integer, primary_key=True)
        symbol = Column(String(50), nullable=False, unique=True)

    class User(Base):
        """Represent a user account."""
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        username = Column(String(50), nullable=False, unique=True)
        """Username."""
        assets = relationship(
            "UserAsset",
            collection_class=attribute_mapped_collection("security.symbol")
        )
        """Dictionary of symbol -> UserAsset objects."""

    class Asset(object):
        """Generic association between a Security and an owner."""
        @declared_attr
        def __table__(cls):
            return Table(cls.tablename, cls.metadata,
                Column("%s_id" % cls.parent_table,
                       ForeignKey("%s.id" % cls.parent_table),
                       primary_key=True),
                Column("security_id", ForeignKey("security.id"),
                       primary_key=True),
                Column("units", Integer, default=0)
            )

        @declared_attr
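[Editor's note] A condensed, runnable variant of the mixin idea, applied to the namespace boilerplate from the original question. Class names are invented, and the relationship()/backref piece is omitted for brevity (it would also go through declared_attr, since relationship objects, like ForeignKey columns, can't be shared between classes). This sketch assumes a reasonably recent SQLAlchemy:

```python
from sqlalchemy import (Column, ForeignKey, Integer, Unicode,
                        UniqueConstraint)
try:  # 1.4+ import locations
    from sqlalchemy.orm import declarative_base, declared_attr
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base, declared_attr

Base = declarative_base()

class Namespace(Base):
    __tablename__ = "namespace"
    id = Column(Integer, primary_key=True)

class NamespacedMixin(object):
    """The boilerplate from the question, written once as a mixin."""
    name = Column(Unicode(256), nullable=False)

    @declared_attr
    def namespace_id(cls):
        # ForeignKey columns can't be shared between mapped classes,
        # so the mixin builds a fresh one per subclass.
        return Column(Integer, ForeignKey("namespace.id"), nullable=False)

    @declared_attr
    def __table_args__(cls):
        return (UniqueConstraint("namespace_id", "name"),)

class Widget(NamespacedMixin, Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)

cols = sorted(c.name for c in Widget.__table__.columns)
print(cols)  # ['id', 'name', 'namespace_id']
```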
[sqlalchemy] dirtying attributes of a user-defined type
Hello, I have created an SQLAlchemy type which represents a postgresql aclitem (which represents postgresql access control lists). I am able to load and save newly-created ACLItems from the database; however, modifying the values of an instance of the type does not dirty it for flushing. Is there some decorator for dirtying accessors to the type instance convenience methods? Specifically, modifying any of grantee, grantor, permissions, and grant_option does not trigger a proper update. Cheers, M

    import sqlalchemy.types as types
    import re
    import sqlalchemy.exc

    #include/utils/acl.h
    #define ACL_ALL_RIGHTS_STR arwdDxtXUCTc

    class ACLItem(types.UserDefinedType):
        def __init__(self, grantee=None, permissions=None, grantor=None,
                     grant_option=False):
            # note that sqlalchemy calls this with None arguments for processing
            self.grantee = grantee
            self.permissions = []
            if permissions:
                for p in permissions:
                    self.permissions.append(p)
            self.grantor = grantor
            self.grant_option = grant_option

        def get_col_spec(self):
            return 'aclitem'

        def bind_processor(self, dialect):
            def acl2string(aclitem):
                return aclitem._as_pgsql_string()
            return acl2string

        def compare_values(self, a, b):
            return a._as_pgsql_string() == b._as_pgsql_string()

        def _as_pgsql_string(self):
            # convert to string 'user grantee=perms/grantor'
            string_perms = ''
            for perm in self.permissions:
                string_perms += perm
            if self.grant_option:
                grant_option = '*'
            else:
                grant_option = ''
            return "user %s=%s%s/%s" % (self.grantee, string_perms,
                                        grant_option, self.grantor)

        @classmethod
        def _from_pgsql_string(klass, aclstring):
            """grantee=perms*/grantor"""
            matches = re.match('([^=]+)=([^/\*]+)(\*?)/(\w+)', aclstring)
            if matches is None:
                raise sqlalchemy.exc.DataError(aclstring, [], '')
            grantee = matches.group(1)
            permissions = matches.group(2)
            grant_option = len(matches.group(3))
            grantor = matches.group(4)
            return ACLItem(grantee, permissions, grantor, grant_option)

        def result_processor(self, dialect, column_type):
            def string2acl(aclstring):
                return ACLItem._from_pgsql_string(aclstring)
            return string2acl

        def has_permission(self, permission_test):
            return permission_test in self.permissions

        def set_permission(self, permission, on=True):
            if not self.has_permission(permission):
                if on:
                    self.permissions.append(permission)
                else:
                    self.permissions.remove(permission)

        def clear_permissions(self):
            del self.permissions[:]

        def __str__(self):
            return self._as_pgsql_string()
[sqlalchemy] Insert record error
Hi, I am getting an error trying to insert records into a PostgreSQL database with SQLAlchemy. My table definition:

    class dream4_eta_15km_pm10(Base):
        __tablename__ = 'pm10_dream_rasters'
        gid = Column(Integer, Sequence('pm10_dream_rasters_gid_seq'),
                     primary_key=True)
        rundate = Column(DateTime(timezone=True), nullable=False)
        datetime = Column(DateTime(timezone=True), nullable=False)
        location_raw = Column(VARCHAR, nullable=False)
        location_class = Column(VARCHAR, nullable=False)
        timezone = Column(DateTime(timezone=False), nullable=False)
        the_geom = GeometryColumn(Polygon(2), nullable=False)
        max_pm10 = Column(Float, nullable=False)
        mean_pm10 = Column(Float, nullable=False)

and my PostgreSQL table definition:

    reason=# \d pm25_dream_rasters
               Table "public.pm25_dream_rasters"
        Column      |              Type              | Modifiers
    ----------------+--------------------------------+------------------------------------------------------------------
     gid            | integer                        | not null default nextval('pm25_dream_rasters_gid_seq'::regclass)
     the_geom       | geometry                       |
     rundate        | timestamp with time zone       |
     datetime       | timestamp with time zone       |
     location_raw   | character varying              |
     location_class | character varying              |
     timezone       | timestamp(6) without time zone |
     max_pm25       | double precision               |
     mean_pm25      | double precision               |
    Indexes:
        pm25_dream_rasters_pkey PRIMARY KEY, btree (gid)
        pm25_24hour_select btree (((datetime - rundate) '24:00:00'::interval))
        pm25_dream_raster_timezone_idx btree (timezone)
    Check constraints:
        enforce_dims_the_geom CHECK (ndims(the_geom) = 2)
        enforce_geotype_the_geom CHECK (geometrytype(the_geom) = 'MULTIPOLYGON'::text OR the_geom IS NULL)
        enforce_srid_the_geom CHECK (srid(the_geom) = 4326)

And I get the following error:

    sqlalchemy.exc.ProgrammingError: (ProgrammingError) can't adapt
    'INSERT INTO pm10_dream_rasters (gid, rundate, datetime, location_raw,
    location_class, timezone, the_geom, max_pm10, mean_pm10)
    VALUES (%(gid)s, %(rundate)s, %(datetime)s, %(location_raw)s,
    %(location_class)s, %(timezone)s,
    GeomFromText(%(GeomFromText_1)s, %(GeomFromText_2)s),
    %(max_pm10)s, %(mean_pm10)s)'
    {'rundate': '2011-01-10 00:00:00Z',
     'location_raw': '/geodata/edac_dream/eta_dream4/dream_raw_tiff/20110110/D011011_t00_pm10.tif',
     'GeomFromText_1': 'POLYGON((-120.000 43.833,-96.833 43.833,-96.833 26.000,-120.000 26.000,-120.000 43.833))',
     'location_class': '/geodata/edac_dream/eta_dream4/dream_reclass_tiff/20110110/D011011_t00_pm10.tif',
     'datetime': '2011-01-10 00:00:00Z',
     'mean_pm10': 0.0,
     'gid': 1372073L,
     'timezone': '2011-01-10 00:00:00',
     'GeomFromText_2': 4326,
     'max_pm10': 0.0}

I am a little concerned about the value that is being set for the 'gid' variable (e.g. 1372073L). Does this simply indicate some kind of long integer? Is it a possible source of my error? Thanks, Bill
Re: [sqlalchemy] dirtying attributes of a user-defined type
There are mistakes in how this is structured. UserDefinedType represents a type object applied to a Column. The actual data handled by such a type is not meant to be an instance of the type itself. ACLItem here would be its own class, and UserDefinedType would be the superclass of a class like ACLItemType. ACLItemType() is placed on the Column, and its bind_processor() and result_processor() deal with ACLItem objects. Once you have the roles of type and value set up, you'd want to mix in sqlalchemy.types.MutableType, which alerts the ORM that the value of this type can change in place. Note that MutableType is not recommended for high-volume applications, as it performs terribly due to the need for flush() to scan all mutables in the session for changes every time it's called; 0.7 has a new extension that allows mutable values to send change events in an efficient manner.

On Jan 13, 2011, at 1:33 PM, A.M. wrote: Hello, I have created an SQLAlchemy type which represents a postgresql aclitem (which represents postgresql access control lists). I am able to load and save newly-created ACLItems from the database; however, modifying the values of an instance of the type does not dirty it for flushing. Is there some decorator for dirtying accessors to the type instance convenience methods? Specifically, modifying any of grantee, grantor, permissions, and grant_option does not trigger a proper update.
Cheers, M import sqlalchemy.types as types import re import sqlalchemy.exc #include/utils/acl.h #define ACL_ALL_RIGHTS_STR arwdDxtXUCTc class ACLItem(types.UserDefinedType): def __init__(self,grantee=None,permissions=None,grantor=None,grant_option=False): #note that sqlalchemy calls this with None arguments for processing self.grantee = grantee self.permissions = [] if permissions: for p in permissions: self.permissions.append(p) self.grantor = grantor self.grant_option = grant_option def get_col_spec(self): return 'aclitem' def bind_processor(self,dialect): def acl2string(aclitem): return aclitem._as_pgsql_string() return acl2string def compare_values(self,a,b): return a._as_pgsql_string() == b._as_pgsql_string() def _as_pgsql_string(self): #convert to string 'user grantee=perms/grantor' string_perms = '' for perm in self.permissions: string_perms += perm if self.grant_option: grant_option = '*' else: grant_option = '' return user %s=%s%s/%s % (self.grantee,string_perms,grant_option,self.grantor) @classmethod def _from_pgsql_string(klass,aclstring): grantee=perms*/grantor matches = re.match('([^=]+)=([^/\*]+)(\*?)/(\w+)',aclstring) if matches is None: raise sqlalchemy.exc.DataError(aclstring,[],'') grantee = matches.group(1) permissions = matches.group(2) grant_option = len(matches.group(3)) grantor = matches.group(4) return ACLItem(grantee,permissions,grantor,grant_option) def result_processor(self,dialect,column_type): def string2acl(aclstring): return ACLItem._from_pgsql_string(aclstring) return string2acl def has_permission(self,permission_test): return permission_test in self.permissions def set_permission(self,permission,on=True): if not self.has_permission(permission): if on: self.permissions.append(permission) else: self.permissions.remove(permission) def clear_permissions(self): del self.permissions[:] def __str__(self): return self._as_pgsql_string()
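[Editor's note] To make the suggested restructuring concrete, here is a hedged sketch that separates the ACLItem value class from an ACLItemType (the UserDefinedType subclass placed on the Column). The parsing logic is adapted from the original post; names and details are illustrative, and the MutableType mix-in discussed above is omitted for brevity:

```python
import re
from sqlalchemy.types import UserDefinedType

class ACLItem:
    """The value: a plain class, no longer a subclass of the type."""
    def __init__(self, grantee=None, permissions=None, grantor=None,
                 grant_option=False):
        self.grantee = grantee
        self.permissions = list(permissions or [])
        self.grantor = grantor
        self.grant_option = bool(grant_option)

    def _as_pgsql_string(self):
        # 'user grantee=perms[*]/grantor', as in the original post
        star = "*" if self.grant_option else ""
        return "user %s=%s%s/%s" % (
            self.grantee, "".join(self.permissions), star, self.grantor)

    @classmethod
    def _from_pgsql_string(cls, aclstring):
        m = re.match(r"([^=]+)=([^/*]+)(\*?)/(\w+)", aclstring)
        return cls(m.group(1), m.group(2), m.group(4), bool(m.group(3)))

class ACLItemType(UserDefinedType):
    """The type: placed on the Column, converts to/from ACLItem values."""
    def get_col_spec(self):
        return "aclitem"

    def bind_processor(self, dialect):
        return lambda value: value._as_pgsql_string()

    def result_processor(self, dialect, coltype):
        return ACLItem._from_pgsql_string

# Round-trip sanity check of the value class on its own:
acl = ACLItem("bob", "rw", "alice", grant_option=True)
s = acl._as_pgsql_string()
print(s)  # user bob=rw*/alice

parsed = ACLItem._from_pgsql_string("bob=rw*/alice")
print(parsed.grantee, parsed.grant_option)  # bob True
```

The Column would then be declared as Column('acl', ACLItemType()), and change tracking handled separately (MutableType here, or the mutation-event extension in 0.7 and later).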
[sqlalchemy] Re: Best way to reduce boilerplate?
On Jan 13, 1:00 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> [...] Mixins are extensively documented at http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#mixin-classes

So they are! I really should read the "What's New in 0.x" document before switching to 0.x. What a nice addition.

Looks like this addresses most of my problems. (My notes problem will need some additional work, but I think I can probably manage to create a function that takes an existing mapped class as an argument and returns a class definition for a Notes class/table that references it, without needing to modify the original class the way a metaclass does.)
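For readers landing on this thread, a minimal sketch of the mixin feature being referenced; the class and column names are invented for illustration, and this uses the modern declarative import path rather than the 0.6-era extension module:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class IdMixin(object):
    # columns declared on a mixin are copied onto each mapped subclass
    id = Column(Integer, primary_key=True)

class Sensor(IdMixin, Base):
    __tablename__ = 'sensors'
    name = Column(String(50))

print(sorted(Sensor.__table__.c.keys()))  # -> ['id', 'name']
```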
Re: [sqlalchemy] Insert record error
The gid is fine here; it's the presence of GeoAlchemy objects sent as bind parameters, where they're not able to be rendered as function calls inline with the SQL. You should check with the GeoAlchemy list (http://groups.google.com/group/geoalchemy) with a full example of your insert statement; they can show you the correct way to structure it.

On Jan 13, 2011, at 2:58 PM, wilbur wrote:

Hi, I am getting an error trying to insert records into a PostgreSQL database with SQLAlchemy. My table definition:

class dream4_eta_15km_pm10(Base):
    __tablename__ = 'pm10_dream_rasters'
    gid = Column(Integer, Sequence('pm10_dream_rasters_gid_seq'), primary_key=True)
    rundate = Column(DateTime(timezone=True), nullable=False)
    datetime = Column(DateTime(timezone=True), nullable=False)
    location_raw = Column(VARCHAR, nullable=False)
    location_class = Column(VARCHAR, nullable=False)
    timezone = Column(DateTime(timezone=False), nullable=False)
    the_geom = GeometryColumn(Polygon(2), nullable=False)
    max_pm10 = Column(Float, nullable=False)
    mean_pm10 = Column(Float, nullable=False)

and my PostgreSQL table definition:

reason=# \d pm25_dream_rasters
Table "public.pm25_dream_rasters"

     Column     |              Type              | Modifiers
----------------+--------------------------------+------------------------------------------------------------------
 gid            | integer                        | not null default nextval('pm25_dream_rasters_gid_seq'::regclass)
 the_geom       | geometry                       |
 rundate        | timestamp with time zone       |
 datetime       | timestamp with time zone       |
 location_raw   | character varying              |
 location_class | character varying              |
 timezone       | timestamp(6) without time zone |
 max_pm25       | double precision               |
 mean_pm25      | double precision               |
Indexes:
    "pm25_dream_rasters_pkey" PRIMARY KEY, btree (gid)
    "pm25_24hour_select" btree (((datetime - rundate) '24:00:00'::interval))
    "pm25_dream_raster_timezone_idx" btree (timezone)
Check constraints:
    "enforce_dims_the_geom" CHECK (ndims(the_geom) = 2)
    "enforce_geotype_the_geom" CHECK (geometrytype(the_geom) = 'MULTIPOLYGON'::text OR the_geom IS NULL)
    "enforce_srid_the_geom" CHECK (srid(the_geom) = 4326)

And I get the following error:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) can't adapt
'INSERT INTO pm10_dream_rasters (gid, rundate, datetime, location_raw, location_class, timezone, the_geom, max_pm10, mean_pm10) VALUES (%(gid)s, %(rundate)s, %(datetime)s, %(location_raw)s, %(location_class)s, %(timezone)s, GeomFromText(%(GeomFromText_1)s, %(GeomFromText_2)s), %(max_pm10)s, %(mean_pm10)s)'
{'rundate': '2011-01-10 00:00:00Z', 'location_raw': '/geodata/edac_dream/eta_dream4/dream_raw_tiff/20110110/D011011_t00_pm10.tif', 'GeomFromText_1': 'POLYGON((-120.000 43.833,-96.833 43.833,-96.833 26.000,-120.000 26.000,-120.000 43.833))', 'location_class': '/geodata/edac_dream/eta_dream4/dream_reclass_tiff/20110110/D011011_t00_pm10.tif', 'datetime': '2011-01-10 00:00:00Z', 'mean_pm10': 0.0, 'gid': 1372073L, 'timezone': '2011-01-10 00:00:00', 'GeomFromText_2': 4326, 'max_pm10': 0.0}

I am a little concerned about the value that is being set for the 'gid' variable (e.g. 1372073L). Does this simply indicate some kind of long integer? Is it a possible source of my error?

Thanks,
Bill
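To illustrate the point in isolation: when an insert value is a SQLAlchemy function expression rather than an opaque Python object, it is rendered inline in the SQL, and only its arguments become bind parameters the DBAPI has to adapt. A rough sketch, with the geometry column stubbed as a String since this doesn't use GeoAlchemy:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, func

metadata = MetaData()
rasters = Table(
    'pm10_dream_rasters', metadata,
    Column('gid', Integer, primary_key=True),
    Column('the_geom', String),  # stand-in for the real geometry type
)

stmt = rasters.insert().values(
    gid=1,
    the_geom=func.GeomFromText('MULTIPOLYGON(((0 0,1 0,1 1,0 0)))', 4326),
)

# GeomFromText(...) appears inline in the VALUES clause; only its
# arguments are sent as bind parameters.
print(stmt)
```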
Re: [sqlalchemy] Insert record error
On Thursday, January 13, 2011, wilbur bhudsp...@edac.unm.edu wrote:

> Hi, I am getting an error trying to insert records into a PostgreSQL database with SQLAlchemy. My table definition:
> [...]
> the_geom = GeometryColumn(Polygon(2), nullable=False)
> [...]
> Check constraints:
>     "enforce_geotype_the_geom" CHECK (geometrytype(the_geom) = 'MULTIPOLYGON'::text OR the_geom IS NULL)
> [...]
> I am a little concerned about the value that is being set for the 'gid' variable (e.g. 1372073L). Does this simply indicate some kind of long integer? Is it a possible source of my error?

You're sending a Polygon while you have a geometrytype = MULTIPOLYGON constraint. Could it be your problem?

--
Eric Lemoine
Camptocamp France SAS
Savoie Technolac, BP 352
73377 Le Bourget du Lac, Cedex
Tel : 00 33 4 79 44 44 96
Mail : eric.lemo...@camptocamp.com
http://www.camptocamp.com
[sqlalchemy] Re: Insert record error
Thanks!

On Jan 13, 1:32 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> The gid is fine here, its the presence of GeoAlchemy objects sent as bind parameters where they're not able to be rendered as function calls inline with the SQL. You should check with the GeoAlchemy list (http://groups.google.com/group/geoalchemy) with a full example of your insert statement, they can show you the correct way to structure it.
> [...]
Re: [sqlalchemy] Insert record error
Thanks for responding. I tried changing the geometry object definition, as well as the model definition, and I get the same result...

dream_geom = 'MULTIPOLYGON((-120.000 43.833,-96.833 43.833,-96.833 26.000,-120.000 26.000,-120.000 43.833))'

class dream4_eta_15km_pm10(Base):
    __tablename__ = 'pm10_dream_rasters'
    gid = Column(Integer, Sequence('pm10_dream_rasters_gid_seq'), primary_key=True)
    rundate = Column(DateTime(timezone=True), nullable=False)
    datetime = Column(DateTime(timezone=True), nullable=False)
    location_raw = Column(VARCHAR, nullable=False)
    location_class = Column(VARCHAR, nullable=False)
    timezone = Column(DateTime(timezone=False), nullable=False)
    the_geom = GeometryColumn(MultiPolygon(2), nullable=False)
    max_pm10 = Column(Float, nullable=False)
    mean_pm10 = Column(Float, nullable=False)

Bill
[sqlalchemy] sqlalchemy rocks my socks off!
To Michael Bayer: sqlalchemy simplifies my life every day and makes me vastly more productive! Many thanks.
Re: [sqlalchemy] Unnecessary SQL emitted after a commit to get a PK?
On Wednesday, January 12, 2011 2:16:00 PM UTC-5, Michael Bayer wrote:

> Suppose a concurrent thread or process deleted your row in a new transaction and committed it, or didn't even commit yet and hence locked the row, in between the time you said commit() and later attempted to access the attributes of the row. That's the rationale in a nutshell.

Thanks, this makes sense. For my purposes (where business logic ensures no post-commit shenanigans), on this one I can just snag the id after a pre-commit flush() and that will be fine. For my issues with object detachment I'll post another topic.
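The "snag the id after a pre-commit flush()" approach can be sketched as follows, using an in-memory SQLite database and an invented model:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
item = Item(name='widget')
session.add(item)
session.flush()      # INSERT is emitted; the generated primary key is populated
item_id = item.id    # safe to read before commit()
session.commit()
print(item_id)
```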
[sqlalchemy] Can expire_on_commit be made to apply at an object level (instead of everything in the session)?
On Wednesday, January 12, 2011 2:16:00 PM UTC-5, Michael Bayer wrote:

> see expire_on_commit=False as well as Session.commit() for further detail: http://www.sqlalchemy.org/docs/orm/session.html#committing http://www.sqlalchemy.org/docs/orm/session.html#sqlalchemy.orm.session.Session.commit To reassociate detached objects with a session, use Session.add() or Session.merge(). Detachment is described at http://www.sqlalchemy.org/docs/orm/session.html#quickie-intro-to-object-states .

I think I'm in a catch-22 situation I'm not sure how to get out of (short of copying my object to an unmapped one, which seems wasteful)...

I have N objects in a scoped session and I want to commit the changes on them all. After this is done, I want to keep just one of them around in a cache as a simple detached object that won't ever result in emitted SQL. The business rules are such that for this object there is no way the object will differ from what is in the DB (and even if the DB was changed behind the scenes, I want the values at time of commit).

The issue seems to be that on commit(), everything is expired, and the comment in the docs that says objects are "still functional in the detached state *if the user has ensured that their state has not been expired before detachment*" indicates that I can't have a cleanly detached object after a commit because of this expiration. At least not without setting expire_on_commit=False... but that applies to all objects in the session, and I only want one object to be severed from the database linkage.

Is there any clean way to set expire_on_commit behaviour at a per-object level? It seems to be all-or-nothing at the moment. I dug into the code that seems to do the expiry (session._remove_snapshot through to InstanceState.expire_attributes) and nothing is leaping out at me. Can I force an un-expire after the commit without legitimately reflecting the persistent state?
Russ
Re: [sqlalchemy] Can expire_on_commit be made to apply at an object level (instead of everything in the session)?
On Jan 13, 2011, at 10:18 PM, Russ wrote:

> [...] I have N objects in a scoped session and I want to commit the changes on them all. After this is done, I want to keep just one of them around in a cache as a simple detached object that won't ever result in emitted SQL. [...] At least not without setting expire_on_commit = False... but that applies to all objects in the session. I only want one object to be severed from the database linkage.

So you're looking to do a write-through cache here, i.e. write the DB, then write the value straight to the cache. I think the effect you're looking for would be produced if you were to: session.flush(), which emits the SQL; then detach all the objects using session.expunge(), so they're no longer affected by subsequent session operations; then session.commit(), which commits the transaction.
> Is there any clean way to set expire_on_commit behaviour at a per-object level? It seems to be all or nothing at the moment.

Bizarrely, this question had never been asked before, until about five hours ago. That happens quite often, for some reason. For that user I suggested using SessionExtension.after_commit() to re-implement expiration on subsets of objects only, though you'd need some plumbing to indicate which objects get the expiration - perhaps you'd set some attribute on them. I think the flush()-expunge()-commit() idea is easier.

> I dug into the code that seems to do the expiry (session._remove_snapshot through to InstanceState.expire_attributes) and nothing is leaping out at me. Can I force an un-expire after the commit without legitimately reflecting the persistent state?

Well, un-expire is just hitting the object's attributes so it loads back again, assuming you're looking to not have the extra SELECT. If you were to SELECT all the rows at once in a new query(), that also would unexpire all the existing objects it hit within the one result set, if that's useful information.
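The flush()/expunge()/commit() sequence described above can be sketched like this (in-memory SQLite, invented model): because the object is detached before the commit-time expiration sweep, its attributes remain loaded afterwards and no further SQL is emitted for it.

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
item = Item(name='widget')
session.add(item)

session.flush()        # SQL emitted; item.id populated
session.expunge(item)  # detach: subsequent session operations leave it alone
session.commit()       # commit-time expiration no longer touches item

# The detached object still carries its loaded attribute values.
print(inspect(item).detached, item.id, item.name)
```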
Re: [sqlalchemy] Can expire_on_commit be made to apply at an object level (instead of everything in the session)?
On Thursday, January 13, 2011 10:29:10 PM UTC-5, Michael Bayer wrote:

> So you're looking to do a write-through cache here, i.e. write the DB, then write the value straight to the cache. I think if you were to say session.flush(), which emits the SQL, then detach all the objects using session.expunge(), they're no longer affected by subsequent session operations, then session.commit(), commits the transaction, that would produce the effect you're looking for.

Perfect... flush()-expunge()-commit() does exactly what I want. Now to learn about expunge cascades to tune it up!

> Bizarrely, this question had never been asked before, until about five hours ago. [...] For that user I suggested using SessionExtension.after_commit() to re-implement expiration on subsets of objects only [...] I think the flush()-expunge()-commit() idea is easier.

Ack - sorry about missing that topic. I'm glad I did, as the expunge route is easier, as you say. It is great to keep after_commit() in mind, though.

> Well, un-expire is just hitting the object's attributes so it loads back again, assuming you're looking to not have the extra SELECT. If you were to SELECT all the rows at once in a new query() that also would unexpire all the existing objects it hit within the one result set, if that's useful information.

You are correct: I'm limiting my SELECTs, as I've got a few spots where tables are heavily trafficked and every SELECT hurts. I've used joinedload a lot to cut down on selects as well. I used to specify lazy loads in the relationship definition, but joined loads at the query level are more explicit and cut down on excess data loading for times when it really isn't needed. A bit off topic, but there you go.
Given how fast and great your responses are, I also have to send some kudos your way. Not only is SQLAlchemy a great piece of software and extremely useful, but your attentiveness to this group and obvious passion for the project are truly amazing. I have no clue where you find the time for it, but I assure you it is greatly appreciated!

Russ
[sqlalchemy] Re: guard all session.query with try except?
Thanks for your advice. My application is a tornadoweb app, so I'm going to create a Session() for each request.

On Jan 13, 11:23 pm, Michael Bayer mike...@zzzcomputing.com wrote:

> I'd certainly never do that - no matter what type of application is running, it's always best to use transaction-per-logical-operation, which means, usually, transactions are short. If the transactions are necessarily long, a database failure in the middle means the operation just fails, but there was no option there since you needed that long-running transaction. If the app is a console app that stays open for a long time in some kind of loop, use a new Session() for each iteration of the loop, or for each 5-minute operation it's doing, or whatever. So the answer is not to try/except everything; it's to frame the usage of a particular transaction around the logical span of the operations you're performing.

On Jan 13, 2011, at 2:47 AM, can xiang wrote:

> Hi, sqlalchemy is such a piece of great work. I'm very happy with it, but I have indeed hit a problem. I use sqlalchemy 0.6.6 in a non-threaded app. The session is created at module level, once; autocommit is set to False. The log shows:
>
>     user = user_session.query(User).filter(User.hw_id == args['hw_id']).first()
>
> with the following exception:
>
>     InvalidRequestError: Can't reconnect until invalid transaction is rolled back
>
> The mysql server restarted at that time. I checked the docs; they say this may be caused by an error during a commit without a rollback for that session. But the above query is a single query, without a transaction and with no data committed. Do I have to wrap all QUERY operations in try/except and roll back manually? It's so tedious and not so nice; any advice?
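The session-per-logical-operation pattern Michael describes can be sketched roughly like this, as it might look per request or per loop iteration (in-memory SQLite; the model and work items are invented):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Reading(Base):
    __tablename__ = 'readings'
    id = Column(Integer, primary_key=True)
    value = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def handle_request(value):
    # one Session per request/iteration: a short transaction, and a clean
    # failure mode - rollback and close, no lingering invalid transaction
    session = Session()
    try:
        session.add(Reading(value=value))
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

for v in ['a', 'b', 'c']:
    handle_request(v)

check = Session()
print(check.query(Reading).count())  # -> 3
check.close()
```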