Re: [sqlalchemy] Re: Strange ObjectDeletedError
I never did understand the pros or cons of running autocommit=True, aside from this flushing issue. Are there performance implications?

On Mon, Feb 7, 2011 at 1:22 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:

I actually just did a little bit of reverse course on this in 0.7. I've moved all the accounting into the try/except block so that the flush completes fully before the autocommit expires everything. This is a change to a pattern that's been the same way since 0.4, so I'm hoping nothing bad happens when we put 0.7 out into betas.

On Feb 7, 2011, at 2:52 PM, Romy Maxwell wrote:

Are there any downsides to setting expire_on_commit=False when using autocommit=True? In other words, should I expect to see stale data or other side effects?

On Mon, Feb 7, 2011 at 8:30 AM, Michael Bayer <mike...@zzzcomputing.com> wrote:

On Feb 7, 2011, at 7:42 AM, Romy Maxwell wrote:

Hey Michael, I didn't want to revive a really old thread [http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg16598.html], so I figured maybe you won't mind the email. I don't understand why, in the thread, using autocommit=True makes the object "unreachable", or what that actually means. I'm assuming it was referring to the latter part of the code:

    o = s.query(T).get(2)
    o.id = 10
    o.description = "Changed"
    s.flush()

With autocommit=True, I've always thought flushes created their own transactions, like so:

    s.begin()
    s.flush()
    s.commit()

But if that were true, then the commit happens after the flush. How would the commit expire anything and/or make anything unreachable for the flush?

[Michael Bayer:] The commit is expiring because that's what it does when expire_on_commit is True. So when the flush goes into its post-commit accounting, it in fact has to reload all the objects, one at a time, which is hugely inefficient; I'm going to add a big warning for that in 0.7 (ticket 2041).

-- You received this message because you are subscribed to the Google Groups sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
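The expire_on_commit behavior discussed in this thread can be seen in a short sketch. The model T and its columns below are hypothetical stand-ins for the quoted snippet, using an in-memory SQLite database:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class T(Base):
    __tablename__ = 't'
    id = Column(Integer, primary_key=True)
    description = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# expire_on_commit=False: commit() leaves loaded attributes in place,
# so no re-SELECT is issued the next time they are accessed
Session = sessionmaker(bind=engine, expire_on_commit=False)
s = Session()
s.add(T(id=2, description='original'))
s.commit()

o = s.query(T).filter_by(id=2).one()
o.description = 'Changed'
s.commit()
# accessing o.description here emits no new SELECT
print(o.description)
```

The trade-off, as the thread notes, is that with expiration disabled you can see stale data if another transaction changes the row behind your back.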
[sqlalchemy] Association object
Dear all, I have to create an association object where the many-to-many relation has to be created with only one table:

    atominfo_table = Table('atom_info', metadata,
        Column('id', types.Integer, primary_key=True),
        Column('number', types.Integer, nullable=False),
        Column('coord_x', types.Unicode(15), nullable=False),
        Column('coord_y', types.Unicode(15), nullable=False),
        Column('coord_z', types.Unicode(15), nullable=False),
        Column('residue_id', types.Integer, ForeignKey('residue_info.id'), primary_key=True),
        Column('periodic_id', types.Integer, ForeignKey('periodic_glos.id'), primary_key=True),
    )

    atom_atom_table = Table('atom_atom', metadata,
        Column('atom1_id', types.Integer, ForeignKey('atom_info.id'), primary_key=True),
        Column('atom2_id', types.Integer, ForeignKey('atom_info.id'), primary_key=True),
        Column('interaction_type', types.Unicode(50), nullable=False),
        Column('distance', types.Unicode(10), nullable=False),
    )

Is it possible? If yes, how can I create the mapper? The following attempt gives me the error: "Could not determine join condition between parent/child tables on relationship AtomInfo.atom1. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well."

    mapper(AtomInfo, atominfo_table, properties={
        'residue': relationship(ResidueInfo, backref='atominfo'),
        'periodic': relationship(Periodic, backref='atominfo'),
        'atom1': relationship(AtomAtom)
    })
    mapper(AtomAtom, atom_atom_table, properties={
        'atom2': relationship(AtomInfo)
    })

Thanks

-- ENRICO MORELLI | University of Florence, CERM - via Sacconi, 6 - 50019 Sesto Fiorentino (FI) - ITALY | email: more...@cerm.unifi.it | phone: +39 055 4574269 | fax: +39 055 4574253
(Coltivate Linux che tanto Windows si pianta da solo. "Cultivate Linux: Windows crashes on its own.")
Re: [sqlalchemy] Association object
On Wed, 9 Feb 2011 10:34:22 +0100 Enrico Morelli <more...@cerm.unifi.it> wrote:

    mapper(AtomInfo, atominfo_table, properties={
        'residue': relationship(ResidueInfo, backref='atominfo'),
        'periodic': relationship(Periodic, backref='atominfo'),
        'atom1': relationship(AtomAtom)
    })
    mapper(AtomAtom, atom_atom_table, properties={
        'atom2': relationship(AtomInfo)
    })

Using the following mapper instead, I don't get the error:

    mapper(AtomInfo, atominfo_table, properties={
        'residue': relationship(ResidueInfo, backref='atominfo'),
        'periodic': relationship(Periodic, backref='atominfo'),
        'atom1': relationship(AtomAtom,
            primaryjoin=atominfo_table.c.id == atom_atom_table.c.atom1_id)
    })
    mapper(AtomAtom, atom_atom_table, properties={
        'atom2': relationship(AtomInfo,
            primaryjoin=atominfo_table.c.id == atom_atom_table.c.atom2_id)
    })

Is that the correct solution? Thanks again.
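For reference, here is a self-contained sketch of the same two-foreign-keys-to-one-table situation, using the declarative style and the foreign_keys argument (which, like primaryjoin, tells each relationship which join path to use). The Atom/AtomAtom class names and simplified columns are stand-ins for the tables in the question:

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, sessionmaker
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Atom(Base):
    __tablename__ = 'atom_info'
    id = Column(Integer, primary_key=True)
    number = Column(Integer, nullable=False)

class AtomAtom(Base):
    __tablename__ = 'atom_atom'
    atom1_id = Column(Integer, ForeignKey('atom_info.id'), primary_key=True)
    atom2_id = Column(Integer, ForeignKey('atom_info.id'), primary_key=True)
    interaction_type = Column(String(50))

    # two foreign keys point at atom_info, so each relationship must say
    # which one it uses; foreign_keys plays the same role as primaryjoin here
    atom1 = relationship(Atom, foreign_keys=[atom1_id])
    atom2 = relationship(Atom, foreign_keys=[atom2_id])

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

a1, a2 = Atom(id=1, number=6), Atom(id=2, number=8)
link = AtomAtom(atom1=a1, atom2=a2, interaction_type='covalent')
session.add(link)  # a1/a2 are cascaded in via save-update
session.commit()
```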
[sqlalchemy] how would you do that with SQLAlchemy ..?
Hello, I'm implementing an authentication system in my webapp, with the usual users / roles. I have a table human, a table role and a table human_role. I have one special user, anonymous, which has no password, no email, etc., and two special roles: "anonymous user" and "authenticated user".

If a user is authenticated, it automatically has the role "authenticated user" among its roles. On the other hand, if the user is just visiting the website without authenticating, it should be an anonymous user, and this user should automatically get "anonymous user" among its roles. Those two roles and the anonymous user should always exist.

To avoid unnecessary SQL queries, and to simplify the implementation, I would like to avoid storing those roles and the anonymous user in the database. I wondered if there is a way to treat those two roles the same way as the other ones, except in all the orm/save/update/cascade/... operations?

For example, I have a user foo, which has the roles reviewer and reader. I would like to do something like:

    foo_user = User.query.filter_by(login='foo').one()

and then have a property like foo_user.roles which should return ['authenticated', 'reviewer', 'reader'], so 'authenticated' should be automatically added (but, again, ignored in all the orm/save/update/cascade/... operations).

I hope it's clear :-) Thanks, Julien

-- No trees were killed in the creation of this message. However, many electrons were terribly inconvenienced.
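One common shape for this (a minimal sketch, not tied to any particular ORM mapping; the class and attribute names are hypothetical) is to keep only the real roles in human_role and expose the automatic one through a read-only property, so it never participates in save/update/cascade operations:

```python
class User:
    """Sketch: _stored_roles mirrors what lives in the human_role
    table; 'authenticated' is computed on access, never persisted."""

    def __init__(self, login, stored_roles):
        self.login = login
        self._stored_roles = list(stored_roles)

    @property
    def roles(self):
        # read-only view: the virtual role is prepended on access, so the
        # ORM never sees it in the mapped collection and never saves it
        return ['authenticated'] + self._stored_roles

foo_user = User('foo', ['reviewer', 'reader'])
print(foo_user.roles)  # ['authenticated', 'reviewer', 'reader']
```

With SQLAlchemy specifically, _stored_roles would typically be a relationship, and the property stays a plain Python @property; as long as the virtual role is never appended to the mapped collection, the unit of work ignores it.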
Re: [sqlalchemy] Re: Strange ObjectDeletedError
sure, you commit too often, and if you're expiring too, then you're re-selecting all the time.

On Feb 9, 2011, at 3:54 AM, Romy Maxwell wrote:

I never did understand the pros or cons of running autocommit=True, aside from this flushing issue. Are there performance implications? [...]
Re: [sqlalchemy] Association object
On Feb 9, 2011, at 4:34 AM, Enrico Morelli wrote:

Dear all, I have to create an association object where the many-to-many relation has to be created with only one table: [...]

Ideally you'd be using ForeignKeyConstraint to specify a composite foreign key. Otherwise, it looks like there are multiple foreign keys between the tables, and that's why it doesn't know how to join. http://www.sqlalchemy.org/docs/core/schema.html#defining-foreign-keys
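A minimal sketch of the composite foreign key Michael describes, using hypothetical parent/child tables rather than the atom schema:

```python
from sqlalchemy import (Table, Column, Integer, MetaData,
                        ForeignKeyConstraint)

metadata = MetaData()

# composite primary key on the parent...
parent = Table('parent', metadata,
    Column('id', Integer, primary_key=True),
    Column('version', Integer, primary_key=True),
)

# ...referenced as a unit by a single ForeignKeyConstraint, instead of
# two independent ForeignKey() objects that SQLAlchemy can't correlate
child = Table('child', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer),
    Column('parent_version', Integer),
    ForeignKeyConstraint(['parent_id', 'parent_version'],
                         ['parent.id', 'parent.version']),
)
```

The constraint groups both columns into one named relationship between the tables, which is what relationship() needs in order to infer an unambiguous join condition.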
Re: [sqlalchemy] Re: dynamic classes and tables
On Feb 9, 2011, at 1:52 AM, farcat wrote:

Thank you, that works. Is there any way to later add or remove attributes, using the declarative system?

[Michael Bayer:] Add, yes — just plug them on: http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#defining-attributes. Removal, not so much, since the mapper has internal state corresponding to each attribute. Usually object-oriented application models don't change constantly as the application runs.

Cheers, Lars

On Feb 8, 10:46 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:

On Feb 8, 2011, at 3:52 PM, farcat wrote:

Hi everyone, I am new to sqlalchemy and figuring out whether it is right for my project. What I am looking for is the ability to change classes and tables on the fly, with as much freedom as possible, potentially having metaclasses figure out the difference between versions and update the database accordingly. Additionally I would like to import databases and automatically generate class definitions. Some code I tried:

    from sqlalchemy import *
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relation, sessionmaker

    Base = declarative_base()

    def Init(self, title=None, year=None):
        self.title = title
        self.year = year

    def Repr(self):
        return "Movie(%r, %r, %r)" % (self.title, self.year, self.director)

    Movie = type("Movie", (Base,), {
        '__tablename__': "movies",
        "id": Column(Integer, primary_key=True),
        "title": Column(String(255), nullable=False),
        "year": Column(Integer),
        "directed_by": Column(Integer, ForeignKey('directors.id')),
        "director": relation(Director, backref='movies', lazy=False)})

    setattr(Movie, "__init__", classmethod(Init))
    setattr(Movie, "__repr__", classmethod(Repr))

    class Director(Base):
        __tablename__ = 'directors'
        id = Column(Integer, primary_key=True)
        name = Column(String(50), nullable=False, unique=True)

        def __init__(self, name=None):
            self.name = name

        def __repr__(self):
            return "Director(%r)" % (self.name)

    engine = create_engine('sqlite:///meta.db', echo=True)
    Base.metadata.create_all(engine)

    if __name__ == "__main__":
        Session = sessionmaker(bind=engine)
        session = Session()
        m1 = Movie("Star Trek", 2009)
        m1.director = Director("JJ Abrams")
        d2 = Director("George Lucas")
        d2.movies = [Movie("Star Wars", 1977), Movie("THX 1138", 1971)]
        try:
            session.add(m1)
            session.add(d2)
            session.commit()
        except:
            session.rollback()
        alldata = session.query(Movie).all()
        for somedata in alldata:
            print somedata

with this error (engine log lines, e.g. PRAGMA table_info(directors) and PRAGMA table_info(movies), were interleaved with the traceback in the original output):

    Traceback (most recent call last):
      File "D:\Documents\Code\NetBeans\test\alchemy\src\alchemy.py", line 49, in <module>
        m1.director = Director("JJ Abrams")
      File "C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py", line 158, in __set__
        self.impl.set(instance_state(instance),
    AttributeError: 'Movie' object has no attribute '_sa_instance_state'

Can anyone shed some light or explain the error message?

[Michael Bayer:] your __init__ monkeypatch is interfering with SQLA's wrapping of this method. Try

    Movie = type("Movie", (Base,), {
        '__tablename__': "movies",
        "__init__": Init,
        "id": Column(Integer, primary_key=True),
        "title": Column(String(255), nullable=False),
        "year": Column(Integer),
        "directed_by": Column(Integer, ForeignKey('directors.id')),
        "director": relation(Director, backref='movies', lazy=False)})

instead.
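Putting the two answers together, here is a runnable sketch of building a declarative class with type() — with __init__ supplied in the class dict, per the fix above, rather than monkeypatched on afterwards — and then plugging an extra attribute on after class creation. The in-memory sqlite:// URL and the simplified columns are illustrative:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def movie_init(self, title=None, year=None):
    self.title = title
    self.year = year

# __init__ goes into the class dict so the declarative metaclass can
# wrap it, instead of being setattr()-ed onto the finished class
Movie = type('Movie', (Base,), {
    '__tablename__': 'movies',
    '__init__': movie_init,
    'id': Column(Integer, primary_key=True),
    'title': Column(String(255), nullable=False),
})

# attributes can be added later; the declarative metaclass maps them
Movie.year = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Movie('Star Trek', 2009))
session.commit()

m = session.query(Movie).filter_by(title='Star Trek').one()
print(m.year)  # 2009
```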
[sqlalchemy] Insert time/date into sqlite
Hello, I have the following table in sqlite:

    date   DATE
    field1 VARCHAR(100)
    field2 VARCHAR(100)

I'd like to insert the current date/time into this table:

    result = DBSQLITE.execute(INS,
        date=strftime("%Y-%m-%d %H:%M:%S", gmtime()),
        field1=string1,
        field2=string2)

But it doesn't work; I get an error:

    TypeError: SQLite Date type only accepts Python date objects as input.

How should I convert the date/time to be able to insert it into this table?

Regards, Przemek
[sqlalchemy] AttributeError: 'NoneType' object has no attribute 'groups'
Hi, I'm very new to SQLAlchemy and want help debugging my code. I'm writing a web application using Pylons, FormAlchemy and SQLAlchemy. The issue is as follows: I have 2 tables, namely source and pstn, declared like this:

    class Source(Base):
        __tablename__ = 'source'
        id = Column(Integer, primary_key=True)
        sh_code = Column(Integer, nullable=False)
        pstn_no = Column(Integer)
        details = Column(Text(), nullable=False)
        active = Column(Boolean, default=True)

    class PSTN(Base):
        __tablename__ = 'pstn'
        id = Column(Integer, primary_key=True)
        prev_no = Column(Integer)
        latest_no = Column(Integer, nullable=False)
        last_con = Column(Date())
        expiry_code = Column(Text())
        active = Column(Boolean, default=True)

Now I've joined the two tables to create a view, namely History, like this:

    pstn_table = PSTN.__table__
    source_table = Source.__table__
    j = join(pstn_table, source_table,
             pstn_table.c.latest_no == source_table.c.pstn_no)

    class History(Base):
        __table__ = j

In my admin controller, I've set up the code so that whenever a new entry is added to 'source', the same 'pstn_no' is inserted into the 'pstn' table's 'latest_no' column. This piece is working fine, and whenever I access class History from the grid, it loads fine. The underlying SQL query for the join is:

    SELECT pstn.id AS pstn_id, source.id AS source_id,
           pstn.prev_no AS pstn_prev_no, pstn.latest_no AS pstn_latest_no,
           pstn.last_con AS pstn_last_con, pstn.expiry_code AS pstn_expiry_code,
           pstn.active AS pstn_active, source.active AS source_active,
           source.sh_code AS source_sh_code, source.pstn_no AS source_pstn_no,
           source.details AS source_details
    FROM pstn JOIN source ON pstn.latest_no = source.pstn_no
    WHERE pstn.active = ?
    LIMIT 3 OFFSET 0

    10:05:13,089 INFO [sqlalchemy.engine.base.Engine.0x...f490] (True,)

Now, the problem: if I manually add entries to the 'pstn' and 'source' tables where source.pstn_no = pstn.latest_no and then access the History grid, the underlying SQL remains the same, but on the webpage I get this error:

    AttributeError: 'NoneType' object has no attribute 'groups'

with a traceback through (condensed from the interactive traceback page to the executing line of each frame):

    Module testapp.lib.admincontroller:292 in index
        page = self.get_page()
    Module testapp.controllers.admin:45 in get_page
        return Page(query, options)
    Module webhelpers.paginate:434 in __init__
        self.items = list(self.collection[self.first_item-1:self.last_item])
    Module webhelpers.paginate:263 in __getitem__
        return self.obj[range]
    Module sqlalchemy.orm.query:1528 in __getitem__
        return list(res)
    Module sqlalchemy.orm.query:1797 in instances
        rows = [process[0](row, None) for row in fetch]
    Module sqlalchemy.orm.mapper:2275 in _instance
        populate_state(state, dict_, row, isnew, only_load_props)
    Module sqlalchemy.orm.mapper:2153 in populate_state
        populator(state, dict_, row)
    Module sqlalchemy.orm.strategies:129 in new_execute
        dict_[key] = row[col]
    Module sqlalchemy.engine.base:2023 in
Re: [sqlalchemy] Insert time/date into sqlite
SQLite doesn't have a DATE type specifically. SQLAlchemy's Date() type expects a Python date, i.e. import datetime; date = datetime.date(year, month, day). In this case, since you want to store a date + time, that would be SQLAlchemy's DateTime() and you'd use datetime.datetime(); or, if you want to deal with strings yourself, use String() for the column type. It's all the same to SQLite (SQLite doesn't actually have fixed column types and is unique in this regard).

On Feb 9, 2011, at 7:36 AM, anonymous wrote:

Hello, I have the following table in sqlite ... How should I convert the date/time to be able to insert it into this table? [...]
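Concretely, the failing insert from the question can pass a datetime object instead of a strftime()-formatted string. The table and column names follow the question; the in-memory sqlite:// URL and the fixed timestamp are for illustration:

```python
import datetime
from sqlalchemy import (Table, Column, DateTime, String, MetaData,
                        create_engine)

metadata = MetaData()
log_table = Table('log', metadata,
    Column('date', DateTime),        # DateTime, since we store date + time
    Column('field1', String(100)),
    Column('field2', String(100)),
)

engine = create_engine('sqlite://')
metadata.create_all(engine)

# pass a real datetime object; no strftime() formatting needed
with engine.connect() as conn:
    conn.execute(log_table.insert(),
                 {'date': datetime.datetime(2011, 2, 9, 12, 36, 0),
                  'field1': 'string1', 'field2': 'string2'})
    row = conn.execute(log_table.select()).fetchone()
```

On the way back out, SQLAlchemy's SQLite dialect converts the stored value into a datetime object again, so row[0] round-trips as a datetime rather than a string.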
Re: [sqlalchemy] AttributeError: 'NoneType' object has no attribute 'groups'
It seems like your issue is that a query is executing and you are expecting a certain result, but you are not getting it. There's not a lot of relevant detail here for someone who doesn't have your application open in front of them (test data, the exact code that produces the query, an illustration of the expected result row that is not present, with the extraneous stack traces regarding pagination logic and secondary side effects like "'NoneType' has no attribute 'groups'" removed). So on your end, turn on echo='debug', ensure the expected rows are returned from the statement as it executes, and keep in mind that an ORM query against a single entity, i.e. session.query(Source).filter(...), will only return a Source object if a primary key for Source is fully present in the row.

On Feb 9, 2011, at 5:21 AM, dalia wrote:

Hi, I'm very new to SQLAlchemy and want help debugging my code. [...]
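The echo='debug' setting mentioned above is passed to create_engine(); it logs each statement and also the rows each result set returns, which makes it easy to confirm whether the expected row ever comes back. A minimal example (in-memory SQLite for illustration):

```python
from sqlalchemy import create_engine, text

# echo='debug' logs SQL statements *and* the rows in each result set
engine = create_engine('sqlite://', echo='debug')

with engine.connect() as conn:
    value = conn.execute(text('SELECT 1')).scalar()
print(value)  # 1
```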
[sqlalchemy] Creating table with oracle
Hi, I want to create the following table with the Oracle DB:

    Table('Error', metadata,
        Column('Type', String),
        Column('reference', String),
        Column('context', String),
        Column('Timestamp', DateTime, primary_key=True),
    )

I receive this:

    sqlalchemy.exc.Error: (DatabaseError) ORA-00910: specified length too long for its datatype
    CREATE TABLE Error (
        Type VARCHAR(None CHAR),
        reference VARCHAR(None CHAR),
        context VARCHAR(None CHAR),
        Timestamp DATE
    )

I tried it with Postgres and it works. Is there any configuration option that makes Oracle accept this table without changing the given data types? Thanks
[sqlalchemy] Passing sessions around
I'm having issues with the sqlalchemy session and completely realize this is from my own lack of understanding, but I can't seem to find a solution. My plan was to create a library with functions like so:

    def func1(id, session=None):
        result = do_some_work()
        session.add(something)
        return result

    def func2(id, session=None):
        result = do_some_work()
        session.add(something)
        return result

then from someplace else use the functions like so:

    session = scoped_session()
    result = func1(1, session)
    result2 = func2(2, session)
    session.commit()
    session.close()

This doesn't work though. Do I need to return the session for something like this to work? Any advice on the correct pattern to use when trying to create a transaction across several functions?

Thanks, Brent
Re: [sqlalchemy] Creating table with oracle
String() needs a length with Oracle, MySQL, and several other backends.

On Feb 9, 2011, at 10:39 AM, Eduardo wrote:

Hi, I want to create the following table with the Oracle DB: [...]
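To see the difference without an Oracle server, the DDL can be compiled against the Oracle dialect: with a length, String renders as a sized VARCHAR instead of the VARCHAR(None CHAR) in the error. The lowercased table and column names here are illustrative, chosen to sidestep Oracle's identifier-casing rules:

```python
from sqlalchemy import Table, Column, String, DateTime, MetaData
from sqlalchemy.dialects.oracle.base import OracleDialect
from sqlalchemy.schema import CreateTable

metadata = MetaData()
error_table = Table('error_log', metadata,
    Column('type', String(100)),        # String(100), not bare String
    Column('reference', String(100)),
    Column('context', String(100)),
    Column('timestamp', DateTime, primary_key=True),
)

# compile the CREATE TABLE against the Oracle dialect; no DB needed
ddl = str(CreateTable(error_table).compile(dialect=OracleDialect()))
print(ddl)
```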
Re: [sqlalchemy] Passing sessions around
On Feb 9, 2011, at 12:01 PM, Brent McConnell wrote:

I'm having issues with the SQLAlchemy session and completely realize this is from my own lack of understanding, but I can't seem to find a solution. My plan was to create a library with functions like so:

    def func1(id, session=None):
        result = do_some_work()
        session.add(something)
        return result

    def func2(id, session=None):
        result = do_some_work()
        session.add(something)
        return result

then from someplace else use the functions like so:

    session = scoped_session()
    result = func1(1, session)
    result2 = func2(2, session)
    session.commit()
    session.close()

This doesn't work, though. Do I need to return the session for something like this to work? Any advice on the correct pattern to use when trying to create a transaction across several functions?

Looks fine to me. "doesn't work" doesn't carry a lot of detail. I'd note that the purpose of scoped_session() is so that the session is safe to use as a global variable, though there's no reason you can't pass it explicitly to functions as well.
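[For reference, the pattern under discussion does work when the functions share one session; a self-contained sketch against an in-memory SQLite database, where the model and function names are invented for illustration:]

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, scoped_session

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

def func1(name, session=None):
    # fall back to the thread-local registry if no session is passed in
    session = session or Session()
    session.add(Thing(name=name))

def func2(name, session=None):
    session = session or Session()
    session.add(Thing(name=name))

session = Session()
func1('a', session)
func2('b', session)
session.commit()            # one transaction spanning both functions
count = session.query(Thing).count()
session.close()
```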
Re: [sqlalchemy] Passing sessions around
On Feb 9, 2011, at 1:59 PM, Brent McConnell wrote:

Thanks for the quick reply. I've tried to include more specific info from my program for you to look over, as well as the log.

    def approve_request(self, request_ids, session=None):
        if session is None:
            session = scopedSession()
        try:
            for request_id in request_ids:
                request = self.get_request(request_id)
                if request is None:
                    raise RequestStoreError("ID does not exist")
                # set request to True
                request.status = True
                session.add(request)
                store_logger.debug("approve_reqeuest before: session=%s, session dirty=%s"
                                   % (id(session), session.dirty))
                # add the corresponding url as an exception to standard rules
                filter_store.find_or_create_exception_url(request.owned_by_id,
                                                          request.request_url,
                                                          session=session)
                store_logger.debug("approve_request after: session=%s, session dirty=%s"
                                   % (id(session), session.dirty))
            session.commit()
        except Exception as e:
            print e
            return False
        finally:
            session.close()
        return True

    def find_or_create_exception_url(self, fg, url, displayName=None, session=None):
        if session is None:
            session = scopedSession()
        result = self.get_exception_url(fg, url)
        store_logger.debug("result=%s" % result)
        if result is None:
            store_logger.debug("result not Found")
            banned = self.get_banned_url(fg, url)
            if banned is not None:
                session.delete(banned)
                session.flush()
            result = ExceptionUrl(url, fg, displayName)
            session.add(result)
            store_logger.debug("adding ExceptionUrl=%s", result)
        store_logger.debug("find_or_create_exception_url: session=%s, session_dirty=%s"
                           % (id(session), session.dirty))
        return result

Notice that I have some debug statements in each function. When the approve_request function is called with an array of ids, the logging results are below. The session is always the same, which is what is expected, but a couple of unexpected things happen: 1) While in the find_or_create_exception_url function, the IdentitySet of session.dirty is empty even though those records ACTUALLY DO get created in the database.
2) Upon returning to approve_request, the IdentitySet is empty, which means the status of the request is never set to True.

    2011-02-09 13:36:42,817 - surf.store - DEBUG - approve_reqeuest before: session=4335326160, session dirty=IdentitySet([OpenRequest('test19.com', 'http://test19.com', '330eee30-381c-48f7-837e-34be7d2cb1c0', 'True', 'None', 'None', 'None')])
    2011-02-09 13:36:42,985 - surf.store - DEBUG - result=None
    2011-02-09 13:36:42,985 - surf.store - DEBUG - result not Found
    2011-02-09 13:36:43,116 - surf.store - DEBUG - adding ExceptionUrl=PageUrl('http://test19.com','67868150974853021936294845903799103936','None')
    2011-02-09 13:36:43,117 - surf.store - DEBUG - find_or_create_exception_url: session=4335326160, session_dirty=IdentitySet([])
    2011-02-09 13:36:43,117 - surf.store - DEBUG - approve_request after: session=4335326160, session dirty=IdentitySet([])
    monitor running
    2011-02-09 13:36:43,365 - surf.store - DEBUG - approve_reqeuest before: session=4335326160, session dirty=IdentitySet([OpenRequest('test18.com', 'http://test18.com', '330eee30-381c-48f7-837e-34be7d2cb1c0', 'True', 'None', 'None', 'None')])
    ['186', '185'] approved
    2011-02-09 13:36:43,540 - surf.store - DEBUG - result=None
    2011-02-09 13:36:43,540 - surf.store - DEBUG - result not Found
    2011-02-09 13:36:43,665 - surf.store - DEBUG - adding ExceptionUrl=PageUrl('http://test18.com','67868150974853021936294845903799103936','None')
    2011-02-09 13:36:43,666 - surf.store - DEBUG - find_or_create_exception_url: session=4335326160, session_dirty=IdentitySet([])
    2011-02-09 13:36:43,666 - surf.store - DEBUG - approve_request after: session=4335326160, session dirty=IdentitySet([])

Any guidance on this?

You'll have to turn on SQL logging. The request.request_url call could very well be autoflushing, which is why you'd see nothing in session.dirty subsequent to that. The code above doesn't illustrate any other failures to me, since I don't know what it's supposed to do.
You'd need to narrow down what you think is the erroneous behavior on the part of the Session into an application-agnostic test script that calls two sample functions. In the likely case that an isolated test works fine, you'd work backwards to see what's different between the isolated test and your code. pdb is always helpful here.
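[A sketch of the autoflush effect being pointed at here: with the default autoflush=True, any query emitted while changes are pending flushes them first, so session.dirty reads empty afterwards even though the UPDATE was in fact sent. The model below is invented for illustration:]

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Request(Base):
    __tablename__ = 'request'
    id = Column(Integer, primary_key=True)
    status = Column(String(20))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()   # autoflush=True is the default

session.add(Request(id=1, status='open'))
session.commit()

req = session.get(Request, 1)
req.status = 'approved'
dirty_before = bool(session.dirty)   # the pending change is visible here

# any query autoflushes, so the pending UPDATE is emitted at this point
session.query(Request).count()
dirty_after = bool(session.dirty)    # nothing pending any more
```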
Re: [sqlalchemy] Logging raw SQL statements
On Feb 9, 2011, at 2:58 PM, George V. Reilly wrote:

Under SQLAlchemy 0.5, there used to be some logging setting that would show the actual SQL queries that were being made. Alas, I forget the exact invocation. Under SA 0.6, I have not been able to find a way to do this, short of hacking the SA source. Here's the closest that I've been able to come:

    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('sqlalchemy').setLevel(logging.DEBUG)
    logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
    logging.getLogger('sqlalchemy.pool').setLevel(logging.DEBUG)
    logging.getLogger('sqlalchemy.dialects.mysql').setLevel(logging.DEBUG)
    logging.getLogger('sqlalchemy.orm').setLevel(logging.DEBUG)
    logging.getLogger('mysqldb').setLevel(logging.DEBUG)

but I see no SELECTs in the output. Obviously, this is useful when we want the server to explain query plans and the like, to understand performance issues. I specifically care about MySQL and SQLite queries.

The above statements are correct. 'sqlalchemy.engine' specifically targets the SQL statements emitted. There is also echo=True on create_engine(), which is a shortcut for the sqlalchemy.engine statement above. A description of logging as applied to SQLAlchemy is at http://www.sqlalchemy.org/docs/core/engines.html#configuring-logging . Note that when using logging, the context of the application may suppress standard output. The test harness nose does so unless the -s flag is used, and a daemonizing server like paster serve also shuts off standard output. You may need to route the logging to a file instead in a case like the latter.
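[A minimal working configuration along the lines described: the single 'sqlalchemy.engine' logger at INFO level is enough to see each statement (DEBUG additionally shows result rows), and echo=True on create_engine() is the per-engine shorthand for the same thing. SQLite in-memory is used here just so the sketch is self-contained.]

```python
import logging
from sqlalchemy import create_engine, text

logging.basicConfig()  # a handler must exist or nothing is shown
# INFO emits each SQL statement; DEBUG would also emit result rows
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

engine = create_engine('sqlite://')  # echo=True would be the shorthand
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()
```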
Re: [sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality
OK, that ticket is complete if you feel like trying the default tip or the 0.6 branch:

0.6: http://hg.sqlalchemy.org/sqlalchemy/archive/rel_0_6.tar.gz
tip: http://hg.sqlalchemy.org/sqlalchemy/archive/default.tar.gz

On Feb 9, 2011, at 12:52 AM, Michael Bayer wrote:

Yeah, that's because the compiler extension, wrapped directly around _BindParamClause, is not called on an INSERT when the bind is generated within the compile. That is why I made ticket #2042. Not going to work until I get around to that. It would work if you generated the bindparam() yourself, as occurs with literal(), i.e.

    table.insert().values(date=literal(some_date, type_=UTCNowColumn()))

but that's not the general INSERT form that gets used with the ORM and such.

On Feb 8, 2011, at 11:25 PM, chris e wrote:

I understand that's the case for defaults, but to test the code I actually set a value for one of the columns that is of type UTCTimestamp, and the compiler extension was never called. I'm stumped. That said, I can keep moving; I've decided to just use Oracle timestamps without timezones and always convert to UTC, since cx_Oracle doesn't handle timezones properly.

On Feb 8, 7:18 pm, Michael Bayer mike...@zzzcomputing.com wrote:

The TypeDecorator is not used for defaults. You set default=utc_timestamp() on your Column. The SQL expression is rendered directly in the INSERT when no value is given; no bind params are used.

On Feb 8, 2011, at 10:13 PM, chris e wrote:

I now have the code below, but the _oracle_utc_timestamp function is never called, even when I do explicitly set a value.
    class UTCTimestamp(TypeDecorator):
        impl = TIMESTAMP

        # add the UTC time zone info to naive timestamps
        def process_result_value(self, value, dialect):
            if value is not None:
                value = UTC.localize(value)
            return value

        def process_bind_param(self, value, dialect):
            # if we have a value, convert it to UTC if needed
            if value is not None:
                if value.tzinfo:
                    value = value.astimezone(UTC)
            return value

    class utc_timestamp(FunctionElement):
        type = DateTime()

    @compiles(utc_timestamp)
    def _oracle_utc_timestamp(element, compiler, **kw):
        import pdb
        pdb.set_trace()

On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:

On Feb 8, 2011, at 4:56 PM, chris e wrote:

To simplify date handling in a project on which I am working, I am storing UTC dates in the database in a timestamp-with-timezone field. However, because cx_Oracle does not have any timezone functionality, I need to cast the UTC timestamp I'm inserting into the database as a timestamp in UTC so that the database does not convert it to the DB timezone. This also needs to apply to default values. I have the following; however, it is not called for default values:

The @compiles for _BindParamClause was never expected and is not covered within a visit_insert() right now. Ticket #2042 is added. However, you're better off using SQL-level UTC functions for defaults in any case, which would be the workaround here. Here's one I use for PG + SQL Server.
You can add another for Oracle that includes your CAST expression if needed:

    class utcnow(expression.FunctionElement):
        type = DateTime()

    @compiles(utcnow, 'postgresql')
    def _pg_utcnow(element, compiler, **kw):
        return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

    @compiles(utcnow, 'mssql')
    def _ms_utcnow(element, compiler, **kw):
        return "GETUTCDATE()"

[chris e's code referenced above:]

    from pytz import UTC

    class UTCDateTime(TypeDecorator):
        impl = TIMESTAMP

        # add the UTC time zone info to naive timestamps
        def process_result_value(self, value, dialect):
            if value is not None:
                value = UTC.localize(value)
            return value

    @compiles(_BindParamClause)
    def _compile_utc_date(element, compiler, **kw):
        if isinstance(element.type, UTCDateTime):
            return "from_tz(cast(%s as timestamp), 'UTC')" \
                % compiler.visit_bindparam(element, **kw)
        return compiler.visit_bindparam(element, **kw)
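[Following the suggestion above, an Oracle branch for the same utcnow construct might look like this. SYS_EXTRACT_UTC(SYSTIMESTAMP) is my assumption for an Oracle-side UTC expression, and the generic fallback rendering is likewise illustrative rather than from the original thread.]

```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import expression
from sqlalchemy.types import DateTime
from sqlalchemy.dialects import oracle

class utcnow(expression.FunctionElement):
    type = DateTime()
    inherit_cache = True  # avoids a cache warning on SQLAlchemy 1.4+

@compiles(utcnow)
def _default_utcnow(element, compiler, **kw):
    # generic fallback; most backends understand CURRENT_TIMESTAMP
    return "CURRENT_TIMESTAMP"

@compiles(utcnow, 'oracle')
def _oracle_utcnow(element, compiler, **kw):
    # convert the server timestamp to UTC on the Oracle side (assumption)
    return "SYS_EXTRACT_UTC(SYSTIMESTAMP)"

# compiling against the Oracle dialect picks the Oracle-specific branch
rendered = str(utcnow().compile(dialect=oracle.dialect()))
```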
Re: [sqlalchemy] Logging raw SQL statements
The combination of

    create_engine(..., echo=True)

and

    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('sqlalchemy').setLevel(logging.DEBUG)

does the trick for me now. Thanks!
/George
Re: [sqlalchemy] Logging raw SQL statements
In theory that would log all statements twice; only one of the two should be needed.

On Feb 9, 2011, at 5:33 PM, George V. Reilly wrote:

The combination of create_engine(..., echo=True) and logging.basicConfig(level=logging.DEBUG) logging.getLogger('sqlalchemy').setLevel(logging.DEBUG) does the trick for me now. Thanks! /George