[sqlalchemy] Setting up indexes in __table_args__ that depend on columns created in __declare_first__
I'm trying to create a mixin that will set up FK columns that are dynamically named based on the name of the parent, as opposed to a static name like `parent_id`. If I was going to do the latter, I could easily use `declared_attr`, but since I want the former, I thought I could use `__declare_first__()`. It works, except that I also need to set up an index on the FK column. When trying to do that with `__table_args__()`, I get an exception because `__table_args__()` gets called before `__declare_first__()`.

    class FlawMixin:
        @sa.orm.declared_attr
        def __tablename__(cls):
            return f'{cls.__flaw_ident__}_flaws'

        @sa.orm.declared_attr
        def __table_args__(cls):
            return (
                sa.Index(
                    f'ix_{cls.__flaw_ident__}_flaws_{cls.__flaw_ident__}',
                    f'{cls.__flaw_ident__}_id',
                ),
            )

        @classmethod
        def __declare_first__(cls):
            setattr(cls, f'{cls.__flaw_ident__}_id', sa.Column(
                sa.Integer,
                sa.ForeignKey(cls.__flaw_parent__.id, ondelete='cascade'),
                nullable=False,
            ))
            setattr(cls, cls.__flaw_ident__,
                    sa.orm.relationship(cls.__flaw_parent__, lazy='raise_on_sql'))

I realize I have an event ordering issue with the way this is set up. Just not sure what the correct way is to solve it. Thanks in advance for any help you can provide.

-- SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper http://www.sqlalchemy.org/ To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description. --- You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/45921724-ba6a-40e5-8ae9-cad92169ddfbn%40googlegroups.com.
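One way to sidestep the ordering problem entirely (this is an editorial sketch, not something from the thread): keep the *Python attribute* name static and give only the *database column* its dynamic name, via `Column`'s first positional argument. Then both the column and the index can come from `declared_attr`, which all run in the same declarative phase, and `__declare_first__` is no longer needed for the column. The `parent_id` attribute name and the `Widget` example class are my own illustration; `__flaw_ident__`/`__flaw_parent__` follow the post's naming.

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = sa.Column(sa.Integer, primary_key=True)

class FlawMixin:
    # Subclasses provide __flaw_ident__ and __flaw_parent__.

    @declared_attr
    def __tablename__(cls):
        return f'{cls.__flaw_ident__}_flaws'

    @declared_attr
    def parent_id(cls):
        # Python attribute name stays static ("parent_id"); the
        # database-side column name is dynamic via the first
        # positional argument to Column.
        return sa.Column(
            f'{cls.__flaw_ident__}_id',
            sa.Integer,
            sa.ForeignKey(f'{cls.__flaw_parent__.__tablename__}.id',
                          ondelete='cascade'),
            nullable=False,
        )

    @declared_attr
    def __table_args__(cls):
        # The Index can name the FK column as a string; declarative
        # resolves it when the Table is assembled.
        return (
            sa.Index(
                f'ix_{cls.__flaw_ident__}_flaws_{cls.__flaw_ident__}',
                f'{cls.__flaw_ident__}_id',
            ),
        )

class WidgetFlaw(FlawMixin, Base):
    __flaw_ident__ = 'widget'
    __flaw_parent__ = Widget
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
```

The dynamically named relationship from the original mixin could be added the same way with another `declared_attr`; only the attribute name would be static.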
[sqlalchemy] Can I get joined loading on a relationship to work from child to parent?
Is there any way to get relationship joined loading to work from child back up to the parent?

    from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    engine = create_engine('sqlite:///:memory:')
    Base = declarative_base()

    class GrandParent(Base):
        __tablename__ = 'grands'
        id = Column(Integer, primary_key=True)
        name = Column(String())

    class Parent(Base):
        __tablename__ = 'parents'
        gp_id = Column(ForeignKey(GrandParent.id, ondelete='cascade'), nullable=False)
        gp = relationship(GrandParent, lazy='joined', innerjoin=True)
        id = Column(Integer, primary_key=True)
        name = Column(String())

    class Child(Base):
        __tablename__ = 'childs'
        parent_id = Column(ForeignKey(Parent.id, ondelete='cascade'), nullable=False)
        parent = relationship(Parent, lazy='joined', innerjoin=True)
        id = Column(Integer, primary_key=True)
        name = Column(String())

    Base.metadata.create_all(engine)
    engine.echo = True

    Session = sessionmaker(bind=engine)
    session = Session()

    gp = GrandParent(name='foo')
    p = Parent(name='bar', gp=gp)
    c = Child(name='baz', parent=p)
    session.add(c)
    session.commit()

    # How many SQL statements? Why not 1 that would load Child, its parent, and its grandparent?
    assert c.parent.gp.id

As the last comment says, is there any way to get that last statement to issue one join with all the inner joins to load the parent objects?
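A note on what is likely happening here (my reading, not an answer from the thread): `commit()` expires all instances by default, so the final `assert` triggers a refresh cascade rather than a single eager load. The relationship-level `lazy='joined'` configuration *does* chain upward: actually querying for `Child` emits one SELECT with inner joins to both `Parent` and `GrandParent`. A sketch using the models from the post, with a statement counter (my addition) to show the count:

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, event
from sqlalchemy.orm import relationship, sessionmaker, declarative_base

engine = create_engine('sqlite://')
Base = declarative_base()

class GrandParent(Base):
    __tablename__ = 'grands'
    id = Column(Integer, primary_key=True)
    name = Column(String())

class Parent(Base):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)
    gp_id = Column(ForeignKey(GrandParent.id, ondelete='cascade'), nullable=False)
    gp = relationship(GrandParent, lazy='joined', innerjoin=True)
    name = Column(String())

class Child(Base):
    __tablename__ = 'childs'
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey(Parent.id, ondelete='cascade'), nullable=False)
    parent = relationship(Parent, lazy='joined', innerjoin=True)
    name = Column(String())

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Child(name='baz', parent=Parent(name='bar', gp=GrandParent(name='foo'))))
session.commit()
session.close()

# record every statement actually sent to the cursor
statements = []

@event.listens_for(engine, 'before_cursor_execute')
def _record(conn, cursor, statement, params, context, executemany):
    statements.append(statement)

session = Session()
c = session.query(Child).one()   # one SELECT, two INNER JOINs
name = c.parent.gp.name          # no further SQL: both levels eager-loaded
```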
Re: [sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc
Mike,

While I have not done work recently committing to the project, I am one of the maintainers. I'm an owner of the pymssql GitHub organization, pay for the pymssql.org domain, and try to generally keep up with things as time permits.

Regarding being an "active" project, that is debatable. Ramiro has done great work over the past year, but we have generally been unable to maintain the project as it should be maintained. We have important issues out there, like a request for a new release for code that is already in GitHub, but just don't have the manpower to make the release. These issues could easily be addressed if we had someone who could devote their effort in that direction, but we don't, and have historically struggled to have anything like consistent effort put towards maintaining/improving pymssql. That's not a complaint, the work that has been done is appreciated; it's just what we face.

I'm not proposing that pymssql be discontinued _simply_ because another project exists. I'm proposing that it be discontinued because:

* Microsoft has come out in favor of ODBC and pyodbc and, with their support, pyodbc could be a technically superior product.
* If Microsoft is supporting pyodbc, many new users will start there and probably not even look for another solution (like pymssql).
* pymssql has struggled to find maintainers who can devote time to it and it is starting to languish.

So, my thought is, if we don't bring anything to the table that pyodbc doesn't bring, then why shouldn't we point people in that direction instead?

However, I appreciate your input here and on the GH issue that pymssql is more stable than pyodbc. That is exactly the kind of information I'm looking for.
*Randy Syring* Chief Executive Developer Direct: 502.276.0459 Office: 812.285.8766 Level 12 <https://www.level12.io/>

On 01/25/2017 10:47 AM, mike bayer wrote:

I don't see how it's appropriate to even suggest that an open source project close its doors simply because another project exists. If you were the maintainer of pymssql, that would be one thing, but looking at the commits it seems to continue to be an active project. pymssql handles our tests more cleanly than pyodbc, which has constant datatype issues, and I have had several non-response situations from the maintainer on the pyodbc side in the somewhat distant past (whereas I've had great response with pymssql issues), so unless the situation has vastly changed I'd prefer pymssql continue its excellent work.

On 01/25/2017 10:24 AM, Randy Syring wrote:

There is a proposal open to discontinue pymssql development and point people towards pyodbc. Since pymssql is a documented backend for SA, I figured there might be some people here who are interested. If you have any skin in that game and want to comment, please visit the issue: https://github.com/pymssql/pymssql/issues/477 Thanks.
[sqlalchemy] Proposal to discontinue pymssql in favor of pyodbc
There is a proposal open to discontinue pymssql development and point people towards pyodbc. Since pymssql is a documented backend for SA, I figured there might be some people here who are interested. If you have any skin in that game and want to comment, please visit the issue: https://github.com/pymssql/pymssql/issues/477 Thanks.
Re: [sqlalchemy] Memory leak? with connection & insert
Mike,

Just to be clear, I'm not doing any kind of selects. I'm only inserting records. And these aren't large binary blobs of any kind; they are rather small strings and ints. I apologize in advance if I misunderstood your answer.

*Randy Syring* Chief Executive Developer Direct: 502.276.0459 Office: 812.285.8766 Level 12 <https://www.level12.io/>

On 02/25/2016 07:46 PM, Mike Bayer wrote:

On 02/25/2016 06:31 PM, Randy Syring wrote:

> I'm working on a project to parse through a large text file (1GB) of records. Once parsed, each record gets sent to the DB. Due to the size of the file, I've been working on a streaming/functional approach that will keep my memory usage constant. I've been able to simply take the DB out of the equation and parse through all of the records and memory usage stays constant. But, as soon as I bring SA into the picture, memory usage continues to climb through the lifetime of the program.

Well, as soon as you bring psycopg2, or really any DBAPI, into the picture. DBAPI drivers fully buffer both rows and columns. However, this wouldn't lead to unbounded growth, only that memory would grow as big as the biggest single result set you've fetched.

> I originally started using the ORM, and thought the Session would be the culprit, but have now drilled down deep enough into the problem that it appears to be an issue even when using simple connections.
>
> using psycopg:
>
>     connection = db.engine.connect().connection
>     with connection.cursor() as cursor:
>         for count, statement in enumerate(MEXChunker(mex_file).yield_merchants()):
>             for pc_data in statement.program_charges:
>                 insert_sql = "INSERT INTO stage.tsys_program_charges (reporting_month," \
>                     "reporting_year, charge_amount, merchant_number, officer_code) " \
>                     "VALUES (%s, %s, %s, %s, %s)"
>                 cursor.execute(insert_sql, pc_data)

I see you aren't using server side cursors, which is the minimum required to keep psycopg2 from buffering rows as they are sent. But even then, server side cursors don't have any impact on individual column values being buffered - I'm not sure if these text fields are large binary objects themselves, but the only DBAPI right now that supports streaming of BLOB objects is cx_Oracle, and SQLAlchemy also hides this API. For streaming of large objects in psycopg2, you have to use this obscure PostgreSQL feature nobody uses called "large objects" that requires use of a special table; that's described at http://initd.org/psycopg/docs/usage.html#access-to-postgresql-large-objects.

> The above, when run, shows memory ("RES" in `top`) quickly climb and then hold around 183K. The resources module reports "max rss" at 182268 at the end of running the script. Those memory numbers are just about the same if I simply run the loop and keep the DB out of it.
>
> using SA:
>
>     with db.engine.begin() as connection:
>         for count, statement in enumerate(MEXChunker(mex_file).yield_merchants()):
>             for pc_data in statement.program_charges:
>                 insert_sql = "INSERT INTO stage.tsys_program_charges (reporting_month," \
>                     "reporting_year, charge_amount, merchant_number, officer_code) " \
>                     "VALUES (%s, %s, %s, %s, %s)"
>                 connection.execute(insert_sql, pc_data)
>
> The above, when run, shows memory usage climbing through the life of the script. "max rss" tops out at 323984. I'd like to ultimately be able to use the ORM for this project, but if even the simple inserts using SA don't result in constant memory, I can't really move forward with that plan. Thanks in advance for any help you can provide.

Basically, people use server side cursors for this, but I find those to be troubling since they are temperamental and aren't platform dependent. If the size of your data is based on there being a lot of rows, I'd fetch it using windows at a time, e.g. an approach similar to that described at https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/WindowedRangeQuery. I've written routines that generalize this such that I can even pass different chunks of data as I fetch them into individual worker processes using a multiprocessing.Pool, and I've written routines that write data out to files and use heapq to sort them back (on that job in particular we had to read/write a 4G XML file, where a DOM tree of such would run out of memory immediately, so we used all SAX parsing/streaming and heapq). IMO once you have things chunked, you can do anything with it.

> *system info*
> Python 2.7.6
> SA 1.0.10 & SA 1.0.12
> Ubuntu Linux 14.04
[sqlalchemy] Memory leak? with connection & insert
I'm working on a project to parse through a large text file (1GB) of records. Once parsed, each record gets sent to the DB. Due to the size of the file, I've been working on a streaming/functional approach that will keep my memory usage constant. I've been able to simply take the DB out of the equation and parse through all of the records and memory usage stays constant. But, as soon as I bring SA into the picture, memory usage continues to climb through the lifetime of the program. I originally started using the ORM, and thought the Session would be the culprit, but have now drilled down deep enough into the problem that it appears to be an issue even when using simple connections.

*using psycopg:*

    connection = db.engine.connect().connection
    with connection.cursor() as cursor:
        for count, statement in enumerate(MEXChunker(mex_file).yield_merchants()):
            for pc_data in statement.program_charges:
                insert_sql = "INSERT INTO stage.tsys_program_charges (reporting_month," \
                    "reporting_year, charge_amount, merchant_number, officer_code) " \
                    "VALUES (%s, %s, %s, %s, %s)"
                cursor.execute(insert_sql, pc_data)

The above, when run, shows memory ("RES" in `top`) quickly climb and then hold around 183K. The resources module reports "max rss" at 182268 at the end of running the script. Those memory numbers are just about the same if I simply run the loop and keep the DB out of it.

*using SA:*

    with db.engine.begin() as connection:
        for count, statement in enumerate(MEXChunker(mex_file).yield_merchants()):
            for pc_data in statement.program_charges:
                insert_sql = "INSERT INTO stage.tsys_program_charges (reporting_month," \
                    "reporting_year, charge_amount, merchant_number, officer_code) " \
                    "VALUES (%s, %s, %s, %s, %s)"
                connection.execute(insert_sql, pc_data)

The above, when run, shows memory usage climbing through the life of the script. "max rss" tops out at 323984.

I'd like to ultimately be able to use the ORM for this project, but if even the simple inserts using SA don't result in constant memory, I can't really move forward with that plan. Thanks in advance for any help you can provide.

*system info*
Python 2.7.6
SA 1.0.10 & SA 1.0.12
Ubuntu Linux 14.04
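As an editorial aside (not the thread's resolution): for bulk inserts like this, a common approach is to build the insert construct once and feed it batches of parameter dictionaries, so each round trip is a single executemany and nothing accumulates per row. The sketch below adapts the post's table to SQLite (the `stage.` schema dropped, column types guessed) and uses a hypothetical generator in place of `MEXChunker`:

```python
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')
metadata = sa.MetaData()

# columns adapted from the INSERT in the post; types are assumptions
charges = sa.Table(
    'tsys_program_charges', metadata,
    sa.Column('reporting_month', sa.Integer),
    sa.Column('reporting_year', sa.Integer),
    sa.Column('charge_amount', sa.Numeric),
    sa.Column('merchant_number', sa.String),
    sa.Column('officer_code', sa.String),
)
metadata.create_all(engine)

def chunked(iterable, size):
    """Yield lists of up to `size` items without materializing the input."""
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) >= size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def load_charges(rows, chunk_size=1000):
    insert = charges.insert()  # built once, reused for every batch
    with engine.begin() as conn:
        for chunk in chunked(rows, chunk_size):
            conn.execute(insert, chunk)  # one executemany per chunk

# stand-in for the streaming parser: a generator of row dicts
rows = ({'reporting_month': m % 12 + 1, 'reporting_year': 2016,
         'charge_amount': 10, 'merchant_number': str(m), 'officer_code': 'A'}
        for m in range(5000))
load_charges(rows)

with engine.connect() as conn:
    total = conn.execute(
        sa.select(sa.func.count()).select_from(charges)).scalar()
```

Because `rows` is a generator and each chunk is discarded after its executemany, peak memory is bounded by `chunk_size` rather than the file size.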
Re: [sqlalchemy] UTC timestamps for Column's server_default?
So helpful! Thanks.

*Randy Syring* Chief Executive Developer Direct: 502.276.0459 Office: 812.285.8766 Level 12 https://www.level12.io/

On 07/24/2015 03:45 PM, Mike Bayer wrote:

On 7/24/15 3:17 PM, Randy Syring wrote:

> I have some generic timestamp columns as part of a mixin. I'd like for these columns to have server defaults of the current UTC time. If I wanted local time, I could just do:
>
>     created_ts = Column(DateTime, ..., server_default=sasql.text('CURRENT_TIMESTAMP'))
>
> The problem I'm running into is that the DB servers all have a different way of getting UTC time:
>
>     SQLite:        select CURRENT_TIMESTAMP
>     Microsoft SQL: select GETUTCDATE()
>     PostgreSQL:    select (now() at time zone 'utc')
>
> So how do I set the server default in a db-specific way when I want to be able to define the columns in a library that will be used on different DB servers?

Yup, this is a common one, so much that here it is right here: http://docs.sqlalchemy.org/en/rel_1_0/core/compiler.html#utc-timestamp-function
[sqlalchemy] UTC timestamps for Column's server_default?
I have some generic timestamp columns as part of a mixin. I'd like for these columns to have server defaults of the current UTC time. If I wanted local time, I could just do:

    created_ts = Column(DateTime, ..., server_default=sasql.text('CURRENT_TIMESTAMP'))

The problem I'm running into is that the DB servers all have a different way of getting UTC time:

    SQLite:        select CURRENT_TIMESTAMP
    Microsoft SQL: select GETUTCDATE()
    PostgreSQL:    select (now() at time zone 'utc')

So how do I set the server default in a db-specific way when I want to be able to define the columns in a library that will be used on different DB servers?
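For reference, the "UTC timestamp function" recipe from the SQLAlchemy compiler docs (linked in the reply above) looks roughly like this: a custom `FunctionElement` with a default compilation plus per-dialect overrides, usable as a `server_default` in the mixin. The helper names here are my own; SQLite's `CURRENT_TIMESTAMP` is already UTC, so it works as the fallback:

```python
from sqlalchemy.dialects import mssql, postgresql, sqlite
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import expression
from sqlalchemy.types import DateTime

class utcnow(expression.FunctionElement):
    type = DateTime()
    inherit_cache = True  # needed on SQLAlchemy 1.4+ to avoid cache warnings

@compiles(utcnow)
def _default_utcnow(element, compiler, **kw):
    # fallback; on SQLite, CURRENT_TIMESTAMP is already UTC
    return "CURRENT_TIMESTAMP"

@compiles(utcnow, 'postgresql')
def _pg_utcnow(element, compiler, **kw):
    return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

@compiles(utcnow, 'mssql')
def _mssql_utcnow(element, compiler, **kw):
    return "GETUTCDATE()"

# each dialect renders its own SQL for the same construct
pg_sql = str(utcnow().compile(dialect=postgresql.dialect()))
ms_sql = str(utcnow().compile(dialect=mssql.dialect()))
lite_sql = str(utcnow().compile(dialect=sqlite.dialect()))
```

In the mixin, the column then becomes `created_ts = Column(DateTime, nullable=False, server_default=utcnow())`, and the right expression is emitted per backend at DDL time.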
[sqlalchemy] Do passive deletes apply to many to many relationships?
I am trying to get SQLAlchemy to let my database's foreign keys' on delete cascade do the cleanup on the association table between two objects. I have set up the cascade and passive_delete options on the relationship as seems appropriate from the docs. However, when a related object is loaded into the collection of a primary object and the primary object is deleted from the session, then SQLAlchemy issues a delete statement for the related object. I have a code example to reproduce the problem at stackoverflow: http://stackoverflow.com/questions/23669198/passive-deletes-in-sqlalchemy-with-a-many-to-many-relationship-dont-prevent-del

Is there a way to configure SQLAlchemy to never emit a DELETE for a related object even when that object is loaded in a collection of an entity that is deleted from the session?
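For context (an editorial sketch, not the thread's answer): a DELETE against the *related object itself* generally means the relationship's `cascade` includes `delete`; with the default cascade, related objects are never deleted, only association rows. And with `passive_deletes=True` on a many-to-many relationship plus `ON DELETE CASCADE` on the association table's FKs, the database cleans up the association rows. The `User`/`Group` names below are illustrative:

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

engine = sa.create_engine('sqlite://')

@event.listens_for(engine, 'connect')
def _enable_fks(dbapi_conn, record):
    # SQLite requires FK enforcement to be switched on per connection
    dbapi_conn.execute('PRAGMA foreign_keys=ON')

Base = declarative_base()

association = sa.Table(
    'user_group', Base.metadata,
    sa.Column('user_id', sa.ForeignKey('users.id', ondelete='CASCADE'),
              primary_key=True),
    sa.Column('group_id', sa.ForeignKey('groups.id', ondelete='CASCADE'),
              primary_key=True),
)

class Group(Base):
    __tablename__ = 'groups'
    id = sa.Column(sa.Integer, primary_key=True)

class User(Base):
    __tablename__ = 'users'
    id = sa.Column(sa.Integer, primary_key=True)
    # default cascade: related Group objects are never deleted;
    # passive_deletes lets the DB clean up the association rows
    groups = relationship(Group, secondary=association, passive_deletes=True)

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

g = Group()
session.add(User(groups=[g]))
session.commit()

session.delete(session.query(User).one())  # DB cascade removes the link row
session.commit()

remaining_groups = session.query(Group).count()
remaining_users = session.query(User).count()
remaining_links = session.execute(
    sa.select(sa.func.count()).select_from(association)).scalar()
```

The related `Group` survives the delete of its `User`; only the row in `user_group` goes away, and it goes away at the database level.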
Re: [sqlalchemy] Turning a Query instance into SQL (with parameters), need help w/ LIMIT/OFFSET in 0.7.x
I added the new recipe to the wiki.

- Randy Syring, Development Executive Director, Level 12 Technologies https://www.lev12.com/ (formerly Intelicom) Direct: 502-276-0459 Office: 502-212-9913

On 03/09/2012 09:24 PM, Michael Bayer wrote:

I haven't followed this closely, worth updating the recipe on the SQLA wiki?

On Mar 9, 2012, at 2:06 PM, Randy Syring wrote:

> I checked out that function and it turns out that I actually do have problems with:
>
>     q = db.sess.query(Blog).filter(Blog.title == u'foo').limit(10).offset(5)
>
> results in:
>
>     SELECT blogs.id, blogs.createdts, blogs.updatedts, blogs.title, blogs.ident
>     FROM blogs
>     WHERE blogs.title = ? LIMIT ? OFFSET ?
>
> However, I was mistaken in my original post. The problem was not with the helper function but with the way I was doing my testing. The full implementation of the helper function is here: http://stackoverflow.com/a/5698357/182111

On 03/09/2012 03:25 AM, Alex K wrote:

> We use this recipe and in 0.7.5 it works ok with limit and offset. http://www.sqlalchemy.org/trac/wiki/UsageRecipes/old/DebugInlineParams

On Fri, Mar 9, 2012 at 10:32 AM, Randy Syring wrote:

> I found a recipe on stackoverflow for turning a query instance into a string, including parameters. I only do this for testing purposes and the implementation is here: https://bitbucket.org/rsyring/sqlalchemybwc/src/292597b37736/sqlalchemybwc/lib/testing.py However, I just upgraded to 0.7.5 and it would appear this recipe does not handle LIMIT/OFFSET becoming parameterized. I get the following when using the function: ...persons.last_name AS persons_last_name FROM persons LIMIT :param_1 OFFSET :param_2 I'm in over my head on SA internals on this one and would appreciate suggestions. Thanks in advance.
Re: [sqlalchemy] Turning a Query instance into SQL (with parameters), need help w/ LIMIT/OFFSET in 0.7.x
Excellent, thank you!

- Randy Syring, Development Executive Director, Level 12 Technologies https://www.lev12.com/ (formerly Intelicom) Direct: 502-276-0459 Office: 502-212-9913

On 03/09/2012 03:25 AM, Alex K wrote:

> We use this recipe and in 0.7.5 it works ok with limit and offset. http://www.sqlalchemy.org/trac/wiki/UsageRecipes/old/DebugInlineParams

On Fri, Mar 9, 2012 at 10:32 AM, Randy Syring wrote:

> I found a recipe on stackoverflow for turning a query instance into a string, including parameters. I only do this for testing purposes and the implementation is here: https://bitbucket.org/rsyring/sqlalchemybwc/src/292597b37736/sqlalchemybwc/lib/testing.py However, I just upgraded to 0.7.5 and it would appear this recipe does not handle LIMIT/OFFSET becoming parameterized. I get the following when using the function: ...persons.last_name AS persons_last_name FROM persons LIMIT :param_1 OFFSET :param_2 I'm in over my head on SA internals on this one and would appreciate suggestions. Thanks in advance.
Re: [sqlalchemy] Turning a Query instance into SQL (with parameters), need help w/ LIMIT/OFFSET in 0.7.x
I checked out that function and it turns out that I actually do have problems with:

    q = db.sess.query(Blog).filter(Blog.title == u'foo').limit(10).offset(5)

results in:

    SELECT blogs.id, blogs.createdts, blogs.updatedts, blogs.title, blogs.ident
    FROM blogs
    WHERE blogs.title = ? LIMIT ? OFFSET ?

However, I was mistaken in my original post. The problem was not with the helper function but with the way I was doing my testing. The full implementation of the helper function is here: http://stackoverflow.com/a/5698357/182111

- Randy Syring, Development Executive Director, Level 12 Technologies https://www.lev12.com/ (formerly Intelicom) Direct: 502-276-0459 Office: 502-212-9913

On 03/09/2012 03:25 AM, Alex K wrote:

> We use this recipe and in 0.7.5 it works ok with limit and offset. http://www.sqlalchemy.org/trac/wiki/UsageRecipes/old/DebugInlineParams

On Fri, Mar 9, 2012 at 10:32 AM, Randy Syring wrote:

> I found a recipe on stackoverflow for turning a query instance into a string, including parameters. I only do this for testing purposes and the implementation is here: https://bitbucket.org/rsyring/sqlalchemybwc/src/292597b37736/sqlalchemybwc/lib/testing.py However, I just upgraded to 0.7.5 and it would appear this recipe does not handle LIMIT/OFFSET becoming parameterized. I get the following when using the function: ...persons.last_name AS persons_last_name FROM persons LIMIT :param_1 OFFSET :param_2 I'm in over my head on SA internals on this one and would appreciate suggestions. Thanks in advance.
[sqlalchemy] Turning a Query instance into SQL (with parameters), need help w/ LIMIT/OFFSET in 0.7.x
I found a recipe on stackoverflow for turning a query instance into a string, including parameters. I only do this for testing purposes and the implementation is here: https://bitbucket.org/rsyring/sqlalchemybwc/src/292597b37736/sqlalchemybwc/lib/testing.py

However, I just upgraded to 0.7.5 and it would appear this recipe does not handle LIMIT/OFFSET becoming parameterized. I get the following when using the function:

    ...persons.last_name AS persons_last_name
    FROM persons
    LIMIT :param_1 OFFSET :param_2

I'm in over my head on SA internals on this one and would appreciate suggestions. Thanks in advance.
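A present-day note on this old thread: later SQLAlchemy versions grew a built-in way to render bound parameters inline, including LIMIT/OFFSET, via `compile_kwargs={'literal_binds': True}`, so the hand-rolled recipe is no longer needed for simple literal types. A sketch using recent SQLAlchemy (1.4+ `select()` style; the `persons` columns echo the question):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import sqlite
from sqlalchemy.sql import column, table

# lightweight table construct, just enough to build the statement
persons = table('persons',
                column('id'), column('first_name'), column('last_name'))

stmt = (sa.select(persons.c.id, persons.c.last_name)
        .where(persons.c.last_name == 'Smith')
        .limit(10)
        .offset(5))

# literal_binds inlines the parameter values, LIMIT/OFFSET included
sql = str(stmt.compile(dialect=sqlite.dialect(),
                       compile_kwargs={'literal_binds': True}))
print(sql)
```

Note this is for debugging/testing output only; values are escaped only for simple types, so it should never replace real bound parameters in production SQL.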
[sqlalchemy] Re: Help using SqlSoup with database views
FWIW, I tried the map_to() method but still received the PK error. The following approach, however, worked fine:

ss = SqlSoup(db.engine)
meta = ss._metadata
tbl_vrmf = sa.Table('vRMF', meta, autoload=True)
vrmf_pks = [tbl_vrmf.c.dateId, tbl_vrmf.c.ident, tbl_vrmf.c.mnum]
vrmf = ss.map(tbl_vrmf, primary_key=vrmf_pks)

On Jun 10, 8:18 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jun 9, 2011, at 12:41 AM, Reece Hart wrote: I'd like to use SqlSoup with an existing database that contains views. Accessing a table goes swimmingly, but accessing a view results in PKNotFoundError: table '[viewname]' does not have a primary key defined... Do I correctly infer that SqlSoup does not work with database views (by default, at least)? I've been unable to find anything directly relevant on Google, SO, or the SqlAlchemy mailing list. If you were faced with this, how would you proceed if you wanted to access non-updatable views? I'm new to SQLAlchemy and SqlSoup. Here's a specific example:

from sqlalchemy.ext.sqlsoup import SqlSoup
u = SqlSoup('postgresql+psycopg2://pub...@unison-db.org:5432/unison')
seq = u.pseq.filter(u.pseq.pseq_id==76).all()  # okay
aliases = u.pseqalias.filter(u.pseqalias.pseq_id==76).all()
PKNotFoundError: table 'pseqalias' does not have a primary key defined...

You would need to pass the columns to be considered part of the primary key to the underlying mapper, using sqlsoup.map_to(), but unfortunately there is not a simple interface for that at the moment, since you need the Table object as well to get at the columns. So until this interface can be improved, for now it would look like:

metadata = u._metadata
t = Table('pseqaliases', metadata, autoload=True)
u.map_to('pseqaliases', selectable=t, mapper_args={'primary_key': [t.c.col1, t.c.col2]})

This is just the primary_key argument to mapper; there are some examples at http://www.sqlalchemy.org/docs/orm/mapper_config.html near the top.
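SqlSoup has since been retired, but the same idea — handing an explicit primary key to the mapper for a view — looks like this in plain modern SQLAlchemy (1.4+). The table and view names below only mirror the thread and are otherwise made up:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, registry

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE pseq (pseq_id INTEGER PRIMARY KEY, alias VARCHAR(20))")
    conn.exec_driver_sql(
        "CREATE VIEW pseqalias AS SELECT pseq_id, alias FROM pseq")

meta = sa.MetaData()
view = sa.Table("pseqalias", meta, autoload_with=engine)

class PseqAlias:
    pass

# a view reflects with no primary key, so tell the mapper explicitly
# which columns to treat as the PK
registry().map_imperatively(PseqAlias, view, primary_key=[view.c.pseq_id])

with Session(engine) as session:
    n = session.query(PseqAlias).count()
    print(n)
```

The primary_key argument here is the same mapper option Michael points at, just spelled through the modern imperative-mapping API.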
Re: [sqlalchemy] Re: can I use SA to dump a table's DDL to a file?
If anyone is interested, I have some code that writes basic details for tables, constraints, indexes, and triggers out to files. Code is here: https://bitbucket.org/rsyring/mssqlddlwriter/ -- Randy Syring Intelicom Direct: 502-276-0459 Office: 502-212-9913 For the wages of sin is death, but the free gift of God is eternal life in Christ Jesus our Lord (Rom 6:23) On 06/02/2011 11:03 AM, Michael Bayer wrote: On Jun 2, 2011, at 3:14 AM, Randy Syring wrote: Looks like I beat you to the answer by about 3 mins, thanks for answering though. :) I have a follow-up though. The foreign key constraints are not getting created with ON DELETE CASCADE as expected. I tried this in both MSSQL and Postgres. Is this expected behavior? yeah I don't think this aspect of the FK is reflected right now. Plenty of room for that as a new feature of course. If so, is it possible to modify the FKs after they are reflected to set ondelete and onupdate as appropriate? My test code: http://paste.pocoo.org/show/399307/ And my output:

CREATE TABLE blog_comments (
    id INTEGER NOT NULL,
    blog_id INTEGER,
    CONSTRAINT blog_comments_pkey PRIMARY KEY (id),
    CONSTRAINT blog_comments_blog_id_fkey FOREIGN KEY(blog_id) REFERENCES blog (id)
)

It would seem to be a reflection issue though, b/c if I create a table manually, the ON DELETE CASCADE is added correctly. I think if you went through the Table object's .constraints collection and looked for ForeignKeyConstraint objects you could set up the onupdate and ondelete attributes on them, sure.
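Michael's suggestion — walking the reflected Table's .constraints collection and patching the ForeignKeyConstraint objects before emitting DDL — can be sketched like this on a modern SQLAlchemy (1.4+). Treat it as an assumption-laden example: whether the cascade rule survives reflection still varies by backend, so here it is added by hand:

```python
import sqlalchemy as sa
from sqlalchemy.schema import CreateTable

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE blog (id INTEGER PRIMARY KEY)")
    conn.exec_driver_sql(
        "CREATE TABLE blog_comments ("
        "id INTEGER PRIMARY KEY, "
        "blog_id INTEGER REFERENCES blog (id))"
    )

meta = sa.MetaData()
comments = sa.Table("blog_comments", meta, autoload_with=engine)

# patch the reflected FK constraints by hand before emitting DDL
for const in comments.constraints:
    if isinstance(const, sa.ForeignKeyConstraint):
        const.ondelete = "CASCADE"

ddl = str(CreateTable(comments).compile(engine))
print(ddl)
```

The DDL compiler reads the constraint's ondelete attribute when rendering the FOREIGN KEY clause, so the emitted CREATE TABLE now carries ON DELETE CASCADE even though the source table did not.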
[sqlalchemy] Re: can I use SA to dump a table's DDL to a file?
Looks like I beat you to the answer by about 3 mins, thanks for answering though. :) I have a follow-up though. The foreign key constraints are not getting created with ON DELETE CASCADE as expected. I tried this in both MSSQL and Postgres. Is this expected behavior? If so, is it possible to modify the FKs after they are reflected to set ondelete and onupdate as appropriate? My test code: http://paste.pocoo.org/show/399307/ And my output:

CREATE TABLE blog_comments (
    id INTEGER NOT NULL,
    blog_id INTEGER,
    CONSTRAINT blog_comments_pkey PRIMARY KEY (id),
    CONSTRAINT blog_comments_blog_id_fkey FOREIGN KEY(blog_id) REFERENCES blog (id)
)

It would seem to be a reflection issue though, b/c if I create a table manually, the ON DELETE CASCADE is added correctly. On Jun 1, 8:36 pm, Michael Bayer mike...@zzzcomputing.com wrote: sure, you'd use the mock executor as in the second example here: http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPT... On Jun 1, 2011, at 8:01 PM, Randy Syring wrote: I'd like to be able to dump an MS SQL server's objects to text on the local file system. I have a working solution for views, stored procedures, and functions, but tables are a different story. Can I use SA's reflection and table creation abilities to write CREATE TABLE DDL to a text file?
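The mock-executor FAQ recipe referenced in this thread exists today as create_mock_engine (SQLAlchemy 1.4+). A sketch that collects CREATE TABLE statements as strings instead of executing them; the table here is illustrative:

```python
import sqlalchemy as sa

meta = sa.MetaData()
blog = sa.Table(
    "blog", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("title", sa.String(100)),
)

ddl = []

def dump(sql, *multiparams, **params):
    # the mock engine hands each DDL construct here instead of executing it
    ddl.append(str(sql.compile(dialect=engine.dialect)))

engine = sa.create_mock_engine("sqlite://", dump)
# checkfirst must be off: a mock engine cannot ask the DB what exists
meta.create_all(engine, checkfirst=False)
print(ddl[0])
```

Swap the URL for "mssql://" (dialect only; no connection is made) to get SQL Server-flavored DDL, then write the collected strings to a file.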
Re: [sqlalchemy] Re: can I use SA to dump a table's DDL to a file?
That was it, thanks. I was trying to go through the column and looking at its foreign_keys collection. When I set those values, it didn't affect the output. Reflects my ignorance of SA, obviously. Thanks again. -- Randy Syring Intelicom Direct: 502-276-0459 Office: 502-212-9913 For the wages of sin is death, but the free gift of God is eternal life in Christ Jesus our Lord (Rom 6:23) On 06/02/2011 11:03 AM, Michael Bayer wrote: I think if you went through the Table object's .constraints collection and looked for ForeignKeyConstraint objects you could set up the onupdate and ondelete attributes on them, sure.
[sqlalchemy] can I use SA to dump a table's DDL to a file?
I'd like to be able to dump an MS SQL server's objects to text on the local file system. I have a working solution for views, stored procedures, and functions, but tables are a different story. Can I use SA's reflection and table creation abilities to write CREATE TABLE DDL to a text file?
[sqlalchemy] Re: can I use SA to dump a table's DDL to a file?
In the FAQ... sorry: http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring On Jun 1, 8:01 pm, Randy Syring ra...@rcs-comp.com wrote: I'd like to be able to dump an MS SQL server's objects to text on the local file system. I have a working solution for views, stored procedures, and functions, but tables are a different story. Can I use SA's reflection and table creation abilities to write CREATE TABLE DDL to a text file?
[sqlalchemy] Re: pyodbc + FreeTDS segfault?? on linux
Seems to be a unicode conversion problem; if you are interested in following, the pyodbc issue with a very small test case is here: http://code.google.com/p/pyodbc/issues/detail?id=170 On Apr 7, 9:37 am, Michael Bayer mike...@zzzcomputing.com wrote: On Apr 7, 2011, at 12:46 AM, Randy Syring wrote: I am running Ubuntu 10.04, python 2.6.5, SA 0.6.6, latest pyodbc release. I have tried FreeTDS that ships with the distro (0.82) as well as current CVS. I can make a connection and issue a basic SQL statement. However, when I try to run my unit tests, I get the following error: *** glibc detected *** /path/to/venv/bin/python: free(): invalid next size (fast): 0x02527bf0 *** nothing I've seen before with freetds (and I use freetds a lot) - so your steps would be to isolate the problem into something reproducible, then ask on the FreeTDS or possibly pyodbc lists, possibly first converting it into a straight pyodbc script so there are at least fewer layers of abstraction at play. The SQL issued just before that error is: 2011-04-07 00:26:39,257 INFO sqlalchemy.engine.base.Engine.0x...0350 INSERT INTO corporate_sessions (createdts, updatedts, id, hits, relatedip, user_id) VALUES (?, ?, ?, ?, ?, ?) 2011-04-07 00:26:39,257 INFO sqlalchemy.engine.base.Engine.0x...0350 (datetime.datetime(2011, 4, 7, 0, 26, 39, 257073), None, u'e1457896AdZgRjqg8CD8', 0, '10.66.3.230', 1) and then the process just hangs and I have to kill the process. My unit tests run successfully against sqlite, postgresql, and MSSQL on Windows. I have successfully run the following test script with the same libraries and virtualenv:

import datetime
import sqlalchemy as sa
eng = sa.create_engine('mssql://user:pass@server/temp?Port=1435&TDS_Version=8.0', echo=True)
res = eng.execute('select 1+1 as foo')
for row in res:
    print 'answer=', row['foo']
#eng.execute('DROP TABLE satest')
#eng.execute('CREATE TABLE satest(id INT, name VARCHAR(300), ts datetime)')
res = eng.execute('INSERT INTO satest(id, name, ts) VALUES (?, ?, ?)', (10, u'foobar', datetime.datetime(2011, 4, 7, 0, 26, 39, 257073)))

One last thing, when I first tested this, I got an error related to using an ORM instance when it had already been deleted (or something like that, I can't remember exactly). But I haven't seen that error for a long time and don't remember doing anything in particular to change it.
[sqlalchemy] pyodbc + FreeTDS segfault?? on linux
I am running Ubuntu 10.04, python 2.6.5, SA 0.6.6, latest pyodbc release. I have tried FreeTDS that ships with the distro (0.82) as well as current CVS. I can make a connection and issue a basic SQL statement. However, when I try to run my unit tests, I get the following error: *** glibc detected *** /path/to/venv/bin/python: free(): invalid next size (fast): 0x02527bf0 *** The SQL issued just before that error is: 2011-04-07 00:26:39,257 INFO sqlalchemy.engine.base.Engine.0x...0350 INSERT INTO corporate_sessions (createdts, updatedts, id, hits, relatedip, user_id) VALUES (?, ?, ?, ?, ?, ?) 2011-04-07 00:26:39,257 INFO sqlalchemy.engine.base.Engine.0x...0350 (datetime.datetime(2011, 4, 7, 0, 26, 39, 257073), None, u'e1457896AdZgRjqg8CD8', 0, '10.66.3.230', 1) and then the process just hangs and I have to kill the process. My unit tests run successfully against sqlite, postgresql, and MSSQL on Windows. I have successfully run the following test script with the same libraries and virtualenv:

import datetime
import sqlalchemy as sa
eng = sa.create_engine('mssql://user:pass@server/temp?Port=1435&TDS_Version=8.0', echo=True)
res = eng.execute('select 1+1 as foo')
for row in res:
    print 'answer=', row['foo']
#eng.execute('DROP TABLE satest')
#eng.execute('CREATE TABLE satest(id INT, name VARCHAR(300), ts datetime)')
res = eng.execute('INSERT INTO satest(id, name, ts) VALUES (?, ?, ?)', (10, u'foobar', datetime.datetime(2011, 4, 7, 0, 26, 39, 257073)))

One last thing, when I first tested this, I got an error related to using an ORM instance when it had already been deleted (or something like that, I can't remember exactly). But I haven't seen that error for a long time and don't remember doing anything in particular to change it.
[sqlalchemy] MSSQL: OUTPUT clause is throwing an error on a table with an INSERT trigger
I am executing a bulk insert which is resulting in SQL like: INSERT INTO equipment (equipment_type_id, number, mark, inactive) OUTPUT inserted.id VALUES (?, ?, ?, ?) (6, u'1', None, 0) But I have a trigger on equipment that does some validity checking. When executing, I get an exception which relays the following info from SQL Server: DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause. I am not sure what to do about this and would appreciate suggestions/discussion. Thanks.
[sqlalchemy] Re: MSSQL: OUTPUT clause is throwing an error on a table with an INSERT trigger
Thank you! Ticket for doc addition: http://www.sqlalchemy.org/trac/ticket/1994 On Dec 6, 6:10 pm, Michael Bayer mike...@zzzcomputing.com wrote: specify implicit_returning=False as part of the Table arguments. Should probably add a note to the MSSQL docs on this. On Dec 6, 2010, at 5:41 PM, Randy Syring wrote: I am executing a bulk insert which is resulting in SQL like: INSERT INTO equipment (equipment_type_id, number, mark, inactive) OUTPUT inserted.id VALUES (?, ?, ?, ?) (6, u'1', None, 0) But I have a trigger on equipment that does some validity checking. When executing, I get an exception which relays the following info from SQL Server: DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause. I am not sure what to do about this and would appreciate suggestions/discussion. Thanks.
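The fix named in the reply — implicit_returning=False among the Table arguments — looks like this. The sketch runs on SQLite only to show the insert still works and the primary key is still retrievable via the cursor; on SQL Server the same flag is what suppresses the OUTPUT inserted.id clause that conflicts with triggers:

```python
import sqlalchemy as sa

meta = sa.MetaData()
equipment = sa.Table(
    "equipment", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("number", sa.String(20)),
    # on MSSQL this stops SQLAlchemy from appending OUTPUT inserted.id,
    # which fails on tables that carry triggers
    implicit_returning=False,
)

engine = sa.create_engine("sqlite://")
meta.create_all(engine)
with engine.begin() as conn:
    result = conn.execute(equipment.insert().values(number="1"))
    # the PK comes back via a post-insert fetch instead of OUTPUT
    pk = result.inserted_primary_key[0]
    print(pk)
```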
[sqlalchemy] SAValidation: class-level validation for Declarative objects
I wanted to let others know about a project I am working on to give Active Record style validations to declarative SA classes. The idea is that you can declare validation on individual fields or the whole class and have those validators fire when the session is flushed. I am not an SA guru by any means and I feel like this is a bit hacked together, but I wanted to get it out there and see if there is interest and feedback. One of the nicest parts about this package, IMO, is that it has the ability to automatically validate some constraints like length of strings and nullability. We set up our web-form validation to map the validation errors to the form fields, and it saves us from having to do manual validation at the form level and duplicate metadata about fields that the declarative classes already know about. PyPI: http://pypi.python.org/pypi/SAValidation bitbucket: http://bitbucket.org/rsyring/sqlalchemy-validation/src
[sqlalchemy] documentation correction on declarative mixin
http://www.sqlalchemy.org/docs/reference/ext/declarative.html#controlling-table-inheritance-with-mixins In the second example, Engineer has __tablename__ = None. But I think the point of that section is that it shouldn't be needed: __tablename__ in Tablename should return None anyway, shouldn't it? laggo on IRC confirmed and said I should post to the mailing list.
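For context, the pattern that docs section describes can be sketched like so: the mixin's declared_attr returns None for subclasses that already inherit a mapped table (via has_inherited_table), which is exactly why the explicit __tablename__ = None on Engineer looks redundant. Modern (1.4+) import locations are used here; the models are illustrative:

```python
import sqlalchemy as sa
from sqlalchemy.orm import (
    declarative_base, declared_attr, has_inherited_table,
)

Base = declarative_base()

class Tablename:
    @declared_attr
    def __tablename__(cls):
        # single-table-inheritance subclasses get no table of their own
        if has_inherited_table(cls):
            return None
        return cls.__name__.lower()

class Person(Tablename, Base):
    id = sa.Column(sa.Integer, primary_key=True)
    discriminator = sa.Column(sa.String(50))
    __mapper_args__ = {
        "polymorphic_on": discriminator,
        "polymorphic_identity": "person",
    }

class Engineer(Person):
    # no explicit __tablename__ = None needed: the mixin returns it
    __mapper_args__ = {"polymorphic_identity": "engineer"}
    primary_language = sa.Column(sa.String(50))

print(Engineer.__table__ is Person.__table__)
```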
[sqlalchemy] negative implications of using multiple declarative Base classes
I have been, naively it seems, using multiple declarative Base classes in my webapp. They all share the same metadata object. I have found one negative ramification of this: string references (like what can be used in relation()) won't find the target class if it was not declared against the same Base. Are there others?
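A minimal demonstration of that one known ramification, assuming SQLAlchemy 1.4+ where each declarative_base gets its own class registry: the string target is resolved only against the registry of the Base the referring class was declared on, so mapper configuration fails even though both Bases share one MetaData:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, relationship, configure_mappers

meta = sa.MetaData()
BaseA = declarative_base(metadata=meta)
BaseB = declarative_base(metadata=meta)

class Parent(BaseA):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    # the string "Child" is looked up in BaseA's registry only
    children = relationship("Child")

class Child(BaseB):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))

failed = False
try:
    configure_mappers()
except sa.exc.InvalidRequestError as exc:
    failed = True
    print(exc)
```

Passing the class object itself instead of the string, or declaring everything against one Base, avoids the lookup entirely.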
[sqlalchemy] model validation library, need help with extension sequencing
I am developing a library to do Active Record style validations on SA declarative instances: http://bitbucket.org/rsyring/sqlalchemy-validation/src/tip/savalidation/ I know there are differences of opinion on the value of doing validation like this, but I am really hoping to avoid that discussion as I think it will be a valuable addition if I can get it implemented correctly. I am running into an issue with getting my hooks in the right place. I would like to be able to do the following:

* I would like default values to have been applied to the instance if applicable before validation, i.e. I want to see the instance as a mirror of what the flushed SQL will look like
* I would like to be able to have multiple models fail validation, i.e. I want to get details on as many errors as possible before raising an exception

I was initially using before_insert and before_update on the mapper extension and trying to catch the errors on those instances in the session extension. However, before_flush() gets called before the mapper extension's before_insert/update, so that didn't work. So, ideally, my workflow would look something like:

- before_insert/before_update called on each instance
-- instance.do_validation() called; any errors are stored on the instance and the process continues
- I loop through the session looking for instances with validation errors
-- if I find any, I throw a ValidationError exception
- SQL is actually flushed to the DB

I really am in over my head a bit in SA internals though, so feel free to let me know if I am missing the big picture.
Re: [sqlalchemy] Re: model validation library, need help with extension sequencing
Ok, so I thought I might sacrifice getting all errors at once for being able to validate default values, by using the mapper extension's before_update() and before_insert(). However, that also does not seem to work, as the Column's default values are not applied at that point either. -- Randy Syring Intelicom 502-644-4776 Whether, then, you eat or drink or whatever you do, do all to the glory of God. 1 Cor 10:31 Randy Syring wrote: Mike, Thank you for your quick reply. If I understand correctly, after_flush() will get called after the SQL is actually sent to the db server. If that is the case, then it is too late for this validation, as I would want the ability to test for validation issues that would cause DB errors (i.e. NULL in non-NULL fields, string lengths greater than the column size). However, I can do some testing with it. On Apr 19, 3:31 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Apr 19, 2010, at 3:25 PM, Randy Syring wrote: * I would like default values to have been applied to the instance if applicable before validation, i.e. I want to see the instance as a mirror of what the flushed SQL will look like * I would like to be able to have multiple models fail validation, i.e. I want to get details on as many errors as possible before raising an exception I was initially using before_insert and before_update on the mapper extension and trying to catch the errors on those instances in the session extension. However, before_flush() gets called before the mapper extension's before_insert/update, so that didn't work.
So, ideally, my workflow would look something like: - before_insert/before_update called on each instance -- instance.do_validation() called; any errors are stored on the instance and the process continues - I loop through the session looking for instances with validation errors -- if I find any, I throw a ValidationError exception - SQL is actually flushed to the DB I really am in over my head a bit in SA internals though, so feel free to let me know if I am missing the big picture. did you try after_flush()? the transaction hasn't been committed yet at that point. then you could whiz through the whole session and each object has everything fully, including DB-generated state.
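On current SQLAlchemy the mapper/session extension classes are gone, but the collect-all-errors-then-raise workflow from this thread maps naturally onto a before_flush session event. Note that, per the caveat discussed above, column defaults are still not applied at that point. The Person model and ValidationError class below are illustrative:

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class ValidationError(Exception):
    def __init__(self, errors):
        super().__init__(str(errors))
        self.errors = errors

class Person(Base):
    __tablename__ = "person"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50), nullable=False)

@event.listens_for(Session, "before_flush")
def validate_all(session, flush_context, instances):
    # gather every failure before raising, so callers see all errors at once
    errors = []
    for obj in list(session.new) + list(session.dirty):
        if isinstance(obj, Person) and obj.name is None:
            errors.append((obj, "name may not be NULL"))
    if errors:
        raise ValidationError(errors)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([Person(name=None), Person(name=None)])
caught = None
try:
    session.flush()
except ValidationError as exc:
    caught = exc
print(len(caught.errors))
```

Both invalid instances are reported in a single exception, and nothing is flushed to the database.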
[sqlalchemy] Any libraries or examples of how to let users define custom fields on SA tables/objects
I am creating a web application that has the concept of organizations and people. I would like the end users to be able to define custom fields on those objects as needed. So:

User A (only default fields):
Organization: - Name - Zip
People: - Name - Phone Number

User B (has a few custom fields):
Organization: - Name - Zip - Main Contact - Phone
People: - Name - Phone Number - Email Address

Obviously, customer B's customizations shouldn't affect User A's usage. So, this seems like a relatively common paradigm and I was hoping someone might already know of a library or example application for using SA to accomplish this. I am looking for something akin to Rails' acts_as_customizable plugin (http://github.com/trappist/acts_as_customizable) for SA. Thanks.
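One common approach to this is an entity-attribute-value child table: one row per user-defined field value, so each customer's custom fields live in data rather than in the schema. A minimal sketch; all names here are made up for illustration:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Organization(Base):
    __tablename__ = "organization"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(100))
    zip = sa.Column(sa.String(10))
    custom_fields = relationship("CustomField")

class CustomField(Base):
    # one row per user-defined field value; the "schema" lives in data
    __tablename__ = "custom_field"
    id = sa.Column(sa.Integer, primary_key=True)
    organization_id = sa.Column(sa.ForeignKey("organization.id"), nullable=False)
    name = sa.Column(sa.String(50), nullable=False)
    value = sa.Column(sa.String(200))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    org = Organization(name="Acme", zip="40202")
    org.custom_fields.append(CustomField(name="Main Contact", value="Bob"))
    session.add(org)
    session.commit()
    loaded = session.query(CustomField).filter_by(name="Main Contact").one()
    print(loaded.value)
```

The trade-off is that every value is stored as a string and queries across custom fields need joins, which is the usual EAV cost.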
[sqlalchemy] Re: How to get SA column object from table.col, declarative attribute, or elixir entity attribute
Can anyone give me an idea of what I need to do here? If I can just get a high-level overview of what I need to do, I am happy to read the documentation and source to fill in the details. Thanks. On Oct 1, 1:56 pm, Randy Syring ra...@rcs-comp.com wrote: Mike, Thank you for the prompt reply: On Oct 1, 1:11 pm, Michael Bayer mike...@zzzcomputing.com wrote: Randy Syring wrote: I'm not sure of the context here. are you generating code or just executing SQL? Ok, maybe a small example. Here is a declarative object and how the datagrid gets defined: http://paste.pocoo.org/show/142448/ Note that the datagrid currently accepts the column from the table or the declarative attribute. It would also accept an elixir attribute. Then, inside the datagrid library, I have references to those objects that were passed in and use them to construct SQL based on options selected by the user. So, a user might request something like: /foo?filteron=firstname&filteronop=ne&filterfor=test* and we would generate something like: SELECT persons.id AS persons_id, persons.firstname AS persons_firstname, persons.last_name AS persons_last_name FROM persons WHERE persons.firstname NOT LIKE 'test' run that against the DB and then return the results in an HTML table. The way we generate the SQL is by using the SA/Elixir column/attributes in a query; here is a small snippet of that: http://paste.pocoo.org/show/142453/ So, my problem currently is that in the above snippet, ffor might be an empty string. That's fine if the column is a text type, but I have to handle it differently if the column is a time stamp. I am also assuming I will run into other problems related to the type of column being used that I will need to handle. But I can't currently handle those situations b/c I don't know enough about the SA objects to figure out what type they are. if the latter wouldn't you be using TypeEngine subclasses to handle coercion of type values? And that's where you lose me, sorry.
All I know is that I have an SA table column, SA declarative attribute, or elixir entity attribute and I need to figure out what type of SA column they represent (i.e. DateTime, Time, Date, etc.). I really have no idea what it will take to bridge the gap, hence this post. :)
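On the SA side, a small helper can normalize either a plain Column or a declarative InstrumentedAttribute down to the underlying Column, whose .type can then be inspected; Elixir attributes wrap the same declarative machinery, though that branch is not shown. A sketch with an illustrative model:

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = "persons"
    id = sa.Column(sa.Integer, primary_key=True)
    firstname = sa.Column(sa.String(50))
    createdts = sa.Column(sa.DateTime)

def column_of(attr):
    """Return the Column behind a plain Column or a declarative attribute."""
    if isinstance(attr, sa.Column):
        return attr
    # InstrumentedAttribute -> ColumnProperty -> Column
    return attr.property.columns[0]

col = column_of(Person.createdts)
print(type(col.type).__name__)
```

With the Column in hand, isinstance checks against sa.DateTime, sa.Date, or sa.Time decide whether an empty filter value should become None.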
[sqlalchemy] How to get SA column object from table.col, declarative attribute, or elixir entity attribute
I am trying to write a datagrid library for SQLAlchemy which will allow easy generation of tables from SA objects. I would like the datagrid to be able to accept a table column, declarative attribute, or elixir entity attribute interchangeably. Since I am building the SQL with queries, this has worked well so far; I just use the objects and SA takes care of the rest. However, I have recently run into a problem that requires the datagrid to be a little smarter. If someone wants to filter on a date but leaves the value empty, then I need to convert the empty string to None. However, I only want to do that for Date/Time SA columns. So, how can I extract the SA column object from a declarative attribute or elixir entity? Thanks.
[sqlalchemy] Re: Proper way to use case() in a filter()?
Mike, Thank you for the reply. That looks very close to what I have. At the very least, my understanding about what I should be able to do looks correct. I will do some further testing and examination and post back with details. Thanks again. -- Randy Syring RCS Computers Web Solutions 502-644-4776 www.rcs-comp.com Whether, then, you eat or drink or whatever you do, do all to the glory of God. 1 Cor 10:31 On Tue, Aug 11, 2009 at 10:04 PM, Mike Conley mconl...@gmail.com wrote: Not much detail to tell what is wrong. Here is a contrived but working example based on one of my examples:

ut = Table('user', meta,
    Column('uid', String, primary_key=True),
    Column('name', String)
)
kt = Table('keywords', meta,
    Column('keyword', String, primary_key=True)
)
ukt = Table('userkeywords', meta,
    Column('uid', String, ForeignKey('user.uid')),
    Column('kw', String, ForeignKey('keywords.keyword'))
)

class User(Base):
    __table__ = ut
    keywords = relation('Keyword', secondary=ukt, backref='users')
    def __repr__(s):
        return 'User %s:%s' % (s.uid, s.name)

class Keyword(Base):
    __table__ = kt
    def __repr__(s):
        return 'Keyword %s' % s.keyword

q = session.query(User).\
    select_from(join(ut, ukt, ut.c.uid==ukt.c.uid)).\
    filter(case([(ut.c.uid=='mike', 'M'),
                 (ut.c.uid=='sue', 'F')], else_='X') == 'M')

-- Mike Conley
[sqlalchemy] Proper way to use case() in a filter()?
I have a case(...) that looks right when I print it and am now trying to use it in a filter, something like: sess.query(User).select_from(join(...)).filter(case(...) == 1) But I get an error. When dissecting the parts, the problem is coming from the case(...) == 1 part. Can someone tell me how to do this properly? Thanks.
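A self-contained version of the kind of comparison being attempted — a case() expression compared against a value inside filter() — written against modern SQLAlchemy (1.4+ positional when-tuples rather than the era's list form), with throwaway models:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    uid = sa.Column(sa.String(20), primary_key=True)
    name = sa.Column(sa.String(50))

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(uid="mike", name="Mike"), User(uid="sue", name="Sue")])
    session.commit()
    # build the CASE expression once, then compare it like any other column
    gender = sa.case(
        (User.uid == "mike", "M"),
        (User.uid == "sue", "F"),
        else_="X",
    )
    uids = [u.uid for u in session.query(User).filter(gender == "M")]
    print(uids)
```

The case() construct is an ordinary SQL expression, so `gender == "M"` produces a boolean clause that filter() accepts directly.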
[sqlalchemy] Re: autoload of db view treating columns as Decimal
On Jul 22, 9:02 pm, Michael Bayer mike...@zzzcomputing.com wrote: On Jul 22, 2009, at 8:48 PM, Randy Syring wrote: well this is really easy to analyze, turn on echo=debug and see what SQLite is returning. The mapping of SQLite names to SQLA column types in the 0.5 series is in sqlalchemy/databases/sqlite.py in a dictionary called ischema_names. It's a straight dictionary lookup. in fact here it is:

Well, I am not really sure what the problem is, but it looks like it might not have anything to do with the autoload. I tried just using a select statement with a from_obj parameter and it looks to me like SA is getting things wrong when it looks for the field name. Here is the script I used: http://paste.pocoo.org/show/130239/

Here are the key:value pairs for each row/column when trying to select from an sqlite view that uses a case statement:

row 0
    id: 1
    name: jack
    'something': something
    case when name == 'jack' then 1 else 0 end: 1
row 1
    id: 2
    name: sam
    'something': something
    case when name == 'jack' then 1 else 0 end: 0

The same test running against Postgres is:

row 0
    id: 1
    name: jack
    staticfield: something
    isjack: 1
row 1
    id: 2
    name: sam
    staticfield: something
    isjack: 0
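The naming behavior Randy sees can be reproduced without SQLAlchemy at all: an unaliased CASE expression in a SQLite view keeps the expression text as its column name, while an explicit alias gives a stable name. A small stdlib-sqlite3 sketch (table and view names invented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users (name) VALUES ('jack'), ('sam');
    -- unaliased expression: SQLite uses the expression text as the name
    CREATE VIEW v_raw AS
        SELECT id, name,
               CASE WHEN name = 'jack' THEN 1 ELSE 0 END
        FROM users;
    -- aliased expression: predictable column name, like the Postgres output
    CREATE VIEW v_named AS
        SELECT id, name,
               CASE WHEN name = 'jack' THEN 1 ELSE 0 END AS isjack
        FROM users;
""")
raw_cols = [d[0] for d in conn.execute("SELECT * FROM v_raw").description]
named_cols = [d[0] for d in conn.execute("SELECT * FROM v_named").description]
print(raw_cols)    # third name is the CASE expression text (version-dependent)
print(named_cols)  # clean names: id, name, isjack
```

So aliasing every computed column in the view definition sidesteps the field-name lookup problem Randy describes.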
[sqlalchemy] Re: autoload of db view treating columns as Decimal
I work with the OP. The columns are correctly typed and the reflection code seems to work correctly with views that are directly selecting from tables. It's when a view selects from another view that we have problems with the type being lost. Can anyone else say for sure whether it is supported to autoload a table object from an SQLite view? If it is, this would seem to be a bug. If not, I guess we are lucky it works in the first place. New feature maybe? Thanks.

On Jul 21, 7:09 pm, Michael Bayer mike...@zzzcomputing.com wrote: it is news to me that SQLAlchemy's table reflection code would even read a SQLite view (and that sqlite had views, actually). Make sure your view defines character based fields as VARCHAR, CHAR or similar. Note that SQLite has a very casual notion of types, so you can make up any type names that you want. NUMERIC, DECIMAL, and REAL will reflect as a SQLAlchemy Numeric, a type that by default returns python decimal values.

guruofgentoo wrote: In SQLite, I have a db view which selects from a second db view. When accessing this view as a sqlalchemy Table with autoload, the fields are being treated as Decimal. This raises an InvalidOperation exception when I attempt to access a varchar value. When I have a view selecting from table fields (tables created with sqlalchemy) instead of another view, this problem does not occur. Why is the type information being lost in multiple layers of views?
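The type-loss symptom can be inspected at the SQLite level with `PRAGMA table_info`, which is roughly what reflection consults for declared types. A stdlib sketch (names invented; exactly what SQLite reports for view columns varies by version, which is part of why reflecting views is fragile):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, label VARCHAR(50));
    CREATE VIEW v1 AS SELECT id, label FROM t;
    CREATE VIEW v2 AS SELECT id, label FROM v1;  -- view selecting from a view
""")

def declared_types(name):
    # pragma row layout: (cid, name, type, notnull, dflt_value, pk)
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({name})")}

print(declared_types("t"))   # real table: declared types are preserved
print(declared_types("v1"))  # direct view: type info depends on SQLite version
print(declared_types("v2"))  # view-of-view: type info is often lost entirely
```

When the declared type comes back empty or unrecognized, SQLAlchemy's ischema_names lookup has nothing sensible to map, which matches the "everything becomes Decimal" behavior the OP reports.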
[sqlalchemy] Missing Post: mssql and pyodbc weirdness
I made a post on Friday about some issues I was having with pyodbc and mssql. Now I can't find it. I feel like I am losing my mind. Does anyone remember seeing that post? I have tried just about every search combination I can think of and can't find the post anywhere. Thanks.
[sqlalchemy] Re: Missing Post: mssql and pyodbc weirdness
Someone just replied to the post, so thankfully, I was able to figure out where it was: http://groups.google.com/group/sqlalchemy/browse_thread/thread/a94ab57826e23274 Sorry for the list SPAM. I swear though, it's not coming up in the search, at least, not when sorted by date.

On Jun 1, 3:46 pm, Randy Syring ra...@rcs-comp.com wrote: I made a post on Friday about some issues I was having with pyodbc and mssql. Now I can't find it. I feel like I am losing my mind. Does anyone remember seeing that post? I have tried just about every search combination I can think of and can't find the post anywhere. Thanks.
[sqlalchemy] Re: mssql and pyodbc weirdness with data not saving
Darn! I thought I checked that. But you are right. Thank you for the post!

On Jun 1, 3:55 pm, mtrier mtr...@gmail.com wrote: Looks like you're using 0.5.3 I believe, which had problems with this. Upgrade to a later release. Thanks, Michael
[sqlalchemy] mssql and pyodbc weirdness with data not saving
I have been having a weird thing happen when using pyodbc with mssql. The create table statements work, but none of the INSERT statements do. For example, when I run code with echo on, I get this:

(TCSData) F:\Inetpub\TCSData\src\tcsdata-dist\tcsdatapysmvt broadcast initapp calling: action_pysapp_initapp
2009-05-29 02:22:17,194 INFO sqlalchemy.engine.base.Engine.0x...a350 BEGIN
2009-05-29 02:22:17,210 INFO sqlalchemy.engine.base.Engine.0x...a350 INSERT INTO users_permission (name) VALUES (?); select scope_identity()
2009-05-29 02:22:17,210 INFO sqlalchemy.engine.base.Engine.0x...a350 [u'webapp-controlpanel']
2009-05-29 02:22:17,210 INFO sqlalchemy.engine.base.Engine.0x...a350 COMMIT

But there is nothing in that table when I am done committing. I profiled the server, and here is the code sequence that it shows:

set implicit_transactions on
exec sp_datatype_info 93, @ODBCVer = 3
SET IMPLICIT_TRANSACTIONS OFF
BEGIN TRANSACTION
declare @P1 int
set @P1=1
exec sp_prepexec @P1 output, N'@P1 nvarchar(19)', N'INSERT INTO users_permission (name) VALUES (@P1); select scope_identity()', N'webapp-controlpanel'
select @P1
exec sp_unprepare 1
IF @@TRANCOUNT > 0 COMMIT TRAN

If I switch to adodbapi, then the insert works just fine.
[sqlalchemy] Re: Query.delete() doesn't cascade on Sqlite
Another solution is to use triggers in SQLite to enforce FK relationships.

http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers
http://code.google.com/p/sqlitefktg4sa/

On May 29, 6:59 am, Mike Conley mconl...@gmail.com wrote:

One solution is to change the commit strategy; issue commits periodically during the loop.

    parentlist = session.query(Parent).all()
    count = 0
    for parent in parentlist:
        session.delete(parent)
        count += 1
        if count % 100 == 0:  # use whatever frequency is needed
            count = 0
            session.commit()
    if count > 0:
        session.commit()  # this gets the last group of deletes

The disadvantage of this approach is that you lose the ability to rollback the entire delete process, and now must handle that problem with application design. -- Mike Conley

On Fri, May 29, 2009 at 4:08 AM, Harish Vishwanath harish.shas...@gmail.com wrote:

Hello, I am running Sqlite/SQLA/Elixir on an embedded system. I have different classes with OneToMany relationships and I have configured cascade = "all, delete, delete-orphan" correctly on them. However, for this to work, I should do something like:

    parentlist = session.query(Parent).all()
    for parent in parentlist:
        session.delete(parent)
    # session.commit()

The above chokes the system since it has limited memory. The statement below:

    session.query(Parent).delete()

issues DELETE FROM PARENT; This is memory efficient, but it doesn't delete the child objects since Sqlite doesn't impose FK constraints. Is there any way to solve this problem? Regards, Harish
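Not from the original thread, but the trigger approach Randy links can be sketched with stdlib sqlite3 (table and trigger names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (id INTEGER PRIMARY KEY,
                        parent_id INTEGER REFERENCES parent(id));
    -- emulate ON DELETE CASCADE with a per-row trigger
    CREATE TRIGGER parent_cascade_delete
        AFTER DELETE ON parent
        FOR EACH ROW
    BEGIN
        DELETE FROM child WHERE parent_id = OLD.id;
    END;
    INSERT INTO parent (id) VALUES (1), (2);
    INSERT INTO child (parent_id) VALUES (1), (1), (2);
""")
conn.execute("DELETE FROM parent WHERE id = 1")  # trigger fires per deleted row
remaining = conn.execute("SELECT parent_id FROM child").fetchall()
print(remaining)  # only the child of parent 2 survives
```

This lets a bulk `DELETE FROM parent` cascade inside the database rather than in Python memory, which is the point for Harish's embedded system. (Newer SQLite versions, 3.6.19+, can also enforce real FK constraints with `PRAGMA foreign_keys = ON`.)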
[sqlalchemy] Re: Query.delete() doesn't cascade on Sqlite
Mike, Well...I am not sure. I thought SQLite held transaction details in a .journal file and not in memory. I thought that the memory use might actually be a Python problem and not a result of SQLite. If my thoughts are correct, using the FK approach should keep deleting the children in SQLite, which should use a journal file, which should reduce memory usage. But I could be wrong. :)

On May 29, 9:28 am, Mike Conley mconl...@gmail.com wrote: Randy, Interesting approach to foreign key management. Harish indicates he is having a problem with restricted memory. Won't that still be true with triggers? After all, if the problem is that the transaction is too big, it will still be too big with all the pending deletes executed in a trigger. -- Mike Conley
[sqlalchemy] Re: Query.delete() doesn't cascade on Sqlite
Ah...missed that part. Well, the triggers are fired per-row, I believe. So, if you don't use a transaction explicitly, and just let SQLite run without one, it will be slower, but the resource usage should be much less.

On May 29, 10:31 am, Mike Conley mconl...@gmail.com wrote: Harish said it was an embedded system, probably all resources are pretty severely restricted. -- Mike Conley
[sqlalchemy] MSSQL Stored Procedures with output (OUT) parameters
I have searched the list and have seen some examples with Oracle and I have seen some examples with MSSQL using 'exec' but without parameters. So, I was hoping that someone could give me or point me to an example of using a MSSQL stored procedure with both input and output parameters as well as the stored procedure returning a result set (or two). Thanks.
[sqlalchemy] Stale ORM Objects Locking Best Practice (for nested sets)
I have a nested set implementation that I am working on. Currently, the steps involved to make a change to the table are:

1) Retrieve parent node (ORM object)
2) Create new node (same ORM object as #1)
3) Calculate UPDATE boundaries, etc. from anchor node
4) Create UPDATE SQL based on #3
5) Execute #4 using session.execute()
6) Set corrected nested set related values on new node
7) Issue session.flush() to INSERT new node
8) Set treeid of new node to node.id if node is root node
9) session.commit() (or rollback if needed)

This works so far, but I have two issues I would appreciate your help with. First, when I do multiple inserts in a row, I have problems unless I put commits between each insert: http://paste.pocoo.org/show/110607/ (code example). If I don't put commits in, then the ORM objects are stale (presumably b/c my UPDATE statements are affecting the underlying data) and subsequent inserts have the wrong values. So, ASSUMING my current thread is the only thing updating the table, it seems all I would need to do is somehow flag the session to make all ORM objects update from the DB before an attribute is used from that object.

But the assumption I just made isn't really valid. I would like this nested sets table/implementation to be usable by more than one thread, multiple processes, or even a different application altogether. How would I go about making the above method safe in a multi-thread/process/application environment? Basically, the worst case scenario would be that the node table is updated by a different application between steps #1 and #2 above. If that happens, when step #5 is executed, the node structure in the table would be hosed. The same could happen with a different thread or process. I was thinking that I could lock the table before step #1 and unlock after step #9, but was wondering if that would work and also if there is a better option.

I am currently testing this with SQLite, but would like it to work on MySQL, Postgres, or MSSQL as well. I *really appreciate* any input you can give.
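To make the staleness issue concrete (this sketch is mine, not from the thread; table and column names are invented), steps 3-5 above amount to reading an anchor boundary and then shifting lft/rgt values in raw SQL. Any ORM objects loaded before these UPDATEs run still hold the old lft/rgt values in memory, which is exactly why the objects go stale between inserts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, name TEXT,
                       lft INTEGER, rgt INTEGER);
    INSERT INTO node (name, lft, rgt) VALUES
        ('root', 1, 6), ('A', 2, 3), ('B', 4, 5);
""")
# step 3: read the anchor boundary (the parent's rgt) from the database
(r,) = conn.execute("SELECT rgt FROM node WHERE name = 'root'").fetchone()
# steps 4-5: shift everything at or right of the insert point by 2
conn.execute("UPDATE node SET rgt = rgt + 2 WHERE rgt >= ?", (r,))
conn.execute("UPDATE node SET lft = lft + 2 WHERE lft > ?", (r,))
# steps 6-7: the new node occupies the gap just opened up
conn.execute("INSERT INTO node (name, lft, rgt) VALUES ('C', ?, ?)", (r, r + 1))
rows = conn.execute("SELECT name, lft, rgt FROM node ORDER BY lft").fetchall()
print(rows)  # root now spans 1-8 and C sits at (6, 7)
```

If anything else modifies the table between the SELECT and the UPDATEs, the captured boundary `r` is wrong and the whole tree structure is corrupted — the race Randy describes between steps #1 and #5.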
[sqlalchemy] Re: Stale ORM Objects Locking Best Practice (for nested sets)
Michael, Thanks for your reply.

On Apr 1, 11:46 am, Michael Bayer mike...@zzzcomputing.com wrote: there's a nested set demo in the examples/ folder of the distribution, which uses MapperExtension as well as a little-used flag on the mapper called batch=False. are you building off of that?

No, I had written my own solution b/c I wanted to be able to support having multiple trees in the same table, but the solution in the example looked a little cleaner, so I switched to it.

as far as multithread/process, the UPDATE statements which nested sets requires would result in a lock of most of the table, thus protecting it from concurrent updates.

Yes, but I don't think that solves the problem. The problem isn't concurrent updates. The problem is that the update itself is built on stale data. Take code from the example: http://paste.pocoo.org/show/110684/ Note my comments. It seems to me that it is possible, although unlikely, that your entire tree structure will get corrupted unless you can assure that no other tree update process works between the connection.scalar() and connection.execute(). Otherwise, if the tree is updated between those two calls in a way that affects the value returned from connection.scalar(), then the node structure in the table gets corrupt.

nested sets is extremely inefficient in a concurrent write environment.

Agreed. The environment probably won't be so concurrent that I am concerned about efficiency. What I am concerned about is trying to prevent the node structure from getting corrupted. If there is even a remote possibility of the node structure getting corrupt, I would like to prevent it.

just that operator alone may render nested sets largely obsolete.

I will be happy when there is some kind of backend agnostic way to accomplish hierarchies. I can guarantee I won't miss nested sets. Thank you.
[sqlalchemy] Re: SQLite connections shared across threads (test case and logs)
On Nov 16, 11:30 am, Michael Bayer [EMAIL PROTECTED] wrote: I've looked into PyISAPIe. Suffice to say this seems to be an extremely new project. Their homepage is blank:

It has been around for a couple years: http://web.archive.org/web/*/http://pyisapie.sourceforge.net/ But I agree there isn't much of a user base. I have been in contact with the author of the software and he has been great to work with. I hope more people will begin to use the software so it can get tested.

It would be worthwhile to ask about threading.local() there, but it seems very possible that some interaction on their end is incompatible with threading.local(). I certainly wouldn't trust native python-embedded code with that low of a user base on a production site.

I went ahead and wrote a test and I do think something in PyISAPIe's thread local model is broken: http://groups.google.com/group/pyisapie/browse_thread/thread/f82ea13b8481d3eb I ran that same test with a Python WSGI server and with isapi-wsgi and I didn't get any id clashes. In addition, I ran the DB test we referenced in previous posts with the singleton pool against isapi-wsgi and didn't have any problems there either. The evidence is pointing at PyISAPIe at this point. But it's really a shame, as PyISAPIe has isapi-wsgi beat pretty badly on speed. I have emailed Phillip and hopefully he will have some time to look into the problem.

I agree that running something without much of a user base on a production system is not a good idea. But as far as I am aware, there are only two WSGI projects that interface with IIS: PyISAPIe and isapi-wsgi. Both are relatively new and untested products, but I am stuck with IIS at this point. If PyISAPIe's thread model doesn't get fixed, I will have to stick with isapi-wsgi. Thank you so much for all your help on this problem, especially since it turns out not to have had anything to do with sqlalchemy. I am truly grateful.
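The kind of test Randy describes (checking whether a host's thread-local model is sound) can be sketched with stdlib threading alone; this is my reconstruction, not his actual test code. Each worker stores its own identity in a `threading.local()` and verifies that no other thread's value ever leaks in:

```python
import threading

local = threading.local()
clashes = []  # records any thread that observes a foreign value

def worker(ident):
    local.value = ident
    # on a correct thread-local implementation every re-read sees our own value
    for _ in range(1000):
        if local.value != ident:
            clashes.append(ident)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(clashes)  # empty when thread-locals behave correctly
```

Under a broken embedding (as Randy suspected of PyISAPIe), the clash list would come back non-empty, exactly the "id clashes" he mentions.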
[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?
Problem turns out to have been with my ISAPI WSGI interface; it looks like it has a broken thread local model. More details here if anyone is interested: http://groups.google.com/group/sqlalchemy/browse_thread/thread/fbca1399020f6a2e

On Nov 6, 5:19 pm, Randy Syring [EMAIL PROTECTED] wrote: Thank you for taking the time to respond, I really appreciate it!

On Nov 6, 2:46 pm, Michael Bayer [EMAIL PROTECTED] wrote: you should definitely create the engine and metadata on a per-process basis. When using SQLite, the engine automatically chooses the SingletonThreadPool connection pool, which will maintain a single SQLite connection per application thread, which is never moved across threads (unless you did so explicitly).

Ah, well there is something I have overlooked. I have been forgetting that there is a connection object since I never use it directly. I was actually thinking that the engine was the connection, but I see now that is not accurate. But would I need to close the connection explicitly after each request?

The error you're getting would only occur if you are sharing the connection returned by the engine across threads, which can also occur if you're using a single Session that's bound to a connection across threads. When using the scoped_session() manager, this also should not occur - some description of this lifecycle is at http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_... .

I do not believe that I am sharing the connection object across threads, at least not deliberately. The only sqlalchemy objects I am working with are an engine (which is stored at the process level), the metadata (which is unbound and stored at the process level), and a scoped session. At the end of each request, I call remove() on the scoped session class, which I assumed was enough. Would there be anything else I should do at the end of a request in order to clean up? Also, I am using Elixir.
Is it possible that Elixir is holding on to a connection object I don't know about? It uses scoped sessions by default as well. Thanks!
[sqlalchemy] SQLite connections shared across threads (test case and logs)
As noted here, I have been having some problems with SQLite connections: http://groups.google.com/group/sqlalchemy/browse_thread/thread/5f742fdd313f3da9/ I went ahead and produced what I hope is a very narrow test case to show that I am not explicitly holding onto connections (unless I completely misunderstand, which is possible). Here is my test code: http://paste.pocoo.org/show/91285/ When I run this through a native Python wsgi server, I get the following log: http://paste.pocoo.org/show/91286/ It has some ProgrammingErrors related to SQLite connections, but they are caught and don't propagate to my application and don't interfere with anything (as far as I can tell). When I run the exact same code using PyISAPIe, I get the following log file, which shows many exceptions all propagating up to my application and hosing it: http://paste.pocoo.org/show/91287/ The log files show 100 requests using the apache benchmark tool. In the logs, the number in parentheses just before the message is the thread id. Please help me with this. I have a web application that I have spent ~150 hours on that I need to get working for a customer. Everything was ready to go, then I moved it to the production box and started getting these errors. Thank you!
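The specific error class in these logs can be reproduced in a few lines of stdlib code (illustrative, not from the test case): a sqlite3 connection created in one thread raises ProgrammingError when used from another, because `check_same_thread` defaults to True:

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:")  # created in the main thread
errors = []

def use_from_other_thread():
    try:
        conn.execute("SELECT 1")  # same connection, different thread
    except sqlite3.ProgrammingError as exc:
        errors.append(str(exc))

t = threading.Thread(target=use_from_other_thread)
t.start()
t.join()
print(errors)  # sqlite3 refuses cross-thread use of the connection
```

This is purely a client-side guard in the sqlite3 driver, which is why the question in this thread reduces to "how did a connection created in thread A end up checked out by thread B".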
[sqlalchemy] Re: SQLite connections shared across threads (test case and logs)
On Nov 15, 6:39 pm, Michael Bayer [EMAIL PROTECTED] wrote:

Thank you so much for your response, I am extremely grateful. However, I am still getting exceptions thrown from SQLite for sharing connections across threads.

The explicit connection as well as the threadlocal strategy are all unnecessary here. Configuring the sessionmaker() with a bind to a plain engine, i.e. create_engine('sqlite:///mydb.sql'), and making sure sess.close() is called within the WSGI method are all that's needed. Pattern here is:

    Session = sessionmaker(bind=engine)
    sess = Session()
    try:
        # work with session
    finally:
        sess.close()

Alternatively, as I noted previously in http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_... , using scoped_session in conjunction with Session.remove() at the end of the request works here as well; as I mentioned, this is the practice that is standard among popular web frameworks such as Pylons. Pattern here is:

    Session = scoped_session(sessionmaker(bind=engine))
    sess = Session()
    try:
        # work with session
    finally:
        Session.remove()

I have updated my code per your directions, I believe: http://paste.pocoo.org/show/91318/ I wasn't 100% sure where Session should be instantiated: at the module level or at the request level. Looking at the docs and also at how Pylons does it, it seems that it should be instantiated at the module/application level. My example above shows it that way, but I tried it the other way as well, with similar results. I also tried using a non-contextual session and ended up with the same results. Note in the code above that I have added some exception logging to tell where the exceptions are being generated.

The Session is then garbage collected via asynchronous gc, the connection is returned to the pool, and the pool's attempt to rollback() the connection before returning to the pool raises the exception. The exception does not propagate outwards since it is during garbage collection.
This is why the program keeps running without overall issue (except for your ISAPI plugin, which probably cannot handle that kind of thing gracefully).

Well, I am not sure about the details, but the log messages say that the exceptions are being thrown by my query() call, not during garbage collection. Out of 100 requests, 14 ended in failures, and all of them were from my query() call. The log files also give some more information which I hope will be helpful:

2008-11-15 21:53:51,015 INFO (5548) Connection sqlite3.Connection object at 0x01F41AA0 checked out from pool
2008-11-15 21:53:51,015 INFO (5548) Connection sqlite3.Connection object at 0x01F41AA0 being returned to pool
[snip]
2008-11-15 21:53:51,515 INFO (5412) start response
2008-11-15 21:53:51,515 INFO (5412) Connection sqlite3.Connection object at 0x01F41AA0 checked out from pool

Ok, note above that thread 5548 checks out a connection object. Then a little while later, thread 5412 also checks out the *same* connection object (technically, the connection object at that memory location could have been closed and replaced by a new one, but the logs don't show the connection being closed and the errors below would seem to confirm it's the same object). My understanding of what you have said about the SingletonThreadPool is that that should not happen.
Once 5412 checks out the connection object created in 5548, the following log output is generated:

2008-11-15 21:53:51,515 INFO (5412) Invalidate connection sqlite3.Connection object at 0x01F41AA0 (reason: ProgrammingError:SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 5548 and this is thread id 5412)
2008-11-15 21:53:51,515 INFO (5412) Closing connection sqlite3.Connection object at 0x01F41AA0
2008-11-15 21:53:51,515 INFO (5412) Connection sqlite3.Connection object at 0x01F41AA0 threw an error on close: SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 5548 and this is thread id 5412
2008-11-15 21:53:51,515 INFO (5412) Connection None being returned to pool
2008-11-15 21:53:51,515 INFO (5412) query exception: (ProgrammingError) SQLite objects created in a thread can only be used in that same thread.The object was created in thread id 5548 and this is thread id 5412 None [{}]
2008-11-15 21:53:51,515 INFO (5412) end response

Are you sure this isn't a problem with pulling the wrong connection from the pool?

Still another way to do this is to eliminate the source of the error at the pool level - ultimately, the SingletonThreadPool is attempting to return the connection to the pool and call rollback() on it, which is why the threaded access fails. If you use NullPool, the connection is thrown away entirely when closed and nothing is done to it. Any version of your program will run without errors if NullPool is used - you'll just get a little
[sqlalchemy] Should create_engine() be called per-process or per-request in threaded wsgi environment?
I am developing a WSGI based web framework with sqlalchemy. I am unclear about when create_engine() should be called. I initially thought that engine creation and metadata would be initialized per process and each thread/request would just get a new session. However, I have recently run into error messages when using sqlite with the framework in a threaded WSGI server: SQLite objects created in a thread can only be used in that same thread... That led me to this thread: http://groups.google.com/group/pylons-discuss/browse_thread/thread/3d3009cd7421c45a/ed0f3dde401ff474?lnk=gst Can someone weigh in on this issue? What are the performance ramifications of needing to create an engine on each request as opposed to each process? Do I also need to load my meta data on each request or could I just re-bind the engine to the metadata on each request? Should I not bind the engine to the metadata at all but just bind it to the session? Thanks. --~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] Re: Should create_engine() be called per-process or per-request in threaded wsgi environment?
Thank you for taking the time to respond, I really appreciate it! On Nov 6, 2:46 pm, Michael Bayer [EMAIL PROTECTED] wrote: you should definitely create the engine and metadata on a per-process basis. When using SQLite, the engine automatically chooses the SingletonThreadPool connection pool, which will maintain a single SQLite connection per application thread, which is never moved across threads (unless you did so explicitly). Ah, well there is something I have overlooked. I have been forgetting that there is a connection object since I never use it directly. I was actually thinking that the engine was the connection, but I see now that is not accurate. But would I need to close the connection explicitly after each request? The error you're getting would only occur if you are sharing the connection returned by the engine across threads, which can also occur if you're using a single Session that's bound to a connection across threads. When using the scoped_session() manager, this also should not occur - some description of this lifecycle is at http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_... . I do not believe that I am sharing the connection object across threads, at least not deliberately. The only sqlalchemy objects I am working with are an engine (which is stored at the process level), the metadata (which is unbound and stored at the process level), and a scoped session. At the end of each request, I call remove() on the scoped session class, which I assumed was enough. Would there be anything else I should do at the end of a request in order to clean up? Also, I am using Elixir. Is it possible that Elixir is holding on to a connection object I don't know about? It uses scoped sessions by default as well. Thanks!
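The lifecycle described above can be sketched as follows, using the modern scoped_session API (the same pattern as 0.5's contextual session); `handle_request()` is a hypothetical stand-in for a WSGI request handler:

```python
# One engine and one scoped_session registry per process; each thread
# gets its own thread-local Session, released with remove() at the end
# of each request.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")                  # once per process
Session = scoped_session(sessionmaker(bind=engine))  # once per process

def handle_request():
    session = Session()  # thread-local session for this request
    try:
        return session.execute(text("SELECT 42")).scalar()
    finally:
        Session.remove()  # closes the session and returns its connection

answer = handle_request()
```

Calling Session.remove() at the end of every request, as the poster already does, is the key cleanup step; the connection is released back to the pool at that point.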
[sqlalchemy] adding children to ORM object using property causes problems, maybe a bug?
After some work with Gedd on #sqlalchemy, it seems that adding children to a parent object using a custom property() doesn't work as we expected it would. A test case is here: http://paste.pocoo.org/show/86848/ The error is triggered by line #53. Are we doing something wrong or is this a bug in SA? Thanks!
[sqlalchemy] How to turn off UPDATE on child objects when deleting parent object
More details are here: http://groups.google.com/group/sqlelixir/browse_thread/thread/aac5d22702e3a8ec But basically, I have a relationship between a parent (Item) table and child (Link) table. When I try to delete an Item, an SQL statement is generated by SQLAlchemy that tries to set Link.item_id = NULL. That is invalid, because Link.item_id is a NOT NULL column and also because I have a FK on the column. I have tried adjusting the cascade options to no avail. I would *really* appreciate some help on this, it has caused me to come to a grinding halt on a project I am working on that needs to be done by the end of the week. Thanks!
[sqlalchemy] Re: How to turn off UPDATE on child objects when deleting parent object
Sorry, one follow-up. I can actually get this to work by setting cascade='all, delete-orphan'. But since the FK will do a cascading delete, it is slower for SQLAlchemy to have to issue the statements. Can I just make SQLAlchemy not issue anything when deleting the parent? On Sep 30, 2:14 am, Randy Syring [EMAIL PROTECTED] wrote: More details are here: http://groups.google.com/group/sqlelixir/browse_thread/thread/aac5d22... But basically, I have a relationship between a parent (Item) table and child (Link) table. When I try to delete an Item, an SQL statement is generated by SQLAlchemy that tries to set Link.item_id = NULL. That is invalid, because Link.item_id is a NOT NULL column and also because I have a FK on the column. I have tried adjusting the cascade options to no avail. I would *really* appreciate some help on this, it has caused me to come to a grinding halt on a project I am working on that needs to be done by the end of the week. Thanks!
[sqlalchemy] Re: How to turn off UPDATE on child objects when deleting parent object
Simon, THANK YOU!! Yes, I believe that will do it. On Sep 30, 5:12 am, King Simon-NFHD78 [EMAIL PROTECTED] wrote: -Original Message- From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Randy Syring Sent: 30 September 2008 07:17 To: sqlalchemy Subject: [sqlalchemy] Re: How to turn off UPDATE on child objects when deleting parent object On Sep 30, 2:14 am, Randy Syring [EMAIL PROTECTED] wrote: More details are here: http://groups.google.com/group/sqlelixir/browse_thread/thread/aac5d22... But basically, I have a relationship between a parent (Item) table and child (Link) table. When I try to delete an Item, an SQL statement is generated by SQLAlchemy that tries to set Link.item_id = NULL. That is invalid, because Link.item_id is a NOT NULL column and also because I have a FK on the column. I have tried adjusting the cascade options to no avail. Sorry, one follow-up. I can actually get this to work by setting cascade='all, delete-orphan'. But since the FK will do a cascading delete, it is slower for SQLAlchemy to have to issue the statements. Can I just make SQLAlchemy not issue anything when deleting the parent? I think you need the 'passive_deletes' flag to tell SQLAlchemy that the database will delete child objects for you. It is described here: http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relatio... Hope that helps, Simon
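A sketch of the passive_deletes solution, written against the modern declarative API rather than the 0.5/Elixir code in the thread; the Item/Link names mirror the thread, everything else is illustrative:

```python
# With passive_deletes=True, SQLAlchemy leaves child-row cleanup to the
# database's ON DELETE CASCADE instead of loading the collection and
# issuing per-row UPDATE/DELETE statements.
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    links = relationship(
        "Link",
        cascade="all, delete-orphan",
        passive_deletes=True,  # trust the DB-side ON DELETE CASCADE
    )

class Link(Base):
    __tablename__ = "links"
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey("items.id", ondelete="CASCADE"),
                     nullable=False)

engine = create_engine("sqlite://")

# SQLite only enforces FK constraints (and cascades) with this pragma on.
@event.listens_for(engine, "connect")
def _enable_fks(dbapi_conn, _record):
    dbapi_conn.execute("PRAGMA foreign_keys=ON")

Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Item(links=[Link(), Link()]))
    session.commit()
    item = session.get(Item, 1)
    session.delete(item)  # emits only DELETE FROM items; no child UPDATEs
    session.commit()
    remaining = session.query(Link).count()  # cascade removed the links
```

Without passive_deletes, SQLAlchemy would load the links collection and delete each row itself; with it, only the parent DELETE is emitted.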
[sqlalchemy] Looking for HTML table generator based on SQLAlchemy query
Has anyone written or seen something that would take an SQLAlchemy query, any query, and turn the resulting recordset into an HTML table? Bonus points for providing a sort, filter, and paging feature. Thanks.
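No such generator is named in the thread, but the basic rendering step is small enough to sketch; `rows_to_html_table` is a hypothetical helper that works on any column list plus iterable of row tuples (e.g. a SQLAlchemy result's `.keys()` and its rows), with sorting, filtering, and paging left to be layered on top:

```python
# Render column names and row tuples as a simple, escaped HTML table.
from html import escape

def rows_to_html_table(columns, rows):
    head = "".join(f"<th>{escape(str(c))}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(v))}</td>" for v in row) + "</tr>"
        for row in rows
    )
    return (f"<table><thead><tr>{head}</tr></thead>"
            f"<tbody>{body}</tbody></table>")

# e.g. with SQLAlchemy: html = rows_to_html_table(result.keys(), result)
table = rows_to_html_table(["id", "name"], [(1, "a"), (2, "b")])
```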
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Here is my initial stab at this: http://code.google.com/p/sqlitefktg4sa/ Code plus tests. I welcome your feedback. On Aug 22, 1:48 pm, jason kirtland [EMAIL PROTECTED] wrote: Yep, though possibly you'd want it on before-drop. You can actually handle both tasks in the same function if you like- the event name will be passed in as the first argument. Randy Syring wrote: Jason, Thank you for the response. Using the method you suggest, am I understanding correctly that fks_for_sqlite() would only be run when a create() was processed for that table? Also, I am assuming I would need to create a complimentary function for handling the 'after-drop' event. On Aug 22, 1:25 pm, jason kirtland [EMAIL PROTECTED] wrote: DDL() has some simple templating capabilities that can help out a bit here, but I'd suggest taking the ForeignKey code Mike provided as a start and putting together an after-create listener using Table.append_ddl_listener directly: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... It would look something like: def fks_for_sqlite(event, table, bind): for c in table.c: for fk in c.foreign_keys: sql = your_code_to_make_trigger_for_fk(fk) bind.execute(sql) tbl.append_ddl_listener('after-create', fks_for_sqlite) Michael Bayer wrote: you can build this functionality using the DDL() construct provided by SQLAlchemy: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... the ForeignKey objects on the table can be pulled out using: for c in table.c: for fk in c.foreign_keys: do_something_with_fk(fk) On Aug 22, 2008, at 11:19 AM, Randy Syring wrote: I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships. The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_... 
Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship. Thanks.
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Gaëtan, Thank you. On Aug 27, 5:23 am, Gaetan de Menten [EMAIL PROTECTED] wrote: On Wed, Aug 27, 2008 at 3:21 AM, Randy Syring [EMAIL PROTECTED] wrote: Ok, so I was going to try and implement a solution using the method discussed here, but ran into a problem b/c I am using Elixir objects and not declaring the tables directly. Can I still use this method? How do I get the table references from the Elixir objects? YourEntity.table After running setup_all(), and obviously before running create_all() or similar (metadata.create_all(), etc...). -- Gaëtan de Menten http://openhex.org
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Ok, so I was going to try and implement a solution using the method discussed here, but ran into a problem b/c I am using Elixir objects and not declaring the tables directly. Can I still use this method? How do I get the table references from the Elixir objects? Thanks. On Aug 22, 1:48 pm, jason kirtland [EMAIL PROTECTED] wrote: Yep, though possibly you'd want it on before-drop. You can actually handle both tasks in the same function if you like- the event name will be passed in as the first argument. Randy Syring wrote: Jason, Thank you for the response. Using the method you suggest, am I understanding correctly that fks_for_sqlite() would only be run when a create() was processed for that table? Also, I am assuming I would need to create a complementary function for handling the 'after-drop' event. On Aug 22, 1:25 pm, jason kirtland [EMAIL PROTECTED] wrote: DDL() has some simple templating capabilities that can help out a bit here, but I'd suggest taking the ForeignKey code Mike provided as a start and putting together an after-create listener using Table.append_ddl_listener directly: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... It would look something like: def fks_for_sqlite(event, table, bind): for c in table.c: for fk in c.foreign_keys: sql = your_code_to_make_trigger_for_fk(fk) bind.execute(sql) tbl.append_ddl_listener('after-create', fks_for_sqlite) Michael Bayer wrote: you can build this functionality using the DDL() construct provided by SQLAlchemy: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... the ForeignKey objects on the table can be pulled out using: for c in table.c: for fk in c.foreign_keys: do_something_with_fk(fk) On Aug 22, 2008, at 11:19 AM, Randy Syring wrote: I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships.
The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_... Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship. Thanks.
[sqlalchemy] SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships. The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_trigger_generator Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship. Thanks.
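A sketch of the trigger-per-FK approach the SQLite wiki page describes: a BEFORE DELETE trigger on the parent that aborts while child rows still reference it (the table/column names and the trigger naming scheme are illustrative, not from the poster's generator):

```python
# Build and exercise a DELETE-restricting trigger with the stdlib
# sqlite3 module.
import sqlite3

def restrict_delete_trigger(child, fk_col, parent, pk_col):
    """DDL for a trigger emulating ON DELETE RESTRICT on child.fk_col."""
    return (
        f"CREATE TRIGGER fkd_{child}_{fk_col} "
        f"BEFORE DELETE ON {parent} FOR EACH ROW BEGIN "
        f"SELECT RAISE(ABORT, 'foreign key violation on {child}.{fk_col}') "
        f"WHERE EXISTS (SELECT 1 FROM {child} WHERE {fk_col} = OLD.{pk_col}); "
        f"END;"
    )

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY);
    CREATE TABLE links (id INTEGER PRIMARY KEY, item_id INTEGER NOT NULL);
    INSERT INTO items VALUES (1);
    INSERT INTO links VALUES (1, 1);
""")
conn.executescript(restrict_delete_trigger("links", "item_id", "items", "id"))

try:
    conn.execute("DELETE FROM items WHERE id = 1")
    blocked = False
except sqlite3.DatabaseError:
    blocked = True  # the trigger raised, so the delete was rejected
```

Worth noting for later readers: SQLite 3.6.19+ enforces foreign keys natively via PRAGMA foreign_keys=ON, so emulation triggers like this are only needed on the older SQLite builds current at the time of this thread.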
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Thank you, I will look into this. On Aug 22, 12:44 pm, Michael Bayer [EMAIL PROTECTED] wrote: you can build this functionality using the DDL() construct provided by SQLAlchemy: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... the ForeignKey objects on the table can be pulled out using: for c in table.c: for fk in c.foreign_keys: do_something_with_fk(fk) On Aug 22, 2008, at 11:19 AM, Randy Syring wrote: I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships. The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_... Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship. Thanks.
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Jason, Thank you for the response. Using the method you suggest, am I understanding correctly that fks_for_sqlite() would only be run when a create() was processed for that table? Also, I am assuming I would need to create a complementary function for handling the 'after-drop' event. On Aug 22, 1:25 pm, jason kirtland [EMAIL PROTECTED] wrote: DDL() has some simple templating capabilities that can help out a bit here, but I'd suggest taking the ForeignKey code Mike provided as a start and putting together an after-create listener using Table.append_ddl_listener directly: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... It would look something like: def fks_for_sqlite(event, table, bind): for c in table.c: for fk in c.foreign_keys: sql = your_code_to_make_trigger_for_fk(fk) bind.execute(sql) tbl.append_ddl_listener('after-create', fks_for_sqlite) Michael Bayer wrote: you can build this functionality using the DDL() construct provided by SQLAlchemy: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_s... the ForeignKey objects on the table can be pulled out using: for c in table.c: for fk in c.foreign_keys: do_something_with_fk(fk) On Aug 22, 2008, at 11:19 AM, Randy Syring wrote: I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships. The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_... Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship.
Thanks.
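The Table.append_ddl_listener API in this thread is from SQLAlchemy 0.5; in modern SQLAlchemy the same after-create hook is attached with the event system. A sketch under that assumption, which only records the FKs the listener sees where real code would generate and execute trigger DDL:

```python
# Modern equivalent of the 0.5 'after-create' DDL listener: an
# event.listens_for hook on the Table that walks its foreign keys.
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, Table,
                        create_engine, event)

metadata = MetaData()
items = Table("items", metadata, Column("id", Integer, primary_key=True))
links = Table("links", metadata,
              Column("id", Integer, primary_key=True),
              Column("item_id", Integer, ForeignKey("items.id")))

seen = []

@event.listens_for(links, "after_create")
def fks_for_sqlite(target, connection, **kw):
    for col in target.c:
        for fk in col.foreign_keys:
            # Real code would build the trigger DDL for fk and run it, e.g.
            # connection.execute(text(make_trigger_for_fk(fk)))
            seen.append(f"{target.name}.{col.name} -> "
                        f"{fk.column.table.name}.{fk.column.name}")

metadata.create_all(create_engine("sqlite://"))
```

A matching "after_drop" listener would remove the triggers, mirroring the before-drop/after-drop discussion above.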