[sqlalchemy] Integration with MSSQL and varchar field above 255 characters
Hi, I am getting the error "The data types varchar and text are incompatible in the equal to operator" when I try to run a filter or select on a column defined as VARCHAR(500). This happens only when the record in question has more than 255 characters. The funny thing is that in the MSSQL query editor this error does not appear. When I replace == with like in this query I am able to fetch the results without a problem. Although my problem is solved, I would like to know what actually happened when I used ==. Thanks. -- You received this message because you are subscribed to the Google Groups sqlalchemy group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy?hl=en. For more options, visit https://groups.google.com/groups/opt_out.
[sqlalchemy] Strange behavior using relationships, inheritance and decl_enum.py
Hi All, I am using 0.7.10, an in-memory SQLite database, and decl_enum.py from http://techspot.zzzeek.org/2011/01/14/the-enum-recipe/ for testing.

Scenario 1 - test.py: I ran it 8 times. 7 out of 8 runs it works and I get the following output:

0.7.10
1 10 Action Main /gvv
2 20 Action Main /gvv1
3 30 Menu Main Sys
4 10 Action Test test

1 run out of 8 I get the following error:

sqlalchemy.exc.IntegrityError: (IntegrityError) PRIMARY KEY must be unique u'INSERT INTO MenuItem (Id, TypeOfMenuItem, ItemMenu_Id, Line, Desc, CreateTimeStamp, CreateOpId_Id, ModifiedTimeStamp, ModifiedOpId_Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)' ((1, 'Action', 1, 10, 'Action 1', '2013-04-01 23:57:56.664654', 1, None, None), (2, 'Action', 1, 20, 'Action 2', '2013-04-01 23:57:56.664667', 1, None, None), (1, 'Menu', 1, 30, 'Menu', '2013-04-01 23:57:56.664674', 1, None, None))

Scenario 2 - test1.py: I have included the table ProductClass just for noise; as in scenario 1, it works. But adding index=True to ProductClass.Class:

    Class = Column(String(16), index=True, nullable=False)

always gives me the following error:

sqlalchemy.exc.IntegrityError: (IntegrityError) PRIMARY KEY must be unique u'INSERT INTO MenuItem (Id, TypeOfMenuItem, ItemMenu_Id, Line, Desc, CreateTimeStamp, CreateOpId_Id, ModifiedTimeStamp, ModifiedOpId_Id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)' (1, 'Menu', 1, 30, 'Menu', '2013-04-02 00:05:20.569187', 1, None, None)

What am I doing wrong? Thank you in advance for your help.
from sqlalchemy.types import SchemaType, TypeDecorator, Enum
from sqlalchemy import __version__
import re

if __version__ < '0.6.5':
    raise NotImplementedError("Version 0.6.5 or higher of SQLAlchemy is required.")


class EnumSymbol(object):
    """Define a fixed symbol tied to a parent class."""

    def __init__(self, cls_, name, value, description):
        self.cls_ = cls_
        self.name = name
        self.value = value
        self.description = description

    def __reduce__(self):
        """Allow unpickling to return the symbol linked to the DeclEnum class."""
        return getattr, (self.cls_, self.name)

    def __iter__(self):
        return iter([self.value, self.description])

    def __repr__(self):
        return "<%s>" % self.name


class EnumMeta(type):
    """Generate new DeclEnum classes."""

    def __init__(cls, classname, bases, dict_):
        cls._reg = reg = cls._reg.copy()
        for k, v in dict_.items():
            if isinstance(v, tuple):
                sym = reg[v[0]] = EnumSymbol(cls, k, *v)
                setattr(cls, k, sym)
        return type.__init__(cls, classname, bases, dict_)

    def __iter__(cls):
        return iter(cls._reg.values())


class DeclEnum(object):
    """Declarative enumeration."""

    __metaclass__ = EnumMeta
    _reg = {}

    @classmethod
    def from_string(cls, value):
        try:
            return cls._reg[value]
        except KeyError:
            raise ValueError(
                "Invalid value for %r: %r" % (cls.__name__, value)
            )

    @classmethod
    def values(cls):
        return cls._reg.keys()

    @classmethod
    def db_type(cls):
        return DeclEnumType(cls)


class DeclEnumType(SchemaType, TypeDecorator):
    def __init__(self, enum):
        self.enum = enum
        self.impl = Enum(
            *enum.values(),
            name="ck%s" % re.sub(
                '([A-Z])',
                lambda m: "_" + m.group(1).lower(),
                enum.__name__)
        )

    def _set_table(self, table, column):
        self.impl._set_table(table, column)

    def copy(self):
        return DeclEnumType(self.enum)

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return value.value

    def process_result_value(self, value, dialect):
        if value is None:
            return None
        return self.enum.from_string(value.strip())


if __name__ == '__main__':
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class EmployeeType(DeclEnum):
        part_time = "P", "Part Time"
        full_time = "F", "Full Time"
        contractor = "C", "Contractor"

    class Employee(Base):
        __tablename__ = 'employee'
        id = Column(Integer, primary_key=True)
        name = Column(String(60), nullable=False)
        type = Column(EmployeeType.db_type())
[sqlalchemy] Re: Dialect for Vertica db connectivity ?
I put a rework of the code posted by Bo into a package: https://pypi.python.org/pypi/vertica-sqlalchemy/0.1 Selects, joins, and table introspection work. Let me know if you can use it. Does anyone have an email address for Bo so I can attribute him and check the license? thanks, James

On Saturday, 16 March 2013 22:44:56 UTC, Femi Anthony wrote: Jonathan, thanks a lot. I'll test it out using the PostgreSQL dialect. Femi

On Friday, March 15, 2013 4:06:33 PM UTC-4, Jonathan Vanasco wrote: @Femi - I did a quick search online, but couldn't find any current (since the HP acquisition) documentation. HOWEVER, all of the old documentation and Q&As that are still online talk about Vertica reimplementing the PostgreSQL syntax and functions. That's in line with what I remembered earlier, where the psql client was even their recommended command-line interface. (Also, it was invented/started by the same guy who started PostgreSQL.) It's possible that things have changed, but I would try treating it as PostgreSQL. Unless they did a huge 180, I think that should work.
Re: [sqlalchemy] Integration with MSSQL and varchar field above 255 characters
This is a behavior of ODBC and/or FreeTDS and how it converts data when sending to the database. You might need to use CAST() to make sure the datatypes line up.

On Apr 1, 2013, at 8:07 AM, Serendipity udn...@gmail.com wrote: [...]
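To illustrate the CAST() suggestion above, here is a minimal sketch. The `docs` table and its `body` column are hypothetical stand-ins for the real schema; casting the TEXT-leaning side to VARCHAR makes both sides of the comparison the same type:

```python
from sqlalchemy import Column, MetaData, String, Table, Text, cast

# hypothetical table standing in for the real one
metadata = MetaData()
docs = Table("docs", metadata, Column("body", Text))

# wrap the column in CAST so the comparison is VARCHAR = VARCHAR
expr = cast(docs.c.body, String(500)) == "some value longer than 255 characters"
compiled = str(expr)  # renders a CAST(... AS VARCHAR(500)) comparison
```

In the ORM this would look like `session.query(Doc).filter(cast(Doc.body, String(500)) == value)`.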
Re: [sqlalchemy] Strange behavior using relationships, inheritance and decl_enum.py
test.py doesn't fail for me; test1.py does. Not sure what's going on with it yet, will know today.

On Apr 1, 2013, at 9:11 AM, gvv gvver...@gmail.com wrote: [...]

Attachments: decl_enum.py, test.py, test1.py
Re: [sqlalchemy] Strange behavior using relationships, inheritance and decl_enum.py
OK, this is http://www.sqlalchemy.org/trac/ticket/2689, one of those issues that, as always, is super surprising not to have been found sooner, since it's pretty fundamental. Your scripts will work if you ensure a dependency between the MenuItemMenu/MenuItemAction tables and the parent MenuItem table. The easiest way is to put a ForeignKey in the primary key of each:

class MenuItemAction(MenuItem):
    __tablename__ = 'MenuItemAction'
    Id = Column(Integer, ForeignKey('MenuItem.Id'), primary_key=True)
    # ...

class MenuItemMenu(MenuItem):
    __tablename__ = 'MenuItemMenu'
    Id = Column(Integer, ForeignKey('MenuItem.Id'), primary_key=True)
    # ...

You can make either script fail about 50% of the time by randomizing the unit of work's internal representations using this recipe:

# put this at the top of the test script
from sqlalchemy.orm import unitofwork, session, mapper, dependency
from sqlalchemy.util import topological
from sqlalchemy.testing.util import RandomSet
topological.set = unitofwork.set = session.set = mapper.set = \
    dependency.set = RandomSet

On Apr 1, 2013, at 10:39 AM, Michael Bayer mike...@zzzcomputing.com wrote: [...]
[sqlalchemy] unexpected behavior with the orm
I ran into a situation the other day where I would create a new object but copy over some properties from an earlier version of the object. To do this, I first created the new object, then queried the database for the latest copy of the object, copied properties to the new object, then added the new object to the session and committed. I was very surprised to discover that the new object (which was not added to the session) was returned from the database query as the latest object.

bar = session.query(Bar).first()
new_foo = Foo()
new_foo.bar = bar  # apparently causes new_foo to be added to the session
latest_foo = session.query(Foo).filter_by(...).order_by(Foo.creation_date.desc()).first()
assert new_foo is latest_foo  # -> True

This behavior is bizarre to me, as I'd expect a query to return only objects that I had explicitly added to the session via session.add(new_foo), or in a slightly more implicit way such as adding new_foo to a managed object (bar.foos.append(new_foo)). Anyway, I went back and apparently it's the same behavior in 0.6/0.7/0.8, so it may not be a bug. Is there a rationale at least? Or is it a corner case that could be fixed? The gist below is a full example that can be run on the above versions of SQLAlchemy. https://gist.github.com/mmerickel/5286502 Thanks, Michael
Re: [sqlalchemy] unexpected behavior with the orm
On Apr 1, 2013, at 2:10 PM, Michael Merickel mmeri...@gmail.com wrote: [...]

This behavior is known as "cascade", and is described in detail here: http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#cascades It can be fully customized, but it seems like you are at least expecting the basic forwards-direction save-update cascade to occur. Cascade also takes place during backref events. The specific behavior regarding bi-directional cascades due to backrefs, as well as controlling their behavior using the cascade_backrefs flag, is described here: http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#controlling-cascade-on-backrefs
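A minimal sketch of the forward save-update cascade described above, using hypothetical Foo/Bar models (not the original poster's mappings): appending a new object to a persistent parent's collection pulls it into the Session before any explicit add().

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship

Base = declarative_base()

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    foos = relationship("Foo", backref="bar")

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey("bar.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(bind=engine)
bar = Bar()
session.add(bar)
session.commit()

# forward save-update cascade: appending to a persistent parent's
# collection places the brand-new object into the parent's Session
new_foo = Foo()
bar.foos.append(new_foo)
in_session_before_flush = new_foo in session
session.commit()
```

The backref direction (`new_foo.bar = bar` adding `new_foo` to the session, as in the original post) is the part governed by the cascade_backrefs flag.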
[sqlalchemy] Update object with results of a server-side processing [pgsql]
Hi! I've got a server-side trigger function (before insert) that changes some fields of the inserted record, and I need this info back in my SA entity object (akin to what SA does with IDs). SA uses RETURNING whenever it is supported; maybe I can use it as well? Or am I limited to refreshing the object manually via session.refresh()? Thanks!
Re: [sqlalchemy] Transactional DDL and SQLite?
On Friday, 18 February 2011 08:14:28 UTC-8, Michael Bayer wrote: we've put tickets on their tracker to this effect, that they should be more liberal about considering when the transaction begins. http://code.google.com/p/pysqlite/issues/detail?id=21 pysqlite is tricky since I don't know if the Python.org tracker or the code.google.com tracker is more appropriate. In any case it doesn't seem like a lot is being done.

In case anyone else needs this... a patch was submitted for the bundled sqlite3 module in http://bugs.python.org/issue10740 and for pysqlite at https://code.google.com/p/pysqlite/issues/detail?id=24, adding an optional `operation_needs_transaction_callback` to the connection. I've added this to my pysqlite-static-env branch, along with instructions for setting this up for sqlalchemy, here: https://code.google.com/p/pysqlite-static-env/ Laurence
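Independent of those patches, the underlying point is that SQLite itself supports transactional DDL once the driver's implicit transaction handling is out of the way. A stdlib-only sketch: with `isolation_level=None` the sqlite3 driver never issues an implicit BEGIN, so we control the transaction ourselves and the CREATE TABLE participates in it.

```python
import sqlite3

# isolation_level=None puts the sqlite3 driver in autocommit mode,
# so BEGIN/ROLLBACK are entirely under our control
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("CREATE TABLE t1 (x INTEGER)")
cur.execute("ROLLBACK")  # the CREATE TABLE is rolled back too
remaining = cur.execute(
    "SELECT name FROM sqlite_master WHERE name = 't1'").fetchall()
conn.close()
```

After the rollback the table no longer exists, which is exactly the transactional-DDL behavior the pysqlite tickets are about exposing.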
Re: [sqlalchemy] Update object with results of a server-side processing [pgsql]
The insert() construct supports a method called returning() to emit whatever RETURNING you want, but if you're using the ORM, then the insert() construct is generated by your mappings. In this case, the ORM right now favors being able to batch the INSERT statements together into an executemany(), which doesn't support RETURNING; so as far as using RETURNING specifically for non-primary-key defaults, the ORM isn't quite set up for that yet. But that doesn't mean you can't get those values automatically. Any column that's not passed in and is marked as having a server default will be queued up to be fetched automatically as soon as you access it. The FetchedValue construct is used as this marker when you're dealing with something like a trigger:

from sqlalchemy import FetchedValue

class MyClass(Base):
    # ...
    some_col = Column(Integer, server_default=FetchedValue())

http://docs.sqlalchemy.org/en/rel_0_8/core/schema.html#triggered-columns

On Apr 1, 2013, at 5:26 PM, Alexey Vihorev viho...@gmail.com wrote: [...]
[sqlalchemy] python: sqlalchemy - how do I ensure connection not stale using new event system
I am using the sqlalchemy package in Python. I have an operation that takes some time to execute after I perform an autoload on an existing table. This causes the following error when I attempt to use the connection:

sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')

I have a simple utility function that performs a multi-row insert:

def insert_data(data_2_insert, table_name):
    engine = create_engine('mysql://blah:blah123@localhost/dbname')
    # MetaData is a Table catalog.
    metadata = MetaData()
    mytable = Table(table_name, metadata, autoload=True, autoload_with=engine)
    for c in mytable.c:
        print c
    column_names = tuple(c.name for c in mytable.c)
    final_data = [dict(zip(column_names, x)) for x in data_2_insert]
    ins = mytable.insert()
    conn = engine.connect()
    conn.execute(ins, final_data)
    conn.close()

It is the following line that takes a long time to execute, since 'data_2_insert' has 677,161 rows:

final_data = [dict(zip(column_names, x)) for x in data_2_insert]

I came across the post below, which refers to a similar problem; however, I am not sure how to implement the connection management suggested, as I am quite a newbie. http://stackoverflow.com/questions/3033234/handle-mysql-restart-in-sqlalchemy "Note for SQLAlchemy 0.7 - PoolListener is deprecated, but the same solution can be implemented using the new event system." - robots.jpg

Here is the link to the new event system described by one of the users: http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic If someone can show me a couple of pointers on how to integrate the suggestions into the way I use sqlalchemy, I would be very appreciative. Thank you.
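The pessimistic disconnect-handling approach from the linked documentation can be sketched roughly as follows: a cheap "ping" is emitted every time a connection is checked out of the pool, and a failed ping raises DisconnectionError so the pool discards the stale connection and retries with a fresh one. The engine URL here is just an in-memory SQLite stand-in for the real MySQL URL.

```python
from sqlalchemy import create_engine, event, exc

engine = create_engine("sqlite://")  # stand-in for the real mysql:// URL
pings = []  # just to observe the listener firing in this sketch

@event.listens_for(engine, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    # runs on every pool checkout; a failing ping tells the pool to
    # invalidate this connection and transparently try another
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
        pings.append(True)
    except Exception:
        raise exc.DisconnectionError()
    cursor.close()

conn = engine.raw_connection()  # triggers the checkout event
conn.close()
```

With this in place, a connection that has sat idle past MySQL's wait_timeout is replaced at checkout time instead of failing mid-statement.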
Re: [sqlalchemy] python: sqlalchemy - how do I ensure connection not stale using new event system
What is the actual reason that MySQL is disconnecting? Is it actually being restarted while the operation proceeds? Running a long query shouldn't cause the connection to die off unless something goes wrong with the operation.

On Apr 1, 2013, at 10:04 PM, algotr8...@gmail.com wrote: [...]
Re: [sqlalchemy] python: sqlalchemy - how do I ensure connection not stale using new event system
Thank you Michael for your response. So it's not a long query that I think is the problem here, because I don't believe the execution gets that far. My suspicion is that it is the line that computes the dictionary key/value pairs that takes a long time, since it has to build 677,161 x 10 (columns) = 6.7 million key/value pairs. I can't VPN into my machine right now for some reason, so I will have to wait until tomorrow to get the traceback, but my hunch is that the connection becomes stale after the call to Table() because there is no activity while the dictionary is being created. Does this make sense?

engine = create_engine('mysql://blah:blah123@localhost/dbname')
# MetaData is a Table catalog.
metadata = MetaData()
mytable = Table(table_name, metadata, autoload=True, autoload_with=engine)
column_names = tuple(c.name for c in mytable.c)
final_data = [dict(zip(column_names, x)) for x in data_2_insert]  # This is the line that takes time to complete
ins = mytable.insert()
conn = engine.connect()
conn.execute(ins, final_data)
conn.close()

On Monday, April 1, 2013 9:09:34 PM UTC-7, Michael Bayer wrote: [...]
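If the hunch about the idle connection is right, one low-tech mitigation is to finish the slow Python-side work before touching the database, and then insert in batches so no single call holds the connection too long. A sketch with made-up column names and a hypothetical `chunks` helper (the real `column_names` come from `mytable.c` and the real rows from `data_2_insert`):

```python
def chunks(seq, size):
    # yield successive slices of at most `size` rows
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# stand-in column names and rows for illustration
column_names = ("symbol", "price")
data_2_insert = [("AAPL", 1), ("GOOG", 2), ("MSFT", 3)]

# do the expensive Python-side conversion before connecting...
final_data = [dict(zip(column_names, row)) for row in data_2_insert]

# ...then connect and insert in batches (sketch; `engine` and `ins`
# would come from the original insert_data function):
# conn = engine.connect()
# for batch in chunks(final_data, 10000):
#     conn.execute(ins, batch)
# conn.close()
```

Deferring `engine.connect()` until after the list comprehension means no pooled connection sits idle while the 6.7 million key/value pairs are built.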