Re: [sqlalchemy] Serializing sqlalchemy declarative instances with yaml
On Oct 23, 2014, at 7:42 AM, Peter Waller <pe...@scraperwiki.com> wrote:

> We would like to freeze the results of a query to my database in a YAML file, so that we can use the results in an app which isn't connected to the database. It makes sense here to reuse the model classes. Here's an example:
>
>     class Foo(declarative_base()):
>         __tablename__ = 'foo'
>         id = S.Column(S.Integer, primary_key=True)
>
> Unfortunately, `yaml.dump(Foo())` gives a surprising result:
>
>     /usr/lib/python3/dist-packages/yaml/representer.py in represent_object(self, data)
>         311             reduce = copyreg.dispatch_table[cls](data)
>         312         elif hasattr(data, '__reduce_ex__'):
>     --> 313             reduce = data.__reduce_ex__(2)
>         314         elif hasattr(data, '__reduce__'):
>         315             reduce = data.__reduce__()
>
>     /usr/lib/python3.4/copyreg.py in _reduce_ex(self, proto)
>          63     else:
>          64         if base is self.__class__:
>     ---> 65             raise TypeError("can't pickle %s objects" % base.__name__)
>          66     state = base(self)
>          67     args = (self.__class__, base, state)
>
>     TypeError: can't pickle int objects
>
> It seems that what is happening is that `data` is equal to `Foo`, and `Foo.__reduce_ex__(2)` gives `TypeError: can't pickle int objects`, as does `declarative_base().__reduce_ex__(2)`. I note that `pickle.dumps` works, but we'd rather use YAML. Where is the bug? Is it in sqlalchemy, yaml, or python?

no clue. ints shouldn't have an issue with pickle, obviously! For YAML it would be much more appropriate to build custom per-class serialization in any case, since you don't want all the persistence junk like `_sa_instance_state` carried along.

--
You received this message because you are subscribed to the Google Groups sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
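The per-class serialization suggested above can be sketched with a PyYAML representer that emits only the mapped column values and skips everything else. A minimal sketch using a plain stand-in class (the `name` attribute and the `!Foo` tag are illustrative assumptions, not from the thread):

```python
import yaml

class Foo:
    # plain stand-in for the declarative Foo model from the question
    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

def represent_foo(dumper, foo):
    # emit only the column values we choose; a real SQLAlchemy instance
    # would also carry _sa_instance_state, which this deliberately skips
    return dumper.represent_mapping('!Foo', {'id': foo.id, 'name': foo.name})

yaml.add_representer(Foo, represent_foo)

print(yaml.dump(Foo(id=1, name='bar')))
# !Foo
# id: 1
# name: bar
```

On a real declarative model, the column names could presumably be pulled from `sqlalchemy.inspect(obj).mapper.column_attrs` instead of being listed by hand.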
[sqlalchemy] Simplified Pyodbc Connect Questions
Is there a way to use a pyodbc connection without a dialect? I'd prefer not to have to create a custom dialect to talk to my Teradata server, as I have gotten pyodbc to work well. Since ODBC is a standard, it seems like I might be able to get enough functionality out of it. Is this even possible? I've been trying something like:

    import pyodbc
    from sqlalchemy import create_engine

    def getconn():
        pyodbc.pooling = False  # required or Teradata ODBC dumps core
        return pyodbc.connect('DSN=tdprod')

    engine = create_engine('pyodbc://tdprod', creator=getconn)

Barring that working, which seems unlikely since I can't find any working examples, I have started stubbing out a very simple Teradata dialect, but I can't figure out how to manually set pyodbc.pooling = False. This is required because the TD ODBC driver will core dump on connect if it isn't set. I've tried the following in the pyodbc.py of my dialect, but on testing it core dumps, indicating the value isn't being set. Here is the pyodbc.py for my TD dialect; I'm trying to control pooling in two different ways in this example, but neither works:

    import pyodbc

    from .base import TeradataDialect, TeradataExecutionContext
    from sqlalchemy.connectors.pyodbc import PyODBCConnector

    pyodbc.pooling = False

    class TeradataExecutionContext_pyodbc(TeradataExecutionContext):
        pass

    class TeradataDialect_pyodbc(PyODBCConnector, TeradataDialect):
        execution_ctx_cls = TeradataExecutionContext_pyodbc
        pyodbc_driver_name = 'Teradata'

        def initialize(self, connection):
            # Teradata requires pooling off for pyodbc
            super(TeradataDialect_pyodbc, self).initialize(connection)
            self.dbapi.pooling = False

    dialect = TeradataDialect_pyodbc

Here is the output of the test:

    metl@ichabod:~/src/etl/util/sqlalchemy/dialects/teradata$ python run_tests.py
    nose.plugins.cover: ERROR: Coverage not available: unable to import coverage module
    Fatal Python error: Unable to set SQL_ATTR_CONNECTION_POOLING attribute.
    Aborted (core dumped)

The Fatal Python error is the same error I get if I attempt to connect to my Teradata database via pyodbc without first setting pooling = False. Any help would be appreciated.

Mike
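For what it's worth, the pyodbc documentation describes `pyodbc.pooling` as a process-wide setting (it maps to the ODBC driver manager's SQL_ATTR_CONNECTION_POOLING attribute) that must be assigned before the first connection is opened; a dialect's `initialize()` runs only after a connection already exists, which would make that second approach too late by definition. A minimal sketch of the required ordering, reusing the `tdprod` DSN from above (the `'teradata://'` URL assumes the stub dialect has been registered with SQLAlchemy, and the whole thing needs a working Teradata ODBC install, so this is illustrative only):

```python
import pyodbc
from sqlalchemy import create_engine

# must be assigned before any pyodbc.connect() call in the process;
# once the ODBC driver manager has opened a connection it cannot change
pyodbc.pooling = False

def getconn():
    # DSN from the message above
    return pyodbc.connect('DSN=tdprod')

# the URL only has to name a registered dialect; the actual
# connection comes from creator=
engine = create_engine('teradata://', creator=getconn)
```

An external dialect module can be made loadable by URL via `sqlalchemy.dialects.registry.register('teradata', 'mypackage.pyodbc', 'TeradataDialect_pyodbc')` (module and class names here are placeholders for the stub above).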
Re: [sqlalchemy] Simplified Pyodbc Connect Questions
On Oct 23, 2014, at 2:50 PM, Lycovian <mfwil...@gmail.com> wrote:

> Is there a way to use a pyodbc connection without a dialect?

there is not. the dialect is responsible for formulating SQL in the format that the database understands, as well as dealing with idiosyncrasies of the DBAPI driver (of which pyodbc has many, many, many, which are also specific to certain databases).

> Barring that working, which seems unlikely since I can't find any working examples, I have started stubbing out a very simple Teradata dialect, but I can't figure out how to manually set pyodbc.pooling = False. This is required because the TD ODBC driver will core dump on connect if it isn't set. I've tried the following in the pyodbc.py of my dialect, but on testing it core dumps, indicating the value isn't being set. Here is the pyodbc.py for my TD dialect; I'm trying to control pooling in two different ways in this example, but neither works:

have you tried talking to your database using just pyodbc directly? does this flag even work?
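A bare-pyodbc sanity check of the kind suggested here might look like the following (assumes the `tdprod` DSN from the earlier message and a live Teradata server, so it can only be run against that environment):

```python
import pyodbc

pyodbc.pooling = False  # per the pyodbc docs, before the first connect

cnxn = pyodbc.connect('DSN=tdprod')
cursor = cnxn.cursor()
cursor.execute("SELECT 1")  # Teradata allows SELECT without FROM
print(cursor.fetchone())
cnxn.close()
```

If this script also core dumps, the problem is between the Teradata ODBC driver and pyodbc rather than anything in the dialect.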