Michael,
Thank you for the explanation. And thank you for this awesome tool!
Calling sqlalchemy.orm.configure_mappers() indeed solves the problem. I
am surprised this function isn't highlighted in the Mapper
Configuration section of the documentation, or even mentioned on the
Automap page.
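For anyone landing here later, a minimal illustration of the call being discussed (the model is invented; assumes SQLAlchemy 1.4+, where `declarative_base` lives in `sqlalchemy.orm`):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base, configure_mappers

Base = declarative_base()

# A made-up model, just to have something to configure.
class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

# Forces SQLAlchemy to resolve all pending mapper configuration
# (relationships, backrefs, etc.) now, instead of lazily on first use.
configure_mappers()
```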
Usually for this sort of thing, I serialize the object's data into a JSON
dict (object columns to a JSON dict; object relationships to a dict, a list
of dicts, or a reference to another object). A custom dump/load is needed to
handle timestamps, floats, etc. You might be able to iterate over the data
Well, I was hoping to just use YAML, since YAML understands when two
objects refer to the same underlying object. That means you don't have to
write any logic to de-duplicate objects reached through relationships, etc.
Since JSON doesn't have a notion of referencing, that doesn't seem
straightforward.
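The identity-loss problem is easy to demonstrate with the stdlib: JSON flattens shared references into independent copies, whereas YAML's `&anchor`/`*alias` syntax can preserve them.

```python
import json

shared = {"name": "shared"}
doc = {"a": shared, "b": shared}   # one object, referenced twice

restored = json.loads(json.dumps(doc))
print(restored["a"] is restored["b"])   # False: JSON made two copies
# PyYAML, by contrast, emits the second reference as a *alias pointing
# at an &anchor, so the shared identity survives a round trip.
```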
I have; I wrote a bit about my experience at
https://social.msdn.microsoft.com/Forums/en-US/64517130-40ce-4912-80a2-844d3c3ce8b9/sqlalchemy-working-with-azure-sql-database?forum=ssdsgetstarted.
Andrew
On Wednesday, October 30, 2013 11:41:59 AM UTC-4, Lorenzo Lee wrote:
Curious if anyone has
On Friday, October 24, 2014 10:39:43 AM UTC-4, Peter Waller wrote:
I was also hoping to just use yaml to avoid writing custom dumping code,
since it seems in general like a useful capability. So I may yet try and
find the underlying bug and fix it.
It might not be a bug, and the effect of
The oddity is that calling `__reduce_ex__` on the instance is fine, but on
the class it is not. When serialising a declarative instance, the serialiser
finds itself serialising the class type, which fails. This actually fails
for plain `object`, too (see below).
So I think what's happening is that serialisation
Thanks for the reply Mike. I've been tracking SQLAlchemy for years now and
it's probably some of the best code, docs, and support I've ever seen.
Yes, I have verified that pyodbc and the pooling flag work:
In [1]: import pyodbc
In [2]: pyodbc.pooling = False
In [3]: conn = pyodbc.connect
You probably need to set that flag early, before anything connects. A safe
place would be in the dbapi() accessor itself, or in the __init__ method of the
dialect class.
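A sketch of what that might look like for the MSSQL+pyodbc dialect (untested against a real database here; `dbapi()` is the pre-2.0 classmethod hook, renamed `import_dbapi()` in SQLAlchemy 2.0):

```python
from sqlalchemy.dialects.mssql.pyodbc import MSDialect_pyodbc

class MSDialectNoPool(MSDialect_pyodbc):
    """Dialect subclass that turns off pyodbc's ODBC connection pooling
    before the first connection is ever opened."""

    @classmethod
    def dbapi(cls):
        module = super().dbapi()   # lazily imports pyodbc
        module.pooling = False     # must be set before any connect()
        return module
```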
Sent from my iPhone
On Oct 24, 2014, at 1:32 PM, Lycovian mfwil...@gmail.com wrote:
Thanks for the reply Mike. I've
I'm trying to get the literal query (with the binds inline) for a SQLite
connection per the docs:
http://docs.sqlalchemy.org/en/latest/faq.html#faq-sql-expression-string
Here is the statement:
print(stmt.compile(SQL_ENGINE, compile_kwargs={"literal_binds": True}))
I have a statement object, with
I believe you have to specify a dialect in order to get the binds presented
in the right format.
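For example (table and values invented; SQLAlchemy 1.4+ `select()` style), compiling against the SQLite dialect explicitly:

```python
from sqlalchemy import table, column, select
from sqlalchemy.dialects import sqlite

users = table("users", column("id"), column("name"))
stmt = select(users).where(users.c.id == 5)

# Passing an explicit dialect makes the inlined binds come out in
# SQLite's literal format instead of as :id_1 placeholders.
print(stmt.compile(dialect=sqlite.dialect(),
                   compile_kwargs={"literal_binds": True}))
```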
This is the utility function I use for debugging. You could easily adapt
it to return a SQLite statement.
https://gist.github.com/jvanasco/69daa58aeb0e921cdbbe
that being said -- I don't think
Does anyone know if it's possible to implement some form of eager loading or
a class attribute where only the presence (or first record, or count) of a
relationship is emitted?
I have a few queries where I'm loading 100+ rows, but I only need to know
whether or not any entries for the relationship
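One way to get just the presence flag without loading the related rows is a correlated `EXISTS` subquery mapped as a `column_property` (a sketch with invented models, assuming SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, exists
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # Loads as a single EXISTS subquery alongside the row -- no child
    # rows are fetched just to learn whether any exist.
    has_children = column_property(exists().where(Child.parent_id == id))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Parent(id=1))
    session.add(Child(id=1, parent_id=1))
    session.add(Parent(id=2))
    session.commit()
    flags = {p.id: bool(p.has_children)
             for p in session.query(Parent).order_by(Parent.id)}
    print(flags)   # {1: True, 2: False}
```

A `func.count()` scalar subquery in the same spot would give the count instead of a boolean.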
On Oct 24, 2014, at 9:06 PM, Jonathan Vanasco jvana...@gmail.com wrote:
Does anyone know if it's possible to implement some form of eager loading or
a class attribute where only the presence (or first record, or count) of a
relationship is emitted?
I have a few queries where I'm loading 100+