Hi,
pyodbc + direct win32com calls to do table reflection. To make support more
solid, we would need to move to adodbapi (and fix problems with adodbapi).
Interestingly, you get more functional SQL through the ADO interface than
the ODBC interface. At the moment it's plenty functional enough for
Hi,
heh, adding this raw-data-copy to the autoload.py
makes quite a database-copier/migrator...
Yes indeed, I used this yesterday to migrate a legacy database, it was
impressively quick and easy.
I can see we've got similar requirements in this area. Perhaps you and I
could work together to
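The raw-data-copy migrator idea above can be sketched roughly as follows. This is a hedged illustration using the modern SQLAlchemy API (the thread was written against 0.3/0.4, where reflection was spelled `autoload=True` per table); the table and column names are purely illustrative:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

# Sketch of the raw-data-copy migrator: reflect every table from a
# source database, recreate the schema on the target, then bulk-insert
# the rows table by table.
src = create_engine("sqlite://")
dst = create_engine("sqlite://")

# a tiny source schema to copy (names are illustrative, not from the thread)
src_meta = MetaData()
users = Table("users", src_meta,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
src_meta.create_all(src)
with src.begin() as conn:
    conn.execute(insert(users), [{"name": "alice"}, {"name": "bob"}])

# reflect the source schema and recreate it on the target
meta = MetaData()
meta.reflect(bind=src)
meta.create_all(dst)

# copy the raw data, table by table, in dependency order
with src.connect() as read, dst.begin() as write:
    for table in meta.sorted_tables:
        rows = [dict(row._mapping) for row in read.execute(select(table))]
        if rows:
            write.execute(insert(table), rows)
```

Reflecting into a fresh `MetaData` and iterating `sorted_tables` keeps foreign-key dependencies in a safe insert order, which is most of what a quick legacy-database migration needs.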
Hi.
In SQLAlchemy 0.3.10 (Python 2.5), I found that a YEAR column type is
created as a TEXT column type by the following simple create query:
---
from sqlalchemy import *
from sqlalchemy.databases.mysql import *
db = create_engine('mysql://[EMAIL
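One way to check what DDL a YEAR column produces, without touching a server, is to compile the CREATE TABLE statement against the MySQL dialect. A hedged sketch in the modern spelling (the 0.3-era type lived in `sqlalchemy.databases.mysql`; table and column names here are mine):

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

# Compile the DDL for a YEAR column offline to see whether the
# dialect emits YEAR or falls back to TEXT.
meta = MetaData()
events = Table("events", meta, Column("y", mysql.YEAR))
ddl = str(CreateTable(events).compile(dialect=mysql.dialect()))
```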
[EMAIL PROTECTED] wrote:
Hi, I have a similar idea/need within dbcook, although on a somewhat
higher level:
cache_results/: (dbcook/SA) add-on for automatically-updated database
denormalisation caches of intermediate results, each one depending on a
particular pattern of usage. Wishful
Hi,
I'm migrating my Pylons application to the latest version of Pylons
(0.9.6rc2) and SA (0.4.0dev-r3205) using the new scoped_session instead
of the deprecated SessionContext. From the SA docs (0.4), there's a note
about how .flush() works:
hi alexandre -
I've implemented all the missing class-level methods on
ScopedSession in r3212, so you should be able to call refresh().
But also, if you want to say Session(), then work with it, that is
also the intended usage model, although the Session.xxx methods
should in theory be
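Both usage styles described above come down to the scoped_session registry. A minimal sketch with the modern API: calling `Session()` twice in the same scope hands back the same session, so the class-level `Session.xxx` methods and an explicitly obtained session object talk to the same underlying Session:

```python
from sqlalchemy.orm import scoped_session, sessionmaker

# scoped_session maintains a per-scope (by default per-thread) registry.
Session = scoped_session(sessionmaker())

s1 = Session()   # explicit: "say Session(), then work with it"
s2 = Session()
assert s1 is s2  # same scope -> same underlying session

Session.remove()  # discard the scoped session when the unit of work ends
```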
Dave,
I recently upgraded from SQLAlchemy 0.3.8 to 0.3.10. The only
problem I ran into is that 0.3.10 no longer allows you to set datetime
columns using strings.
What database are you using? I did some rework around MSSQL and dates
between those versions.
You now need to use
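Presumably the fix is to hand the column a real datetime object rather than a string. A minimal sketch of that conversion at the call site, assuming ISO-style input (the helper name and the format string are mine, not SA API):

```python
from datetime import datetime

# 0.3.10 expects actual datetime objects for datetime columns, so
# calling code that used to pass strings has to convert first.
def coerce_datetime(value, fmt="%Y-%m-%d %H:%M:%S"):
    if isinstance(value, str):
        return datetime.strptime(value, fmt)
    return value

created = coerce_datetime("2007-08-08 12:18:24")
```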
Hi,
Those of you using MSSQL may remember the fun we've been having with
scope_identity(). In short, this is a way to reliably fetch inserted IDs
from tables, avoiding a bug related to triggers. The problem was that
PyODBC needed a mod by Michael Jahn to do this.
Well, I see PyODBC 2.0.37 has
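For context on why SCOPE_IDENTITY() is the reliable choice: @@IDENTITY returns the last identity generated on the connection, including rows inserted by triggers, while SCOPE_IDENTITY() is limited to the current scope, so it always reflects the row you just inserted. A hedged sketch of the batched statement involved (the table name is illustrative; the pyodbc calls are shown as comments):

```python
# Batch the INSERT with the SCOPE_IDENTITY() fetch so both run in the
# same scope on the same connection; @@IDENTITY would be wrong here if
# the table has an identity-generating trigger.
INSERT_AND_FETCH_ID = (
    "INSERT INTO users (name) VALUES (?); "
    "SELECT SCOPE_IDENTITY() AS last_id"
)

# with a pyodbc cursor (needs the multi-statement support discussed
# in the thread):
#   cursor.execute(INSERT_AND_FETCH_ID, ("fred",))
#   new_id = cursor.fetchone()[0]
```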
FYI I believe there is a ticket to make improvements in the type system that
would allow strings to be given as date input (among other conveniences),
and I don't think it's a bad thing. Lots of databases make the conversion
anyway, and it's ultimately a pretty confusing thing to have various
This particular feature is easily implemented right now as an end-user
recipe, using TypeDecorator. No new type system is needed for
this one (although there is a ticket for such).
The advantage to TypeDecorator is that you get to define what kind of
date representation you'd like to
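A hedged sketch of that recipe, in the modern spelling (`process_bind_param`; the 0.3-era hook was `convert_bind_param`). The decorator accepts either a date object or an ISO 'YYYY-MM-DD' string and hands the DBAPI a real date either way; the class name and format choice are mine:

```python
from datetime import date, datetime

from sqlalchemy.types import Date, TypeDecorator

class FlexibleDate(TypeDecorator):
    """Accept a date object or an ISO 'YYYY-MM-DD' string as bind input."""
    impl = Date
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Coerce strings before they reach the DBAPI; pass dates through.
        if isinstance(value, str):
            return datetime.strptime(value, "%Y-%m-%d").date()
        return value

flex = FlexibleDate()
parsed = flex.process_bind_param("2007-08-08", None)
```

Used as the column type in place of `Date`, this is exactly the "layer that adapts the incoming type to the desired column type" the thread asks for, with the string format under the user's control.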
Hi,
I've been bugging Mike for a long, long time about a better type
system, and I think I may have oversold it and made it sound too big
and grandiose. All it needs to be is a layer that adapts the incoming
type to the desired column type.
That would help with the situation in
I think there's something a little simpler we need - some
documentation. For all the SA types we should document the type that
convert_result_value returns, and that convert_bind_param expects, and
check that all the DBAPIs stick to this (probably with unit tests). I'm
pretty sure there's
Yeah, of course date formats vary; it's one of the trickier issues in type
adaptation, and can be computationally expensive, etc. A full-on date
parser is probably just way out of scope for SA (the excellent
dateutil package already
handles it pretty well). I'm of the opinion that it would not be so
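dateutil's parser is what handles the messy-format problem being pointed at here: one call copes with many common spellings. A minimal sketch (requires the third-party python-dateutil package; the input string is illustrative):

```python
from dateutil import parser

# One parser call handles many human date spellings, so this kind of
# parsing can stay outside SA's type system.
when = parser.parse("Aug 8, 2007 12:18 PM")
```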
On Wednesday 08 August 2007 12:18:24 Paul Colomiets wrote:
On Wednesday 08 August 2007 11:44:57 Paul Johnston wrote: