> On Thu, 2008-01-10 at 21:20 -0500, Rick Morrison wrote:
> > You're mixing single-table inheritance (using the discriminator
> > column), with concrete inheritance (using multiple tables).
> >
> > You have to pick one scheme or the other. Either use a single
I'm not sure I understand what you're looking for...you want the column to
remain NULL after an insert?
Then take off the default from the column definition and make it a datetime
field instead of a timestamp.
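As a sanity check at the raw-SQL level, a column with no default attached simply stays NULL after an insert. A minimal SQLite sketch (table and column names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# a datetime-style column with no default: nothing fills it in on insert
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, created TEXT)")
conn.execute("INSERT INTO jobs (id) VALUES (1)")
row = conn.execute("SELECT created FROM jobs WHERE id = 1").fetchone()
# row is (None,): the column remained NULL
```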
On Jan 10, 2008 9:15 PM, deanH <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I am having
You're mixing single-table inheritance (using the discriminator column),
with concrete inheritance (using multiple tables).
You have to pick one scheme or the other. Either use a single inheritance
chain, or separate the two class hierarchies into two separate chains that
don't inherit from each o
Ah, I read too fast; you are getting back the ColumnDefault object.
Try column.default.arg
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googleg
Isn't it just
column.default ?
For a stepwise migration from raw SQL, it will probably be easier to get
your mind around the SQL-expression side of the library first, and then adopt
ORM features as you feel comfortable with them.
On the SQL-expression side of the library, you'll find that your Table()
object has a collection called
Runs clean here now -- thanks!
> actually that explanation makes no sense. the warning is only raised
> when that "_for_ddl" flag is True which should *only* occur during
> CREATE TABLE.
>
> the issue is only with CREATE TABLE.
Well, here's the traceback:
-> main.metadata.drop_all(checkfirst=True)
/home/xram/Projects/oss/
89 of compiler.py
> def visit_typeclause(self, typeclause, **kwargs):
> return typeclause.type.dialect_impl(self.dialect,
> _for_ddl=True).get_col_spec()
>
> if "type" subclasses Text, there should be no warning
>
> On Jan 8, 2008, at 10:51 PM, Rick Morrison wr
}
On Jan 8, 2008 10:27 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
> it has to do with what the string/text types in mssql.py inherit - if you
> inherit String you get the warning, if Text/TEXT, you don't. just look
> inside of dialect_impl().
>
>
> On Jan 8, 2008, at
I still get this on r4031 with MSSQL/pymssql. Are there changes that need
to be made in the database module, maybe? Far as I can see all my Table()
defs use the TEXT type identifier.
On Jan 8, 2008 4:36 AM, Felix Schwarz <[EMAIL PROTECTED]> wrote:
>
>
> Michael Bayer wrote:
> > can you try r40
Hey I've been busy and haven't had a chance to comment on the 0.4.2 release
yet.
Just wanted to give a big congrats on this release. I know it's one of the
few releases out that have spawned an "a" (and looks like a forthcoming "b")
interim releases, but aside from those issues, it's a really nice
db.Query(ModelObject).get(id)
> except:
> o = ModelObject(1, u'title')
> db.save(o)
> db.commit()
> return o
>
> -Braydon
>
>
> Rick Morrison wrote:
> >
> > ...if you're just checking to see if something exists in the
...if you're just checking to see if something exists in the database, why
not just try to .load() it, and then construct it afresh if you don't find
it?
This kind of operation is sometimes called an "upsert" ...some database
engines support it, some don't. Most don't. But what all database engine
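The load-then-create idea can be sketched with the stdlib sqlite3 module; here is a minimal get-or-create helper (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model (id INTEGER PRIMARY KEY, title TEXT)")

def get_or_create(conn, obj_id, title):
    # try to load it first...
    row = conn.execute(
        "SELECT id, title FROM model WHERE id = ?", (obj_id,)
    ).fetchone()
    if row is not None:
        return row
    # ...and construct it afresh only when the load finds nothing
    conn.execute("INSERT INTO model (id, title) VALUES (?, ?)", (obj_id, title))
    conn.commit()
    return (obj_id, title)

first = get_or_create(conn, 1, "title")
second = get_or_create(conn, 1, "ignored")  # row exists; "ignored" is not used
```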
Yah, that's a surprise to me too. I usually .flush() anyway (it's just good
hygiene :), but never knew that session.commit() implied session.flush()
On 12/26/07, Denis S. Otkidach <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 26, 2007 6:29 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
> > yet another sce
Something like this is available on a roll-your-own basis via Python
properties along with some mapper tricks:
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding
I would be +1 for such a feature implemented on mapped instances, could be
useful for detecting those har
ge format is going to be a huge job, and really won't give
you anything that you don't already get from pickle -- what other app
besides your own is ever going to understand those heavily class-hinted JSON
files?
On 12/20/07, Matt <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 20,
You're not going to be able to serialize Python class instances in JSON:
json strings are simple object literals limited to basic Javascript types.
Pickle does some pretty heavy lifting to serialize and reconstitute class
instances.
Easiest way to store JSON in the database is to limit the type of
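A quick stdlib illustration of the difference (the Point class is a throwaway example):

```python
import json
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# json handles only basic literal types; a class instance raises TypeError
try:
    json.dumps(p)
    json_ok = True
except TypeError:
    json_ok = False

# pickle reconstitutes the instance, class reference and all
restored = pickle.loads(pickle.dumps(p))
```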
Agreed, it's easier to start with a single file, and then split from there
as maintaining it becomes more difficult.
You'll find the table defs and mappers, and mapped classes are pretty
inter-related, and you'll drive yourself nuts if you have them in 20
different places.
How would you specify the logical "parenthesis" to control evaluation order
in complex expressions?
On Dec 20, 2007 5:14 AM, svilen <[EMAIL PROTECTED]> wrote:
>
> query.filter() does criterion = criterion & new
> why not having one that does criterion = criterion | new ?
> its useful to have some
I believe that as of 0.4.0, that's now:
column.in_([1,2,3,4])
On Dec 19, 2007 6:23 AM, Bertrand Croq <[EMAIL PROTECTED]> wrote:
>
> On Wednesday, December 19, 2007 at 12:06, Marcin Kasperski wrote:
> > Maybe I missed something but can't find... Does there exist
> > SQLExpression syntax for
> >
>
OK, I checked to make sure the updates were being fired (and from the looks
of the log, they are).
But I think I see that the lack of update executions hasn't been the problem
all along, but rather that those updates are not finding their row... never
checked that part.
I'm offsite right now and
Same here on pymssql.
I tried it with 'start' as the only PK, and with both 'identifier' and
'start' as PK. Both work fine.
Are you sure your in-database tabledef matches your declared schema?
I've attached a script that works here. This one has both 'identifier' and
'start' set as PK.
Just noticed that ResultProxy.fetchall() is a bit broken in 0.4x (I think
it's for queries that do not populate the DBAPI cursor.description). In my
case, it's executing a stored procedure that returns data:
S.execute('exec schema.storedproc 1234').fetchall()
Traceback (most recent call last):
Hey Fabio, would you please post a full non-working copy with the new schema
and all the PKs that you want set up? There are a few too many variants in
this thread to see what's going on now. Your earlier versions didn't include
'station' as a PK, but did include 'start', while this one's the oppos
> I did not get any exception... doh! :) What kind of exception did
> you get?
The traceback I get is below. If you're not getting one, it may be a pyodbc
issue, which I don't have installed right now.
Traceback (most recent call last):
File "test.py", line 31, in ?
sa_session.commit()
> But another thing, is that the whole idea of "save/update/save-or-
> update", which we obviously got from hibernate, is something ive been
> considering ditching, in favor of something more oriented towards a
> "container" like add(). since i think even hibernate's original idea
> of save/update
Wouldn't a flavor of .save() that always flush()'ed work for this case?
say, Session.persist(obj)
Which would then chase down the relational references and persist the object
graph of that object...and then add the now-persisted object to the identity
map.
...something like a 'mini-flush'.
> Having to
> call flush before doing anything that might require the ID seems
> excessive and too low-level for code like that.
Why? To me, having to work around the implications of an implicit persisting
of the object for nothing more than a simple attribute access is much worse.
For example, I
I did get an exception, that's how I knew to change the type!
On 12/10/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
>
> On Dec 10, 2007, at 12:47 PM, Rick Morrison wrote:
>
> > This works here on MSSQL/pymssql with a small change:
> >
> >
This works here on MSSQL/pymssql with a small change:
-- j = Job("TEST1", datetime.datetime.now())
++ j = Job(1, datetime.datetime.now())
MSSQL (like most other db engines) is going to enforce type on the
'identifier' column. In the new code, it's an int, so... no strings allowed.
The original ex
Yeah, it was a "for instance" answer, you'll need to use the correct MySql
syntax of course.
On 12/10/07, Adam B <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 10, 1:16 am, "Rick Morrison" <[EMAIL PROTECTED]> wrote:
> > Any query using sql expres
Any query using sql expressions is going to want to use correctly typed data
-- you're trying to query a date column with a string value. The LIKE
operator is for string data.
I'm not up on my mssql date expressions, but the answer is going to resemble
something like this:
.filter(and_(fu
> I still need to sort out a way to have MSSQL unit test periodically
I'm still planning on hosting a buildbot as I promised some months (how
embarrassing) ago. The first one will be Linux + pymssql, but once I get the
new VMware host provisioned out here, I can put up a Windows + pyodbc host
too.
column.foreign_key is now a list: foreign_keys[]. Trunk looks correct and
should work. Works here.
If I recall, your application is using the localthread strategy and
scoped_session(), isn't it? Doesn't scoped_session() collect references
from otherwise transient Session()s and hold on to them between calls?
Just noticed that for mappers that use tables that do not define foreign
keys, specifying only 'primaryjoin=' plus 'foreign_keys=' doesn't seem to
be sufficient to define the relationship, adding 'remote_side=' fixed it.
Also for such mappers, if there is a 'backref', the backref doesn't seem to
Another technique to think about would be to enumerate the child addresses
with a small integer value, say "sequence". You can then add the "order_by"
to the relation to fetch the addresses in sequence order, and by convention
the first address in the result list -- the one with min(sequence) in th
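In plain Python terms, the convention might look like this (field names invented for the example):

```python
# child addresses tagged with a small "sequence" integer
addresses = [
    {"email": "work@example.com", "sequence": 2},
    {"email": "home@example.com", "sequence": 1},
]

# the relation's order_by would hand these back in sequence order;
# sorted() stands in for that here
ordered = sorted(addresses, key=lambda a: a["sequence"])

# by convention, the row with min(sequence) is the primary address
primary = ordered[0]
```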
Running much better this way...
>I think this will make the next release and adoption of the feature a
>lot easier.
Me too, many thanks for the quick turnaround!
> the assert_unicode thing only happens with python SQL expressions,
>usually DDL is issued using textual SQL. if youre using a SQL
> expression for your DDL
The creates/drops are via Metadata.create_all() / .drop_all(), so perhaps
there's an expression in that path? Didn't trace it yet (as I tho
> also, what im considering doing, is having assert_unicode only be
> implicitly turned on for the Unicode type specifically. the engine-
> wide convert_unicode and String convert_unicode would not implicitly
> set the assert_unicode flag.
That makes more sense to me, and I would prefer it. The
What's the point of the new 'assert_unicode' flag? It's causing failures
with pymssql DDL (in my case CREATE / DROP)
Is there a good reason to not only allow unicode, but to *enforce* it for
DDL?
I'd like to have one set of tabledefs, preferably in ASCII for DB
portability.
If memory serves, there is already an 'encoding' attribute on each Dialect
instance that is normally used in conjunction with another Dialect flag
'convert_unicode'. Not sure if it dovetails with your plans, tho
On 11/25/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
>
> Hi Florent,
>
> Just r
Is there any way to configure logging on an engine instance after the engine
has been instantiated?
it looks to me as if the engine init checks the module logger status and
sets a couple of flags "_should_log_info" and "_should_log_debug". (I'm
guessing these are there to keep the logging function
Hi Marco,
There is a DB2 driver in the works, but I haven't heard much noise about it
lately, so I don't know what kind of progress is being made.
As for supported drivers, the three engines you mention are all supported; I
think PG probably has better test coverage than either Oracle or
I use a similar technique with a Pylons controller, but instead of
engine.begin(), I use session.begin(). Then by passing around the session
for all calls made by that controller, I can use Session.execute() for
expression-based and text-based SQL mixed with ORM ops, and it all commits
in one shot
>in the after_*() there are (mapper, connection, instance) arguments -
>but there's no session. Any way to get to that? mapext.get_session()
>does not look like one
http://www.sqlalchemy.org/docs/04/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_modfunc_object_session
>i have non-ORM updates happening inside
>ORM transaction (in after_insert() etc). How to make them use the
>parent transaction? i have a connection there.
You can pass in the Session and use Session.execute() to reuse the session
connection.
The issue is that in essence, both answers are right. ANSI SQL specifies
different data types for DATE fields and DATETIME fields, where DATE fields
do not hold the time portion. Oracle, SQL Server and other database engines
have their own ideas about how best to handle dates / datetimes.
SA split
Ah good thanks: I had noticed that too and was just ignoring it until it
bugged me enough. Laziness pays off again!
Also depending on how you are starting the subprocess, you may have control
over open file handle inheritance. The subprocess module, for example allows
you to close all open inherited files, which would include open sockets for
DBAPI connections (or a file handle in the case of SQLite). You could
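For example, with the stdlib subprocess module (the child command here is just a placeholder):

```python
import subprocess
import sys

# close_fds=True keeps the child from inheriting the parent's open file
# descriptors -- which is where DBAPI sockets or a SQLite file handle live
proc = subprocess.run(
    [sys.executable, "-c", "print('child ran')"],
    close_fds=True,
    capture_output=True,
    text=True,
)
```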
I guess you know that storing the actual bytecodes (or the source) of a
Python function in the database itself is not going to buy you much:
Since the function bytecodes or source
would be in Python, only a Python interpreter could run it to produce
the function result, and if you know you're goin
This is going to be messy, as the support for unicode varies among the
various MSSQL DBAPIs (which is in large part why multiple DBAPI support is
needed by the MSSQL driver). ODBC looks to me to tell the best story:
The newer ODBC drivers have an "autotranslate" feature that somehow
retrieves the c
Most database engines support a couple of SQL functions that help in cases
like this, read your database docs for either the ISNULL or the COALESCE
function.
Another technique is to use an SQL CASE statement.
For all three methods the idea is to supply a default value to substitute
when the valu
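A small sketch of the COALESCE approach in SQLite (table and values invented); an equivalent CASE statement is shown in the comment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, nickname TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)", [("Alice", None), ("Bob", "Bobby")]
)

# COALESCE substitutes a default when the value is NULL; the CASE form
# would be: CASE WHEN nickname IS NULL THEN '(none)' ELSE nickname END
rows = conn.execute(
    "SELECT name, COALESCE(nickname, '(none)') FROM t ORDER BY name"
).fetchall()
```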
One of the reasons that Query.select() is deprecated is that the way it was
named led to this kind of confusion.
The Query() class is used for ORM operations, and when it's mapped against
a table, it's going to give you all the columns from the table by
default. There are ways of defining
all of them? same as any Python object:
obj_attributes = [k for k in obj.__dict__]
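For instance, with a throwaway class:

```python
class Obj:
    def __init__(self):
        self.name = "widget"
        self.qty = 3

obj = Obj()
obj_attributes = [k for k in obj.__dict__]  # or simply list(vars(obj))
```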
On 11/6/07, Christophe Alexandre <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> I am also interested in retrieving all the attributes resulting from the
> ORM.
>
> The loop on '.c' will list only the database columns.
Ah. OK, thanks!
I checked in a small update to the 3.x -> 4.0 migration guide in the docs to
note this.
On 11/5/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> you need to return EXT_CONTINUE for your TimestampExtension methods.
>
Rick
Seems that when multiple mapper extensions are used, only the first is run.
Testcase attached
nd of error, im
> surprised its trying to chug along with that (also that its not trying
> to make some bizarre self-referential join on the table). id
> actually want to file a bug report that this didn't raise a whole
> bunch of errors and alarms...
>
> On Nov 2, 5:23 pm, &
Please have a look at the attached testcase that mixes single-table
polymorphic inheritance, primaryjoin overrides on the relation() and a
secondary table.
SA seems to add a WHERE clause to filter for the polymorphic type on
relations that it calculates, but forgets to add it for the test case. Is
> extensions colliding with user data in these dicts, then we'd also be
> concerned about extensions colliding with other extensions, and multiple
> dicts aren't helping in that case anyway. keys can be placed as tuples
> (such as ('myext', 'somekey')) if n
Autoloading and foreign key detection are problematic on some DB engines (I
think SQLite falls in this category, as it doesn't directly support FKs); on
other engines they are simply hard to detect.
I would think that your override of the FK in the t_incident table should be
all that's needed to pi
That sounds reasonable to me; my knee-jerk thought was that we might need to
worry about memory usage, but these references are only on low-count
instances like tables, columns, sessions and mappers, not ORM object
instances.
On 10/31/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> Ah s
If you don't want to pollute the current namespace, then make an indirect
reference.
1) Make a module of your own, say db.py
2) In db.py:
...
from sqlalchemy import *
from sqlalchemy.orm import *
...
table1 = Table('footable', ...)
...
# other ta
Ah sure, so it's to be a namespace for namespaces, a shared dict() parking
lot. Got it.
So then, how about
"aux"
"etc"
"other"
or maybe
"miscdata"
"extra"
"more"
"additional"
"supplemental"
"auxiliary"
"adjunct"
"appendix"
"surplus"
"spare"
"augment"
>
> > The core can (and does) use these buckets too, so I'm not sure about the
> > user-y moniker.
Hold it. I thought the whole point of this was to separate core usage from
user usage? To create a "safe-zone" for library user's private data.
> But if that were it, I'd only be +1 on a spelled
personal opinion: I'm not wild about either 'attributes' or 'properties',
(a) they seem too long, and
(b) yes, they are too similar to generic ORM terms
many many moons ago (pre Windows-1.0) I used an Ascii-GUI thing called
C-scape (I think it's called "vermont views" now).
anyway, most of it
I often use Session as a context placeholder, and have felt a bit uneasy
about this as you never know when some new release is going to stake a claim
on the name you've used. I know I'd feel better if there was a name that
would be kept aside.
On 10/26/07, Paul Johnston <[EMAIL PROTECTED]> wrote
..coincidentally released on the self-same day when I am finally taking the
wraps off 0.4.0 for a spin on a new project.
Congrats on this huge release, everybody!
On 10/17/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
> Hey list -
>
> I'm very happy to announce that we've put out 0.4.0 fina
Yes, that would be the only thing that has a hope of working across database
engines. LIMIT with UPDATE is MySQL-only AFAIK.
For SA, joins in updates can be tricky, so the correlated query would best
be a IN() or EXISTS() query that has the limit you want. To get a
deterministic set of records tha
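A portable sketch of the IN() approach against SQLite (schema invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, done INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO jobs (id) VALUES (?)", [(i,) for i in range(1, 6)])

# instead of the MySQL-only "UPDATE ... LIMIT n", constrain the update
# through an IN() subquery; ORDER BY keeps the chosen set deterministic
conn.execute(
    """
    UPDATE jobs SET done = 1
    WHERE id IN (SELECT id FROM jobs WHERE done = 0 ORDER BY id LIMIT 2)
    """
)
done = [r[0] for r in conn.execute("SELECT id FROM jobs WHERE done = 1 ORDER BY id")]
```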
BTW the "assume autoincrement for integer PK" behavior is a hangover from
the early days of the module, where I originally copied from the PG
module. Since PG doesn't have the funky "it's ok to explicit ID insert now"
mode, the problem doesn't surface there.
I think the behavior was meant more for
Yes, those are called subqueries; they're fully supported by SA. Your query
above has a couple of items of note:
a) it's a correlated subquery: the inner query references items in the
outer query. (supported by SA)
b) it's a self-join: the inner query and outer query reference the same
table. (a
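Both points can be shown in one small SQLite query (table and data invented): the inner query reads from the same emp table as the outer one, and correlates on the outer row's dept:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("a", "x", 10), ("b", "x", 20), ("c", "y", 30)],
)

# correlated: inner_e.dept refers back to the outer row
# self-join: both levels read from the same emp table
rows = conn.execute(
    """
    SELECT name FROM emp AS outer_e
    WHERE salary > (SELECT AVG(salary) FROM emp AS inner_e
                    WHERE inner_e.dept = outer_e.dept)
    """
).fetchall()
```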
This is Python, after all, and it would be trivial to simply put whatever
attribute you want on a Table, Column or any SA object.
SA would just need to stay out of the way and agree not to use a certain
attribute like "description" or "userdata", or whatever.
It's on the to-do. This would be a great place to start hacking on SA if
you're interested, it's a feature that's been requested a few times now.
Don't build SQL strings up from fragments that contain user input -- it's
what makes the application subject to SQL injection in the first place.
Safest would be to use a bound parameter for the literal. See here for
details:
http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_everythingelse
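The same point in stdlib sqlite3 terms: the bound parameter travels as data, so a hostile string cannot change the shape of the statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# with a bound parameter the whole string is matched literally;
# no user row has this name, so nothing comes back
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```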
Hi Scott,
> ...to develop on Windows and deploy on Linux. It sounds
> like pyodbc is the best option
pyodbc works well on Windows, but on Linux, I've heard, not so much. pymssql
is your best bet for Linux. Note that pymssql does not work with unicode,
limits SQL identifiers to 30 characters, and w
I believe the 0.4 unit tests have profiling support, have a look there.
On 9/14/07, Hermann Himmelbauer <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
> I'd like to know if there is some profiling support in SQLAlchemy. It
> would be
> nice if there would be some data accompanying SQL statements in the
>
I think it might be more historical than anything else. Back when what is
now filter() was a single argument to the select() call on the SQL-API
side, it couldn't take any additional arguments, as the select() call
was already pretty heavy with keyword arguments and it was easy to get
thing
> Im not sure about creation, but I've not had any problems using
> cross-schema foreign keys, relations, joins and so forth using
> SQLAlchemy 0.3 and PostgreSQL.
..and of course the test case I wrote up to show the problem worked fine.
Turns out the issue was in the PK declaration for the table
SA 0.3* doesn't seem to handle relationships between tables in different
schemas very well: it seems to think that
schema.A -> public.B
is:
schema.A -> schema.B
and even specifying "primaryjoin=" in the mapper won't help it. There seems
to be no directive to say "use the default / p
SQL Server provides no facilities for retrieving a GUID key after an insert
-- it's not a true autoincrementing key. The MSSQL driver for SA uses either
@@IDENTITY or SCOPE_IDENTITY() to retrieve the most-recently inserted
autoincrement value, but there is no such facility for getting GUID keys.
S
You should find it at z.USER_SID after the flush.
Not sure about your save() and flush() calls, however... they should be
session.save_or_update(z) and session.flush()
On 9/11/07, Lukasz Szybalski <[EMAIL PROTECTED]> wrote:
>
>
> Hello,
>
> I am saving to my column in this way. How do I get the prima
> there's just a few odd places, and I wonder if "drop table" is one of
> them, resulting in the original cause of this thread.
I don't think that DROP is a special case. Look upthread. The incorrect DROP
happened in the same wrong schema as the incorrect CREATE. The problem is
that the check-tabl
> Should ansisql recognize and use default schemas,
> or should the DB dialect somehow override the construction of the table
name?
The more I think about this, the more I'm becoming convinced that specifying
an implicit default schema in all generated SQL is a pretty bad idea. The
reason is that
>> Sure, but the drop is being issued in the correct default schema (dbo).
>No it's not. If I don't enable checkfirst the table is dropped, which
>means both statements are issued on the wrong schema (considering that
>the check is right).
Ah OK I didn't get that from the previous
messages. Then
> It's for the delete (which then does not happen because the table is not
> found)
Sure, but the drop is being issued in the correct default schema (dbo). The
error is not that the drop is being issued in the wrong schema, it is that
the table was *created* in the wrong schema, and so is not where
> but it would be more sane
> if power(connect_args) >= power(url_args), that is, connect_args is
> the more general/powerful/ way, and url allows a subset (or all) of
> connect_args items; and not vice versa -- connect_args is the
> programatical way, so that should do _everything_..
If we ever g
It's an SQLite-ism. See the recent thread on the type system. I've had
exactly this issue with SQLite vs. MSSQL.
On 8/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> > >i'm Wondering if all the unicode strings (at least table/column
> > > names) should be converted back into plain string
That SQL log is from the table existence check.
Although it's unclear from the trace and log as to whether the check
is for the table create or for the table drop, it is correctly using
the default schema, which is 'dbo' on all MSSQL platforms.
So, the table check and the drop are working correc
Yeah of course date formats vary, it's one of the trickier issues in type
adaptation, can be computationally expensive, etc. A full-on date
parser is probably just way out of scope for SA (the excellent
dateutil package already
handles it pretty well). I'm of the opinion that it would not be so
horr
> I think there's something a little simpler we need - some
> documentation. For all the SA types we should document the type that
> convert_result_value returns, and that convert_bind_param expects, and
> check that all the DBAPIs stick to this (probably with unit tests). I'm
> pretty sure there's
FYI I believe there is a ticket to make improvements in the type system that
would allow strings to be given as date input (among other conveniences),
and I don't think it's a bad thing. Lots of databases make the conversion
anyway, and it's ultimately pretty confusing thing to have various Dialect
MSSQL is case-sensitive, and wants to see queries to INFORMATION_SCHEMA in
UPPER CASE.
See mssql.py.uppercase_table() for the gory details, or rather, THE GORY
DETAILS ;-)
On 7/27/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> Hi svil,
>
> Still no luck. I don't know if the information
>
>
> what needs to be modified?
> can it be done by a src-patch? maybe i can apply it runtime (;-)
I think the patch has been out for a few weeks now, so it will likely be in
the next release of pyodbc.
There is also an easy workaround, pass use_scope_identity = False to the
Engine ctor, and it
The dependency chains look like this (more or less):
pymssql ==> DBlib ==> FreeTDS ==> wire
pyodbc (Unix) ==> iodbc / unixodbc ==> FreeTDS ==> wire
pyodbc (Win) ==> ms-odbc ==> OLEdb ==> wire
The unicode problem for pymssql is in DBlib, not FreeTDS
Rick
On 7/26/07, Christophe de VIENNE
I'm going to reply here, as I can't seem to login to trac again. I did
manage to get comments in for #685
#679 - committed in r3050
#684 - committed in r3051
#685 - needs more discussion see the Trac comments
Thanks for the patches!
Rick
On 7/26/07, Christophe de VIENNE <[EMAIL PROTECTED]> wro
> you may do better to focus your efforts on getting PyODBC working better
on Unix.
Agreed here. One stable and supportable DBAPI for MSSQL would be really
nice.
Thanks for having a look at the tests, Christophe.
Responses below:
> The reason is that MSSQLDialect.get_default_schema_name takes no
> argument, while the unit test gives it an engine. The Mssql dialect,
> for example, does take an argument.
It looks to me like get_default_schema_name() has rec
The list is useful only for a hacker on the MSSQL module, not for general
users, but FWIW, I've added it to
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes
I generally try to respond directly to help encourage anyone willing to
offer a hand on the MSSQL module, as I don't have the time these day
Hi Christophe,
> I see. Are the reasons for those failures well known? Fixable? If
> it's not too tricky I could spend a bit of time on it in a little
> while.
The reasons for the failures that I've had time to look into have so far had
as much to do with the tests as with the MSSQL module. Th