Ok, thanks!
On Mar 26, 4:26 am, Michael Bayer [EMAIL PROTECTED] wrote:
no. do a clear_mappers() and build your mappers again if you need to
change the base configuration.
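For anyone landing on this thread later, a minimal sketch of that pattern against a recent SQLAlchemy API (the `User` class and `users` table are invented for illustration; the thread predates `map_imperatively`, but the clear-and-rebuild idea is the same):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, inspect
from sqlalchemy.orm import clear_mappers, registry

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

class User:
    pass

# initial mapper configuration
registry(metadata=metadata).map_imperatively(User, users)

# to change the base configuration: tear all mappers down...
clear_mappers()

# ...and build them again with the new settings
registry(metadata=metadata).map_imperatively(User, users)

print(inspect(User).local_table.name)
```

`clear_mappers()` is intended for exactly this kind of full teardown (it is also commonly used in test suites); mappers cannot be reconfigured piecemeal once established.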
On Mar 25, 2007, at 3:35 PM, Koen Bok wrote:
I get this, but that's only on a particular query object. That makes
On 3/23/07, Michael Bayer [EMAIL PROTECTED] wrote:
OK, this is actually something people have asked for a lot, in the
beginning, recently, etc., and for different reasons: convenience,
performance, etc. So, first off, let me start by
illustrating how this use case is done right
And by the way, if you agree with that direction of things, I'd
happily work on a patch for the query-on-relation thing.
On 3/26/07, Gaetan de Menten [EMAIL PROTECTED] wrote:
On 3/23/07, Michael Bayer [EMAIL PROTECTED] wrote:
OK, this is actually something people have asked for a lot. in
On Mar 26, 2007, at 6:15 AM, Gaetan de Menten wrote:
though, as a user, I'd prefer the first solution.
Wouldn't that be possible? I think it should be. You only need to keep
the deferred approach of the InstrumentedList that I demonstrated in
my patch, so that the whole list is not
i notice that neither your table DDL nor your mappers have any notion
of a primary key, so that's not the complete application...what's below
will throw an error immediately.
but the most likely cause for what you're seeing is that if any
element of the primary key in a result row is None, no
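For context, a minimal sketch (with an invented `node` table) of declaring the primary key in the Table DDL so the mapper can assemble a row identity, written against a recent SQLAlchemy API:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, inspect
from sqlalchemy.orm import registry

metadata = MetaData()

# primary_key=True in the Table definition is what lets the mapper
# assemble an identity for each row; without any primary key columns,
# mapping the class fails immediately
nodes = Table(
    "node", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(100)),
)

class Node:
    pass

registry(metadata=metadata).map_imperatively(Node, nodes)

print([c.name for c in inspect(Node).primary_key])
```

The primary key can alternatively be spelled out on the mapper (as discussed later in this thread), but declaring it on the Table is the usual route.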
On Mar 25, 3:54 pm, Michael Bayer [EMAIL PROTECTED] wrote:
I've modified my SA again, so ResultProxy.keys uses the name as it
comes out of the DB, and everything else uses the lower-cased version.
Again, my stuff still works, but the same test fails as before
i think using entry points to load in external database dialects is a
great idea.
the current six core dialects, though, i think i still want to load via
__import__, since i'm a big fan of running SA straight out of
the source directory (and therefore there'd be no entry points for
I think the issue is you can't put a task_status ordering in your
Task mapper, since that table is not part of its mapping.
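A sketch of where the ordering can live instead, assuming a hypothetical Task/TaskStatus setup and a recent SQLAlchemy API (the original thread would have used the older relation() spelling):

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                        Table, inspect)
from sqlalchemy.orm import configure_mappers, registry, relationship

metadata = MetaData()
tasks = Table(
    "tasks", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
task_status = Table(
    "task_status", metadata,
    Column("id", Integer, primary_key=True),
    Column("task_id", ForeignKey("tasks.id")),
    Column("status", String(20)),
)

class Task:
    pass

class TaskStatus:
    pass

reg = registry(metadata=metadata)
reg.map_imperatively(TaskStatus, task_status)
reg.map_imperatively(Task, tasks, properties={
    # the ordering goes on the relationship, where task_status *is*
    # part of the mapping, rather than on the Task mapper itself
    "statuses": relationship(TaskStatus, order_by=task_status.c.id),
})
configure_mappers()
```

The order_by on the relationship is applied whenever the collection is loaded, which is what the mapper-level ordering cannot do for a table outside its own mapping.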
http://www.sqlalchemy.org/trac/wiki/FAQ#ImusinglazyFalsetocreateaJOINOUTERJOINandSQLAlchemyisnotconstructing
When you do a multiple-row insert, such as:

    users.insert().execute(
        {'user_id':6, 'user_name':'jack', 'password':'asdfasdf'},
        {'user_id':7, 'user_name':'ed'},
        {'user_id':8, 'user_name':'scott', 'password':'fadsfads'},
        {'user_id':9, 'user_name':'bob'},
    )

...the 'password' field
i vote Query().
No surprise there ;-)
Me too, +1
BrendanC [EMAIL PROTECTED] writes:
New user here trying to get started - I'd like to create an ORM map from an
existing legacy (SQL Server) database - although I may convert this to
Postgres or MySQL.
So what is the least painful approach to creating the ORM (I plan to
use TurboGears to
Always one in every bunch. :)
I hear what you're saying about the import errors. But does it really
help to allow work to get done before throwing the error? I would think
you'd want to know right up front if you don't have a driver loaded
rather than letting a program actually get started up
It works after I specify primary_key in my mapper or allow_null_pks, but not
if I specify
the column as primary_key in the Table() constructor.
Thanks
/kk
On 3/26/07, Michael Bayer [EMAIL PROTECTED] wrote:
i notice that neither your table DDL nor your mappers have any notion
of a primary
Currently when SQLAlchemy performs a polymorphic lookup, it queries
against all the detail tables, and returns the correct Python object
represented by the polymorphic identity. Essentially you get a sub-
select for each detail table that is included in your primary join,
even though only one of
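For reference, a small joined-table inheritance sketch (an invented Employee/Engineer model, written against the current declarative API) showing the polymorphic lookup returning the subclass from a base-class query:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    type = Column(String(20))
    __mapper_args__ = {
        "polymorphic_on": type,
        "polymorphic_identity": "employee",
    }

class Engineer(Employee):
    __tablename__ = "engineer"
    id = Column(ForeignKey("employee.id"), primary_key=True)
    language = Column(String(50))
    __mapper_args__ = {"polymorphic_identity": "engineer"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Engineer(name="ann", language="python"))
    session.commit()

with Session(engine) as session:
    # querying the base class returns the Engineer subclass, selected
    # by the row's polymorphic identity
    emp = session.query(Employee).first()
    emp_cls = type(emp).__name__
    emp_lang = emp.language  # may trigger a load of the detail table
```

The per-detail-table querying described above is the cost being discussed in this thread; the subclass columns not present in the base-table query are fetched separately.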
Another issue with the same setup:
node = query.select_by(name='konsole19.xx..com')[0]
print query.select_by(console=node)
The debugging output shows this:
INFO:sqlalchemy.engine.base.Engine.0x..74:SELECT node.cport AS node_cport,
node.type_id AS node_type_id, node.locshelf AS