So, I can use that approach when inserting one row, but not when inserting
multiple rows? Is that correct?
2008/11/24 Michael Bayer [EMAIL PROTECTED]:
oh, right. Column objects only work when you say insert().values(**dict).
MikeCo wrote:
Using 0.5.0rc4 doesn't seem to do that. or what am I
Hello,
I think I have found a bug, but I may be doing something wrong. It
looks like session.query(class).set_shard(shard_id) does not work,
while session.connection(shard_id=shard_id).execute does. The first
does not return any results; the second one does, even when executing
the same query.
All of that dbcook stuff scares me, though I think I can see
why you want it.
heh. Your model will look like this:
---
import dbcook.usage.plainwrap as o2r

class Text(o2r.Type): pass

class Itemtype(o2r.Base):
    name = Text()
    inherits = o2r.Association.Hidden( 'Itemtype',
Petr Kobalíček wrote:
So, I can use that approach when inserting one row, but not when inserting
multiple rows? Is that correct?
You can only use string keys as the arguments to the execute() method;
this applies to one row or many. Column objects as keys can be used for
the values argument / the generative values() method.
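The distinction can be sketched like so. This is a hedged illustration in modern SQLAlchemy syntax (the thread discusses 0.5, where the same rule applies); the table and column names here are invented for the example:

```python
# Column objects work as keys inside values(), but executemany-style
# parameter dicts passed to execute() must use plain string keys.
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine("sqlite://")  # in-memory database, just for the demo
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

with engine.begin() as conn:
    # Single row: a Column object is accepted as the dict key by values().
    conn.execute(users.insert().values({users.c.name: "alice"}))
    # Many rows (executemany): each parameter dict must use string keys.
    conn.execute(users.insert(), [{"name": "bob"}, {"name": "carol"}])

with engine.connect() as conn:
    names = [row.name for row in conn.execute(users.select())]
```

Passing Column-keyed dicts in the executemany list is what fails; the single-row values() call is unaffected.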
You probably don't want to do the inserts one by one because of the
commit overhead, or the need to roll back when one insert fails. You
can still get multiple inserts in one transaction. Add this to the
example posted at http://pastebin.com/fd0653b0 to see three inserts in
one transaction.
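The pastebin example may no longer be available, so here is a minimal, self-contained sketch of the same idea using the stdlib sqlite3 module (table and row names are made up): several inserts grouped into one transaction, committed together or rolled back together.

```python
# Group multiple inserts into a single transaction: one commit at the end,
# and a failure of any single insert rolls back all of them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

rows = [("one",), ("two",), ("three",)]
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        for row in rows:
            conn.execute("INSERT INTO item (name) VALUES (?)", row)
except sqlite3.Error:
    pass  # on error, none of the inserts would remain

count = conn.execute("SELECT COUNT(*) FROM item").fetchone()[0]
```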
Oops, I stand corrected. see http://pastebin.com/fe4a38d6
At least for SQLite, my loop solution is many times slower than the
insert-many (executemany) syntax. I would be curious to see results run
against different database engines; I don't have quick access to them
right now.
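A rough way to reproduce the comparison on an in-memory SQLite database, using the stdlib sqlite3 module (exact numbers vary by machine and driver; on SQLite the executemany path is usually faster because it reuses one prepared statement):

```python
# Time a per-row insert loop against executemany() for the same rows.
import sqlite3
import time

rows = [("name-%d" % i,) for i in range(10_000)]

def timed_insert(use_executemany):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (name TEXT)")
    start = time.perf_counter()
    with conn:  # one transaction around all the inserts
        if use_executemany:
            conn.executemany("INSERT INTO t (name) VALUES (?)", rows)
        else:
            for row in rows:
                conn.execute("INSERT INTO t (name) VALUES (?)", row)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

loop_time, loop_count = timed_insert(False)
many_time, many_count = timed_insert(True)
```

Both paths insert the same number of rows; only the elapsed times differ.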
The executemany() syntax is very efficient, and I don't really understand
how the column/string thing is that much of an issue other than a
small inconvenience and a slight failure of the API to be
consistent... all you have to do is convert the dict keys to
column.key.
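That workaround can be sketched as follows, again in modern SQLAlchemy syntax with invented table names: if your rows happen to be keyed by Column objects, rewrite each key to its string `.key` attribute before handing the list to execute().

```python
# Convert Column-keyed parameter dicts into string-keyed dicts so they
# can be used with executemany-style execute().
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine("sqlite://")
metadata = MetaData()
t = Table(
    "t", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

# Rows keyed by Column objects -- not accepted by executemany as-is.
column_keyed = [{t.c.name: "x"}, {t.c.name: "y"}]
# Convert each key to its string .key attribute.
string_keyed = [{col.key: val for col, val in row.items()} for row in column_keyed]

with engine.begin() as conn:
    conn.execute(t.insert(), string_keyed)
    inserted = conn.execute(t.select()).fetchall()
```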
On Nov 26, 2008,
And that is what we did in our application before this discussion even
started. Don't know what Petr is doing in his.
I think it is more of an interesting, mostly academic, discussion
about alternative techniques; probably a very low-priority issue for
the SA code base.
On Nov 26, 5:56 pm,
insert() has had some inconsistencies reported as of late (like
params()) that I would like to get nailed down. A construct like
this shouldn't have any surprises.
On Nov 26, 2008, at 6:04 PM, MikeCo wrote:
Hey all,
I've got a situation where I have two objects, A and B, and a third object,
C, that has a foreign key reference to both A and B. I can have many
C's that map to the same A.
Now I've implemented a MapperExtension for C that has an after_delete
function, and that function checks to see if the
I'm no expert on these, but I think you need something like
cascade='all' on your relation, _instead_ of the MapperExtension. Check the
docs for the possible settings. The MapperExtension fires too late, and the
session flush plan gets surprised.
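The suggestion can be sketched like so. This is a hedged illustration with invented class names, simplified to a single parent A, and written in modern SQLAlchemy declarative syntax (MapperExtension is the 0.5-era API; the cascade idea is the same): a cascade on the relation deletes the dependent C rows when their parent A is deleted, instead of reacting after the fact.

```python
# Deleting an A cascades the delete to its dependent C rows via the
# relationship's cascade setting, so no after_delete hook is needed.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    cs = relationship("C", cascade="all, delete-orphan")

class C(Base):
    __tablename__ = "c"
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey("a.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    a = A(cs=[C(), C()])
    session.add(a)
    session.commit()
    session.delete(a)  # the cascade removes the two C rows as well
    session.commit()
    remaining = session.query(C).count()
```

As the follow-up below notes, this only fits when the dependent rows really should die with the parent; if other objects also map to A, a blanket cascade is the wrong tool.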
On Thursday 27 November 2008 08:15:04 David Harrison wrote:
Sorry, I should probably have mentioned that C isn't the only object
that maps to A, so a cascade doesn't work.
2008/11/27 [EMAIL PROTECTED]:
So this is actually a follow-on from a question I posed quite a while back:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/4530dd3f5585/eb4638599b02577d?lnk=gstq=Postgres+cascade+error#eb4638599b02577d
So my approach to solving this problem was to use a MapperExtension,