And that is what we did in our application before this discussion even
started; I don't know what Petr is doing in his.
I think this is more of an interesting, mostly academic, discussion
about alternative techniques, and probably a very low-priority issue
for the SA code base.


On Nov 26, 5:56 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> executemany() syntax is very efficient, and I don't really understand
> how the column/string thing is that much of an issue beyond a small
> inconvenience and a slight failure of the API to be consistent... all
> you have to do is convert the dict keys to column.key.
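
To make that concrete: a minimal sketch of the conversion Michael
describes, reusing the MyTable, items, and conn names from the earlier
pastebin examples (illustrative, not tested code):

    # column-keyed dicts, as in the earlier example
    data = [{MyTable.c.col1: itm.x} for itm in items]

    # convert Column keys to their string .key so execute() accepts them
    data = [dict((col.key, val) for col, val in d.items()) for d in data]

    # a single executemany-style call now works
    conn.execute(MyTable.insert(), data)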
>
> On Nov 26, 2008, at 5:52 PM, MikeCo wrote:
>
>
>
> > Oops, I stand corrected. See http://pastebin.com/fe4a38d6
>
> > At least for SQLite, my loop solution is many times slower than the
> > insert-many syntax. I would be curious to see results run against
> > different database engines; I don't have quick access to them right
> > now.
> > Still, unless there are very large numbers of inserts, there should be
> > little, if any, noticeable difference.
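
For anyone who wants to reproduce the comparison, the two forms being
timed are roughly these; this is a self-contained sketch against
in-memory SQLite, with a made-up table and row count:

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer

    engine = create_engine('sqlite://')   # in-memory SQLite
    meta = MetaData()
    mytable = Table('mytable', meta, Column('col1', Integer))
    meta.create_all(engine)
    conn = engine.connect()

    rows = [{'col1': i} for i in range(10000)]

    # fast: one executemany() call at the DBAPI level
    conn.execute(mytable.insert(), rows)

    # slow: the statement is compiled and executed once per row
    for row in rows:
        conn.execute(mytable.insert().values(row))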
>
> > --
> > Mike
>
> > On Nov 26, 4:50 pm, MikeCo <[EMAIL PROTECTED]> wrote:
> >> You probably don't want to do the inserts one by one because of the
> >> commit overhead, or the need to roll back on failure of one insert.
> >> You can still get multiple inserts in one transaction. Add this to the
> >> example posted at http://pastebin.com/fd0653b0 to see three inserts in
> >> one transaction.
>
> >> print '****', '5. column attributes in .values()'
> >> data = [{MyTable.c.col1: itm.x} for itm in items]
> >> print '****', data
> >> trans = conn.begin()  # <-- start a transaction to wrap many inserts
> >> for d in data:
> >>     conn.execute(MyTable.insert().values(d))
> >> trans.commit()  # <-- end transaction
>
> >> Wrapping the insert loop in a transaction should give you the same or
> >> very nearly the same performance as executing the "insert many"
> >> operation, and it only adds 4 or 5 simple lines of code. The exception
> >> might be a large batch process inserting thousands or tens of
> >> thousands of records; in that case, you will need to design a solid
> >> commit strategy anyway, with or without SQLAlchemy.
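
One hypothetical shape for such a strategy is committing in fixed-size
chunks, so a failure only rolls back the current chunk rather than the
whole batch; the function name and chunk size below are made up for
illustration:

    def insert_in_chunks(conn, table, rows, chunk_size=1000):
        # commit every chunk_size rows; on failure, only the
        # current chunk is rolled back
        for start in range(0, len(rows), chunk_size):
            trans = conn.begin()
            try:
                conn.execute(table.insert(), rows[start:start + chunk_size])
                trans.commit()
            except Exception:
                trans.rollback()
                raise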
>
> >> --
> >> Mike
>
> >> On Nov 26, 10:42 am, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
>
> >>> Petr Kobalíček wrote:
>
> >>>> So I can use that approach when inserting one row, but not when
> >>>> inserting multiple rows? Is that correct?
>
> >>> You can only use string keys as the arguments to the execute()
> >>> method; this applies to one row or many. Columns as keys can be
> >>> used for the values argument/generative method on an insert() or
> >>> update() construct.
>
> >>> To be quite honest, there was never any intention for columns as
> >>> keys to not work properly when sent as parameters to .execute();
> >>> there just seems to be a lack of testing in this case, so feel free
> >>> to file a ticket for this.
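
Summing up that distinction with my own minimal sketch (again reusing
MyTable and conn from the earlier examples):

    # execute() parameters: string keys only, one row or many
    conn.execute(MyTable.insert(), {'col1': 1})
    conn.execute(MyTable.insert(), [{'col1': 1}, {'col1': 2}])

    # values(), by contrast, accepts Column objects as keys
    conn.execute(MyTable.insert().values({MyTable.c.col1: 1}))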