Michael Bayer wrote:
matiskiva wrote:

Naturally, the problem is that SA collects the generated id; that is
why every statement is executed alone, and then the generated id is
retrieved.


I was wondering though, is there some kind of a more efficient
mechanism?
At least on MySQL, under InnoDB [or with table locking], all the items
can be inserted and the individual ids can be calculated.
Is there any development there?

Within the ORM, very unlikely.  The DBAPI supports no generalized system
for returning newly generated primary key identifiers across a
multi-row insert, and the ORM does not use the same INSERT statement for
every row - some rows may have values missing, in which case those columns
are omitted from the ultimate INSERT so that a server-side default can fire.

In the case you describe, I assume you mean that the IDs can be "guessed"
by assuming 30K new rows means "max(id)....max(id) + 30000".  I'm not
aware of another system, but let me know if there's some feature of InnoDB
at play here.
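Spelled out, the scheme you'd be relying on looks roughly like this. This
is a sketch only - the `resources` table, the engine URL, and the LOCK
TABLES handling are all illustrative, not anything SA does for you, and it
assumes an AUTO_INCREMENT counter with no gaps above max(id):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("mysql://scott:tiger@localhost/test")
metadata = MetaData()
resources = Table("resources", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

conn = engine.connect()
conn.execute("LOCK TABLES resources WRITE")
try:
    # with the table locked, max(id) + N is a usable guess for the
    # next N ids, *provided* there are no gaps above max(id)
    max_id = conn.execute(
        "SELECT COALESCE(MAX(id), 0) FROM resources").scalar()
    rows = [dict(id=max_id + i, name="resource %d" % i)
            for i in range(1, 30001)]
    conn.execute(resources.insert(), rows)   # a single executemany
finally:
    conn.execute("UNLOCK TABLES")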

If indeed that's the idea, that scheme relies upon specifics not just of
your database but also the scenario, that is your 30K inserts are all of
the identical set of columns, using a sequential integer generator with no
"skipping" of ids which may already exist out of sequence within the table
(i.e. that the inserts really start at max(id) and not just available
slots), with no concurrent inserts elsewhere that may change the
sequencing as it goes (or the table were locked entirely).   All fine
things but nothing that the ORM can assume.   Also, if you're inserting
30K rows, I doubt you really *need* the newly generated IDs or "live
objects" at that point, it sounds much more like you're just dumping a
huge amount of rows to be individually accessed at some later time.

If you *do* actually assume all these things, then you really should just
be using an executemany - particularly since you're assuming you know the
exact structure of rows to be inserted.

I'm assuming you're familiar with executemany; if not, it's described here:

http://www.sqlalchemy.org/docs/sqlexpression.html#executing-multiple-statements
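For the archive, the executemany form is just a single insert() construct
paired with a list of parameter dictionaries (reusing the `resources` table
and `conn` from the sketch above):

# one INSERT statement, many parameter sets; the DBAPI batches them.
# note that no generated ids come back from this form.
conn.execute(
    resources.insert(),
    [{"name": "r1"}, {"name": "r2"}, {"name": "r3"}],
)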






Thank you for your feedback.

In the case I mentioned, I cannot just dump the 30K items, because I need their generated ids for other inserted rows (I have relations). In more detail, I am inserting "resource" items and also "mission" items, and each mission connects to a resource via its id. So dumping all the data without connecting each Python object to its id is not an option.
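To make the shape of the data concrete, here is a sketch with hypothetical
`resources` and `missions` tables standing in for my schema. With resource
ids precomputed under a lock as in the earlier sketch (that is where `conn`
and `base_id` come from), the dependent mission rows can be built without
any per-row round trip:

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

metadata = MetaData()
resources = Table("resources", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)))
missions = Table("missions", metadata,
    Column("id", Integer, primary_key=True),
    Column("resource_id", Integer, ForeignKey("resources.id")))

# assuming `conn` holds a lock on both tables and `base_id` was
# computed as max(id) + 1, as in the earlier sketch
n = 30000
resource_rows = [dict(id=base_id + i, name="resource %d" % i)
                 for i in range(n)]
mission_rows = [dict(resource_id=row["id"]) for row in resource_rows]

conn.execute(resources.insert(), resource_rows)   # two executemany calls,
conn.execute(missions.insert(), mission_rows)     # ids wired up client-side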


About the strategy for resolving the matter - I need to make some assumptions, such as table locking. Obviously this is not a good general approach; however, in many cases bulk-insert-and-acquire-generated-ids functionality is desired. If the functionality requires special configuration and error handling, then we can guarantee that developers are not surprised by failures.

So, I would like to implement it, hopefully with your guidance.

As I see it, I need to perform the following steps:
* Group insert actions by destination table (ordered so as not to break dependencies). This will allow me to perform bulk operations, since I am working on a list of items rather than one item at a time; a rough sketch of this grouping step follows the list.
* Perform configuration- and environment-dependent SQL operations, or fall back to the original per-row insert behavior.
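For the first step, I imagine something along these lines. This is a sketch
only, using the public object_mapper() / mapped_table APIs; dependency
ordering between the tables is not handled here:

from sqlalchemy.orm import object_mapper

def group_pending_by_table(session):
    """Bucket pending objects by the table they map to, so that
    each bucket can later become one executemany."""
    buckets = {}
    for obj in session.new:   # objects added via session.add(), not yet flushed
        table = object_mapper(obj).mapped_table
        buckets.setdefault(table, []).append(obj)
    return buckets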

I need your help with the following:
* some description of how session.add() results in INSERT statements
* a pointer to the code that deals with the inserts

Regards,
Mati



--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.
