Question 1:
I don't think there is a good fancy way of doing this built into 
SQLAlchemy. With your constraint of using a stored proc for inserts (we 
have a similar constraint where I work), one way around the 
multiple-command overhead is to do a bulk insert into a temporary 
"parameters" table, and then use another stored proc that either sends 
those params to your original proc one set at a time, or implements a more 
efficient bulk "insert from select." Error handling can become complex if 
there are cases where the parameter sets could be rejected by the stored 
procedures.
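A minimal sketch of the staging-table approach, run against in-memory SQLite just so it is self-contained; on Oracle the staging table would be a GLOBAL TEMPORARY TABLE and the final "insert from select" would live inside your stored procedure. All table and column names here are hypothetical:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # stand-in for your Oracle engine
meta = sa.MetaData()

# hypothetical staging table holding one row per parameter set
params = sa.Table(
    "insert_params", meta,
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.String(50)),
)
# hypothetical target table (on Oracle, only the proc would touch this)
target = sa.Table(
    "target", meta,
    sa.Column("a", sa.Integer),
    sa.Column("b", sa.String(50)),
)
meta.create_all(engine)

rows = [{"a": i, "b": f"name-{i}"} for i in range(1000)]

with engine.begin() as conn:
    # one executemany round trip loads all 1000 parameter sets
    conn.execute(params.insert(), rows)
    # bulk "insert from select" -- on Oracle this statement would sit
    # inside a stored procedure that also applies your validation rules
    conn.execute(
        target.insert().from_select(
            ["a", "b"], sa.select(params.c.a, params.c.b)
        )
    )

with engine.connect() as conn:
    count = conn.execute(
        sa.select(sa.func.count()).select_from(target)
    ).scalar()
print(count)
```

The key point is that the client side makes two calls instead of 1000; everything row-by-row happens server-side.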

Some engines will also let you send multiple commands in a single 
"BEGIN ... END" SQL block, but I have not tested this with Oracle, and 
there is always a limit to how large such a command can be.
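If you want to try the anonymous-block route, the block can be built as a plain string with uniquely named bind parameters and executed via sqlalchemy.text(). A hedged sketch (my_insert_proc and the parameter layout are hypothetical, and I have not verified Oracle's size limit for such blocks):

```python
def build_plsql_block(proc_name, rows):
    """Build one anonymous PL/SQL block that calls proc_name once per
    parameter set, using uniquely named bind parameters so the whole
    batch travels to the server as a single command."""
    calls, binds = [], {}
    for i, row in enumerate(rows):
        names = []
        for j, value in enumerate(row):
            key = f"p{i}_{j}"       # unique bind name per value
            binds[key] = value
            names.append(f":{key}")
        calls.append(f"  {proc_name}({', '.join(names)});")
    sql = "BEGIN\n" + "\n".join(calls) + "\nEND;"
    return sql, binds

sql, binds = build_plsql_block("my_insert_proc", [(1, "a"), (2, "b")])
print(sql)
```

You would then execute it as `conn.execute(sa.text(sql), binds)`, probably in batches of a few hundred rows to stay under any command-size limit.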

Question 2: 
a / b: A single transaction will not span threads or connections in a 
pool. Your described approach would use a single thread with a single 
connection on the client side.
c: The only real overhead here is the 1000 separate db calls, especially 
if they go across a network; each command sent to the db server carries 
some per-call cost. 1000 calls is a lot but might not be too many, so 
before you go through any optimization, do some performance testing to 
make sure you need it.
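For reference, the "naive" approach looks like the sketch below: one connection checked out from the pool, 1000 calls inside one transaction, a single commit at the end. It runs against SQLite with a plain INSERT standing in for the Oracle stored-procedure call (the proc name in the comment is hypothetical):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # stand-in for your Oracle engine

with engine.begin() as conn:   # checks out ONE pooled connection
    conn.execute(sa.text("CREATE TABLE t (a INTEGER, b TEXT)"))
    for i in range(1000):
        # on Oracle this would instead be something like
        # conn.execute(sa.text("BEGIN my_insert_proc(:a, :b); END;"), {...})
        conn.execute(
            sa.text("INSERT INTO t (a, b) VALUES (:a, :b)"),
            {"a": i, "b": f"name-{i}"},
        )
# leaving the block commits once and returns the connection to the pool

with engine.connect() as conn:
    n = conn.execute(sa.text("SELECT COUNT(*) FROM t")).scalar()
print(n)
```

Wrapping this in a timer against your real database is the performance test I'd run before optimizing anything.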

On Wednesday, 21 October 2020 at 13:11:06 UTC-4 thequie...@gmail.com wrote:

> In our application we are using SQLAlchemy to retrieve data from our 
> Oracle database. However, we have a policy that any INSERTS to the Oracle 
> database are done via stored procedures. We have approximately 1000 rows 
> (where each row consists of 3-4 simple scalars such as numbers or strings) 
> that we need to insert.
>
> Question 1 - is there a sophisticated approach of creating an array of 
> these 1000 items and submitting this collection to Oracle in one shot? If 
> so, are there any references on this?
>
> Question 2 - If we adopt a naive approach of opening a transaction, making 
> 1000 calls to this stored proc that does simple inserts, and then committing 
> to database at the end of the 1000 calls to the stored proc, does this 
> represent a bottleneck on database/computer resources of some sort, such as:
> a - will multiple threads be created for this?
> b - presuming the default of 20 pooled connections to the database 
> <https://docs.sqlalchemy.org/en/13/core/pooling.html>, how many 
> connections from this pool will be used for the 1000 calls to the 
> inserting-stored-procedure before we commit?
> c - Would synchronous calls of this stored procedure pose a serious 
> performance/system-overhead issue? 
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.