On 09/29/2016 03:50 AM, Andrea Cassioli wrote:
Hi,
I am working with SQLAlchemy to insert data into an MSSQL database.
Typically we need to do regular inserts of, say, half a million rows across
a few tables.

Right now I am using the Core flavour to do batch insertion via pyodbc, and
I have noticed that in the DB trace there is an entry per row:

|
declare @p1 int
set @p1=12996
exec sp_prepexec @p1 output,N'@P1 nvarchar(36),@P2 bigint,@P3 float,@P4
float,@P5 float,@P6 float,@P7 nvarchar(5),@P8 nvarchar(5),@P9
nvarchar(3),@P10 nvarchar(3),@P11 nvarchar(9),@P12 nvarchar(1),@P13
nvarchar(7),@P14 float,@P15 float,@P16 float,@P17 float,@P18 float,@P19
float,@P20 float,@P21 float,@P22 int,@P23 float,@P24 varchar(1),@P25
nvarchar(2),@P26 varchar(1),@P27 float,@P28 int,@P29 float',N'INSERT
INTO tmp.guest.[PythonAccess] ([RunID], [DemandID], [FFEPerWeek],
[WeightPerFFE], [MXNPerFFE], [TimeLimit], [PortFrom], [PortTo],
[CargoType], [CtrType], [StringCode], [Direction], [TradeLane],
[20fPct], [40fPct], [45fPct], [MaxTransshipments], [CostLimit],
[FFEPerWeekSource], [WeightPerFFESource], [TimeLimitSource],
[MaxTransshipmentsSource], [CostLimitSource], [SourceChanges],
[RouteCode], [Source], [TimeLimitSlack], [MaxTransshipmentsSlack],
[CostLimitSlack]) VALUES (@P1, @P2, @P3, @P4, @P5, @P6, @P7, @P8, @P9,
@P10, @P11, @P12, @P13, @P14, @P15, @P16, @P17, @P18, @P19, @P20, @P21,
@P22, @P23, @P24, @P25, @P26, @P27, @P28,
@P29)',N'737f1545-a49d-4120-81d4-47fa642d4e0b',12988,0,2759662062713899,36343,736111111109,50000,50,625,N'LVRIX',N'CMDLA',N'DRY',N'DRY',N'WAFEUR999',N'S',N'EUR/WAF',0,83302701910413379,0,16697298089586618,0,1,803,52160288929122,NULL,NULL,NULL,NULL,NULL,NULL,N'W3',NULL,NULL,NULL,NULL
select @p1
|
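
For context, the insert itself is done roughly like this (a minimal sketch of
the pattern; the engine URL, the trimmed-down table definition and the values
are illustrative, the real table has many more columns):

|
import sqlalchemy as sa

# illustrative connection URL and a trimmed-down version of the table
engine = sa.create_engine("mssql+pyodbc://user:password@my_dsn")

metadata = sa.MetaData()
python_access = sa.Table(
    "PythonAccess", metadata,
    sa.Column("RunID", sa.Unicode(36)),
    sa.Column("DemandID", sa.BigInteger),
    sa.Column("FFEPerWeek", sa.Float),
    schema="guest",
)

rows = [
    {"RunID": "737f1545-a49d-4120-81d4-47fa642d4e0b",
     "DemandID": 12988, "FFEPerWeek": 0.27},
    # ... roughly half a million of these dictionaries
]

with engine.begin() as conn:
    # Core "executemany": one insert() construct with a list of parameter
    # dicts; the trace above shows an sp_prepexec entry for every row.
    conn.execute(python_access.insert(), rows)
|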


What surprises me is that there is a prepare per row.

OK, I think you are referring to a "prepared statement", not a two-phase PREPARE; those are two different concepts.


A similar C++ implementation shows only an initial prepare and then only exec
calls, and indeed it is much faster.

I then saw that I can force a prepare on the transaction:

|
  with engine.connect() as conn:
      trans = conn.begin_twophase()
      trans.prepare()
|

But the two-phase transaction is not supported for pyodbc. Why is that? Any
suggestions or alternatives?

So assuming you mean "prepared statements", the Python DBAPI does not have an explicit "prepared statements" feature. A DBAPI like pyodbc can choose to use prepared statements internally, either for certain statements based on statement type, or sometimes as the implementation for the executemany() call that runs multiple parameter sets; it may or may not do so. If you are actually looking to invoke this stored procedure, you would do that via the DBAPI cursor.callproc() method. This method is not part of SQLAlchemy; you would access the cursor directly. An example of this is at http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html?highlight=callproc#calling-stored-procedures .
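
To illustrate the callproc() route, here is a minimal sketch following the
pattern on the linked page; the procedure name and its parameters are
placeholders, and whether your particular DBAPI actually implements
callproc() varies by driver:

|
# call a stored procedure on the raw DBAPI cursor, per the linked docs;
# "my_procedure" and its parameters are placeholders
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.callproc("my_procedure", ["x", "y", "z"])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()
|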

As for two-phase: that feature isn't supported for SQL Server either. Overall,
just about every database driver in Python can barely get reliable behavior
out of 2PC, with the exception of PostgreSQL, so it is basically something
nobody ever uses. It's not related to what you are illustrating above.

Thanks a lot.
