I tried various conversions on the pk values as they enter the 
statement:
1. to bytes
2. to ascii
3. to latin1 (technically the same encoding as the extract source before 
entering the db)

None of these yielded a performance improvement for the non-compiled 
version.
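
Roughly, the conversions looked like this (a sketch only; pk_rows is a 
stand-in for my actual list of pk tuples, not the real variable name):

    # each row is a tuple of str pk values; encode each value before it
    # goes into the generated WHERE clause
    def convert(value, codec="latin1"):
        return value.encode(codec) if isinstance(value, str) else value

    bytes_rows = [tuple(convert(v) for v in row) for row in pk_rows]
    ascii_rows = [tuple(convert(v, "ascii") for v in row) for row in pk_rows]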

I have read that this can be an issue with pyodbc and that there are engine 
and connection settings related to it.  Perhaps I should also try the pymssql 
driver to see whether that changes anything.
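
For reference, this is the sort of connection-level tweak I have seen 
suggested (just a sketch, not something I have verified helps here; the URL 
is a placeholder, and setencoding/setdecoding are pyodbc 4.x APIs):

    import pyodbc
    from sqlalchemy import create_engine, event

    # placeholder URL; trying pymssql would just mean mssql+pymssql://... instead
    engine = create_engine("mssql+pyodbc://user:pass@my_dsn")

    @event.listens_for(engine, "connect")
    def set_pyodbc_encoding(dbapi_conn, connection_record):
        # encode outgoing str parameters and decode incoming CHAR/VARCHAR
        # data as latin1, matching the extract source
        dbapi_conn.setencoding(encoding="latin1")
        dbapi_conn.setdecoding(pyodbc.SQL_CHAR, encoding="latin1")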

On Thursday, August 31, 2017 at 5:00:47 PM UTC-4, Ken MacKenzie wrote:
>
> So inspecting the elements of the tuple, they are both str, so hence 
> unicode.
>
> Are you saying that if I convert those values to bytes it could improve 
> performance?
>
>  
>
>> I'd not bother with the literal_binds and just use a literal value: 
>>
>> pkf = [(col == literal_column("'%s'" % v)) for (col, v) in zip(cols, x)] 
>>
>> but also I'd look to see what the nature of "v" is, if it's like a 
>> Unicode object or something, you might be getting bogged down on the 
>> decode/encode or something like that.   Sending as bytes() perhaps 
>> might change that. 
>>
>
