[cc-ing back to the list in case other readers find it
helpful...]

On 15/11/2011 15:16, Tony Pelletier wrote:
> Thanks, Tim!
>
> This is working brilliantly.... Slow, but working..:)  I can go from
> here and see if there's a way to speed it up.

Well, you've got a few options, although a lot depends on how
much control you have over your data and how well you can predict
its contents.

One option is to encode at the SQL Server level: CAST your NVARCHAR to VARCHAR as part of your query, e.g.:

SELECT
  contacts.id,
  name = CAST (
    contacts.name COLLATE SQL_Latin1_General_CP1_CS_AS AS
      VARCHAR (200)
  )
FROM
  contacts


This will bring the text in as bytes encoded as Latin-1, which you
can then write directly to the CSV without the encoder. Without
having tested this, I imagine it would be faster than encoding
blindly at the Python end, since it'll happen lower down the stack
and you're pinpointing the data rather than running through all
the columns on the off-chance of finding one which is Unicode.
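
For example (just a minimal sketch, Python 2 era, assuming pyodbc
as the database module; the DSN, file name and table are
illustrative, not from your setup), the cast rows can go straight
to csv.writer with no per-value encode pass:

import csv
import pyodbc

# Hypothetical connection details -- substitute your own.
conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes")
cursor = conn.cursor()
cursor.execute("""
    SELECT
      contacts.id,
      name = CAST (
        contacts.name COLLATE SQL_Latin1_General_CP1_CS_AS AS VARCHAR (200)
      )
    FROM
      contacts
""")

# Because of the CAST, name arrives as a Latin-1 byte string, so
# each row can be written as-is without encoding it in Python.
with open("contacts.csv", "wb") as f:
    writer = csv.writer(f)
    for row in cursor:
        writer.writerow(row)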

An alternative is to arrange something equivalent at the Python
end -- i.e. have specific per-row encoders which target only the
columns which are known to be NVARCHAR.
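
Something along these lines (again just a sketch, Python 2 era;
the column positions and the Latin-1 target are assumptions for
illustration) would let each query carry its own encoder:

def make_row_encoder(nvarchar_columns, encoding="latin-1"):
    # Return a function which encodes only the named column
    # positions, leaving everything else (ints, dates, plain
    # str) untouched.
    def encode_row(row):
        row = list(row)
        for i in nvarchar_columns:
            if isinstance(row[i], unicode):
                row[i] = row[i].encode(encoding, "replace")
        return row
    return encode_row

# For the contacts query above, only column 1 (name) is NVARCHAR:
encode_contacts = make_row_encoder([1])
# ... then call writer.writerow(encode_contacts(row)) for each fetched row.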

TJG