PRAGMA statements, I see what you mean now. This is exactly what I needed, 
thanks a lot.
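
Just to make sure we are talking about the same thing, I am assuming the
PRAGMA in question is PRAGMA encoding, i.e. something along these lines (a
rough sketch with a made-up database name; as I understand it the pragma only
takes effect while the database file is still empty, before any tables exist):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db = NULL;

    /* "example.db" is just a placeholder name for this sketch. */
    if (sqlite3_open("example.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Choose the on-disk text encoding up front.  'UTF-8', 'UTF-16',
       'UTF-16le' and 'UTF-16be' are the accepted values; the pragma is
       a no-op once the database already contains data. */
    sqlite3_exec(db, "PRAGMA encoding = 'UTF-16';", NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}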

To clarify what I am doing: my SQL statements are in UTF-8 and they are all 
prepared, with parameter bindings. So table names, column names, etc. are all 
UTF-8.

However, I have table fields which will be UTF-16: for example, filenames 
that have to support international character sets, or metadata fields that can 
hold arbitrary Unicode text. For these I am using sqlite3_bind_text16() and 
passing an appropriate wchar_t buffer.

On the other hand, there is some legacy data that I want to store as UTF-8; 
for those fields I will use sqlite3_bind_text(). A single INSERT statement may 
therefore bind both UTF-16 and UTF-8 (wchar_t and char) fields, roughly as 
sketched below.
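
To make that concrete, here is a simplified sketch of one such mixed insert
(the "files" table, its columns, and the error handling are invented for
illustration, and it assumes wchar_t is a 16-bit UTF-16 code unit, as on
Windows, which is what sqlite3_bind_text16() expects):

#include <wchar.h>
#include <sqlite3.h>

int insert_row(sqlite3 *db,
               const wchar_t *filename,   /* UTF-16 field */
               const char    *legacy_tag) /* UTF-8 field  */
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
                 "INSERT INTO files(name, tag) VALUES(?1, ?2);",
                 -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* UTF-16 binding: the length argument is in bytes, not characters. */
    sqlite3_bind_text16(stmt, 1, filename,
                        (int)(wcslen(filename) * sizeof(wchar_t)),
                        SQLITE_TRANSIENT);

    /* UTF-8 binding in the same prepared statement; -1 means
       nul-terminated. */
    sqlite3_bind_text(stmt, 2, legacy_tag, -1, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}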

At no point am I constructing SQL statements by formatting field data into 
the SQL text with printf()-style conversions.

Will I incur a significant performance penalty from encoding conversions in 
this scenario?

Thanks


> Igor Tandetnik wrote:
> > You can mix and match encodings in your application. The database
> > encoding determines how strings are actually stored in the file (and
> > it's database-wide, not per table). SQLite API converts back and
> > forth as necessary.
> >
> Very inefficiently, but yes, it does. I suggest the OP use
> parameterised queries if you need to use string values; otherwise,
> you'll see significant overhead from conversions back and forth
> between UTF-8 and UTF-16 inside the SQLite code.
> > Igor Tandetnik
