I think a standard function for conversions would be very helpful.

I am still having issues trying to get my characters standardized. I spent
all of yesterday playing with ideas, but I'm still in the dark.

Part of my problem is I don't have a clue what my source data is encoded as.
Does anyone know of a tool which can try to guess the encoding? Basically
it's a custom Java bean written by someone else. It takes reports from a
third-party system and turns them into XML using a string buffer: they just
append everything to the string buffer. The code which actually adds this to
the output (the key piece) I can't actually see at this point. My best
guess, based on research, is that Java usually uses UTF-16. But if that is
so, it should work.
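In lieu of a proper detection tool, one low-tech approach is to capture the raw bytes somewhere you can inspect them and look for a BOM or the telltale NUL bytes that UTF-16 interleaves into ASCII-range text. A rough sketch of that heuristic (not a real detector, just the checks described):

```python
def guess_encoding(data: bytes) -> str:
    """Rough heuristic: check for a BOM, then for the NUL
    bytes UTF-16 interleaves into ASCII-range text."""
    if data.startswith(b"\xff\xfe"):
        return "utf-16-le (BOM)"
    if data.startswith(b"\xfe\xff"):
        return "utf-16-be (BOM)"
    if data.startswith(b"\xef\xbb\xbf"):
        return "utf-8 (BOM)"
    if b"\x00" in data:
        # Code points below U+0100 get a zero high byte in UTF-16.
        return "probably utf-16 (embedded NULs)"
    try:
        data.decode("utf-8")
        return "valid utf-8 (or plain ASCII)"
    except UnicodeDecodeError:
        return "probably a single-byte encoding (e.g. Latin-1)"
```

If the dump shows no NUL bytes at all, the data almost certainly is not UTF-16, whatever Java uses internally.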

If I add the text using the *16 prepare and then retrieve it using the *16
column_text, I still get the two separate characters instead of the umlaut.
So I can only assume that somehow my source isn't UTF-16, or that I am
converting it somewhere in the middle. This is possible, since I am using
Delphi and it has some implicit conversions, but I think I have that
under control.
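For what it's worth, two characters where one umlaut should be is the classic symptom of UTF-8 bytes being decoded as a single-byte encoding such as Latin-1. A quick sketch of that failure mode (which may or may not be what's happening in this pipeline):

```python
# "ü" (U+00FC) encoded as UTF-8 is two bytes; decoded as Latin-1,
# those bytes show up as two separate characters.
raw = "ü".encode("utf-8")        # b'\xc3\xbc'
mangled = raw.decode("latin-1")  # 'Ã¼' -- two characters
print(mangled, len(mangled))
```

If the two characters you see are "Ã¼" (or similar accented pairs), the source is very likely UTF-8, not UTF-16.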

The problem is that if I copy my source and paste it into Notepad, say, it
shows correctly, because Notepad then does its own conversion; and if I save
the file from Notepad and read that back, it works fine. *sigh*

So my questions are:

1) Any tools to determine encoding based on the data?
2) When using the non-16 version of prepare:
If I add text which is in UTF-16, what happens?

And with the *16 version:
If I add UTF-16 text, what happens?
If I add UTF-8 text, what happens?
If I add ASCII text, what happens?
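As a sanity check on my own setup, here is a minimal round-trip through Python's sqlite3 module (which handles the UTF-8 conversion itself). If the umlaut survives here but not through another binding, the conversion is presumably happening on the binding's side rather than in SQLite:

```python
import sqlite3

# In-memory database; Python's sqlite3 hands SQLite UTF-8 text and
# decodes results back to str, so a consistent round-trip should
# preserve the umlaut.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (s TEXT)")
con.execute("INSERT INTO t VALUES (?)", ("Müller",))
(value,) = con.execute("SELECT s FROM t").fetchone()
print(value)  # Müller
```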

Thanks,
