Wow, you are right, this is bizarre... And it's not that glibc intends to compute the length in Unicode characters; it actually counts bytes (plain C chars), as it should, when computing field widths. But, for some strange reason, when a width or precision calculation is involved it tries to parse the char[] using the locale encoding (when there's no point in doing so!), and if that fails it silently truncates the printf output. So it looks more like a glibc bug to me than an interpretation issue (bytes vs. chars). I posted some details on Stack Overflow: http://stackoverflow.com/questions/2792567/printf-field-width-bytes-or-chars
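
For the archives, here is a minimal sketch of what I mean (assuming glibc and a UTF-8 locale such as en_US.UTF-8; the exact trigger and return value may differ between glibc versions):

    #include <stdio.h>
    #include <locale.h>

    int main(void)
    {
        /* Assumes the environment selects a UTF-8 locale, e.g. en_US.UTF-8. */
        setlocale(LC_ALL, "");

        const char s[] = "ab\xff" "cd";   /* 0xff is not valid UTF-8 */

        /* Without a width/precision, glibc copies the bytes verbatim. */
        int n1 = printf("[%s]\n", s);

        /* With a precision, glibc parses the argument as a multibyte
           string in the current locale; on the invalid byte it gives up
           and the output is silently truncated (printf may return -1
           with errno set to EILSEQ). */
        int n2 = printf("[%.10s]\n", s);

        fprintf(stderr, "returned: n1=%d n2=%d\n", n1, n2);
        return 0;
    }

On my machine the first printf prints the raw bytes, while the second produces nothing and reports a failure, which is exactly the silent truncation described above.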
BTW, I understand that PostgreSQL uses locale semantics in the server code. But is this really necessary/appropriate on the client (psql) side? Couldn't we stick with the C locale there?

--
Hernán J. González
http://hjg.com.ar/