> Date: Wed, 14 Jan 2015 10:42:08 +0000
> From: Gavin Smith <gavinsmith0...@gmail.com>
> Cc: Texinfo <bug-texinfo@gnu.org>
> 
> int
> wcwidth (wchar_t wc)
> #undef wcwidth
> {
>   /* In UTF-8 locales, use a Unicode aware width function.  */
>   const char *encoding = locale_charset ();
>   if (STREQ_OPT (encoding, "UTF-8", 'U', 'T', 'F', '-', '8', 0, 0, 0 ,0))
>     {
>       /* We assume that in a UTF-8 locale, a wide character is the same as a
>          Unicode character.  */
>       return uc_width (wc, encoding);
>     }
>   else
>     {
>       /* Otherwise, fall back to the system's wcwidth function.  */
> #if HAVE_WCWIDTH
>       return wcwidth (wc);
> #else
>       return wc == 0 ? 0 : iswprint (wc) ? 1 : -1;
> #endif
>     }
> }
> 
> locale_charset is always called every time.

Yes, I know.  But that only happens if gnulib's replacement wcwidth is
used at all.  Is it used on your platform?  AFAIK, glibc provides
wcwidth, so I'd expect the gnulib version not to be picked up there.

> It must be slower under a Windows system. The implementation of
> locale_charset is in the localcharset.c file from gnulib, although I
> haven't looked at it in detail, and don't know why it would be slow
> under Windows.

If I comment out the call to locale_charset in gnulib's wcwidth, and
show that the slow-down goes away, will that convince you?
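To be concrete, the test I have in mind is roughly this change in
gnulib's wcwidth.c (experiment only, not a proposed fix; the
hard-coded "UTF-8" just stands in for whatever locale_charset would
have returned on the system in question):

-  const char *encoding = locale_charset ();
+  /* const char *encoding = locale_charset (); */
+  const char *encoding = "UTF-8";  /* experiment: skip the lookup */

If the slow-down disappears with that change, we know where the time
is going.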

In any case, I don't see why we need to call locale_charset again for
each and every character.  We should call it once and cache the
result: it depends on the environment outside the reader and will not
change during the session, right?
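Purely as a sketch of what I mean (the cached_encoding and
in_utf8_locale names are just illustrative, and I haven't tried to
build this), the replacement function could look like:

#include <wchar.h>          /* wchar_t, wcwidth */
#include <wctype.h>         /* iswprint */
#include <string.h>         /* strcmp */
#include "localcharset.h"   /* locale_charset */
#include "uniwidth.h"       /* uc_width */

/* Cache the locale's charset: it comes from the environment, which
   does not change while the reader is running, so one lookup is
   enough.  */
static const char *cached_encoding;

static int
in_utf8_locale (void)
{
  if (cached_encoding == NULL)
    cached_encoding = locale_charset ();
  return strcmp (cached_encoding, "UTF-8") == 0;
}

int
wcwidth (wchar_t wc)
#undef wcwidth
{
  if (in_utf8_locale ())
    /* As in the current code, assume that in a UTF-8 locale a wide
       character is the same as a Unicode character.  */
    return uc_width (wc, cached_encoding);

#if HAVE_WCWIDTH
  return wcwidth (wc);
#else
  return wc == 0 ? 0 : iswprint (wc) ? 1 : -1;
#endif
}

The same caching could of course be done with STREQ_OPT and the
existing if/else layout; the point is just that locale_charset runs
once per process instead of once per character.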
