On 4/12/17 00:12, Tom Lane wrote:
> The change in setup_formatted_log_time seems a bit weird:
>
> -		char		msbuf[8];
> +		char		msbuf[10];
>
> The associated call is
>
> 	sprintf(msbuf, ".%03d", (int) (saved_timeval.tv_usec / 1000));
>
> Now a human can see that saved_timeval.tv_usec must be 0..999999, so
> that the %d format item must always emit exactly 3 characters, which
> means that really 5 bytes would be enough.  I wouldn't expect a
> compiler to know that, but if it's making a generic assumption about
> the worst-case width of %d, shouldn't it conclude that we might need
> as many as 13 bytes for the buffer?  Why does msbuf[10] satisfy it
> if msbuf[8] doesn't?
Because the /1000 takes off three digits?

The full message from an isolated test case is

test.c: In function 'f':
test.c:11:15: warning: '%03d' directive writing between 3 and 8 bytes into a region of size 7 [-Wformat-overflow=]
  sprintf(buf, ".%03d", (int) (tv.tv_usec / 1000));
               ^
test.c:11:15: note: directive argument in the range [-2147483, 2147483]
test.c:11:2: note: '__builtin___sprintf_chk' output between 5 and 10 bytes into a destination of size 8
  sprintf(buf, ".%03d", (int) (tv.tv_usec / 1000));
  ^

(This is with -O2.  With -O0 it only asks for 5 bytes.)

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services