#3880: integer overflow in date.c (mutt_mktime)
-----------------------+----------------------
Reporter: vinc17 | Owner: mutt-dev
Type: defect | Status: new
Priority: critical | Milestone:
Component: mutt | Version: 1.7.0
Resolution: | Keywords:
-----------------------+----------------------
Comment (by code@…):
{{{
On Mon, Oct 03, 2016 at 01:04:23AM -0000, Mutt wrote:
{{{
}}}
Ah, you're right. It only works if sizeof (time_t) == sizeof (unsigned
long long), i.e. on 64-bit, which sadly is the only place I tested,
for lack of an immediately available 32-bit machine.
The idea was this (step by step, though in a slightly different order):
$ ./foo
x = ~0
-> x = ffffffff
y = (unsigned long long)x
-> y = ffffffffffffffff
y = y >> 1
-> y = 7fffffffffffffff
x = ffffffff
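/*
 * A minimal sketch of what foo might have looked like; this is a
 * hypothetical reconstruction, not the actual program traced above.
 * With a 32-bit int and a 64-bit unsigned long long it reproduces
 * the values printed in the trace.
 */
#include <stdio.h>

int main (void)
{
  int x = ~0;                             /* all bits set: value -1, prints as ffffffff */
  unsigned long long y;

  printf ("x = %x\n", (unsigned int) x);
  y = (unsigned long long) x;             /* converts the *value* -1, hence ffffffffffffffff */
  printf ("y = %llx\n", y);
  y = y >> 1;                             /* unsigned shift: the top bit becomes 0 */
  printf ("y = %llx\n", y);
  printf ("x = %x\n", (unsigned int) x);  /* x itself is unchanged */
  return 0;
}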
My mistake was that after the ULL cast I expected y to be 0xffffffff
(or, more precisely, 0x00000000ffffffff). TBH it makes no sense to me
that it is not... If that were true, then right-shifting once would
give you the maximum positive integer that fits in a time_t, and of
course on 64-bit it does exactly that, since sizeof (time_t) ==
sizeof (ULL).
You need the value that you shift to be unsigned, because shifting a
negative integer right leaves the high bit set.
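/*
 * A tiny self-contained check of that claim (an illustrative sketch, not
 * code from mutt): right-shifting a negative signed int is implementation-
 * defined and usually arithmetic, so the high bit stays set, whereas
 * shifting the same bits as unsigned clears it.
 */
#include <stdio.h>

int main (void)
{
  int s = -1;
  printf ("signed   >> 1: %x\n", (unsigned int) (s >> 1));  /* typically ffffffff */
  printf ("unsigned >> 1: %x\n", ((unsigned int) s) >> 1);  /* 7fffffff */
  return 0;
}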
It's moot though, since the compiler behaves (IMHO) nonsensically
when casting a signed int to a larger UNSIGNED int. Seems to me the
entire point of casting a signed value to an unsigned one is that you
don't want the sign bit to be treated as a sign bit... I'm sure
there's a reason for this that's perfectly reasonable, albeit arcane
and obscure, but for exactly that reason it seems like the opposite
of what you'd want here...
}}}
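For reference, the surprise described above is how C integer conversions
are defined: converting a negative signed value to a wider unsigned type
operates on the value (adding ULLONG_MAX + 1), not on the bit pattern, so
(unsigned long long) -1 is 0xffffffffffffffff rather than
0x00000000ffffffff. The sketch below shows two possible ways around it;
it is illustrative only, zero_extend and TIME_T_MAX are names chosen here
rather than anything taken from date.c, and the macro assumes time_t is a
signed integer type with no padding bits.
{{{
#include <limits.h>
#include <time.h>

/* Zero-extend instead of sign-extend: convert through the same-width
 * unsigned type first, then widen. */
unsigned long long zero_extend (int x)
{
  return (unsigned long long) (unsigned int) x;  /* 0x00000000ffffffff for x = -1 */
}

/* Largest value representable in a signed time_t, computed without
 * overflow and without any unsigned detour: (2^(N-2) - 1) * 2 + 1,
 * i.e. 2^(N-1) - 1. */
#define TIME_T_MAX ((((time_t) 1 << (sizeof (time_t) * CHAR_BIT - 2)) - 1) * 2 + 1)
}}}
Either way the result no longer depends on sizeof (time_t) being equal to
sizeof (unsigned long long).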
[attachment:"untitled-part.sig"]
--
Ticket URL: <https://dev.mutt.org/trac/ticket/3880#comment:8>
Mutt <http://www.mutt.org/>
The Mutt mail user agent