On Thu, May 17, 2018 at 9:49 AM, Chris Barker via Python-ideas
<python-ideas@python.org> wrote:
> On Tue, May 15, 2018 at 11:21 AM, Rob Speer <rsp...@luminoso.com> wrote:
>>
>>
>> I'm sure that the issue of "what do you call the leap second itself" is
>> not the problem that Chris Barker is referring to. The problem with leap
>> seconds is that they create unpredictable differences between UTC and real
>> elapsed time.
>>
>> You can represent a timedelta of exactly 10^8 seconds, but if you add it
>> to the current time, what should you get? What UTC time will it be in 10^8
>> real-time seconds? You don't know, and neither does anybody else, because
>> you don't know how many leap seconds will occur in that time.
>
>
> indeed -- even if you only care about the past, where you *could* know the
> leap seconds -- they are, by their very nature, of second precision -- which
> means right before a leap second occurs, your "time" could be off by up to a
> second (or a half second?)

Not really. There are multiple time standards in use. Atomic clocks
count the duration of time -- from their point of view, every second
is the same (modulo relativistic effects). TAI is the international
standard that uses atomic clocks to count the seconds since a fixed
starting point, at mean sea level on Earth.
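
As an aside, on Linux you can ask the kernel for its notion of TAI via
clock_gettime(). A minimal sketch, with the caveats that this is
Linux-only and that the kernel only knows the TAI-UTC offset if an NTP
daemon has told it:

    import time

    # time.CLOCK_TAI may not exist in every Python build; 11 is the
    # raw Linux clock id for CLOCK_TAI, so fall back to that.
    CLOCK_TAI = getattr(time, "CLOCK_TAI", 11)

    # The kernel derives CLOCK_TAI from CLOCK_REALTIME plus a TAI-UTC
    # offset that an NTP daemon has to set, so this prints the current
    # leap second count (37 at the moment) on a well-configured box,
    # and 0 on a box where nothing ever set the offset.
    utc_now = time.clock_gettime(time.CLOCK_REALTIME)
    tai_now = time.clock_gettime(CLOCK_TAI)
    print("TAI - UTC =", round(tai_now - utc_now), "seconds")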

Another approach is to declare that each day (defined as "the time
between the sun passing directly over the Greenwich Observatory
twice") is 24 * 60 * 60 seconds long. This is what UT1 does. The
downside is that the Earth's rotation rate varies over time, so the
duration of a UT1 second varies from day to day in ways that are hard
to estimate precisely.

UTC is defined as a hybrid of these two approaches: it uses the same
seconds as TAI, but every once in a while we add or remove a leap
second to keep it roughly aligned with UT1. This is the time standard
that computers use the vast majority of the time. Importantly, since
we only ever add or remove an integer number of seconds, and only at
the boundary between two seconds, UTC is defined just as precisely as
TAI.
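
To make that concrete, converting a UTC timestamp to TAI is just a
table lookup plus an integer offset. A rough sketch (not real library
code, and with only a two-entry fragment of the published leap second
table):

    from datetime import datetime, timedelta

    # Fragment of the leap second table: from each UTC instant onward,
    # TAI - UTC equals the given whole number of seconds.  (The real
    # table starts in 1972 and is extended by IERS announcements.)
    TAI_MINUS_UTC = [
        (datetime(2015, 7, 1), 36),   # after the 2015-06-30 leap second
        (datetime(2017, 1, 1), 37),   # after the 2016-12-31 leap second
    ]

    def utc_to_tai(utc):
        # Hypothetical helper: use the last entry at or before `utc`
        # (this fragment only covers timestamps from 2015-07-01 on).
        offset = max(secs for start, secs in TAI_MINUS_UTC if start <= utc)
        return utc + timedelta(seconds=offset)

The point is that for past dates the offset is always a known whole
number of seconds; the only unpredictability is which future dates
will get a new table entry.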

So if you're trying to measure time using UT1 then yeah, your computer
clock is wrong all the time by up to 0.9 seconds, and we don't even
know what UT1 is to better than roughly millisecond precision.
Generally your clock gets slightly closer to UT1 just after a leap
second, but it's not an accurate UT1 estimate either before or after.
Which is why no-one does this.

But if you're trying to measure time using UTC, then computers with
the appropriate setup (e.g. at CERN, or in HFT data centers) routinely
have clocks accurate to <1 microsecond, and leap seconds don't affect
that at all.

The datetime module still isn't appropriate for doing precise
calculations over periods long enough to include a leap second,
though: Python simply doesn't know how many seconds elapsed between
two arbitrary UTC timestamps, even if both are in the past.
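
For example, the end-of-2016 leap second is invisible to datetime
(stdlib only, nothing hypothetical here):

    from datetime import datetime, timezone

    t1 = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
    t2 = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)

    # Two real seconds elapsed between these instants, because
    # 2016-12-31 23:59:60 UTC existed -- but datetime says one:
    print(t2 - t1)   # 0:00:01

    # and the leap second itself can't even be represented:
    # datetime(2016, 12, 31, 23, 59, 60) raises
    # "ValueError: second must be in 0..59"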

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
