On Aug 4, 2004, at 12:13 AM, Tom Lane wrote:

> Michael Glaesemann <[EMAIL PROTECTED]> writes:
>> No. I'm just saying that PostgreSQL does not represent or store
>> timestamps as epoch timestamps internally.
>
> You're wrong.
>
> It's not exactly Unix-like because we use a different epoch date
> (2000-1-1 not 1970-1-1) but the concept is just the same: what's
> stored is the number of seconds before or after the epoch.  The
> default is to store this as a double precision number (hence supporting
> fractional seconds, with a machine-dependent amount of precision)
> but you can compile the server to use 64-bit integers instead.  In that
> case the integer value actually represents microseconds before or after
> the epoch, and so the precision is fixed at microseconds.

As I understood Achilleus, he said that PostgreSQL uses UNIX epoch timestamps internally, which are defined as seconds since 1970-01-01. What I said is that PostgreSQL does not use the UNIX epoch internally, which is exactly what you've verified: PostgreSQL counts seconds from 2000-01-01, stored by default as a double precision float, and can be compiled to use 64-bit integers representing microseconds from 2000-01-01 instead. Thank you for explaining these things. However, I don't quite understand how I am wrong in saying that PostgreSQL does not use UNIX epoch timestamps internally, as you've clearly explained that it doesn't.
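
To make the difference concrete, here's how I understand the arithmetic, as a rough sketch in C. This is my own illustration, not code from the PostgreSQL source; the constant and function names are made up, and it only covers the integer-timestamp case you describe.

#include <stdint.h>
#include <stdio.h>

/*
 * The PostgreSQL epoch (2000-01-01) is 30 years after the UNIX epoch
 * (1970-01-01): 10957 days, including 7 leap days, or 946684800 seconds.
 */
#define PG_EPOCH_OFFSET_SECS INT64_C(946684800)

/*
 * Hypothetical conversion from a UNIX timestamp (seconds since
 * 1970-01-01) to the integer-timestamp internal form described above:
 * microseconds before or after 2000-01-01.
 */
static int64_t
unix_secs_to_pg_usecs(int64_t unix_secs)
{
    return (unix_secs - PG_EPOCH_OFFSET_SECS) * INT64_C(1000000);
}

int
main(void)
{
    /* 2004-08-04 00:13:00 UTC as a UNIX timestamp */
    int64_t unix_ts = INT64_C(1091578380);

    printf("microseconds since 2000-01-01: %lld\n",
           (long long) unix_secs_to_pg_usecs(unix_ts));
    return 0;
}

A timestamp before 2000-01-01 simply comes out negative, which is why the two representations are incompatible as raw values even though the concept is the same.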


> What you see when you display the value is an external textual
> representation, not the internal form.

Which I don't think was ever at issue.

Thanks again for explaining the internals. I'm trying to learn as much as I can by grepping the source, but it's often easier to hear an explanation.

Michael Glaesemann
grzm myrealbox com

