Hello,

I am setting up a proof of concept database to store some historical data.
Whilst I've used PostgreSQL a bit in the past, this is the first time I've
had to look closely at disk usage, because of the amount of data that could
potentially be stored. I've done a quick test, and I'm a little confused as
to why it is occupying so much space on disk. Here is my table definition:

CREATE TABLE "TestSize"
(
  "Id" integer NOT NULL,
  "Time" timestamp without time zone NOT NULL,
  "Value" real NOT NULL,
  "Status" smallint NOT NULL,
  PRIMARY KEY ("Id", "Time")
);

CREATE INDEX test_index ON "TestSize" ("Id");
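
The test data is just 1 million rows of arbitrary readings; an INSERT over
generate_series along these lines produces an equivalent data set (the
exact values shouldn't matter for the size question):

-- Bulk load 1 million rows of arbitrary test data:
-- one reading per second for each of 10 ids.
INSERT INTO "TestSize" ("Id", "Time", "Value", "Status")
SELECT i % 10,
       TIMESTAMP '2000-01-01 00:00:00' + (i / 10) * INTERVAL '1 second',
       random()::real,
       0
FROM generate_series(0, 999999) AS s(i);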

With a completely empty table, the database is 7 MB. After I insert 1
million records into the table, it grows to 121 MB. My understanding is
that each of the fields is sized as follows:

integer - 4 bytes
timestamp without time zone - 8 bytes
real - 4 bytes
smallint - 2 bytes
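
Those sizes can be confirmed from psql with pg_column_size:

-- Each call reports the on-disk size of a single value, in bytes.
SELECT pg_column_size(1::integer)                         AS int_bytes,       -- 4
       pg_column_size(now()::timestamp without time zone) AS ts_bytes,        -- 8
       pg_column_size(1::real)                            AS real_bytes,      -- 4
       pg_column_size(1::smallint)                        AS smallint_bytes;  -- 2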

So 1 million records need at least 18 million bytes, or roughly 17 MB, of
raw column data. Now I'm sure there is extra space required for managing
the primary key, the index, and other miscellaneous overhead involved in
getting this data into PostgreSQL's internal storage format. But even if I
triple the number of bytes stored per record, I only end up at around 51
MB. Am I missing something obvious?
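
In case a breakdown helps with the diagnosis, I believe the standard size
functions can show how the 121 MB splits between the table heap and the
two indexes (the primary key's implicit index plus test_index):

-- Total on-disk size of the table, including indexes and TOAST,
-- versus just the heap and just the indexes.
SELECT pg_size_pretty(pg_total_relation_size('"TestSize"')) AS total,
       pg_size_pretty(pg_relation_size('"TestSize"'))       AS heap_only,
       pg_size_pretty(pg_indexes_size('"TestSize"'))        AS indexes_only;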

Cheers,

Andrew
