At the suggestion of several people, I have increased the default settings
in postgresql.conf before continuing my PostgreSQL vs. MySQL performance
tests.

 

To date, I have only been loading a (roughly) million-row file, creating
indexes during the load, running a vacuum analyze, and running a couple of
simple queries.

 

I intend to perform more complicated tests, but I did not want to do that
until people stopped telling me my methodology for simple tests
was...flawed.

 

I ran a thorough series of tests, varying shared_buffers from 1000 to 9000,
work_mem from 10000 to 90000, and maintenance_work_mem from 10000 to 90000.
The complete results are long (I will provide them if anyone is interested),
so I am only including a small selection.
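
For reference, the high end of the sweep corresponds to postgresql.conf
entries like the following (a sketch of the shape of the settings, not my
actual file; note that a change to shared_buffers only takes effect after a
server restart):

# shared_buffers is counted in 8 kB buffers; the *_mem settings are in kB
shared_buffers = 9000
work_mem = 90000
maintenance_work_mem = 90000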

 

Before I do that, I will confess that increasing memory made more of a
difference than I thought it would.  I know many of you are thinking
"no kidding" but I thought it would only be important for big, complicated
queries, or a server with multiple concurrent requests.  No, it makes a big
difference even for "merely" loading a million rows and indexing them.

 

                                                   Time in seconds
shared_buffers   work_mem   maintenance_work_mem      COPY     VACUUM
          1000      10000                  10000   186.154      9.814
          3000      10000                  10000    64.404      4.526
          5000      50000                  50000    65.036      3.435
          9000      90000                  90000    63.664      2.218

(shared_buffers is in 8 kB buffers; work_mem and maintenance_work_mem are
in kB; times are in seconds.)
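
A quick way to double-check which values a given run is actually using (not
part of the timed runs themselves) is to query the live settings from psql:

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');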

 

 

---------- The relevant commands

create table data (
      fid   integer,
      rid   integer,
      range real,
      x     real,
      y     real,
      z     real,
      bs    real,
      snr   real,
      rvel  real,
      cfar  smallint);

 

create index fidrid_data on data (fid,rid);
create index fidx on data (x);

 

-- COPY 934500 rows into the table
COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM '/home/rick/bed/data/data.dat';

 

-- VACUUM
vacuum analyze data;
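
The per-statement times can be captured from psql itself; one way (a sketch,
not necessarily how the numbers above were taken) is to turn on timing
before running the load:

\timing
COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM '/home/rick/bed/data/data.dat';
vacuum analyze data;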

 

 
