I don't know why 7.4 is slower, but I have some suggestions for
additional things to test, and for how to make it faster.

First off, try 200 transactions of 1,000 records each; you might even want
to try 20 transactions of 10,000 records each.  Postgres seems to run much
faster the fewer commits you make, but different configs may change the
sweet spot.
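
For example, with the logs table from your schema, each batch would be
shaped like this (the values shown are just placeholders for whatever
your test program generates):

begin;
insert into logs (ctime, stime, itime, agentid, subagentid,
                  ownerid, hostid, appname, logbody)
    values (0, 0, 0, 1, 1, 1, 1, 'app', 'sample log body');
-- ... repeat the insert until 1,000 (or 10,000) rows are queued ...
commit;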

Secondly, one possible solution to your problem is to drop the indexes,
insert the new rows, and recreate the indexes.  Of course, for testing,
you'll want to time the entire drop/insert/create process and compare it
to the raw insert time with the indexes intact.  I use stored procedures
for this on my databases, e.g.:

select drop_foo_indexes();
...
<commands to insert many rows into table foo>
...
select create_foo_indexes();
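
The bodies of those functions aren't shown above, but here is a minimal
sketch, adapted to the logs table quoted below, of what they might look
like in PL/pgSQL (the function names are hypothetical, and this assumes
the plpgsql language has been added to the database with createlang):

create function drop_logs_indexes() returns integer as '
begin
    -- drop each secondary index; the primary key index stays
    execute ''drop index ctime_ndx'';
    execute ''drop index stime_ndx'';
    execute ''drop index itime_ndx'';
    execute ''drop index agentid_ndx'';
    execute ''drop index ownerid_ndx'';
    execute ''drop index hostid_ndx'';
    return 0;
end;
' language 'plpgsql';

create function create_logs_indexes() returns integer as '
begin
    -- recreate the same indexes defined in the schema below
    execute ''create index ctime_ndx on logs using btree (ctime)'';
    execute ''create index stime_ndx on logs using btree (stime)'';
    execute ''create index itime_ndx on logs using btree (itime)'';
    execute ''create index agentid_ndx on logs using hash (agentid)'';
    execute ''create index ownerid_ndx on logs using hash (ownerid)'';
    execute ''create index hostid_ndx on logs using hash (hostid)'';
    return 0;
end;
' language 'plpgsql';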

Another thing to consider is vacuuming.  You don't mention how often you
vacuumed the database during testing; I would recommend a "vacuum full"
between each test (unless, of course, you're testing how much a lack
of vacuuming hurts performance ;)
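
Something like this between runs (adding ANALYZE is my own habit; it
refreshes the planner statistics at the same time):

vacuum full analyze logs;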

Hope this helps.

Sezai YILMAZ wrote:
Hello

I need high throughput while inserting into PostgreSQL. Because of that I
did some PostgreSQL insert performance tests.

------------------------------------------------------------
-- Test schema
create table logs (
      logid serial primary key,
      ctime integer not null,
      stime integer not null,
      itime integer not null,
      agentid integer not null,
      subagentid integer not null,
      ownerid integer not null,
      hostid integer not null,
      appname varchar(64) default null,
      logbody varchar(1024) not null
);

create index ctime_ndx on logs using btree (ctime);
create index stime_ndx on logs using btree (stime);
create index itime_ndx on logs using btree (itime);
create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);
------------------------------------------------------------

Test Hardware:
IBM Thinkpad R40
CPU: Pentium 4 Mobile, 1993 MHz (at full power)
RAM: 512 MB
OS: GNU/Linux, Fedora Core 1, kernel 2.4.24

A test program developed with libpq inserts 200,000 rows into the logs
table. Insertions are made 100 rows per transaction (2,000 transactions
in total).

Some parameter changes from the postgresql.conf file follow:
----------------------------------------------------------------
shared_buffers = 2048           # min max_connections*2 or 16, 8KB each
max_fsm_relations = 20000       # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 200000          # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 256 # min 10
wal_buffers = 64                # min 4, typically 8KB each
sort_mem = 32768                # min 64, size in KB
vacuum_mem = 16384              # min 1024, size in KB
checkpoint_segments = 6         # in logfile segments, min 1, 16MB each
checkpoint_timeout = 900        # range 30-3600, in seconds
fsync = true
wal_sync_method = fsync         # the default varies across platforms:
enable_seqscan = true
enable_indexscan = true
enable_tidscan = true
enable_sort = true
enable_nestloop = true
enable_mergejoin = true
enable_hashjoin = true
effective_cache_size = 2000     # typically 8KB each
geqo = true
geqo_selection_bias = 2.0       # range 1.5-2.0
geqo_threshold = 11
geqo_pool_size = 0              # default based on tables in statement,
                               # range 128-1024
geqo_effort = 1
geqo_generations = 0
geqo_random_seed = -1           # auto-compute seed
----------------------------------------------------------------

The test was run with both PostgreSQL 7.3.4 and PostgreSQL 7.4.1 (the
test program was recompiled against each version).

The results are below (average inserted rows per second).

# of EXISTING RECORDS      speed for PostgreSQL 7.3.4   speed for PostgreSQL 7.4.1
==================================================================================
      0 initial records    1086 rows/s                  1324 rows/s
200,000 initial records     781 rows/s                   893 rows/s
400,000 initial records     576 rows/s                   213 rows/s
600,000 initial records     419 rows/s                   200 rows/s
800,000 initial records     408 rows/s                  not tested because of bad results

When the logs table is rebuilt with only one index (the primary key), a
speed of 2941 rows/s is reached. But I need all seven indexes.


The question is: why is PostgreSQL 7.4.1 so slow under heavy work?

Is there a way to speed up inserts without eliminating indexes?

What about concurrent inserts into the same table (separate instances of
the test program running concurrently)? It did not work.

--
Bill Moran
Potential Technologies
http://www.potentialtech.com

