Hi all:

We are loading a number (100+) of SQL files, each about 100 MB in
size. Each file takes about three hours to load. There is very
little load on the database other than the COPY FROM operations.

We are running postgresql-8.1.3 under CentOS 4 on a RAID 1/0 array
with 4 disks (so we effectively have only one spindle). The partitions
are set up with LVM, and iostat 5 shows (one sample):

  avg-cpu: %user %nice %sys %iowait %idle
           1.70  0.00  0.80 51.40   46.10

  Device:  tps     Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
  sda      179.20  1472.00     2561.60     7360      12808
  sda1     0.00    0.00        0.00        0         0
  sda2     385.20  1462.40     2561.60     7312      12808
  dm-0     0.80    0.00        6.40        0         32
  dm-1     0.00    0.00        0.00        0         0
  dm-2     0.00    0.00        0.00        0         0
  dm-3     0.00    0.00        0.00        0         0
  dm-4     4.40    0.00        35.20       0         176
  dm-5     0.00    0.00        0.00        0         0
  dm-6     380.00  1462.40     2520.00     7312      12600

dm-6 is where the data files reside and dm-4 is where the WAL archives
are kept. Note that all the DMs are on the same RAID device, /dev/sda2.
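Converting the sda figures to real throughput (iostat reports 512-byte
blocks) shows how little raw bandwidth is actually moving, which makes me
suspect the time is going to random I/O (index maintenance?) rather than
sequential writes:

```python
# Convert iostat Blk_read/s and Blk_wrtn/s (512-byte blocks) to MiB/s.
BLOCK_SIZE = 512  # bytes per iostat block

def blocks_to_mib(blocks_per_sec):
    """Convert iostat blocks/s to MiB/s."""
    return blocks_per_sec * BLOCK_SIZE / (1024 * 1024)

# Figures for sda from the iostat sample above.
read_mib = blocks_to_mib(1472.00)    # ~0.72 MiB/s
write_mib = blocks_to_mib(2561.60)   # ~1.25 MiB/s
print(f"read: {read_mib:.2f} MiB/s, write: {write_mib:.2f} MiB/s")
```

Under 2 MiB/s combined, at ~180 tps and 51% iowait, is seek-bound
behavior, not a streaming bulk load.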

A sample psql command file to load the data is:

  BEGIN;
  COPY peers (observe_start, observe_end, geo_scope, geo_value,
      peer_a, peer_b) FROM stdin WITH NULL AS '';
  (data here)
  (3 more COPY commands to different tables, with data)
  COMMIT;

The primary keys for the tables being loaded are composite keys using
4-7 columns, so that may be part of the issue.
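For reference, the peers table presumably looks something like this
(only the column names come from the COPY above; the types and the
exact primary key column list are my guesses):

```sql
-- Hypothetical definition; types and PK column list are guesses.
CREATE TABLE peers (
    observe_start timestamp NOT NULL,
    observe_end   timestamp NOT NULL,
    geo_scope     text      NOT NULL,
    geo_value     text      NOT NULL,
    peer_a        integer   NOT NULL,
    peer_b        integer   NOT NULL,
    PRIMARY KEY (observe_start, observe_end, geo_scope,
                 geo_value, peer_a, peer_b)
);
```

Every row COPYed in has to be inserted into that wide composite index,
so the index b-tree gets touched (and possibly read from disk) once per
row.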

From postgresql.conf:

  shared_buffers = 3000
  #temp_buffers = 1000                    # min 100, 8KB each
  #max_prepared_transactions = 5          # can be 0 or more
  work_mem = 2048                         # min 64, size in KB
  maintenance_work_mem = 65536            # min 1024, size in KB
  #max_stack_depth = 2048                 # min 100, size in KB

The prior settings for work_mem/maintenance_work_mem were the
defaults:

  #work_mem = 1024                       # min 64, size in KB
  #maintenance_work_mem = 16384          # min 1024, size in KB

I also took a look at the buffer-cache hit rates:

# select * from pg_statio_user_tables;
 relid | schema |   relname    | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------+--------+--------------+----------------+---------------+---------------+--------------
 17282 | public | providers    |      179485097 |      78832253 |        835008 |    196903582
 17264 | public | events       |              0 |             0 |               |
 17262 | public | days         |            495 |           219 |           478 |           16
 17276 | public | peers        |      147435004 |     114304828 |       1188908 |    295569499
 17288 | public | rankings     |      564638938 |     345456664 |     275607291 |   1341727605
 17270 | public | market_share |         131932 |         90048 |          5408 |       182100

market_share did have one tidx_blks_read reported, but all the other
fields (toast_blks_read, toast_blks_hit, tidx_blks_read,
tidx_blks_hit) were empty for all rows.

This looks like we have whole indexes in memory except for the days
table, which has a low update rate, so I am not worried about that.

However for the heap_blks_read and heap_blks_hit we get a different
story:

  relname       | hit_percent
  --------------+-----------
  providers     | 43.92
  days          | 44.24
  peers         | 77.52
  rankings      | 61.18
  market_share  | 68.25

so hit ratios range from about 44% for providers to 78% for
peers. Not horrible, given that we are doing data warehousing rather
than OLTP, but I am not sure what effect raising them (by
increasing shared_buffers, I think) would have on the COPY operation. I
suspect none.
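One caveat on my hit_percent column above: it looks like I computed
heap_blks_hit / heap_blks_read. The conventional buffer hit ratio is
hit / (hit + read), which comes out lower:

```python
# Conventional buffer-cache hit ratio: hits / (hits + reads).
def hit_ratio(blks_read, blks_hit):
    """Return the cache hit ratio as a percentage."""
    total = blks_read + blks_hit
    return 100.0 * blks_hit / total if total else 0.0

# (heap_blks_read, heap_blks_hit) from pg_statio_user_tables above.
tables = {
    "providers":    (179485097, 78832253),
    "peers":        (147435004, 114304828),
    "rankings":     (564638938, 345456664),
    "market_share": (131932, 90048),
}
for name, (read, hit) in tables.items():
    # providers ~30.5%, peers ~43.7%, rankings ~38.0%, market_share ~40.6%
    print(f"{name:12s} {hit_ratio(read, hit):5.2f}%")
```

Either way, COPY is mostly writing, so I still doubt shared_buffers is
the bottleneck here.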

To try to solve this speed issue:

 I checked the logs and was seeing a few

    2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)

 of these, so I changed:

   checkpoint_segments = 30 
   checkpoint_warning = 150

 in postgresql.conf and reloaded postgres. I have seen only one of
 these log messages in the past week.
 
 I have turned off autovacuum.

 I have increased the maintenance_work_mem as mentioned
   above. (Although I didn't expect it to do anything unless we
   drop/recreate indexes).

 I have increased work_mem as mentioned above.

The only things I can think of are increasing shared memory or
dropping indexes.

I don't see any indication in the docs that increasing shared memory
would help speed up a copy operation.

The only indexes we have to drop are the ones on the primary keys
(there is one non-primary key index in the database as well).

Can you drop the index behind a table's primary key and add it back
later? Am I correct in saying that the primary key index is what
enforces the uniqueness constraint on the table? If the index is
dropped and non-unique primary-key data has been added, what happens
when you re-add the index?
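If dropping it is possible, I assume the cycle would look something
like this (the constraint name is a guess; \d peers would show the
real one, and the PK column list is taken from my guess above):

```sql
-- Sketch of a drop/load/re-add cycle; names are stand-ins, untested.
BEGIN;
ALTER TABLE peers DROP CONSTRAINT peers_pkey;
COPY peers (observe_start, observe_end, geo_scope, geo_value,
    peer_a, peer_b) FROM stdin WITH NULL AS '';
-- (data here)
ALTER TABLE peers ADD PRIMARY KEY (observe_start, observe_end,
    geo_scope, geo_value, peer_a, peer_b);
COMMIT;
```

Presumably the ADD PRIMARY KEY step would fail with a unique violation
if duplicate rows slipped in while the index was gone, leaving the
duplicates to be cleaned up by hand.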

Does anybody have any ideas, or things to check, on why loading a
100 MB SQL file using psql would take 3 hours?

Thanks in advance for any ideas.

--
                                -- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
