Hello All,

Need some help in scaling PostgreSQL:

I have a table with 400M records and 5 int columns, with an index on only 1
column.

Rows are updated by a Perl script that takes 10k numbers per transaction and
fires a single UPDATE for each one in a loop, keeping track of the result
returned. If zero rows are affected, it does an INSERT at a later stage. In
short, if the record is present in the DB it gets updated, and if not it gets
inserted. More than 80% of the records are already in the DB, so updates
dominate.
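
For clarity, the per-number logic is roughly equivalent to the sketch below
(Python/psycopg2 rather than the actual Perl; the table "numbers" and the
columns "num"/"hits" are hypothetical names, not the real schema):

import psycopg2

conn = psycopg2.connect("dbname=mydb")   # connection details assumed
cur = conn.cursor()

def load_batch(batch):
    # batch is a list of ~10k numbers handled in one transaction
    for n in batch:
        # fire a single UPDATE per number and check how many rows it hit
        cur.execute("UPDATE numbers SET hits = hits + 1 WHERE num = %s", (n,))
        if cur.rowcount == 0:
            # not present yet, so insert it instead
            cur.execute("INSERT INTO numbers (num, hits) VALUES (%s, 1)", (n,))
    conn.commit()   # one commit per 10k batch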

We need to speed up this process, as it takes about 150 seconds to complete a
10k batch. From the database logs, each update takes about 15 ms on average.

I also tried a bulk DELETE of 1M numbers followed by a COPY of the same, but
no luck so far; the DELETE and the COPY each take more than an hour.
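
For reference, the bulk attempt looks roughly like this (again a
Python/psycopg2 sketch against the same hypothetical table, not the actual
script):

import io
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # connection details assumed
cur = conn.cursor()

def bulk_reload(rows):
    # rows is a list of (num, hits) tuples, ~1M of them
    # delete the existing rows for these keys in one statement
    cur.execute("DELETE FROM numbers WHERE num = ANY(%s)",
                ([num for num, _ in rows],))
    # stream the replacement rows back in via COPY
    buf = io.StringIO("".join("%d\t%d\n" % (num, hits) for num, hits in rows))
    cur.copy_expert("COPY numbers (num, hits) FROM STDIN", buf)
    conn.commit()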

A few details:

PostgreSQL 9.0.0 on openSUSE 11.0 (64-bit), shared_buffers=2.5GB,
effective_cache_size=20GB, checkpoint_segments=200 (raised for bulk loading)

Server: Dell, dual quad-core, 32GB RAM, DB partition on RAID 10 and pg_xlog on
RAID 1.

P.S. Previously we had a Slony read-only slave on 8.4.2, where the delete was
faster (about 7 min) and the copy took about 6 min. We moved to PostgreSQL 9
with a read-only standby slave to remove the trigger overhead caused by Slony
(and because the slave was always lagging during bulk loads on the master), in
the hope of speeding up the process.

Any help would be much appreciated...

With Regards
sandy



 
