jay wrote:
        I know the problem: there are about 35 million rows, which take about
12 GB of disk space, and checkpoint_segments is set to 64. But the update runs
in a single transaction, which quickly fills up the checkpoint segments and
forces frequent checkpoints, and checkpoints consume a lot of resources, so the
update gets slower and slower, and the bgwriter won't write the pages because
the transaction hasn't committed yet.
Creating a new table might be a quick solution, but it isn't appropriate in
some cases.
        If we could commit every 1000 rows per round, it might resolve the
problem.
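
For concreteness, a minimal sketch of that batched-commit idea (the table name
"big_table", its dense bigint primary key "id", and the column "flag" are all
made up here; COMMIT inside a loop requires a PostgreSQL 11+ procedure,
otherwise the batching has to be driven from the client):

-- Hypothetical batched update: commit after every batch_size rows.
CREATE PROCEDURE batched_update()
LANGUAGE plpgsql
AS $$
DECLARE
    batch_size bigint := 1000;
    last_id    bigint := 0;
    max_id     bigint;
BEGIN
    SELECT max(id) INTO max_id FROM big_table;
    WHILE last_id < max_id LOOP
        UPDATE big_table
           SET flag = true                 -- the column being mass-updated
         WHERE id > last_id
           AND id <= last_id + batch_size;
        last_id := last_id + batch_size;
        COMMIT;                            -- allowed in procedures since PG 11
    END LOOP;
END;
$$;

CALL batched_update();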

Committing more frequently won't help you with checkpoints. The updates will generate just as much WAL regardless of how often you commit, so you will have to checkpoint just as often. And commits have no effect on bgwriter either; bgwriter will write just as much regardless of how often you commit.
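
You can see that for yourself by comparing WAL positions before and after the
update in psql; the amount written is essentially the same whether you commit
once or every 1000 rows. (pg_current_wal_lsn()/pg_wal_lsn_diff() are the names
in PostgreSQL 10 and later; older releases call them
pg_current_xlog_location()/pg_xlog_location_diff(). Table and column names
below are the same made-up ones as in the sketch above.)

-- psql: remember the current WAL position, run the update, then diff.
SELECT pg_current_wal_lsn() AS before_lsn \gset

UPDATE big_table SET flag = true;

SELECT pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), :'before_lsn')
       ) AS wal_written;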

One idea would be to partition the table vertically, that is, split the table into two tables, so that the columns that you need to update like that are in one table, together with the primary key, and the rest of the columns are in another table. That way the update won't need to scan or write the columns that are not changed. You can create a view on top of the two tables to make them look like the original table to the application.
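
A rough sketch of what that could look like (all names here are invented;
adjust to the real schema):

-- "t" stands for the original 35-million-row table, with a bigint primary key
-- "id", a frequently updated column "status", and wide, rarely-changed
-- columns "payload1" and "payload2".
CREATE TABLE t_hot (
    id      bigint PRIMARY KEY,
    status  text                       -- the column(s) the bulk update touches
);

CREATE TABLE t_cold (
    id        bigint PRIMARY KEY REFERENCES t_hot (id),
    payload1  text,
    payload2  text
);

-- The view presents the two tables to the application as if they were one.
CREATE VIEW t AS
SELECT h.id, h.status, c.payload1, c.payload2
  FROM t_hot h
  JOIN t_cold c USING (id);

-- The bulk update now reads and rewrites only the narrow table:
UPDATE t_hot SET status = 'done';

Note that a view over a join like this is not automatically updatable, so the
application's writes either go to the underlying tables directly or through
rules/triggers on the view.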

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
