Just out of curiosity, Mark: didn't you write your session daemon so that you don't have to put sessions in Postgres anymore? Or are you just giving that as an example of a very wide, very heavily updated table? My session tables have been an extreme case of this problem, but no other table I have is so adversely affected by this behavior. My decision was not to pull Postgres out entirely, just to use other session handlers.

Rick

On Jun 22, 2006, at 7:59 AM, Mark Woodward wrote:

After a long battle with technology, [EMAIL PROTECTED] ("Mark
Woodward"), an earthling, wrote:
Clinging to sanity, [EMAIL PROTECTED] ("Mark Woodward") mumbled into
her beard:
[snip]

1. The index points to all the versions, until they get vacuumed out.

It can't point to "all" versions; it points to the last "current" version
as updated by vacuum, or to the first version of the row.

No, it points to *all* the versions.

Suppose I take a table with two rows:

INFO:  analyzing "public.test"
INFO:  "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Then, over and over, I remove and insert one entry with the same PK:

sample=# delete from test where id = 2;insert into test (id) values (2);
DELETE 1

[snip]

Now, I vacuum it.

sample=# vacuum verbose analyze test;
INFO:  vacuuming "public.test"
INFO:  index "test_id_key" now contains 2 row versions in 2 pages
DETAIL:  10 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "test": removed 10 row versions in 1 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "test": found 10 removable, 2 nonremovable row versions in 1 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.test"
INFO:  "test": 1 pages, 2 rows sampled, 2 estimated total rows
VACUUM

Notice that the index contained 10 versions of that one row.

It pointed to *ALL* the versions.
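
You can also watch the index itself grow, without reading VACUUM VERBOSE
output. A sketch, assuming 8.1 or later where pg_relation_size() is
available:

SELECT pg_relation_size('test_id_key');
-- ... run the delete/insert cycle a few thousand more times ...
SELECT pg_relation_size('test_id_key');
-- the index is now larger; VACUUM makes the dead entries reusable, but
-- the relation does not shrink back on its own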

Hmm, OK, then the problem is more serious than I suspected.
This means that every index on the table has to be updated by every
transaction that modifies that row. Is that correct?
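
A quick sketch of how one might check (the table and index names here
are made up for illustration): even an index on a column that the UPDATE
never touches accumulates a dead entry per row version.

CREATE TABLE t (id integer PRIMARY KEY, val integer, note text);
CREATE INDEX t_val_idx ON t (val);
INSERT INTO t VALUES (1, 1, 'x');
UPDATE t SET note = 'y';  -- modifies no indexed column
VACUUM VERBOSE t;         -- yet both t_pkey and t_val_idx should report
                          -- a removed index row version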

I am attaching some code that shows the problem with regard to
applications such as web server session management. When run, it shows
the system handling fewer and fewer sessions each second. Here is a
brief output:

[EMAIL PROTECTED]:~/pgfoo$ ./footest
1307 sessions per second, elapsed: 1
1292 sessions per second, elapsed: 2
1287 sessions per second, elapsed: 3
....
1216 sessions per second, elapsed: 25
1213 sessions per second, elapsed: 26
1208 sessions per second, elapsed: 27
....
1192 sessions per second, elapsed: 36
1184 sessions per second, elapsed: 37
1183 sessions per second, elapsed: 38
....
1164 sessions per second, elapsed: 58
1170 sessions per second, elapsed: 59
1168 sessions per second, elapsed: 60

As you can see, in about a minute at high load, this very simple table
lost about 10% of its performance, and I've seen worse depending on
update frequency. Before you say this is an obscure problem, I can tell
you it isn't: I have worked with more than a few projects that had to
switch away from PostgreSQL because of this behavior.

Obviously this is not a problem for small sites, but it is a real
problem for an enterprise-level web site with millions of visitors and
actions a day. Frankly, it is a classic example of something that does
not scale: the more updates there are, the higher the load becomes. You
can watch it climb in "top" as the footest program runs.

There has to be a more linear way of handling this scenario.
<footest.c>
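
For anyone who wants to reproduce the trend without the attachment,
contrib/pgbench with a custom script drives a similar workload. The
schema and names below are guesses, not what footest.c actually does:

CREATE TABLE sessions (session_id integer PRIMARY KEY,
                       last_hit   timestamptz,
                       data       text);
INSERT INTO sessions
    SELECT i, now(), 'payload' FROM generate_series(1, 1000) AS i;

-- churn.sql, run as: pgbench -n -f churn.sql -t 100000 sample
\setrandom id 1 1000
UPDATE sessions SET last_hit = now(), data = 'xyz' WHERE session_id = :id;

The -n flag skips pgbench's own vacuum, so the slowdown from
accumulating dead row and index versions stays visible.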
