On 07/21/2015 02:00 PM, Alexander Korotkov wrote:
On Tue, Jul 21, 2015 at 12:40 PM, Heikki Linnakangas <hlinn...@iki.fi>
wrote:

Has anyone done any performance testing of this?

The purpose of a fillfactor is to avoid the storm of page splits right
after building the index, when there are some random updates to the table.
It causes the index to bloat, as every full page is split to two half-full
pages, and also adds latency to all the updates that have to split pages.
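To make the split storm concrete, here is a toy simulation of it. This is illustrative Python with a made-up page capacity and update pattern, not PostgreSQL code: pages are bulk-loaded to a given fill percentage, then updates are spread evenly and the resulting splits counted.

```python
# Toy model of fillfactor and page splits: illustrative Python only,
# with a made-up page capacity, not PostgreSQL code.
PAGE_CAPACITY = 100  # keys per page in this model

def build(keys, fillfactor):
    """Bulk-load sorted keys, filling each page to the given percentage."""
    per_page = max(1, PAGE_CAPACITY * fillfactor // 100)
    return [keys[i:i + per_page] for i in range(0, len(keys), per_page)]

def insert(pages, key):
    """Insert a key into its page; split a full page into two halves."""
    for i, page in enumerate(pages):
        if key <= page[-1] or i == len(pages) - 1:
            page.append(key)
            page.sort()
            if len(page) > PAGE_CAPACITY:
                mid = len(page) // 2
                pages[i:i + 1] = [page[:mid], page[mid:]]
                return 1  # this insert caused a page split
            return 0

keys = list(range(0, 1_000_000, 10))  # 100,000 evenly spaced keys

for ff in (100, 90):
    pages = build(keys, ff)
    before = len(pages)
    # update 1% of the keys, spread evenly across the key range;
    # at 100% every touched page splits and the page count doubles,
    # at 90% the reserved free space absorbs all the updates
    splits = sum(insert(pages, k + 1) for k in keys[::100])
    print(f"fillfactor {ff}: {before} -> {len(pages)} pages, {splits} splits")
```

With these made-up numbers the 100%-filled structure doubles in page count while the 90% one absorbs the same updates without a single split, mirroring the btree_100 vs. btree_90 comparison below.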

As the code stands, do we actually have such a problem with GIN? Let's
investigate. Let's create a little test table:

create table foo (id int4, t text);
insert into foo select g, 'foo' from generate_series(1, 1000000) g;

And some indexes on it:

-- B-tree index on id, 100%
create index btree_100 on foo (id) with (fillfactor=100);
-- B-tree index on id, 90% (which is the default)
create index btree_90 on foo (id) with (fillfactor=90);
-- GIN index on id. Id is different on each row, so this index has no
-- posting trees, just the entry tree.
create index gin_id on foo using gin (id) with (fastupdate=off);
-- GIN index on t. T is the same on each row, so this index consists of a
-- single posting tree.
create index gin_t on foo using gin (t) with (fastupdate=off);

Immediately after creation, the index sizes are:

postgres=# \di+
                           List of relations
  Schema |   Name    | Type  | Owner  | Table |  Size   | Description
--------+-----------+-------+--------+-------+---------+-------------
  public | btree_100 | index | heikki | foo   | 19 MB   |
  public | btree_90  | index | heikki | foo   | 21 MB   |
  public | gin_id    | index | heikki | foo   | 53 MB   |
  public | gin_t     | index | heikki | foo   | 1072 kB |
(4 rows)


Now let's update 1% of the table, spread evenly across the table, and see
what happens to the index sizes:

postgres=# update foo set id = id where id % 100 = 0;
UPDATE 10000
postgres=# \di+
                           List of relations
  Schema |   Name    | Type  | Owner  | Table |  Size   | Description
--------+-----------+-------+--------+-------+---------+-------------
  public | btree_100 | index | heikki | foo   | 39 MB   |
  public | btree_90  | index | heikki | foo   | 21 MB   |
  public | gin_id    | index | heikki | foo   | 53 MB   |
  public | gin_t     | index | heikki | foo   | 1080 kB |
(4 rows)

As you can see, the btree_100 index doubled in size. That's the effect we're
trying to avoid with the fillfactor, and indeed in the btree_90 index it
was avoided. However, the GIN indexes are not bloated either. Why is that?

The entry tree (demonstrated by the gin_id index) is not packed tightly
when it's built. If anything, we could pack it more tightly to make it
smaller to begin with. For the use cases where GIN works best, though, the
entry tree should be fairly small compared to the posting lists, so it
shouldn't make a big difference. For the posting tree, I think the reason
is the way the items are packed into segments. Even though we pack the
segments as tightly as possible, there is always some free space left over
on a page: not enough to fit another segment at index build time, but
enough for updates to add items to the existing segments. So in practice
you always end up with some free space on the posting tree pages, even
without a fillfactor, and the "effective fillfactor" even today is not
actually 100%.
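A rough sketch of this leftover-space effect, again in illustrative Python rather than PostgreSQL code; the page and segment sizes here are round numbers chosen for illustration, not GIN's real constants:

```python
# Rough sketch of why tightly packed pages still retain free space.
# Sizes are illustrative round numbers, not GIN's real constants.
import random

PAGE_SPACE = 8000   # usable bytes per page (illustrative)
SEG_MAX = 256       # maximum bytes per posting-list segment (illustrative)

def pack_pages(n_segments, rng):
    """Greedily pack segments onto pages; return each page's leftover bytes."""
    leftovers = []
    free = PAGE_SPACE
    for _ in range(n_segments):
        seg = rng.randint(SEG_MAX // 2, SEG_MAX)  # compressed sizes vary
        if seg > free:             # segment doesn't fit: close this page
            leftovers.append(free)
            free = PAGE_SPACE
        free -= seg
    leftovers.append(free)         # the final, still-open page
    return leftovers

left = pack_pages(10_000, random.Random(1))
avg = sum(left) / len(left)
# every closed page keeps some leftover, always smaller than one segment,
# which updates can later use to grow existing segments without a split
print(f"average leftover: {avg:.0f} bytes "
      f"({100 * avg / PAGE_SPACE:.1f}% free)")
```

The leftover on each closed page is strictly smaller than the segment that failed to fit, so even a "100% packed" page effectively reserves a little free space.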


There are two reasons for this behavior. You mentioned the first one:
the "effective fillfactor" for posting lists is not 100%. But the second
one is the structure of the posting lists. In your example, the UPDATE
allocates new tuples at the end of the heap, so from the posting lists'
point of view these updates are appends.

Ah, good point.
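The append observation can be seen in the same kind of toy model (illustrative Python, not PostgreSQL internals): when every new item is larger than all existing ones, inserts land only on the rightmost page, so the fully packed interior pages never split.

```python
# Append-only inserts in a toy page model (illustrative Python, not
# PostgreSQL internals): track only the fill count of each page.
PAGE_CAPACITY = 100

def append_only(n_pages, n_inserts):
    """Pages start 100% full; every new key is larger than all existing ones."""
    pages = [PAGE_CAPACITY] * n_pages
    splits = 0
    for _ in range(n_inserts):
        pages[-1] += 1                  # appends hit only the rightmost page
        if pages[-1] > PAGE_CAPACITY:   # split it into two half-full pages
            half = pages[-1] // 2
            pages[-1:] = [pages[-1] - half, half]
            splits += 1
    # interior pages were never touched, so they stay 100% full
    assert all(p == PAGE_CAPACITY for p in pages[:n_pages - 1])
    return splits, len(pages)

splits, total = append_only(1000, 10_000)
print(f"{splits} splits, {total} pages")  # only the right edge ever splits
```

Even with ten times as many inserts as pages, only a handful of rightmost-page splits occur, which is why append-heavy updates don't bloat a 100%-packed structure.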

The situation is different when you reuse space in the heap. See this
example:

drop table foo;
create table foo (id integer, val int[]) with (fillfactor = 90);
insert into foo (select g, array[g%2]::int[] from
generate_series(1,1000000) g);
create index foo_val_idx_100 on foo using gin(val) with (fastupdate=off,
fillfactor=100);
create index foo_val_idx_90 on foo using gin(val) with (fastupdate=off,
fillfactor=90);

# \di+
                              List of relations
  Schema |      Name       | Type  | Owner  | Table |  Size   | Description
--------+-----------------+-------+--------+-------+---------+-------------
  public | foo_val_idx_100 | index | smagen | foo   | 1088 kB |
  public | foo_val_idx_90  | index | smagen | foo   | 1200 kB |
(2 rows)

update foo set val = array[(id+1)%2]::int[] where id % 50 = 0;

# \di+
                              List of relations
  Schema |      Name       | Type  | Owner  | Table |  Size   | Description
--------+-----------------+-------+--------+-------+---------+-------------
  public | foo_val_idx_100 | index | smagen | foo   | 1608 kB |
  public | foo_val_idx_90  | index | smagen | foo   | 1200 kB |
(2 rows)

Hmm. That's slightly different from the test case I used, in that your
update changes the indexed value, which means that the new value goes to a
different posting tree than the old one. I'm not sure if that makes any
difference. You're also updating more rows, 1/50 vs. 1/100.

This case is closer to my earlier one:

postgres=# create table foo (id int4, i int4, t text) with (fillfactor=90);
CREATE TABLE
postgres=# insert into foo select g, 1 from  generate_series(1, 1000000) g;
INSERT 0 1000000
postgres=# create index btree_foo_id on foo (id); -- to force non-HOT updates
CREATE INDEX
postgres=# create index gin_i on foo using gin (i) with (fastupdate=off);
CREATE INDEX
postgres=# \di+
                           List of relations
 Schema |     Name     | Type  | Owner  | Table |  Size   | Description
--------+--------------+-------+--------+-------+---------+-------------
 public | btree_foo_id | index | heikki | foo   | 21 MB   |
 public | gin_i        | index | heikki | foo   | 1072 kB |
(2 rows)

postgres=# update foo set id=-id where id % 100 = 0;
UPDATE 10000
postgres=# \di+
                           List of relations
 Schema |     Name     | Type  | Owner  | Table |  Size   | Description
--------+--------------+-------+--------+-------+---------+-------------
 public | btree_foo_id | index | heikki | foo   | 22 MB   |
 public | gin_i        | index | heikki | foo   | 1080 kB |
(2 rows)

I'm not sure what's making the difference to your test case. It could
simply be that your case happens to leave less free space on each page,
because of the different data.

Based on this, I think we should just drop this patch. It's not useful in
practice.

I wouldn't say it's not useful at all. It's certainly not as useful as the
btree fillfactor, but it could still be useful in some cases...
Perhaps we could leave 100 as the default fillfactor, but provide the
option.

Doesn't seem worth the trouble to me...

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
