I have followed the discussion from 3 months ago on WAL bypass and wanted to offer some more information.
 
I have long believed that the bottleneck in transaction-oriented systems is the writing of the indexes, complete with splits and merges. A single update to one field of a heavily-indexed table could cause dozens of index writes to cascade.
 
The point is that speeding up index writing should offer the most "bang for the buck" in performance terms. As a bonus, a corrupted index can ALWAYS be rebuilt from the table data, while a corrupted table cannot be recovered.
 
I did some informal testing using pgbench on 8.0.7. First, I ran pgbench normally with 75 users each doing 100 transactions, with a full vacuum between runs. My machine consistently gave me 92 tps.
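For anyone who wants to reproduce the setup, the sequence was roughly the following. The database name "bench" is my own placeholder, and exact option spellings may differ slightly on 8.0-era pgbench:

```shell
createdb bench
pgbench -i bench                  # initialize the pgbench tables
psql -d bench -c 'VACUUM FULL'    # full vacuum between runs
pgbench -c 75 -t 100 bench        # 75 clients, 100 transactions each
```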
 
As an experiment, I commented out all of the XLOG code I could find in the btree index source. I basically replaced the test for a temp table with "if (0)" and recompiled.
 
Running pgbench again with 75 users and 100 transactions, I got a consistent 132 tps, about a 43% increase in throughput.
 
It seems to me that major performance gains can be had by allowing some indexes to be created with an "UNSAFE-FAIL" flag, which would eliminate WAL logging and fsync() for the index files.
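To make the proposal concrete, the user-visible side might look something like this. The WITH option is entirely invented; no such syntax exists today:

```sql
-- hypothetical syntax: no WAL, no fsync() for this index;
-- it would simply be rebuilt during crash recovery
CREATE INDEX accounts_balance_idx
    ON accounts (balance)
    WITH (unsafe_fail = true);
```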
 
Upon recovery, the index gets rebuilt. The only downside is potentially long rebuild times during recovery.
 
Thoughts?
 
Sincerely,
Marty
