Darren Duncan wrote:
This matter reminds me of a discussion on the SQLite list years ago about whether pragma synchronous=normal or synchronous=full should be the default, and thankfully 'full' won.
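For readers who haven't used SQLite: the setting in question is a per-database pragma, and the trade-off looks like this (the levels are from SQLite's documentation; the inline risk notes are my summary):

```sql
-- SQLite durability levels; FULL is the safe default that won the debate above.
PRAGMA synchronous = FULL;    -- sync at every critical moment; survives power loss
PRAGMA synchronous = NORMAL;  -- fewer syncs; small corruption risk on power failure
PRAGMA synchronous = OFF;     -- hand writes to the OS and move on; fastest, unsafe
```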

Right now, when I see deployments in the field, serious database servers set up by professional DBAs tend to use the right hardware and configuration to do the correct thing with PostgreSQL. Trivial installs done for testing purposes cheat, but many of those users don't really care, because they know they are not running a real server and expect that their desktop is neither reliable nor fast at database work. The SQLite situation has a slightly different context, because the places it gets embedded don't regularly have a DBA involved at all, even where the data is important. It's often just system software sitting in the background that nobody is even aware of.

I also remember when SQLite did come out of the background: it was crucified for causing Firefox slowdowns that were actually linked to changed kernel fsync behavior. That's the sort of bad press this project really doesn't need right now, over a problem that doesn't even matter on so many production database servers. You may not be aware that such a change is already floating around out there. PostgreSQL installs on Linux kernel 2.6.32 or later using ext4 are dramatically slower out of the box than they used to be, because the OS started doing the right thing by default; no change in the database code. I remain in mild terror that this news is going to break in a bad way and push this community into damage control. So far I've only seen it reported on Phoronix, and that report included testimony from a kernel developer explaining how the regression was introduced, so it wasn't so bad. The next such publicized report may not be so well informed.

Some of this works out to when to change things rather than what to change. PostgreSQL is at a somewhat critical spot right now. If people grab a new version and performance sucks compared to earlier ones, they're not going to think "oh, maybe they changed an option and the new version is better tuned for safety". They're going to say "performance sucks on this database now" and give up on it. Many evaluations are done on hardware that isn't representative of a real database server, and a change that only hurts those people--while not actually impacting production-quality hardware--needs to be made carefully. And that's exactly what I think would happen here if this were just changed all of a sudden.

I don't think anyone is seriously opposed to changing the defaults for safety instead of performance. The problem is that said change would need to be *preceded* by a major update to the database documentation, and perhaps even some code changes to issue warnings when you create a cluster with what is now going to be a slow configuration. We'd need to make it really obvious to people who upgrade and notice that performance tanks that it's because of a configuration change made for safety reasons, one that they can undo for test deployments. That particular area, giving people better advice about how to properly tune a new install for its intended workload, is something that's been making slow progress but still needs a lot of work. I think that if better tools come along there, so that most people are expected to follow a path that involves a tuning tool, it will be much easier to stomach the idea of changing the default--knowing that the user will be pointed at something that can undo that change when it's appropriate to do so.
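To make concrete what a test deployment could undo, here is a minimal sketch of the relevant postgresql.conf knobs. The parameter names are real PostgreSQL settings; which ones a future tuning tool would actually touch is my assumption:

```
# Safety-oriented defaults: commits are durable across a crash or power loss.
fsync = on                  # force WAL writes out to stable storage
synchronous_commit = on     # wait for the WAL flush before reporting commit success

# For a throwaway test instance, where losing the whole cluster on a crash
# is acceptable, a tuning tool could flip these for raw speed:
# fsync = off
# synchronous_commit = off
```

Note that synchronous_commit = off only risks losing the most recent transactions, while fsync = off risks corrupting the cluster; any warning machinery would want to make that distinction loudly.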

--
Greg Smith, 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services and Support  www.2ndQuadrant.us
Author, "PostgreSQL 9.0 High Performance"    Pre-ordering at:
https://www.packtpub.com/postgresql-9-0-high-performance/book


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
