Neil Conway <[EMAIL PROTECTED]> wrote on 02.12.2004, 05:05:12:
> On Wed, 2004-12-01 at 19:34 +0100, [EMAIL PROTECTED] wrote:
> > I regard performance testing as an essential part of the
> > release process of any performance critical software. Most earlier beta
> > releases were fixing functional problems and now the focus has moved to
> > performance related issues.
>
> I don't agree. There is an important difference between fixing a
> performance regression and making a performance improvement. If a change
> made in 8.0 resulted in a significant, unintentional performance
> regression, I think there would be grounds for fixing it (assuming the
> fix was fairly noninvasive). The aggregate patch addresses a problem
> that has existed for a long time (7.4 at the least); I don't see the
> need to delay 8.0 to fix it.
I understand this argument, but believe it misses one consideration. IMHO the effects of this policy are somewhat problematic for performance-related issues: ISTM performance - of any product - is mainly tested during late beta. This is the first real chance to see whether the software performs, since it was unstable or incomplete before that point.

Unplanned performance opportunities arise at this time in the cycle. It seems strange to me to fix only the regressions that emerge, but simply ignore the positive opportunities that arise alongside them. For me, the subject under discussion here is what to do with these positive opportunities; I agree with you on handling regressions. If we choose not to take such opportunities, then many performance concerns are effectively only ever fixed one release behind. The group is therefore making an unconscious decision that performance is not a top-tier function of the product.

My own viewpoint comes from effectively being a leading-edge user (by proxy) - I don't claim any right to an opinion on this issue for any other reason. I do have respect for robustness concerns (why else would I do PITR?), and wouldn't argue this route if the code changes we're discussing weren't well isolated and IMHO fairly non-invasive. These decisions are always a risk-benefit trade-off.

If I seem to speak too loudly, please forgive me. I raise this because I honestly believe the +20% perf tweak to be of benefit to the majority, and worth a comparatively short wait to implement and test. We haven't mentioned this, but it is easy to produce custom versions with these tweaks in for people who want them; that approach, however, detracts from general adoption of the software. As a result, it seems likely that *not* implementing the patch could result in greater personal benefit to me, though that's not a stance I ever take.

Could we implement this behind a GUC, say memory_protect_functions, defaulting to true, which does not appear in postgresql.conf?
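As a rough sketch only - the GUC name, grouping, and description here are hypothetical, and this assumes the 8.0-era boolean-GUC table conventions in src/backend/utils/misc/guc.c - such an undocumented option might be declared with the GUC_NOT_IN_SAMPLE flag so it is omitted from postgresql.conf.sample:

```c
/* Hypothetical entry in the ConfigureNamesBool[] table in guc.c.
 * Illustration only; the variable memory_protect_functions is assumed
 * to be declared elsewhere and consulted by the aggregate code.
 */
{
    {"memory_protect_functions", PGC_USERSET, DEVELOPER_OPTIONS,
        gettext_noop("Protects aggregate transition values by copying them "
                     "rather than updating them in place."),
        NULL,
        GUC_NOT_IN_SAMPLE       /* keep it out of postgresql.conf.sample */
    },
    &memory_protect_functions,
    true,                       /* default: robustness on, fast path off */
    NULL, NULL
},
```

A user who accepted the risk could then issue SET memory_protect_functions = false for a session, without the option ever being advertised in the sample configuration file.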
Any potential instability would then be guarded against by default, and use of the option partially discouraged. That would allow people who understand the risk to disable the robustness in favour of performance - which would be relatively safe when using the built-in aggregate functions.

I'll commit to doing some hard testing on this, if we agree to implement it.

Thanks, Best Regards, Simon Riggs