Inline. On Sat, Feb 9, 2013 at 1:50 AM, Johan Holmquist <holmi...@gmail.com> wrote:
> I guess I fall more to the "reason about code" side of the scale
> rather than "testing the code" side. Testing seem to induce false
> hopes about finding all defects even to the point where the tester is
> blamed for not finding a bug rather than the developer for introducing
> it.

Performance-critical code should absolutely be analyzed regularly.
Trivial changes such as adding a field to a structure can have a huge
impact on performance. This is true in C or Haskell. When you find a
performance regression, use a profiler to help understand what caused
it.

> > Surely there will be a canary period, parallel running of the old
> > and new system, etc.?
>
> Is that common? I have not seen it and I do think my workplace is a
> rather typical one.

Yes, at least in some circles. I've seen it commonly done in one of
two forms:

a) The new version is rolled out to take a duplicated stream of
production traffic; its results, including timing, are checked against
the old version and then discarded. After some bake time (days or
more) without regressions, serve results from the new version and keep
the old version as the reference.

b) The new version is rolled out to a subset of the servers first,
(sometimes) initially biased towards serving employees or the local
intranet. Run for a day or two before rolling out to more servers.
Rinse, repeat.

> Also, would we really want to preserve the old "bad" code just
> because it happened to trigger some optimization?

The old code doesn't sound so bad if your goal is performance. The old
code should also give a lot of detail, in terms of Core and assembly,
that should help someone fix the new code.

-n
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe