On 9/14/2012 12:12 PM, David Boyes wrote:
>>> All this talk about 'reliable code for our users' is total BS until
>>> 'make check' actually does some realistic functionality tests.
>>> If you can't write an automated test for a feature, then I would
>>> request we consider disabling that feature.
>
> I'm not sure this is a realistic goal in a single-machine environment. For a
> realistic testing environment, you need at least 10 system images (and the
> ability to create a lot more would be very desirable for some of the subtler
> bugs), and the ability to control time and replay multiple streams of events
> in a repeatable way. It involves a separate Kerberos infrastructure, a lot
> of other moving parts, and a lot of scripting to build the environment,
> run the test, and then reset the environment for the next run. You also need
> different types of systems, different OS levels, etc., which complicates the
> test even further.
>
> It's not impossible, but I can say that it cost a fair amount (in the
> mid-to-high five-digit range) to build that environment here.
David,

In this case I think you are low-balling the estimate. To do it right, it isn't sufficient to test one build against itself: you need to test new clients against a range of old servers, and vice versa, in a constrained environment. You also need to be able to identify when a change has an adverse impact on performance as well as on correctness, and you need the ability to introduce intentional errors at various points in the protocol. The hardware costs alone are in the mid five digits, and the software development is significantly more than that.

Jeffrey Altman
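The client/server cross-version matrix with protocol fault injection can be sketched as follows. The version labels and fault names are illustrative only, not real OpenAFS releases, and `run_case` is a placeholder for a harness that would provision both versions and inject faults at the wire level.

```python
from itertools import product

# Hypothetical cross-version interoperability matrix with optional
# protocol fault injection, as argued for above.

CLIENTS = ["client-old-a", "client-old-b", "client-new"]   # illustrative
SERVERS = ["server-old-a", "server-old-b", "server-new"]   # illustrative
FAULTS = [None, "drop-ack", "truncate-reply", "delay-callback"]

def run_case(client, server, fault):
    """Placeholder for one interop run; a real harness would deploy both
    versions and inject the named fault into the protocol stream."""
    return {"client": client, "server": server, "fault": fault, "ok": True}

matrix = [run_case(c, s, f) for c, s, f in product(CLIENTS, SERVERS, FAULTS)]
# 3 clients x 3 servers x 4 fault modes = 36 cases per build
```

Even this toy matrix shows why the cost estimate grows quickly: every new client or server release multiplies the case count, and each fault mode multiplies it again.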