On 4/11/2011 7:59 AM, Christopher Dolan wrote:
> ...
> These conditions are hard to reproduce in a typical lab, because they
> require large numbers of machines and deliberately misconfigured DNS.
> I'd appreciate any thoughts that others have about Reggie scaling
> issues.
> ...

I can't comment on the specific issue, but I do have a general concern
about scalability.

I've had many opportunities to compare the educated performance
estimates of expert programmers to actual measurements. Scalability
almost always involves surprises. The bottlenecks programmers expect are
not the ones that matter.

The biggest difference I've found between being a performance architect
working for computer manufacturers and being an open source programmer
is the difference in opportunities for scalability measurement.

When I was working for computer manufacturers, if a benchmark needed to
be fast on a system with P processors and M gigabytes of memory, I would
measure and profile it on a system with P processors and M gigabytes of
memory.
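
Concretely, something like the following minimal probe is what I mean
by measuring the scaling curve instead of guessing it. This is only a
sketch: the workload, durations, and class name are placeholders, not
anything from Reggie; the real service operation would replace
unitOfWork().

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal scalability probe: run the same workload at doubling thread
 * counts and report throughput, so the scaling curve is measured
 * rather than guessed. The workload below is a placeholder.
 */
public class ScalingProbe {

    private static volatile long sink; // defeats dead-code elimination

    // Stand-in for the operation whose scaling is in question.
    private static long unitOfWork() {
        long x = 0;
        for (int i = 0; i < 10000; i++) {
            x += i * 31L;
        }
        return x;
    }

    public static void main(String[] args) throws InterruptedException {
        int maxThreads = Runtime.getRuntime().availableProcessors();
        for (int threads = 1; threads <= maxThreads; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            final AtomicLong ops = new AtomicLong();
            final long deadline =
                System.nanoTime() + TimeUnit.SECONDS.toNanos(5);
            for (int t = 0; t < threads; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        while (System.nanoTime() < deadline) {
                            sink = unitOfWork();
                            ops.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
            System.out.println(threads + " threads: "
                + (ops.get() / 5) + " ops/sec");
        }
    }
}

Of course, a probe like this only tells you about the machine it runs
on, which is exactly the problem I describe below.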

Now, I'm trying to write scalable code while measuring only on my home
computer. I can do my best to project performance on larger systems, but
I *know* how little value unmeasured performance projections really have.
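
To put a number on that: even the simplest projection model, Amdahl's
law, speedup(N) = 1 / (s + (1 - s) / N), turns on a serial fraction s
that only measurement can supply. The fractions in this sketch are
invented; on 64 processors they project roughly 60x, 39x, and 15x
speedups, and nothing visible on a small machine distinguishes them.

/**
 * Amdahl's-law projection: speedup(N) = 1 / (s + (1 - s) / N),
 * where s is the serial fraction of the work. The serial fractions
 * below are invented; the point is how sensitive the projection is
 * to a parameter that only measurement can pin down.
 */
public class AmdahlSketch {

    static double speedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction
            + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        for (double s : new double[] { 0.001, 0.01, 0.05 }) {
            System.out.printf("s = %.3f -> 64-way speedup = %.1f%n",
                s, speedup(s, 64));
        }
    }
}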

The best solution I can think of is some form of collaboration with
people who do have the hardware resources to measure scalability.
Perhaps the user community has some ideas?

Patricia
