Excerpts from Jay Pipes's message of 2016-06-21 12:47:46 -0400:
> On 06/21/2016 04:25 AM, Chris Dent wrote:
> > However, I worry deeply that it could become astronauts with finger
> > paints.
>
> Yes. This.
>
> I will happily take software design suggestions from people that
> demonstrate with code and benchmarks that their suggestion actually
> works outside of the theoretical/academic/MartinFowler landscape and
> actually solves a real problem better than existing code.
So, I want to be careful not to descend too far into reductionism. Nobody
is suggesting that an architecture WG goes off into the corner and applies
theory to problems without practical application.

However, some things aren't measured in how fast, or even how reliable, a
piece of code is. Some things are measured in how understandable the
actual code and deployment is. If we architect a DLM solution, refactor
out all of the one-off DLM-esque things, and performance and measured
reliability stay flat, did we fail? What about when the locks break, and
an operator has _one_ way to fix locks, instead of 5? How do we measure
that?

So I think what I want to focus on is: we have to have some real basis on
which to agree that an architecture that is selected is worth the effort
to refactor things around it. But I can't promise that every problem has
an objectively measurable solution. Part of the point of having a working
group is to develop a discipline for evaluating hard problems like that.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev