Re: [Gluster-users] glusterfs performance issues - meta

2013-01-08 Thread Whit Blauvelt
On Tue, Jan 08, 2013 at 04:49:30PM +0100, Stephan von Krawczynski wrote:

> > Pointing out that a complex system can go wrong doesn't invalidate complex
> > systems as a class. It's well established in ecological science that more
> > complex natural systems are far more resilient than simple ones. A rich,
> > complex local ecosystem has a higher rate of stability and survival than a
> > simple, poorer one. That's assuming the systems are evolved and have niches
> > well-fitted with organisms - that the complexity is organic, not just
> > random.
> 
> That is a good example of excluded corner cases, just like the current
> split-brain discussion. All I need to do to invalidate your complex natural
> system is to throw a big stone at it. Ask the dinosaurs for real-life
> experience after that.

Throw a big enough stone and anything can be totally crushed. The question
is one of resilience when the stone is less than totally crushing. The
ecosystem the big stone was thrown at, which included the dinosaurs, survived
because in its complexity it also included little mammals - which themselves
were more complex organisms than the dinosaurs. Not that some simpler
organisms didn't make it through the extinction event too. Plenty did. The
chicken I ate for dinner is a descendant of feathered dinosaurs.

Take two local ecosystems, one more complex than the other. Throw in some
big disturbance, the same size of disruption in each. On average, the
complex local ecosystem is more likely to survive and bounce back, while the
simple one is more likely to go into terminal decline. This is field data,
not mere conjecture. Your argument here could be that technological systems
don't obey the same laws as ecosystems. But work in complexity theory shows
that the right sorts of complexity produce greater stability across a broad
range of systems, not just biological ones. 

Free, open source software's particular advantage is that it advances in a
more evolutionary manner than closed software, since there is evolutionary
pressure from many directions on each part of it, at every scale.
Evolutionary pressure produces complexity, the _right sort_ of complexity.
That's why Linux systems are more complex, and at the same time more stable
and manageable, than Windows systems. 

Simplicity does not have the advantage. Even when smashing things with
rocks, the more complex thing is more likely to survive the assault, if it
has the right sort of complexity.

Best,
Whit


Re: [Gluster-users] glusterfs performance issues - meta

2013-01-08 Thread Stephan von Krawczynski
On Tue, 8 Jan 2013 11:44:15 -0500
Whit Blauvelt <whit.glus...@transpect.com> wrote:

> On Tue, Jan 08, 2013 at 04:49:30PM +0100, Stephan von Krawczynski wrote:
> 
> > > Pointing out that a complex system can go wrong doesn't invalidate complex
> > > systems as a class. It's well established in ecological science that more
> > > complex natural systems are far more resilient than simple ones. A rich,
> > > complex local ecosystem has a higher rate of stability and survival than a
> > > simple, poorer one. That's assuming the systems are evolved and have niches
> > > well-fitted with organisms - that the complexity is organic, not just
> > > random.
> > 
> > That is a good example of excluded corner cases, just like the current
> > split-brain discussion. All I need to do to invalidate your complex natural
> > system is to throw a big stone at it. Ask the dinosaurs for real-life
> > experience after that.
> 
> Throw a big enough stone and anything can be totally crushed. The question
> is one of resilience when the stone is less than totally crushing. The
> ecosystem the big stone was thrown at, which included the dinosaurs, survived
> because in its complexity it also included little mammals - which themselves
> were more complex organisms than the dinosaurs. Not that some simpler
> organisms didn't make it through the extinction event too. Plenty did. The
> chicken I ate for dinner is a descendant of feathered dinosaurs.
> 
> Take two local ecosystems, one more complex than the other. Throw in some
> big disturbance, the same size of disruption in each. On average, the
> complex local ecosystem is more likely to survive and bounce back, while the
> simple one is more likely to go into terminal decline. This is field data,
> not mere conjecture. Your argument here could be that technological systems
> don't obey the same laws as ecosystems. But work in complexity theory shows
> that the right sorts of complexity produce greater stability across a broad
> range of systems, not just biological ones.
> 
> Free, open source software's particular advantage is that it advances in a
> more evolutionary manner than closed software, since there is evolutionary
> pressure from many directions on each part of it, at every scale.
> Evolutionary pressure produces complexity, the _right sort_ of complexity.
> That's why Linux systems are more complex, and at the same time more stable
> and manageable, than Windows systems.
> 
> Simplicity does not have the advantage. Even when smashing things with
> rocks, the more complex thing is more likely to survive the assault, if it
> has the right sort of complexity.

Listen, I don't really want to lengthen the discussion about complexity issues
in ecosystems. But let me point out the fundamental flaw in your example as
you now frame it: a natural ecosystem has no _goal_ of existence, whereas
programmed code should at least have _some_.
That means you can use it as a negative example of why something does not
work, but you cannot hold it up as a positive example of why something should
work. Glusterfs(d) has a clearly stated reason for being; an ecosystem has
none. So the fact that _something_ survived a crashing ecosystem does not
prove that equally complex code will do something useful after an equally
complex crash. In fact it most certainly will not.
The contrary is true. You should strip complexity down to the lowest possible
level to make the code more obvious, and therefore more debuggable and
readable to a larger number of people. That will have a positive effect on its
stability. But if you throw in more and more code for fragile corner cases,
instead of drawing a clear line between what clearly works and what clearly
fails, you will never end up at the desired state where everything runs
stably.
This path is as wrong as it was to release a complete fileserver installation
image back in the old days of glusterfs.
In the end, everything boils down to the question of where effort is best
invested to make the project more successful. And it's really not that hard to
find out what the biggest show stopper is: simply count the articles on this
list dealing with performance issues and with the strange effects of files not
being synced during _normal_ operation. There is not much else left.
Have you read the recent filesystem comparison between NFS, Samba, Ceph and
Glusterfs in a German Linux magazine? Guess who came last...
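
Tallying one month of the list archive is easy to script. The following is a
minimal Python sketch, assuming a pipermail-style text archive; the URL and
the keyword list are illustrative guesses only, not from the original mail:

    # Count subject lines in one month of a pipermail-style archive that
    # mention the usual hot topics. URL and keywords are illustrative
    # guesses, not canonical values.
    import re
    import urllib.request

    ARCHIVE = ("http://supercolony.gluster.org/pipermail/"
               "gluster-users/2013-January.txt")
    KEYWORDS = ("performance", "split brain", "split-brain", "sync", "slow")

    text = urllib.request.urlopen(ARCHIVE).read().decode("utf-8", "replace")
    subjects = re.findall(r"^Subject:\s*(.*)$", text, flags=re.MULTILINE)

    hits = [s for s in subjects if any(k in s.lower() for k in KEYWORDS)]
    print(f"{len(hits)} of {len(subjects)} subjects mention a hot-button topic")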

> Best,
> Whit

-- 
Regards,
Stephan
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users