I think this is the real point here, and you put it well.

One of the books I've admired greatly over the years is "Molecular Biology 
of the Cell" -- especially the "green one" (3rd edition?) -- which does a 
terrific job of starting with a few findings of chemistry and spinning out a 
pretty good and deep coverage of the subject in about 1000 extremely readable 
pages.

This is not quite a good enough analogy for what we are talking about, because 
it doesn't have to be constructive.


If we look at elementary geometry books and decide -- subjectively as you say 
-- that one seems to be the clearest description of how to construct geometry 
up to some point, then this is perhaps a much better analogy because we've got 
at least three different representation systems working together with the 
important indefinable of "style" that brings life to structures.

Hard to measure .... and not everyone agrees that a particular artifact is 
worthy of great respect.

On the other hand, we've all had encounters with such creations, and have felt 
lifted by them.

Cheers,

Alan




________________________________
From: John Zabroski <johnzabro...@gmail.com>
To: Fundamentals of New Computing <fonc@vpri.org>
Sent: Tue, July 13, 2010 4:01:03 PM
Subject: Re: [fonc] goals

Of course, in many ways, code size is not at all related to performance, and 
you might have discovered the smallest code that models a problem, but that 
code has a "bug" in that its performance is mostly a function of evaluation 
strategy (e.g., the call-by-need performance model is not even compositional 
with respect to the lexical syntax!).  If we separate meaning from 
specification, then this is no longer true, but we've increased complexity in 
the meta-interpreter.  It is not "the muck of the human political process" we 
are trying to get out of.  Instead, it is the Turing tarpit we are trying to 
step out of.  Already, many of our systems are like fossils.
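
To make the evaluation-strategy point concrete, here is a minimal Haskell 
sketch (an illustration only, not code from any system discussed here): the 
textually smallest definition of "sum a list" carries a performance "bug" 
that lives entirely in the evaluation strategy, not in the visible syntax.

    -- Illustrative sketch: same code size, very different cost model
    -- under call-by-need.
    import Data.List (foldl')

    -- The "smallest" model of summing a list.
    lazySum :: [Integer] -> Integer
    lazySum = foldl (+) 0      -- lazily builds ~n unevaluated thunks

    -- One apostrophe of difference in the source; foldl' forces the
    -- accumulator at each step and runs in constant space.
    strictSum :: [Integer] -> Integer
    strictSum = foldl' (+) 0

    main :: IO ()
    main = do
      -- lazySum [1 .. 10^7] can exhaust memory with thunks (unless the
      -- compiler's strictness analysis happens to rescue it), while
      -- strictSum runs in constant space, yet the parse trees are
      -- nearly identical.
      print (strictSum [1 .. 10 ^ (7 :: Int)])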

For example, traditional measures of software complexity, such as Cyclomatic 
Complexity, are basically metrics on parse trees.  You simply count the 
appearances of a set of static productions, and you get a rough idea of the 
complexity of the software.  But these metrics don't apply well to languages 
whose cores make heavy use of partial application.  Can you spot the bug in 
[1]?  Another measure of complexity is the Dependency Structure Matrix (DSM), 
which measures dependencies.  But these "dependencies" are based on linking 
and loading dependencies -- again, evaluation strategy -- and not problem 
domain abstraction issues.  Actually, a DSM does in a way show problem domain 
abstraction issues, but it looks at them in terms of dependencies.  Likewise, 
Cyclomatic Complexity does show problem domain abstraction issues, but it 
looks at them in terms of the degree to which you are not explicitly modeling 
the context in which messages pass to and from objects.  Neither is the true 
thing: the real measure is simply how well your problem domain is abstracted, 
which is subjective, based on requirements in most problems, and cannot be 
summarized with algebraic equations -- math is only ONE problem domain, and if 
we base our non-math systems entirely on functional decomposition then we 
will get spaghetti code as a result, since modeling non-math systems as math 
systems is a problem domain abstraction error.
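
As a small illustration of that blind spot (a hypothetical Haskell sketch of 
my own, not an example taken from [1]): the two functions below implement the 
same decision logic, but a parse-tree metric only sees the branches in the 
first one.

    import Data.List  (find)
    import Data.Maybe (fromMaybe)

    -- Version A: explicit guards.  A production-counting metric sees the
    -- three branches and reports a cyclomatic complexity of about 3.
    classifyA :: Int -> String
    classifyA n
      | n < 0     = "negative"
      | n == 0    = "zero"
      | otherwise = "positive"

    -- Version B: the same decisions, hidden behind partial application
    -- and library combinators.  There is no if/case/guard production left
    -- to count, so a naive metric reports roughly 1, although the
    -- problem-domain structure is identical.
    classifyB :: Int -> String
    classifyB n = fromMaybe "positive" (snd <$> find (($ n) . fst) table)
      where
        table = [((< 0), "negative"), ((== 0), "zero")]

    main :: IO ()
    main = mapM_ (\n -> print (classifyA n, classifyB n)) [-3, 0, 5]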

[1] http://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR141.pdf  FOR FUN: Where is 
the bug here?  The authors claim they are measuring the *economic* 
expressiveness of languages.  Show me at least one counter-example and explain 
why your counter-example shows this is a cargo cult science (there are many 
famous programming language papers about this, and you can feel free to just 
point to one of those).


On Sun, Jul 11, 2010 at 10:20 PM, Max OrHai <max.or...@gmail.com> wrote:


>
>
>On Sat, Jul 10, 2010 at 3:22 AM, Steve Dekorte <st...@dekorte.com> wrote:
>
>>It seems as if each computing culture fails to establish a measure for its 
>>own goals, which leaves it with no means of critically analyzing its 
>>assumptions, resulting in the technical equivalent of religious dogma. From 
>>this perspective, new technical cultures are more like religious reform 
>>movements than new scientific theories, which are measured by agreement with 
>>experiment. E.g., had the Smalltalk community said "if it can reduce the 
>>overall code >X without a performance cost >Y, it's better", perhaps 
>>prototypes would have been adopted long ago.

>But code size versus performance is only one of many concurrent trade-offs, 
>when it comes to defining 'better'. Different individuals or groups can have 
>legitimately different needs. The more people are involved (and the more 
>invested they are), the more difficult the consensus-building process. 
>Measurements can help, but they are human artifacts as well, in their own 
>way. They don't necessarily pull you up out of the muck of the human 
>political process.
>
>
>I'd say the issue isn't with computing culture per se, but with culture in 
>general. There's a big gap between Science as the rational, disinterested 
>pursuit of knowledge and any engaged "technical culture", even of people as 
>enlightened (and as few) as Smalltalkers.
>
>-- Max

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
