John,

Have you been able to find any good definitions for your use of
"trustworthiness"? The wikipedia article
about trustworthy computing [1] makes it sound like something which
originated in Microsoft's marketing department.

Using intuitive definitions, the three metrics you mention seem to
be synonymous. Code size, when measured as "the number of thoughts it takes
to conceptualize it", is synonymous with "complexity". As for the right way
to write "trustworthy" code, the only convincing argument I've heard is from
SPJ:

"Tony Hoare has this wonderful turn of phrase in which he says your code
> should obviously have no bugs rather than having no obvious bugs. So for me
> I suppose beautiful code is code that is obviously right. It's kind of
> limpidly transparent." -Simon Peyton Jones, from Peter Seibel's "Coders At
> Work"


Just keep it as simple (and short) as possible.

Cheers,
Andrey

1. http://en.wikipedia.org/wiki/Trustworthy_Computing

On Fri, Feb 26, 2010 at 6:15 PM, John Zabroski <johnzabro...@gmail.com> wrote:

> I've been following this project for a long time, and only recently joined
> the mailing list.
>
> For a long time, I did not fully understand Alan Kay's thoughts on software
> architecture, despite reading many of his press interviews and watching his
> public presentations.  What I've come to feel is that Alan has a partially
> complete vision, and some inconsistent viewpoints likely blocking a complete
> vision of computer science.
>
> For example, I had heard Alan refer to Lisp as Maxwell's Equations of
> computer science, but did not fully grasp what that really meant.  When I
> first played around with OMeta, I described it to a friend at MIT as
> "ridiculously meta". This idea was pretty much confirmed by Ian Piumarta's
> "widespread unreasonable behavior" whitepaper, which basically argues that
> we can't truly do "software engineering" until we actually know what that
> means, so the best approach is extreme late binding.  Using syntax-directed
> interpretation via PEGs is an obvious way to achieve this, as it addresses
> one of the three key stumbling blocks to building real "software
> engineering" solutions -- size.
>
> But I am not convinced VPRI really has a solution to the remaining two
> stumbling blocks: complexity and trustworthiness.
>
> In terms of complexity, I think I'll refer back to Alan Kay's 1997 OOPSLA
> speech, where he talks about doghouses and cathedrals.  Alan mentions Gregor
> Kiczales' The Art of the Metaobject Protocol as one of the best books
> written in the past 10 years on OOP.  I don't really understand this,
> because AMOP is entirely about extending the block-structured, procedural
> message passing approach to OO using computational reflection.  From what
> I've read about Smalltalk and the history of its development, it appears
> that the earliest version I could find described, Smalltalk-72, used an
> actor model for message passing.  While metaobjects allow implementation
> hiding, so do actors.  Actors seem like a far better solution, but they are
> also obviously not covered by Goldberg and Robson's Smalltalk-80 Blue Book.
> To be clear, I very much dislike Kiczales' model and
> think it says a lot about current practice in Java-land that most people
> abuse reflection through the use of tools like AspectJ.  Yet, aspect-weaving
> is also seen in the Executable UML realm, where you draw pictures about a
> problem domain, and separate concerns into their own domains. But it seems
> way more pure than AMOP because a model-driven compiler necessarily will
> bind things as late as necessary, in part thanks to a clockless, concurrent,
> asynchronous execution model.  The aspect-weaving seen here is therefore
> different, and the entire model is "connect the dots" using handshaking
> protocols.
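>
> (To make the actor point concrete: a minimal sketch in Python -- my own toy,
> not Smalltalk-72's actual mechanism -- where the state is private and every
> interaction goes through a mailbox, one message at a time:
>
>     # A minimal actor sketch: state is hidden, and callers interact only
>     # by sending messages to a mailbox that the actor drains sequentially.
>     from queue import Queue
>
>     class Counter:
>         def __init__(self):
>             self._count = 0        # private state, never read directly
>             self.mailbox = Queue()
>
>         def send(self, message, *args):
>             self.mailbox.put((message, args))
>
>         def run(self):             # drain the mailbox sequentially
>             while not self.mailbox.empty():
>                 message, args = self.mailbox.get()
>                 if message == 'increment':
>                     self._count += args[0] if args else 1
>                 elif message == 'report':
>                     args[0](self._count)   # reply via callback, not return
>
>     counter = Counter()
>     counter.send('increment')
>     counter.send('increment', 5)
>     counter.send('report', lambda n: print('count =', n))
>     counter.run()                  # prints: count = 6
>
> The caller never sees the representation; it only knows the message
> protocol, which is the same hiding a metaobject protocol buys you, without
> the reflective machinery.)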
>
> For me, besides the execution model, the other most basic measure of
> complexity is how many more features you can produce for each unit of
> complexity you add to the system.  UNIX hit a blocking point almost
> immediately due to its process model, where utility authors would tack on
> extra functions to
> command-line programs like cat.  This is where Kernighan and Pike coined the
> term "cat -v Considered Harmful", becaise cat had become way more than just
> a way to concatenate two files.  But I'd argue what K&P miss is that the
> UNIX process model, with pipes and filters as composition mechanisms on
> unstructured streams of data, not only can't maximize performance, it can't
> maximize modularity, because once a utility hits a performance wall, a
> programmer goes into C and adds a new function to a utility like cat so that
> the program does it all at once.  So utilities naturally grow to become
> monolithic.  Creating the Plan 9 and Inferno operating systems seems
> incredibly pointless from this perspective, and so does Google's Go
> programming language (even the tools for Go are monolithic).
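>
> (The composition model I'm criticizing fits in a few lines.  A sketch in
> Python of pipes-and-filters over unstructured lines -- hypothetical stage
> names, not real UNIX code:
>
>     # Pipes-and-filters in miniature: each stage consumes and produces a
>     # plain stream of lines and knows nothing about its neighbors.
>     def number(lines):                      # roughly cat -n
>         for i, line in enumerate(lines, 1):
>             yield '%6d  %s' % (i, line)
>
>     def grep(pattern, lines):               # roughly grep
>         return (line for line in lines if pattern in line)
>
>     text = ['alpha', 'beta', 'gamma', 'beta again']
>
>     # Composition by plugging streams together, like `grep beta | cat -n`:
>     for line in number(grep('beta', text)):
>         print(line)
>
>     # The monolithic alternative is to keep bolting options onto one stage
>     # (cat -n, cat -v, cat -s, ...) until that stage does everything itself.
>
> Once performance pushes work into a single stage, the second path wins by
> default, which is my point about utilities growing monolithic.)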
>
> Apart from AMOP, Alan has not really said much about what interests him and
> what doesn't interest him.  He's made allusions to people writing OSes in
> C++.  Fair enough, as C++ is a block-structured, procedural message passing
> solution that also features language constructs like pointers that tightly
> couple the programmer to a specific memory model.  Moreover, C++ has
> concurrency glued on via threads.  These three physical coupling issues
> (block-structured, procedural message passing; manual memory management;
> manual concurrency) are things the average programmer should never have to
> touch, and so that is all sort of an Original Sin that harms the complexity
> and trustworthiness of any application written in C++.  Edward A. Lee has a
> pretty good argument against threads from a complexity standpoint, in a
> paper called The Problem With Threads, where he talks about threads like
> they killed his whole family, and it's fantastic.  The argument is so simple
> and so dead-on accurate.  And for trustworthiness, C++ stinks because
> portability is pretty much a myth.  The language itself can't deal with
> portability, so programmers delegate it to the preprocessor.
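>
> (Lee's core observation is that shared-state threads make the set of
> possible interleavings explode, so even trivial programs become
> nondeterministic.  A sketch of the canonical lost-update example in Python
> -- mine, not taken from his paper:
>
>     # Two threads incrementing a shared counter without a lock: the
>     # read-modify-write in `count += 1` is not atomic, so increments can
>     # interleave and get lost.  The final value depends on the scheduler
>     # and is often less than 200000.
>     import threading
>
>     count = 0
>
>     def work():
>         global count
>         for _ in range(100000):
>             count += 1        # not atomic: load, add, store
>
>     threads = [threading.Thread(target=work) for _ in range(2)]
>     for t in threads:
>         t.start()
>     for t in threads:
>         t.join()
>     print(count)              # nondeterministic; often less than 200000
>
> That is the whole argument in miniature: the program text looks sequential
> and obvious, and the behavior is anything but.)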
>
> So I am fairly convinced PEGs are a good way to bring down size, but not
> complexity or trustworthiness.  So I've been looking around, asking, "Who is
> competing with VPRI's FONC project?"
>
> What I've basically come up with is UI-Urbana-Champaign and SRI
> International's term rewriting project, Maude.  Like OMeta, Maude does a
> pretty good job shrinking down the size of code, but it is fundamentally
> geared for executable specifications and not late binding, and also requires
> the programmer to think of 'valid programs' as confluent and terminating.
> Maude is pretty much the Smalltalk of term rewriting languages, and nothing
> really appears to compare to it.  People have interest in other term rewrite
> systems like Pure, but only because it leverages the "big idea of the
> decade" compiler-compiler concept LLVM.  I don't think LLVM is a good
> long-term solution here, so long-term Pure will have to write something like
> LLVM in their own host language.  Currently, Core Maude is written in C++
> and everything above Core Maude is written in Maude in terms of Core Maude.
> This is obviously upsetting to anyone with half a brain, since the one thing
> VPRI has gotten right so far is the fast bootstrap process.  On the
> complexity and trustworthiness end, Maude layers Object Maude and Mobile
> Maude on top of Core Maude.  To achieve trustworthiness, you need to think
> upfront about your requirements and pick which layer of Maude to write your
> solution in.
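>
> (For anyone who hasn't touched Maude: the execution model is essentially
> "apply equations as left-to-right rewrite rules until nothing matches," and
> confluence plus termination are what make that loop give a single answer
> and actually stop.  A toy sketch in Python -- nothing like Core Maude's
> real matching engine:
>
>     # A toy term rewriter: terms are nested tuples, rules are functions
>     # that either return a rewritten term or None.  Rewriting repeats
>     # until no rule applies anywhere; confluent and terminating rule sets
>     # are exactly what make this loop well-defined.
>     def rewrite(term, rules):
>         changed = True
>         while changed:
>             term, changed = rewrite_once(term, rules)
>         return term
>
>     def rewrite_once(term, rules):
>         for rule in rules:                  # try the whole term first
>             result = rule(term)
>             if result is not None:
>                 return result, True
>         if isinstance(term, tuple):         # otherwise recurse into subterms
>             for i, sub in enumerate(term):
>                 new, changed = rewrite_once(sub, rules)
>                 if changed:
>                     return term[:i] + (new,) + term[i+1:], True
>         return term, False
>
>     # Peano addition:  add(0, n) -> n,  add(s(m), n) -> s(add(m, n))
>     rules = [
>         lambda t: t[2] if isinstance(t, tuple) and t[:2] == ('add', '0')
>             else None,
>         lambda t: ('s', ('add', t[1][1], t[2]))
>             if isinstance(t, tuple) and t[0] == 'add'
>             and isinstance(t[1], tuple) and t[1][0] == 's' else None,
>     ]
>     two_plus_one = ('add', ('s', ('s', '0')), ('s', '0'))
>     print(rewrite(two_plus_one, rules))   # ('s', ('s', ('s', '0')))
>
> The interesting part is everything this sketch leaves out: Maude also gives
> you sorts, matching modulo associativity and commutativity, and a logic for
> reasoning about the rules themselves.)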
>
> The other competition I see is more language-agnostic, and is based on Yuri
> Gurevich's evolving algebras idea (aka Abstract State Machines).  These are
> covered in two books by Robert Stärk, both very readable.  Abstract
> State Machines do handle trustworthiness fairly well, but I dislike the fact
> it is based entirely on hierarchical decomposition.  This criticism could be
> unjustified, but intuitively I dislike it and prefer to reason about systems
> in terms of correctly emulating parts of a problem domain, with objects as
> peers interacting with each other.  If I have to decompose things, then I
> have to also start looking at a different problem domain entirely.
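>
> (At its most basic, an ASM is a set of guarded updates over a state that is
> just a finite function: every rule whose guard holds fires in the same
> step, and all of the updates land simultaneously.  A toy sketch in Python
> -- my own example, not one from the books:
>
>     # A bare-bones abstract state machine: the state is a dict, and one
>     # step collects the updates of every rule whose guard holds, then
>     # applies them all at once.
>     def asm_step(state, rules):
>         updates = {}
>         for guard, update in rules:
>             if guard(state):
>                 updates.update(update(state))   # collect, don't apply yet
>         new_state = dict(state)
>         new_state.update(updates)               # fire all updates together
>         return new_state
>
>     # A toy machine that swaps two registers and halts after enough steps.
>     rules = [
>         (lambda s: s['mode'] == 'run',
>          lambda s: {'x': s['y'], 'y': s['x'], 'steps': s['steps'] + 1}),
>         (lambda s: s['steps'] >= 3,
>          lambda s: {'mode': 'halt'}),
>     ]
>
>     state = {'mode': 'run', 'x': 1, 'y': 2, 'steps': 0}
>     while state['mode'] == 'run':
>         state = asm_step(state, rules)
>     print(state)   # {'mode': 'halt', 'x': 1, 'y': 2, 'steps': 4}
>
> My discomfort is not with this step semantics, which is clean, but with the
> way the refinement method stacks these machines into a strict hierarchy.)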
>
> Right now, I'd say it takes time to become an expert, and I'm not an expert
> in OMeta, Maude, or ASMs.  But I clearly grasp the size issue and how all
> these approaches appear to have solved the size problem.  What I have yet to
> grasp is the complexity and trustworthiness issues.
>
> On the trustworthiness front, I've been reading Vladimir Safonov's two
> books: (1) Trustworthy Compilers and (2) Using Aspect-Oriented Programming for
> Trustworthy Software Development.
>
> Where else should I look?  What do FONC people like Alan and Ian have to
> say?
>
>
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
