On 1/2/2013 10:31 PM, Simon Forman wrote:
On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay <alan.n...@yahoo.com> wrote:
The most recent discussions get at a number of important issues whose
pernicious snares need to be handled better.

In an analogy to sending messages "most of the time successfully" through
noisy channels -- where the noise also affects whatever we add to the
messages to help (and we may have imperfect models of the noise) -- we have
to ask: what kinds and rates of error would be acceptable?

We humans are a noisy species. And on both ends of the transmissions. So a
message that can be proved perfectly "received as sent" can still be
interpreted poorly by a human directly, or by software written by humans.

A wonderful "specification language" that produces runnable code good enough
to make a prototype is still going to require debugging, because it is hard
to get the spec-specs right (even with a machine version of human-level AI
to help with "larger goals" comprehension).

As humans, we are used to being sloppy about message creation and sending,
and rely on negotiation and good will after the fact to deal with errors.

We've not done a good job of dealing with these tendencies within
programming -- we are still sloppy, and we tend not to create negotiation
processes to deal with various kinds of errors.

However, we do see something that is "actual engineering" -- with both care
in message sending *and* negotiation -- where "eventual failure" is not
tolerated: mostly in hardware, and in a few vital low-level systems which
have to scale pretty much "finally-essentially error-free" such as the
Ethernet and Internet.
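As a toy illustration of "care in message sending *and* negotiation" (a sketch only, not any real protocol; the function names and parameters here are invented): a stop-and-wait-style exchange in which the sender attaches a checksum and retransmits until the receiver acknowledges an intact frame.

```python
import hashlib
import random

random.seed(1)  # deterministic demo

def checksum(payload: bytes) -> bytes:
    """Care in sending: attach a short digest so corruption is detectable."""
    return hashlib.sha256(payload).digest()[:4]

def noisy_channel(frame: bytes, flip_prob: float = 0.2) -> bytes:
    """Simulated noise: sometimes flip one bit of the frame in transit."""
    if random.random() < flip_prob:
        data = bytearray(frame)
        i = random.randrange(len(data))
        data[i] ^= 0x01
        return bytes(data)
    return frame

def send_with_negotiation(payload: bytes, max_tries: int = 100) -> int:
    """Negotiation after the fact: the receiver checks the digest and the
    sender retransmits until a frame gets through. Returns attempts used."""
    frame = checksum(payload) + payload
    for attempt in range(1, max_tries + 1):
        received = noisy_channel(frame)
        digest, body = received[:4], received[4:]
        if digest == checksum(body):  # receiver ACKs an intact frame
            return attempt
        # otherwise the receiver NAKs and the sender tries again
    raise RuntimeError("channel too noisy: gave up")

attempts = send_with_negotiation(b"a message sent through a noisy channel")
print(f"delivered intact after {attempts} attempt(s)")
```

Note that the noise here can also corrupt the checksum itself (the "whatever we add to the messages to help" above); the receiver still detects the mismatch either way, and the retry loop is the negotiation.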

My prejudices have always liked dynamic approaches to problems with error
detection and improvements (if possible). Dan Ingalls was (and is) a master
at getting a whole system going in such a way that it has enough integrity
to "exhibit its failures" and allow many of them to be addressed in the
context of what is actually going on, even with very low-level failures. It
is interesting to note the contributions from what you can say statically
(the higher the level of the language, the better) -- what can be done with
"meta" (the more dynamic and deep the integrity, the more powerful and safe
"meta" becomes) -- and the tradeoffs of modularization (hard to sum up, but
as humans we don't give all modules the same care and love when designing
and building them).

Mix in real human beings and a world-wide system, and what should be done?
(I don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers
contrasted with engineers. The second is human systems contrasted with
biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million
engineers (some of them in computing). The current estimates of "programmers
in the US" are about 1.3 million (US Dept of Labor counting "programmers and
developers"). Also, the Internet and multinational corporations, etc.,
internationalize the impact of programming, so we need an estimate of the
"programmers world-wide" -- probably another million or two? Add in the ad hoc
programmers, etc.? The populations are similar enough in size to make the
contrasts in methods and results quite striking.

Looking for analogies, to my eye what is happening with programming is more
similar to what has happened with law than with classical engineering.
Everyone will have an opinion on this, but I think it is partly because
nature is a tougher critic on human-built structures than humans are on each
other's opinions, and part of the impact of this is amplified by the simpler,
shorter-term liabilities of imperfect structures for human safety than of
imperfect laws (one could argue that the latter are much more of a disaster
in the long run).

And, in trying to tease useful analogies from Biology, one I get is that the
largest gap in complexity of atomic structures is the one from polymers to
the simplest living cells. (One of my two favorite organisms is Pelagibacter
ubique, which is the smallest non-parasitic standalone organism. Discovered
just 10 years ago, it is the most numerous known bacterium in the world, and
accounts for about 25% of all of the plankton in the oceans. Still, it has
about 1,300+ genes, etc.)

What's interesting (to me) about cell biology is just how much stuff is
organized to make "integrity" of life. Craig Venter thinks that a minimal
hand-crafted genome for a cell would still require about 300 genes (and even
the tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law, and the one here
should be scrutinized -- but this one harmonizes with one of Butler
Lampson's conclusions/prejudices: that you are much better off making --
with great care -- a few kinds of relatively big modules as basic building
blocks than to have zillions of different modules being constructed by
vanilla programmers. One of my favorite examples of this was the "Beings"
master's thesis by Doug Lenat at Stanford in the 70s. And this influenced
the partial experiment we did in Etoys 15 years ago.

There is probably a "nice size" for such modules -- large enough to both
provide and be well tended, and small enough to minimize internal disasters.

An interesting and important design problem is to try to (a) vet this idea
in or out, and (b) if in, then what kinds of "semi-universal" modules would
be most fruitful?

One could then contemplate trying to induce most programmers to program in
terms of these modules (they would be the components of an IDE for commerce,
etc., instead of the raw programming components of today). This tack would
almost certainly also help the mess the law is in going forward ...

Note that desires for runnable specifications, etc., could be quite
harmonious with a viable module scheme that has great systems integrity.

Cheers,

Alan

Reminds me of Margulis' endosymbiotic theory that cells are
communities of smaller organisms which glommed together and
inter-adapted to the point of appearing to be organelles in a single
organism.  Mitochondria are an example of something between organism
and organelle.

this works for eukaryotic cells (plant & animal), but it has problems for prokaryotic cells (like bacteria).

for example, mitochondria have their own DNA, as do chloroplasts, and in both cases there are bacteria which have a similar internal structure and similar DNA (such as between chloroplasts and cyanobacteria).


the problem with prokaryotic cells is that they are themselves fairly minimal, and the parts they do have (such as ribosomes) have no clear function outside of a cell.


although, this reminds me of something I had idly wondered before:
whether structures vaguely similar to ribosomes could be used to build rudimentary molecular processing devices; instead of RNA strands driving protein synthesis, they could be used to execute commands, possibly allowing something like cells with basic microcontrollers.

granted, it would also require some mechanism for it to do something (trigger an action in the cell), and probably also some mechanism to implement things like memory or state (idle thought: maybe a strand or ring of RNA which would function similarly to the paper tape in a Turing machine, and/or "stateful proteins" which could bend, or change chemical or electrical properties or similar, when triggered, ...).

probably with the sequence to build the device being included in the cell's genome or similar (so each cell gets a CPU, ...).

or such...
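as a toy sketch of the idle thought above (all commands and names here are invented for illustration, and this is not a claim about real molecular biology): a strand of commands read head-to-tail like paper tape, flipping "stateful protein" switches and conditionally triggering actions in a cell-like object.

```python
class ToyCell:
    """A cell-like object with 'stateful protein' switches and an action log."""
    def __init__(self):
        self.switches = {}  # protein name -> bent (True) / relaxed (False)
        self.log = []       # actions the strand triggered, in order

    def trigger(self, action):
        self.log.append(action)

def execute_strand(cell, strand):
    """Walk the strand like a paper tape, one command at a time.
    Invented command set: ('BEND', p) / ('RELAX', p) flip protein p;
    ('IF', p, action) triggers action only while p is bent."""
    for cmd in strand:
        op = cmd[0]
        if op == "BEND":
            cell.switches[cmd[1]] = True
        elif op == "RELAX":
            cell.switches[cmd[1]] = False
        elif op == "IF":
            if cell.switches.get(cmd[1], False):
                cell.trigger(cmd[2])

cell = ToyCell()
execute_strand(cell, [
    ("BEND", "p1"),
    ("IF", "p1", "divide"),
    ("RELAX", "p1"),
    ("IF", "p1", "divide"),  # p1 is relaxed now, so nothing fires
])
print(cell.log)  # → ['divide']
```

the "stateful protein" here is just a boolean, but it plays the memory role in the idle thought: the tape supplies the program, the switches supply the state.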

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
