On 1/3/2013 2:25 AM, Simon Forman wrote:
On Wed, Jan 2, 2013 at 10:35 PM, BGB <cr88...@gmail.com> wrote:
On 1/2/2013 10:31 PM, Simon Forman wrote:
On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay <alan.n...@yahoo.com> wrote:
The most recent discussions get at a number of important issues whose
pernicious snares need to be handled better.

In an analogy to sending messages "most of the time successfully" through
noisy channels -- where the noise also affects whatever we add to the
messages to help (and we may have imperfect models of the noise) -- we have
to ask: what kinds and rates of error would be acceptable?

We humans are a noisy species. And on both ends of the transmissions. So a
message that can be proved perfectly "received as sent" can still be
interpreted poorly by a human directly, or by software written by humans.

A wonderful "specification language" that produces runnable code good
enough to make a prototype is still going to require debugging, because it
is hard to get the spec-specs right (even with a machine version of
human-level AI to help with "larger goals" comprehension).

As humans, we are used to being sloppy about message creation and sending,
and rely on negotiation and good will after the fact to deal with errors.

We've not done a good job of dealing with these tendencies within
programming -- we are still sloppy, and we tend not to create negotiation
processes to deal with various kinds of errors.

However, we do see something that is "actual engineering" -- with both care
in message sending *and* negotiation -- where "eventual failure" is not
tolerated: mostly in hardware, and in a few vital low-level systems which
have to scale pretty much "finally-essentially error-free", such as the
Ethernet and Internet.
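
To make that "care in sending plus negotiation" pattern concrete, here is a
minimal stop-and-wait sketch in Python -- a toy model, not anything from the
actual Ethernet or TCP specifications: the sender takes care (a checksum and
a sequence number on every frame), and the two ends negotiate after the fact
(corrupted frames simply go unacknowledged, so the sender retransmits):

    import random, zlib

    def frame(payload: bytes, seq: int) -> bytes:
        # "care": tag each message with a sequence number and a CRC
        body = bytes([seq]) + payload
        return body + zlib.crc32(body).to_bytes(4, "big")

    def lossy(data: bytes, p_err: float = 0.3) -> bytes:
        # model channel noise: occasionally flip a single bit
        if random.random() < p_err:
            i = random.randrange(len(data))
            data = data[:i] + bytes([data[i] ^ 1]) + data[i + 1:]
        return data

    def unframe(data: bytes):
        body, crc = data[:-4], int.from_bytes(data[-4:], "big")
        if zlib.crc32(body) != crc:
            return None               # corrupted: withhold the ACK
        return body[0], body[1:]      # (sequence number, payload)

    def transfer(messages):
        received, seq = [], 0
        for msg in messages:
            while True:               # "negotiation": resend until ACKed
                result = unframe(lossy(frame(msg, seq)))
                if result is not None and result[0] == seq:
                    received.append(result[1])
                    break
            seq = (seq + 1) % 256
        return received

    print(transfer([b"hello", b"world"]))   # [b'hello', b'world']

Even this can fail silently on rare CRC collisions, which is why "what kinds
and rates of error would be acceptable?" is the right question rather than
"is it perfect?".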

My prejudices have always liked dynamic approaches to problems, with error
detection and improvements (if possible). Dan Ingalls was (and is) a master
at getting a whole system going in such a way that it has enough integrity
to "exhibit its failures" and allow many of them to be addressed in the
context of what is actually going on, even with very low level failures. It
is interesting to note the contributions from what you can say statically
(the higher the level of the language, the better) -- what can be done with
"meta" (the more dynamic and deep the integrity, the more powerful and safe
"meta" becomes) -- and the tradeoffs of modularization (hard to sum up, but
as humans we don't give all modules the same care and love when designing
and building them).

Mix in real human beings and a world-wide system, and what should be done?
(I don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers
contrasted with engineers. The second is human systems contrasted with
biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million
engineers (some of them in computing). The current estimates of
"programmers in the US" are about 1.3 million (US Dept of Labor counting
"programmers and developers"). Also, the Internet and multinational
corporations, etc., internationalize the impact of programming, so we need
an estimate of the "programmers world-wide" -- probably another million or
two? Add in the ad hoc programmers, etc.? The populations are similar
enough in size to make the contrasts in methods and results quite striking.

Looking for analogies, to my eye what is happening with programming is
more similar to what has happened with law than with classical engineering.
Everyone will have an opinion on this, but I think it is partly because
nature is a tougher critic on human-built structures than humans are on
each other's opinions, and part of the impact of this is amplified by the
simpler, shorter-term liabilities of imperfect structures on human safety,
compared with imperfect laws (one could argue that the latter are much more
of a disaster in the long run).

And, in trying to tease useful analogies from Biology, one I get is that
the largest gap in complexity of atomic structures is the one from polymers
to the simplest living cells. (One of my two favorite organisms is
Pelagibacter ubique, which is the smallest non-parasitic standalone
organism. Discovered just 10 years ago, it is the most numerous known
bacterium in the world, and accounts for 25% of all of the plankton in the
oceans. Still, it has about 1300+ genes, etc.)

What's interesting (to me) about cell biology is just how much stuff is
organized to make "integrity" of life. Craig Venter thinks that a minimal
hand-crafted genome for a cell would still require about 300 genes (and
even the tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law and the one here
should be scrutinized -- but this one harmonizes with one of Butler
Lampson's conclusions/prejudices: that you are much better off making --
with great care -- a few kinds of relatively big modules as basic building
blocks than having zillions of different modules constructed by vanilla
programmers. One of my favorite examples of this was the "Beings" master's
thesis by Doug Lenat at Stanford in the 70s. And this influenced the
partial experiment we did in Etoys 15 years ago.

There is probably a "nice size" for such modules -- large enough to both
provide and be well tended, and small enough to minimize internal
disasters.

An interesting and important design problem is to try to (a) vet this idea
in or out, and (b) if in, then what kinds of "semi-universal" modules would
be most fruitful?

One could then contemplate trying -- inducing -- to get most programmers
to program in terms of these modules (they would be the components of an
IDE for commerce, etc., instead of the raw programming components of
today). This tack would almost certainly also help the mess the law is in
going forward ...

Note that desires for runnable specifications, etc., could be quite
harmonious with a viable module scheme that has great systems integrity.

Cheers,

Alan

Reminds me of Margulis' endosymbiotic theory: that cells are communities
of smaller organisms that glommed together and inter-adapted to the point
of appearing to be organelles in a single organism.  Mitochondria are an
example of something partway between organism and organelle.

this works for eukaryotic cells (plant & animal), but it has problems for
prokaryotic cells (like bacteria).

for example, mitochondria have their own DNA, as do chloroplasts, and in
both cases there are bacteria which have a similar internal structure and
similar DNA (such as between chloroplasts and cyanobacteria).


the problem with prokaryotic cells is that they are themselves fairly
minimal, and what parts they do have (such as ribosomes) have no clear
function outside of a cell.


although, this reminds me of something I had idly wondered about before:
whether structures vaguely similar to ribosomes could be used to build
rudimentary molecular processing devices, where, instead of RNA strands
driving protein synthesis, they could be used to execute commands, possibly
allowing something like cells with basic microcontrollers.

granted, it would also require some mechanism for it to do something
(trigger an action in the cell), and probably also some sort of mechanism
to implement things like memory or state (idle thought: maybe a strand or
ring of RNA which would function similarly to the paper tape in a Turing
machine, and/or "stateful proteins" which could bend or change chemical or
electrical properties or similar when triggered, ...).

probably with the sequence to build the device being included in the cell's
genome or similar (so each cell gets a CPU, ...).
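
purely as a thought experiment (none of this corresponds to real molecular
machinery, and the "opcodes" are invented for illustration), the idea can be
modeled as a tiny tape-driven interpreter: codon-like triples on a strand
are read as commands, and the "stateful proteins" become named switches the
commands can flip:

    # toy model of the "ribosome as processor" idea: a strand of
    # codon-like triples acts as the command tape, and "stateful
    # proteins" are modeled as named boolean switches.  entirely
    # hypothetical; the command names are made up.

    TAPE = ["SET", "fluoresce", "ON",     # trigger an action in the cell
            "SET", "flagellum", "OFF",
            "JMPR", "fluoresce", "0"]     # crude control flow

    def run(tape, max_steps=10):
        state = {}                        # the cell's "stateful proteins"
        pc, steps = 0, 0
        while pc < len(tape) and steps < max_steps:
            if tape[pc] == "SET":         # flip a protein switch on/off
                state[tape[pc + 1]] = (tape[pc + 2] == "ON")
                pc += 3
            elif tape[pc] == "JMPR":      # jump if the named switch is on
                pc = int(tape[pc + 2]) if state.get(tape[pc + 1]) else pc + 3
            steps += 1                    # cap steps; this tape loops
        return state

    print(run(TAPE))   # {'fluoresce': True, 'flagellum': False}
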
Whoa, I think you just invented "nanotech organelles", at least this is
the first time I've heard that idea and it seems pretty mind-blowing.  What
would a cell use a CPU for?

mostly so that microbes could be programmed in a manner more like larger-scale computers.

say, the microbe has its basic genome and capabilities, which can be
treated more like hardware; a person can then write behavioral programs in
a C-like language or similar, compile them, and run them on the microbes.
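
as a sketch of what such a behavioral program might look like (done here in
Python rather than a C-like language, and with every sensor and actuator
name invented for illustration), the behavior could be a small sense->act
rule table running on top of the genome-as-hardware:

    # hypothetical "behavioral program" for an engineered microbe: the
    # genome and built-in capabilities are the hardware, and behavior
    # is a little sense->act program run on top of them.

    RULES = {
        "glucose_low":  [("flagellum", True)],       # swim toward food
        "glucose_high": [("flagellum", False),
                         ("divide", True)],          # settle, replicate
        "toxin":        [("efflux_pump", True)],     # defend
    }

    def step(sensors, actuators):
        """one control cycle: apply every rule whose sensor reads true."""
        for sensor, actions in RULES.items():
            if sensors.get(sensor):
                for actuator, value in actions:
                    actuators[actuator] = value
        return actuators

    print(step({"glucose_low": True, "toxin": True}, {}))
    # -> {'flagellum': True, 'efflux_pump': True}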

for larger organisms, the cells could possibly network together and form a
sort of biological computer; then you could have something the size of an
insect with several GB of storage and processing power rivaling a modern
PC, along with other possibilities, such as the ability to communicate via
WiFi or similar.

alternatively, there is the possibility of an organism with more powerful
neurons, such that rather than communicating via simple impulses, they can
send more complex messages (a neuron fires with extended metadata, ...).
neurons could then make more informed decisions about whether to fire off a
message.
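
a crude sketch of what "fires with extended metadata" might mean (the
message fields here -- urgency, hop count -- are invented for illustration):
the receiving neuron weighs the attached information, not just the bare
impulse, before deciding whether to propagate:

    # toy "neuron with metadata": messages carry extra fields, and the
    # receiver uses them to make a more informed decision about firing.
    from dataclasses import dataclass

    @dataclass
    class Spike:
        strength: float
        urgency: float  # metadata: how important the sender thinks it is
        hops: int       # metadata: how far the message has traveled

    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold, self.charge = threshold, 0.0

        def receive(self, s: Spike):
            # discount distant, low-urgency traffic instead of treating
            # every impulse identically
            self.charge += s.strength * s.urgency / (1 + s.hops)
            if self.charge >= self.threshold:
                self.charge = 0.0
                return Spike(1.0, s.urgency, s.hops + 1)  # fire onward
            return None   # informed decision: stay quiet

    n = Neuron()
    print(n.receive(Spike(0.5, 0.9, 0)))   # None (below threshold)
    print(n.receive(Spike(0.8, 1.0, 0)))   # fires: Spike(strength=1.0, ...)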

granted, yes, such organisms would likely be heavily engineered.


I think when we get in there we'll discover something a lot cooler
than Turing machines happening in cellular metabolism, but then I'm a
loon who thinks cells are sentient.. lol.

cells do lots of nifty stuff, but most of their functionality is based
more around cellular survival than around computational tasks.

so, you have microbes that eat things, or produce useful byproducts, but
none that actually accomplish specific tasks or perform actions on command
(such as, say, microbes which build things).

like, say, you could have a tub of gloop which, when instructed, could make
objects like cell-phones, ..., which a person can make use of; and when
done using an object, they can put it back into the tub and issue a
command, and the organic gloop would decompose it again (back into raw
materials).

then, if it runs low on raw materials, you can dump in some "gloop food", which gives it more of the things it needs both to survive and to build more stuff.


potentially, you could also make things like, essentially, "living robots",
which can perform the usual robotic tasks but can replicate themselves and
heal damage more like living organisms; or solar power plants that are,
essentially, giant plants; ...


although, yes, some of this does bring up the possibility of scary/nasty
stuff as well: say, living buildings which eat their occupants, mass
replication of near-indestructible critters, random emergence of things
like "the blob", ... (some could arise by accident, such as genetic
mutations leading to malfunction, or by deliberate acts, such as sabotage
or weaponized critters).


I was being more metaphorical in my first post.  I've been looking at an
"algorithmic botany" website ( http://algorithmicbotany.org/papers/ ) and
thinking about L-systems that, instead of being geometrical, somehow map
into "spaces" of...  I don't know exactly how to explain it.  They've got
dynamic models mimicking forest growth by modelling growth and light
availability and such.  What if, for example, your distributed application
could use something like this to allocate nodes or other resources
dynamically in response to usage and available resources?

It seems like a start on how to think about a program/OS that can
cooperate with copies of itself to cope dynamically with changing
conditions and inputs.
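
A minimal sketch of the idea (entirely invented, just to make it concrete):
an L-system whose rewrite rule is gated by a shared resource budget, so the
"tree" of service nodes only branches while capacity remains, a bit like
growth limited by light availability:

    # toy resource-gated L-system in the spirit of the
    # algorithmicbotany.org growth models: each pass rewrites every
    # leaf N -> (N, [N, N]), but only while the shared budget holds
    # out.  mapping this onto real node allocation is pure speculation.

    def grow(tree, budget, branching=2):
        if budget <= 0:
            return tree, budget
        if tree == "N":                   # leaf: try to spawn children
            children = []
            for _ in range(branching):
                if budget > 0:
                    children.append("N")
                    budget -= 1           # each new node consumes resource
            return ("N", children), budget
        root, kids = tree
        new_kids = []
        for kid in kids:
            kid, budget = grow(kid, budget, branching)
            new_kids.append(kid)
        return (root, new_kids), budget

    tree, budget = "N", 5                 # e.g. 5 spare machines
    for _ in range(3):                    # growth halts when the budget
        tree, budget = grow(tree, budget) # (the "light") is used up
    print(tree, "| budget left:", budget)

Feed the budget from live usage metrics, and prune the same way, and you get
growth and dieback responding to conditions, roughly what those forest
models do with light.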

yep, fair enough...

