On 8/9/2011 9:44 AM, Craig Weinberg wrote:
On Aug 9, 9:11 am, Stathis Papaioannou <stath...@gmail.com> wrote:
On Tue, Aug 9, 2011 at 10:12 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
We're not programmed to keep ourselves alive, we have to learn how to
do that ourselves. There is some vestigial programming from thousands
of years of hominid evolution, but that programming is actually
hostile to our survival now and we are having to hack into our own
diet to keep it from making us obese. A thermostat doesn't do that.
Living things, including humans, are programmed by evolution. Their
program includes algorithms which allow learning. Machines differ in
that they are programmed by humans rather than by evolution, but they
can also have algorithms that allow learning and alteration of
behaviour based on previously unseen environmental inputs. The Mars
rover is an example of such a machine. Machines aren't yet nearly as
smart as humans, but they are getting smarter all the time, while we
aren't.
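To make concrete what "algorithms that allow learning" means here, a
toy sketch in Python (purely illustrative - the agent, the actions and
the rewards are invented, and this is not how any actual rover is
programmed):

import random

class LearningAgent:
    # Toy value-learning agent: tries actions, keeps a running estimate
    # of how well each one works, and shifts its behaviour accordingly.
    def __init__(self, actions, epsilon=0.1):
        self.estimates = {a: 0.0 for a in actions}  # learned value per action
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon  # fraction of the time it explores

    def choose(self):
        # Mostly exploit what has worked; occasionally try the unseen.
        if random.random() < self.epsilon:
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, reward):
        # Incremental average: each new input nudges future behaviour.
        self.counts[action] += 1
        self.estimates[action] += \
            (reward - self.estimates[action]) / self.counts[action]

Which action ends up dominating is fixed by nothing in the code itself,
only by the stream of inputs the machine happens to encounter.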
I understand what you're saying, and indeed we can program machines to
simulate learning. That can be considered a form of learning in a broad
sense of the word, but it is not learning from the inside out.
The machine doesn't care if it learns. It's only going to learn what
we program it to learn. It has no capacity to decide what it wants to
learn for itself. There is a fundamental and unbridgeable chasm
between something doing something because it wants to or needs to and
something being programmed by an exterior source as a servant. I'm not
being poetic or sentimental about this; I'm asserting an ontological-
topological incompatibility.
Take a look at this quick video:
http://abject.ca/do-trees-communicate/
It can give us a hint at how organisms symbiotically feel their
environment to grow and protect themselves. This could be just the
beginning of our understanding of sensorimotive entanglement. Our
bodies have more bacteria in them than they do cells of their own. Our
minds use exterior behaviors and events as part of their functioning,
so that it's not clear that even an exact simulation of a brain by
itself would have the same kind of experience that we do.
Evolution is only a program in hindsight. It does not set out to do
anything; it's just a record of what happens to have worked in the
past. It's a teleonomy through which teleology is experienced. When a
program is generated teleologically, with foresight, the result is
limited to teleonomy... consequences. I can write a book, but a book
cannot write me, not even a really complicated electronic book that
tries pattern combinations by the trillion. It will never find the color
blue or the flavor of a plum. It won't guess what I remember from
fifth grade.
It might help to use the word 'organization' instead of machine. A
computer program is an organization, as is the legal system of a
nation or the assembly of a machine gun. We are an organization too,
but more than that: an organization of very specific and not
necessarily transferable organisms, made of specific and non-
transferable substances. The organization by itself - etched in
silicon or chiseled into ice - is meaningless. The substances and
organisms by themselves are meaningless also (as far as simulating a
person - they aren't meaningless to the organisms themselves). It
takes the mutual coherence of essential and existential topologies to
form a single conscious ontology.
Craig
Hi,
May I change the subject?
I think the fundamental point here is that computers do not have a
sense of self - not a sense of self in the world, nor even just a
sense of being. They just exist. I believe that the current notion of
computers is grossly oversimplified.
As I see it, computation is "the transformation of information", and
to limit the notion of information to just what can be represented by
a binary-valued logical algebra is pathetic and sad. Why not consider
the idea that all forms of information, a la Bateson (modulo Peirce),
fit the definition "a difference between two referents that is a
difference for a third referent"?
All of Shannon's ideas would still work as far as I can tell. This
yields Boolean algebras as a simple case. But consider a collection of
referents whose mutual differences and relations cannot be reduced to
two-valued logics: that would fall outside of the usual Turing
A-machine ideas...
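To make the triadic definition concrete, a rough Python sketch (my own
framing, not anything from Bateson or Peirce; the function name and
the 'observe' mapping are invented for illustration, and it shows only
the definition and its Boolean special case, not the stronger claim
about irreducibility):

import math

def makes_a_difference(a, b, observe):
    # Bateson-style information: a difference between two referents
    # (a, b) that is a difference for a third referent, represented
    # here by the third referent's response function.
    return observe(a) != observe(b)

# Boolean algebra drops out as the simplest case: two referents and a
# response that takes only two values - one bit of information.
bit = makes_a_difference(0, 1, observe=bool)             # True

# A richer case: responses not confined to two values; whether such
# collections of referents escape 2-logic is the open question above.
graded = makes_a_difference(2.0, 3.0, observe=math.log)  # True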
Onward!
Stephen