On 12/5/06, Richard Loosemore wrote:

There are so few people who speak up against the conventional attitude
to the [rational AI/irrational humans] idea that it is such a relief to
hear any of them speak out.

I don't know yet if I buy everything Minsky says, but I know I agree
with the spirit of it.

Minsky and Hofstadter are the two AI thinkers I most respect.



The customer reviews on Amazon are rather critical of Minsky's new book.
They seem to be complaining that the book is more of a general
discussion than a detailed specification for building an AI engine.  :)
<http://www.amazon.com/gp/product/customer-reviews/0743276639/ref=cm_cr_dp_pt/102-3984994-3498561?ie=UTF8&n=283155&s=books>


The good news is that Minsky appears to be making the book available
online on his web site, at least for now.     *Download quick!*

<http://web.media.mit.edu/~minsky/>
See under publications, chapters 1 to 9.
The Emotion Machine 9/6/2006    ( 1 2 3 4 5 6 7 8 9 )


I very much like Minsky's summing-up at the end of the book:


---------------------
All of these kinds of inventiveness, combined with our unique
expressiveness, have empowered our communities to deal with huge
classes of new situations. The previous chapters discussed many
aspects of what gives people so much resourcefulness:

We have multiple ways to describe many things—and can quickly switch
among those different perspectives.
We make memory-records of what we've done—so that later we can reflect on them.
We learn multiple ways to think so that when one of them fails, we can
switch to another.
We split hard problems into smaller parts, and use goal-trees, plans,
and context stacks to help us keep making progress.
We develop ways to control our minds with all sorts of incentives,
threats, and bribes.
We have many different ways to learn and can also learn new ways to learn.
We can often postpone a dangerous action and imagine, instead, what
its outcome might be in some Virtual World.


Our language and culture accumulates vast stores of ideas that were
discovered by our ancestors. We represent these in multiple realms,
with metaphors interconnecting them.

Most every process in the brain is linked to some other processes. So,
while any particular process may have some deficiencies, there will
frequently be other parts that can intervene to compensate.

Nevertheless, our minds still have bugs. For, as our human brains
evolved, each seeming improvement also exposed us to the dangers of
making new types of mistakes. Thus, at present, our wonderful powers
to make abstractions also cause us to construct generalizations that
are too broad, to fail to deal with exceptions to rules, to accumulate
useless or incorrect information, and to believe things because our
imprimers do. We also make superstitious credit assignments, in which
we confuse real things with ones that we merely imagine; then we become
obsessed with unachievable goals, and set out on unbalanced, fanatical
searches and quests. Some persons become so unwilling to acknowledge a
serious failure or a great loss that they try to relive their lives of
the past. Also, of course, many people suffer from mental disorders
that range from minor incapacities to dangerous states of dismal
depression or mania.

We cannot expect our species to evolve ways to escape from all such
bugs because, as every engineer knows, most
every change in a large complex system will introduce yet other
mistakes that won't show up till the system moves to a different
environment. Furthermore, we also face an additional problem: each
human brain differs from the next because, first, it is built by pairs
of inherited genes, each chosen by chance from one of its parent's
such pairs. Then, during the early development of each brain, many
other smaller details depend on other, small accidental events. An
engineer might wonder how such machines could possibly work, in spite
of so many possible variations.

To explain how such large systems could function reliably, quite a few
thinkers have suggested that our brains must be based on some
not-yet-understood 'holistic' principles, according to which every
fragment of process or knowledge is 'distributed' (in some unknown
global way) so that the system still could function well in spite of
the loss of any part of it because such systems act as though they
were "more than the sums of all their parts." However, the arguments
in this book suggest that we do not need to look for any such magical
tricks—because we have so many ways to accomplish each job that we can
tolerate the failure of many particular parts, simply by switching to
using alternative ones. (In other words, we function well because we
can perform with far less than the sum of all of our parts.)

Furthermore, it makes sense to suppose that many of the parts of our
brains are involved with helping to correct or suppress the effects of
defects and bugs in other parts. This means that we will find it hard
to guess both how our brains function as well as they do and why they
evolved in the ways that they did, until we have had more experience
at trying to build such systems ourselves, to learn which kinds of
bugs are likely to appear and to find ways to keep them from disabling
us.

In the coming decades, many researchers will try to develop machines
with Artificial Intelligence. And every system that they build will
keep surprising us with its flaws (that is, until those machines
become clever enough to conceal their faults from us). In some cases,
we'll be able to diagnose specific errors in those designs and then be
able to remedy them. But whenever we fail to find any such simple fix,
we will have little choice except to add more checks and balances—for
example, by adding increasingly elaborate Critics. And through all
this, we can never expect to find any foolproof strategy to balance
the advantage of immediate action against the benefit of more careful,
reflective thought. Whatever we do, we can be sure that the road
toward designing 'post-human minds' will be rough.
------------------------
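
A couple of those points map quite naturally onto familiar programming
patterns. For the "split hard problems into smaller parts, and use
goal-trees, plans, and context stacks" item, here is a toy sketch of my
own (nothing of the sort appears in the book; the names Goal and
achieve are simply made up for illustration): a goal is either a leaf
with an action or a decomposition into subgoals, and an explicit stack
of pending goals keeps the whole pursuit making progress.

# Toy sketch, mine and not Minsky's: a goal-tree pursued with an
# explicit stack of pending goals (a crude "context stack").
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Goal:
    name: str
    action: Optional[Callable[[], bool]] = None   # how to achieve a leaf goal
    subgoals: List["Goal"] = field(default_factory=list)  # decomposition into parts


def achieve(top: Goal) -> bool:
    """Depth-first pursuit of a goal-tree, keeping unfinished goals on a stack."""
    stack = [top]                       # the stack of pending goals
    while stack:
        goal = stack.pop()
        if goal.subgoals:               # split the hard problem into smaller parts
            stack.extend(reversed(goal.subgoals))
        elif goal.action is not None and goal.action():
            print("done:", goal.name)
        else:
            print("stuck on:", goal.name)
            return False                # a real system would try another way here
    return True


# Example: "make tea" decomposed into smaller steps.
make_tea = Goal("make tea", subgoals=[
    Goal("boil water", action=lambda: True),
    Goal("add tea bag", action=lambda: True),
    Goal("pour water", action=lambda: True),
])
achieve(make_tea)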
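
And for the idea that we "have so many ways to accomplish each job that
we can tolerate the failure of many particular parts, simply by
switching to using alternative ones", combined with the remark about
adding "increasingly elaborate Critics" as checks and balances, a
similarly invented sketch: several candidate ways to think about a
problem are tried in turn, each answer is vetted by simple critic
functions, and the first answer that survives is accepted.

# Toy sketch, again mine rather than anything from the book: multiple
# "ways to think", each checked by Critics, with a switch to the next
# way whenever one fails or is vetoed.
from typing import Callable, Iterable, List, Optional

Way = Callable[[str], Optional[str]]      # a way to think: problem -> answer or None
Critic = Callable[[str, str], bool]       # a critic: (problem, answer) -> acceptable?


def think(problem: str, ways: Iterable[Way], critics: List[Critic]) -> Optional[str]:
    """Try each way in turn; accept the first answer that no critic vetoes."""
    for way in ways:
        try:
            answer = way(problem)
        except Exception:
            continue                      # this way broke; switch to another
        if answer is not None and all(c(problem, answer) for c in critics):
            return answer                 # survived the checks and balances
    return None                           # every way failed: time for more Critics?


# Example usage with deliberately silly ways and critics.
ways = [
    lambda p: None,                       # a way that simply gives up
    lambda p: p.upper(),                  # a way that "answers" by shouting
]
critics = [
    lambda p, a: len(a) >= len(p),        # vetoes answers that lose information
]
print(think("what is 2+2?", ways, critics))   # -> "WHAT IS 2+2?"

Both are obviously caricatures of what Minsky has in mind, but they do
make the "far less than the sum of all of our parts" point concrete:
the system as a whole keeps working even when individual ways fail.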


BillK
