Mike,

On 5/28/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Steve: I have been advocating fixing the brain shorts that lead to
> problems, rather than jerking the entire world around to make brain shorted
> people happy.
>
> Which "brain shorts"? IMO the brain's capacity for shorts in one situation
> is almost always a capacity for short-cuts in another - and dangerous to
> tamper with.
>

It appears that the principle of reverse reductio ad absurdum is SO
non-obvious that it has escaped human notice for about a million years. In
its absence, we have countless needless problems, and have evolved into a
species that would rather fight than solve problems with advanced reasoning
methods (which we haven't had). Yes, I AM including AGIers in this list, and
excepting only those with a working understanding of reverse reductio ad
absurdum. By my count, that means that all but maybe a few dozen people on
the face of the earth are SERIOUSLY brain shorted, as will be any AGIs that
they construct.

This discussion reminds me of the floating-point format that IBM adopted on
their mainframe computers, complete with normalization that shifted 4 bits at
a time. As one CS person noted, it made roundoff errors faster than any other
computer on the face of the earth. Hence, let's first get things working
right, and then let's work on the shortcuts.
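For the curious, here is a minimal Python sketch (my own illustration, not
the actual S/360 hardware) of why that 4-bit normalization hurts: rounding a
value to a 24-bit fraction that is normalized in hexadecimal steps can leave
up to 3 wasted leading zero bits, so the roundoff error comes out larger than
on a machine that normalizes bit by bit.

import math

def round_hex_float(x, hex_digits=6):
    # Round x to an IBM/360-style hexadecimal float with hex_digits
    # fraction digits (6 hex digits = 24 bits for single precision).
    # Normalization moves in 4-bit steps, so the leading hex digit may
    # carry up to 3 wasted zero bits.
    if x == 0.0:
        return 0.0
    e = math.floor(math.log(abs(x), 16)) + 1   # 1/16 <= |x|/16**e < 1
    frac = x / 16.0 ** e
    scale = 16.0 ** hex_digits
    return round(frac * scale) / scale * 16.0 ** e

def round_bin_float(x, bits=24):
    # Same idea with binary normalization (1-bit steps) and 24
    # significant bits, roughly what competing machines of the era did.
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x))) + 1      # 1/2 <= |x|/2**e < 1
    frac = x / 2.0 ** e
    scale = 2.0 ** bits
    return round(frac * scale) / scale * 2.0 ** e

x = 0.1   # no exact representation in either format
print("hex (4-bit shifts):   relative error =", abs(round_hex_float(x) - x) / x)
print("binary (1-bit shifts): relative error =", abs(round_bin_float(x) - x) / x)

For this value the hex-style error comes out roughly twice the binary one,
and in the worst case (3 wasted bits) it is about eight times worse.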


>  Steve: Let's instead 1. make something USEFUL, like knowledge management
> programs that do things that people (and future AGIs) are fundamentally poor
> at doing
>
> Well, in principle, a general expert system that can be a problem-solving
> aid in many domains would be a fine thing. But - if you'll forgive the
> ignorance of this question - my impression was that expert systems were a
> big fad that has largely failed??? If you have a link to some survey here,
> I'd appreciate it.
>

My own Dr. Eliza incorporates the "missing pieces" that doomed prior
efforts. Not the least of these is coding regular expressions for what
people say when they both have a particular problem and are ignorant of its
workings. Surprisingly, this is not nearly as difficult as it sounds.
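To give a flavor of what I mean (this is only an illustrative Python sketch,
not Dr. Eliza's actual code, patterns, or problem labels), the idea is to key
regular expressions to the lay language that sufferers actually use, and map
each match to a condition worth pursuing:

import re

# Hypothetical patterns and labels, for illustration only.
PATTERNS = [
    (re.compile(r"\b(always|so)\s+tired\b.*\b(morning|wake)\w*\b", re.I),
     "possible sleep disorder"),
    (re.compile(r"\bheadache\b.*\b(screen|computer|monitor)\b", re.I),
     "possible eye strain"),
]

def match_complaint(text):
    # Return the labels whose lay-language patterns match the complaint.
    return [label for pattern, label in PATTERNS if pattern.search(text)]

print(match_complaint("I'm always tired when I wake up, no matter how long I sleep."))
# -> ['possible sleep disorder']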

>
> Steve, the capacity for general thinking/intelligence HAS to be - and is
> being - explored. William may be right that all the main AGI-ers are like
> him avoiding the challenge of general problem-solving, and hoping that the
> answer will "emerge" later on in the development of their systems. But
> roboticists are setting themselves general problems now - in the shape, if
> nothing else, of the ICRA challenge, as I've pointed out before.
>

This has been an ongoing effort for the last ~40 years, so while we all
remain hopeful, I am not expecting anything spectacular anytime soon.

Do you have some reason to expect a breakthrough?

Steve Richfield


