I am completely unable to understand what this paragraph is supposed to
mean:

***
One reasonable way of avoiding the "humans are magic" explanation of
this (or "humans use quantum gravity computing", etc) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.
***

Explanation of WHAT?  Of your intuitive feeling that you are uncomputable,
that you have no limits in what you can do?

Why is this intuitive feeling any more worthwhile than some people's
intuitive feeling that they have some kind of absolute free will not
allowed by classical physics or quantum theory?

Personally my view is as follows.  Science does not need to intuitively
explain all aspects of our experience: what it has to do is make
predictions about finite sets of finite-precision observations, based on
previously-collected finite sets of finite-precision observations.

It is not impossible that we are unable to engineer intelligence, even
though we are intelligent.  However, your intuitive feeling of awesome
supercomputable powers seems an extremely weak argument in favor of this
inability.

You have not convinced me that you can do anything a computer can't do.

And, using language or math, you never will -- because any finite set of
symbols you can utter could also be uttered by some computational system.
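
To make that last point concrete, here is a minimal sketch in Python (the
example string and the helper name are arbitrary placeholders): whatever
finite string of symbols a person utters, some trivial program outputs
exactly the same string.

# Minimal sketch: any finite sequence of symbols is the output of some program.
# The utterance below is an arbitrary placeholder.
utterance = "No computer could ever produce this sentence."

def utter(s: str) -> None:
    # A "computational system" that utters exactly the given finite string.
    print(s)

utter(utterance)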

-- Ben G




On Tue, Oct 21, 2008 at 9:13 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Charles,
>
> You are right to call me out on this, as I really don't have much
> justification for rejecting that view beyond "I don't like it, it's
> not elegant".
>
> But, I don't like it! It's not elegant!
>
> About the connotations of "engineer"... more specifically, I should
> say that this prevents us from making one universal normative
> mathematical model of intelligence, since our logic cannot describe
> itself. Instead, we would be doomed to make a series of more and more
> general models (AIXI being the first and most narrow), all of which
> fall short of human logic.
>
> Worse, the implication is that this is not because human logic sits at
> some sort of maximum; human intelligence would be just another rung in
> the ladder from the perspective of some mathematically more powerful
> alien species, or human mutant.
>
> --Abram
>
> On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
> <[EMAIL PROTECTED]> wrote:
> > Abram Demski wrote:
> >>
> >> Ben,
> >> ...
> >> One reasonable way of avoiding the "humans are magic" explanation of
> >> this (or "humans use quantum gravity computing", etc) is to say that,
> >> OK, humans really are an approximation of an ideal intelligence
> >> obeying those assumptions. Therefore, we cannot understand the math
> >> needed to define our own intelligence. Therefore, we can't engineer
> >> human-level AGI. I don't like this conclusion! I want a different way
> >> out.
> >>
> >> I'm not sure the "guru" explanation is enough... who was the Guru for
> >> Humankind?
> >>
> >> Thanks,
> >>
> >> --Abram
> >>
> >>
> >
> > You may not like "Therefore, we cannot understand the math needed to
> > define our own intelligence.", but I'm rather convinced that it's
> > correct.  OTOH, I don't think that it follows from this that humans
> > can't build a better than human-level AGI.  (I didn't say "engineer",
> > because I'm not certain what connotations you put on that term.)  This
> > does, however, imply that people won't be able to understand the
> > "better than human-level AGI".  They may well, however, understand
> > parts of it, probably large parts.  And they may well be able to
> > predict with fair certitude how it would react in numerous situations.
> > Just not in numerous other situations.
> >
> > Care, then, must be taken in designing so that we can predict
> > favorable motivations behind the actions in important-to-us areas.
> > Even this is probably impossible in detail, but then it doesn't *need*
> > to be understood in detail.  If you can predict that it will make
> > better choices than we can, and that its motives are benevolent, and
> > that it has a good understanding of our desires... that should suffice.
> > And I think we'll be able to do considerably better than that.



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


