On 9/19/06, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> This would depend on whether the extremely smart AGI wanted us to know that
> it is smarter than we are. If it had that desire, it could e.g. formulate a
> proof of the Poincaré Conjecture in such a way that it was as
> accessible to an average person as possible.
On Sep 18, 2006, at 6:25 PM, Stefan Pernar wrote:
Hi Matt,
On 9/18/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Suppose in case 1, the AGI is as much smarter than humans as humans are smarter
than monkeys. How would you convince a monkey that you are smarter than it? How
could an AGI convince you, other than to demonstrate that you cannot control it?
On 9/18/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
I think that before we can debate whether AGI will be friendly, there are some
fundamental questions that need to be answered first.
1. If we built an AGI, how would we know if we succeeded?
2. How do we know that AGI does not already exist?
Suppose in case 1, the AGI is as much smarter than humans as humans are smarter
than monkeys. How would you convince a monkey that you are smarter than it? How
could an AGI convince you, other than to demonstrate that you cannot control it?