OK, that's fair enough. But the thing is, in the particular approach to AGI
I'm taking, the compression ability of the overall AGI system at preliminary
and intermediate stages of development is not a very interesting thing
to look at. Even though some internal aspects of the system can be
fruit
On Apr 24, 2007, at 2:35 PM, Benjamin Goertzel wrote:
Let's call this the "University of Phoenix" test.
Does anyone have an argument against this test for AGI? Clearly it is
a sufficient but not necessary condition for human-level AGI, just
like the Turing test.
I think it makes sense to
Of course compression is not a requirement for AGI. I just think it is a
useful tool for development.
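As a minimal illustration of compression as a development-progress metric (a sketch only: it uses zlib as a stand-in compressor, not any particular AGI system's code — a better predictive model of the data would yield a smaller compressed size):

```python
import zlib

def compression_ratio(text: str, level: int = 9) -> float:
    """Compressed bytes per input byte; lower means the compressor
    captured more of the data's structure."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level)) / len(raw)

# Highly regular text compresses far better than unstructured text,
# so the ratio crudely tracks how much regularity the model exploits.
regular = "the cat sat on the mat. " * 100
print(round(compression_ratio(regular), 3))
```

The same measurement, with a learned model in place of zlib, is the idea behind text-compression benchmarks as intelligence proxies.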
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> For instance, if someone built a robotic dog that was as good as a real
> dog at perception, cognition and action, I would consider that a big step
> toward powerful AGI.
For instance, if someone built a robotic dog that was as good as a real dog
at perception, cognition and action, I would consider that a big step toward
powerful AGI. But dogs really suck at compression. (Yeah, their brains may
carry out compression operations internally. But, if you
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> As Loosemore has argued, compression is a poor AGI test in general, as
> shown by the fact that humans are generally intelligent but are poor
> compressors! Some AGIs may be great compressors, others not.
Well it is true that people are poor
As Loosemore has argued, compression is a poor AGI test in general, as shown
by the fact that humans are generally intelligent but are poor compressors!
Some AGIs may be great compressors, others not.
Novamente, as it happened, once it became highly generally intelligent, could
be turned into a great compressor
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> I don't think there are any good, general incremental tests for progress
> toward AGI.
Compression?
-- Matt Mahoney, [EMAIL PROTECTED]
This list is sponsored by AGIRI: http://www.agiri.org/email
Well, in my 1993 book "The Structure of Intelligence" I defined intelligence
as
"The ability to achieve complex goals in complex environments."
I followed this up with a mathematical definition of complexity grounded in
algorithmic information theory (roughly: the complexity of X is the amount of
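The standard algorithmic-information-theoretic formulation this gestures at (a sketch; the book's exact definition may differ in detail):

```latex
% Kolmogorov complexity of X relative to a universal machine U:
K_U(X) = \min \{\, \ell(p) \;:\; U(p) = X \,\}
% K is uncomputable, so a concrete compressor C gives an upper bound:
K_U(X) \le \ell(C(X)) + c_C
```

The second line is why compressed size keeps coming up in this thread as a practical stand-in for complexity.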
On 4/24/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I actually like the "University of Phoenix" test for AGI.
> Of course, all you really need to do is pass the exams.
Well, I intentionally **didn't** suggest just passing the exams.
My version of the University of Phoenix test requires some
I think it'd be hard to make incremental progress on the Phoenix Test
before the AI has AGI-complete understanding of natural language and
the environment. That is, like the Turing Test, it serves as a
propositional test for completion but not an incremental test for
progress.
Agreed... I w
If language mastery is a prime requirement of AGI, I think you're buggered
[technical term]. That's like trying to jump a billion or more years of
evolution in ... what? .. a few years?
There are many good reasons why language was such an EXTREMELY belated
development in the evolution of general intelligence
I actually like the "University of Phoenix" test for AGI.
Of course, all you really need to do is pass the exams. We have already done
that with the word analogies section of the SAT. (Maybe that is why they
removed it).
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> I also don'
--- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 24, 2007 at 01:08:02PM -0700, Matt Mahoney wrote:
>
> > Thus, the fallacy of immortality through uploading is exposed. A copy of
> you
> > is not you.
>
> No. As long as there's no fork all systems are identical. There's no
> measurement which allows you to tell one discrete synchronized system from
Benjamin Goertzel wrote:
Distribution requirements mean that the AGI must master a number of
different skills (math, writing, critical thinking, etc.). Also,
some classes require intelligent conversation with the prof and
other students, though there is not any requirement for flawless
humanlike
I also don't think you will recognize AGI. You have never seen examples of
it. Earlier I posted examples of Google passing the Turing test, but nobody
believes that is AGI. If nothing is ever labeled AGI, then nothing ever will
be.
Google does not pass the Turing test. Giving human-like r
On Tuesday 24 April 2007 13:28, Mike Tintner wrote:
> I'd be interested to know, JSH, why your early teleological AI efforts
> failed.
Actually, it succeeded about as well as most of the AI projects of the day
did, in the sense that we built a big LISP program, produced results on a
handful of
On Tue, Apr 24, 2007 at 01:08:02PM -0700, Matt Mahoney wrote:
> Thus, the fallacy of immortality through uploading is exposed. A copy of you
> is not you.
No. As long as there's no fork all systems are identical. There's no
measurement which allows you to tell one discrete synchronized system from
--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:
> The more problematic issue is what happens if you non-destructively up-load
> your mind? What do you do with the original which still considers itself
> you? The up-load also considers itself you and may suggest a bullet.
Thus, the fallacy of immortality through uploading is exposed. A copy of you
is not you.
I like the thoughts here.
My hunch is that the human ability to learn new activities is based on
conceiving all of them as goal-seeking journeys, in which we have to try to
find the way to our goals, using a set of basic paths and series of basic
steps [literally steps, if you're walking somewhere
On 24/04/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
Mining plus matching, analogy, and interpolation/extrapolation. The key to
making it work is to form the abstractions that allow the robot/AI to
interpret the actions as "grasp broom; lower until it touches floor" instead
of "move hand
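The interpolation piece of this can be sketched in a few lines of Python (hypothetical joint-angle waypoints and plain linear interpolation — an illustration of the idea, not JSH's actual pipeline):

```python
def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate between two recorded joint-angle poses,
    with t in [0, 1] sweeping from pose_a to pose_b."""
    return [a + (b - a) * t for a, b in zip(pose_a, pose_b)]

# Two hypothetical recorded waypoints for "lower until it touches floor":
raised  = [0.0, 1.2, -0.5]   # shoulder, elbow, wrist angles (radians)
lowered = [0.0, 0.3, -1.4]
midpoint = lerp_pose(raised, lowered, 0.5)
```

Mined demonstrations supply the waypoints; interpolation fills in the motion between them, and the symbolic abstraction decides which waypoints apply.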
On Tue, Apr 24, 2007 at 07:21:31AM -0700, Eric B. Ramsay wrote:
> Your twin example is not a good choice. The upload will consider it
It's the same in principle, though. The only difference is that
you'll be getting a really young 'copy' (not as exact a copy
as a real upload; I know).
Your twin example is not a good choice. The upload will consider itself to have
a claim on the contents of your life - financial resources for example.
Eugen Leitl <[EMAIL PROTECTED]> wrote:
On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:
> The more problematic issue is what ha
On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:
>The more problematic issue is what happens if you non-destructively
>up-load your mind? What do you do with the original which still
It's a theoretical problem for any of us on this list. Nondestructive
scans require medical
The more problematic issue is what happens if you non-destructively up-load
your mind? What do you do with the original which still considers itself you?
The up-load also considers itself you and may suggest a bullet.
Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- "John G. Rose" wrote:
> A baby
On Tuesday 24 April 2007 07:42, Bob Mottram wrote:
> Incidentally, once a significant amount of data is recorded from human use
> of a telerobot getting the robot to do some things autonomously becomes a
> data mining exercise.
Mining plus matching, analogy, and interpolation/extrapolation. The k
The prospect of robots causing harm is not only a theoretical SIAI-style
consideration. At the moment I'm adding a manipulator arm to a telerobot,
and the intention here is to allow some useful household jobs to be done
using the robot, such as sweeping or mopping up, via an internet-based
interface
How will it handle the Mid-East crisis?
God comes crying to me every night about that one. I tell Him to shut up, be
a Man and get on with it.
Or the Iraq crisis?
As for humanising the US gun laws - even God doesn't go there.
How will it sell more Coke, or get Yahoo back on top of Google?
H