Compression can have many variants - encryption, encoding, storage
pre-processing, information distillation, extraction and assimilation,
lossy and lossless, all at different levels and durations of
preservation. If an AGI is receiving data from 1000 video cameras running
full time it ca
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> H... I think my point may have gotten lost in the confusion here.
>
> What I was trying to say was *suppose* I produced an AGI design that
> used pretty much the same principles as those that operate in the human
> cognitive system (non-det
Q. What is the life expectancy of a giraffe?
A. Only 25 to 50% of giraffe calves reach adulthood; the life expectancy is
between 20 and 25 years in the wild and 28 years in captivity. ...
en.wikipedia.org/wiki/Giraffe
OK, now try this one.
Q. What is the life expectancy of a dead giraffe?
The
Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
Would an AGI with exactly my (human) intelligence be able to pass your
compression test?
Only if your intelligence was uploaded to a deterministic machine. The human
brain is not deterministic.
Then your test is surely
--- James Ratcliff <[EMAIL PROTECTED]> wrote:
> Do you not agree that we can achieve AGI without massive compression?
> Given a large enough storage size?
No, storage has nothing to do with it. My claim is that AGI (on a
deterministic machine) + some simple code = capability to compress tex
>I think that to solve A, you have to solve B. The reason I proposed B is that
>it is easier to test, and maybe this will speed development. Of course it is
>the capabilities of A that will ultimately prove its usefulness.
Do you not agree that we can achieve AGI without massive compression?
G
Google is also close to natural
language understanding; it can answer simple questions of up to a few words.
Not really. Given a simple short question, it can present a human
with text that a human can recognize as containing the answer (among
other things). That is pretty different from answeri
--- James Ratcliff <[EMAIL PROTECTED]> wrote:
> I still don't really follow this entire line of argument either.
>
> You've given two main and dissimilar proposals here:
> A.
>
> An AGI residing in your PC should be able to do the same
> tasks as a human assistant, at least as fast and
>
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)
I wasn't aware when I posted, in response to your initial email, that you were
proposing a *test* to determine if a program was actually an AGI. This
test, I presume, would be passable only by an intelligent mind but wouldn't be
that mi
OK then, I'll get right on it. ;-) I'll start with the known laws of
physics and work my way out...
Hmmm... starting with quantum chromodynamics, Feynman integrals,
general relativity and such may make the process a long haul ;)
Seriously though: much relevant knowledge does exist, it just ha
On a related note, did anyone see any useful response to the query
about lists of inductive biases?
- Jef
No, but a bunch of people wrote me and said such a list would be useful ;-)
ben
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, pl
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> > Can you argue
> > that the representation is at least half of the information?
>
> Yes, I can. Take any case that involves a unit of measurement. The
> statements "John is x inches tall", "John is y centimeters tall", "John is
> a.b feet tall", "Jo
tation (which sounds like 100% of
added unnecessary effort to me).
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, April 19, 2007 2:15 PM
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)
--- David Clark <[EMAIL PROTECTED]> w
I don't disagree with "awesome compression abilities" as a test for
"advanced AGI"
However, I think that trying to achieve awesome compression by
incrementally improving current compressors, is sorta like trying to
reach the moon by incrementally improving current pogo sticks ;-)
A different sor
--- David Clark <[EMAIL PROTECTED]> wrote:
> Turing's test is obviously not sufficient for AGI. Why would an AGI waste
> its time learning to lie, miscompute numbers, simulate a forgetful memory,
> etc., to pass a test? Why would the creators of an AGI spend time and money
> to create the worst a
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, April 18, 2007 3:43 PM
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)
to add losslessness.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, April 18, 2007 9:19 PM
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)
--- Mark Waser <[EMAIL PROTECTED]> wrote:
>> I could have used a lossy test by using h
It occurs to me what problem I'm having with this definition of AI as
compression. There are two different tasks here: recognition of "sensory"
data and reproduction of it. It sounds like this definition proposes that
they are exactly equivalent, or that any recognition system is automatically
in
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> >> I could have used a lossy test by using human subjects to judge the
> >> equivalence of the reproduced output text, but it seemed like more
> >> trouble than it is worth. The lossless test is fair because everyone
> >> still has to encode the (inc
that requirement doesn't seem to have any benefit)."
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, April 18, 2007 6:43 PM
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)
I want to first clarify my earlier proposed definiti
--- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 3. Standing [3] had subjects memorize 10,000 pictures, one every 5.6 seconds,
> over 5 days. Two days later they could recall about 80% in tests. This is
> about the result you would get if you reduced each picture to a 16-bit
> feature vector and ch
I want to first clarify my earlier proposed definition of AGI, and then
address the concerns that were posted in response to my claim of the
equivalence of compression and AI. I will propose just one specific
application: an operating system for personal computers. An AGI residing in
your PC shou
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, April 17, 2007 7:15 PM
Subject: Goals of AGI (was Re: [agi] AGI interests)
> In http://cs.fit.edu/~mmahoney/compression/rationale.html I argue the
> equivalence of text compression
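The argument on that rationale page links compression ratio to prediction quality. As a minimal sketch of that link (my own example, using Python's stock DEFLATE compressor rather than any model from the page): a compressor's output size is a crude measure of how predictable its input is.

```python
# Illustration (mine, not from the rationale page): more predictable input
# compresses to fewer bytes, so compressed size acts as a predictability probe.
import random
import zlib

# Highly repetitive, therefore highly predictable, text.
text = ("the quick brown fox jumps over the lazy dog " * 50).encode()
# The same bytes in random order: identical symbol statistics, structure gone.
shuffled = bytes(random.sample(text, len(text)))

size_text = len(zlib.compress(text))
size_shuffled = len(zlib.compress(shuffled))
print(size_text < size_shuffled)  # → True: the predictable version wins big
```

A program that modeled English as well as a human reader would, by the same reasoning, compress text far beyond what DEFLATE's byte-level pattern matching achieves.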
This is jumping ahead of ourselves as well... we really have to prioritize and
take small steps... We first have to get it to a basic understanding of the
words and the direct interaction of these words... and just from text stories
even, not movies, before we can go to global moral plots and long te
rement
doesn't seem to have any benefit).
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, April 17, 2007 10:15 PM
Subject: Goals of AGI (was Re: [agi] AGI interests)
> On 4/17/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
Not only is each movie different for each person, it is different each time
one person sees it. The movie itself is different from the movie-witnessing
experience, and there seems to be a feeling that you could compress it by just
grabbing the inner experience. But you notice different things eac
[Spelling corrected and reworded...]
I'm not convinced by this reasoning. First, the way individuals store
audiovisual information differs, simply because of slight differences in
brain development (nurture). Also, memory is condensed information about the
actual high-level sensory/experience inf
On 4/18/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
...
I would go further and include lossy compression tests. In theory, you could
compress speech to 10 bits per second by converting it to text and using text
compression. The rate at which the human brain can remember video is not much
great
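The 10 bits per second figure is easy to reproduce with back-of-the-envelope arithmetic. The numbers below are my own assumed round figures, not values from the post: roughly 120 words per minute of speech, about 5 characters per word, and Shannon's classic estimate of about 1 bit of entropy per character of English.

```python
# Rough estimate of the bit rate of speech stored as compressed text.
# Assumed round numbers (not from the post): ~120 words/minute speaking rate,
# ~5 characters per word, ~1 bit/character entropy (Shannon's estimate).
WORDS_PER_MINUTE = 120
CHARS_PER_WORD = 5
BITS_PER_CHAR = 1.0

words_per_second = WORDS_PER_MINUTE / 60
bits_per_second = words_per_second * CHARS_PER_WORD * BITS_PER_CHAR
print(f"{bits_per_second:.0f} bits/second")  # → 10 bits/second
```

Different assumptions (faster speech, longer words, a higher entropy estimate) move the figure by a small factor, but it stays within an order of magnitude of 10.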
On 4/17/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
>
> A simple list, or set of goals for an AGI to accomplish reasonably I would
> find very useful, and something to work for.
I think an important goal is to solve the user interface problem. The current
approach is for the computer to present