On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> I think your notion that post-grads with powerful machines would only
> operate in the space of ideas that don't work is unfair.
Yeah, I can agree - it was harsh. My real intention was to suggest
that NOT having a bigger computer is not ex
Mike Tintner said in his 10/3/2007 9:38 PM post:
I DON'T THINK AGI - CORRECT ME - HAS SOLVED A SINGLE CREATIVE PROBLEM -
E.G. CREATIVITY - UNPROGRAMMED ADAPTIVITY - DRAWING ANALOGIES - VISUAL
OBJECT RECOGNITION - NLP - CONCEPTS - CREATING AN EMOTIONAL SYSTEM -
GENERAL LEARNING - EMBODIED/ GROUN
--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > And text is the only data type with this property. Images, audio, executable
> > code, and seismic data can all be compressed with very little memory.
>
> How sure are we of that? Of course
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> And text is the only data type with this property. Images, audio, executable
> code, and seismic data can all be compressed with very little memory.
How sure are we of that? Of course all those things _can_ be
compressed with very little memor
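The "very little memory" point is easy to check with a streaming compressor, which holds only a fixed window no matter how long the input is. A minimal sketch in Python (zlib and the sample data are purely illustrative; neither comes from the thread):

```python
import zlib

# Stream ~9.6 MB through the compressor one ~1 KB chunk at a time.
# Memory use stays bounded by zlib's fixed 32 KB window plus one chunk,
# no matter how long the input stream grows.
comp = zlib.compressobj(9)
total_out = 0
for _ in range(10_000):
    total_out += len(comp.compress(b"seismic sample " * 64))
total_out += len(comp.flush())
print(total_out)  # a tiny fraction of the input, since the data is repetitive
```

The same chunk-at-a-time pattern works for images, audio, or executables, which is the sense in which those types need little memory to compress.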
Edward: The biggest brick wall is the small-hardware mindset that has
been absolutely necessary for decades to get anything actually accomplished on
the hardware of the day
Completely disagree. It's that purely numerical mindset about small/big
hardware that I see as so widespread and tha
--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Yes, but it has nothing to do with AI. You are modeling physics, a much
> > harder problem.
>
> Well, I think compression in general doesn't have much to do with AI,
> like I said before :) B
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> The same probably goes for text
> compression: a clever (but not intelligent) statistics-gathering
> algorithm on texts can probably do a much better job of compressing
> than human-like intelligence, which just chunks this information
> according to i
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> The biggest brick wall is the small-hardware mindset that has been
> absolutely necessary for decades to get anything actually accomplished on
> the hardware of the day. But it has caused people to close their minds to
> the vast power of b
On 10/4/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> > Suppose 50% is the absolute max you can get - that's still worth
> > having, in cases where you don't want to throw away data.
>
> But why is it going to correlate with intelligence?
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Yes, but it has nothing to do with AI. You are modeling physics, a much
> harder problem.
Well, I think compression in general doesn't have much to do with AI,
like I said before :) But I'm surprised you call physics modeling a
harder problem,
On 10/4/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Lossless video compression would not get far. The brightness of a pixel
> > depends on the number of photons striking the corresponding CCD sensor. The
> > randomness due to quantum me
Mike Tintner wrote in his Wed 10/3/2007 6:22 PM post:
"BUT THERE'S NO ONE REMOTELY CLOSE TO THE LEVEL, SAY, OF VON NEUMANN OR
TURING, RIGHT? AND DO YOU REALLY THINK A REVOLUTION SUCH AS AGI IS GOING
TO COME ABOUT WITHOUT THAT KIND OF REVOLUTIONARY, CREATIVE THINKER? JUST
BY TWEAKING EXISTING SYS
--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Lossless video compression would not get far. The brightness of a pixel
> > depends on the number of photons striking the corresponding CCD sensor. The
> > randomness due to quantum mechan
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Lossless video compression would not get far. The brightness of a pixel
> depends on the number of photons striking the corresponding CCD sensor. The
> randomness due to quantum mechanics is absolutely incompressible and makes up
> a significa
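Matt's point about incompressible randomness is simple to demonstrate with a toy sketch (Python's zlib, with os.urandom standing in for quantum/sensor noise; both are illustrative assumptions, not from the thread):

```python
import os
import zlib

structured = b"frame of sky " * 50_000   # highly regular "scene" data
noise = os.urandom(len(structured))      # stand-in for photon shot noise

# The regular data collapses to a sliver of its original size...
print(len(zlib.compress(structured, 9)))
# ...while the random noise barely shrinks at all (it may even grow slightly).
print(len(zlib.compress(noise, 9)))
```

No compressor can do better on the noise component: by a counting argument, most random strings have no shorter description, which is why the noise floor bounds lossless video compression.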
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
>
> Second, You mention the 3-body problem in Newtonian mechanics. Although
> I did not use it as such in the paper, this is my poster child of a
> partial complex system. I often cite the case of planetary system
> dynamics a
--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > I realize that a language model must encode both the meaning of a text string
> > and its representation. This makes lossless compression an inappropriate test
> > for evaluating models o
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote:
> Yes there are bright people in AGI. But there's no one remotely close to the
level, say, of von Neumann or Turing, right? And do you really think a
revolution such as AGI is going to come about without that kind of
revolutionary, cr
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
> When the first AGI is built, its first actions will be to make sure that
> nobody is trying to build a dangerous, unfriendly AGI.
Yes, OK, granted, self-preservation is a reasonable character trait.
> After that
> point, the
Lossless compression can be far from what intelligence does, because the
structure of categorization that intelligence performs on the world
probably doesn't correspond to its probabilistic structure.
As I see it, an intelligent system can't infer many universal laws that
will hold in the distant future a
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote:
>
> One of them once told me that in Japan it was common for high school boys
> who were interested in math, science, or business to go to abacus classes
> after school or on weekends. He said once they fully mastered using
> physi
Re: The following statement in Linas Vepstas's 10/3/2007 5:51 PM post:
P.S. THE INDIAN MATHEMATICIAN RAMANUJAN SEEMS TO HAVE MANAGED TO TRAIN A
SET OF NEURONS IN HIS HEAD TO BE A VERY FAST SYMBOLIC MULTIPLIER/DIVIDER.
WITH THIS, HE WAS ABLE TO SEE VAST AMOUNTS (SIX VOLUMES WORTH BEFORE DYING
AT A
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> > So do you claim that there are universal moral truths that can be applied
> > unambiguously in every situation?
>
> What a stupid question. *Anything* can be ambiguous if you're clueless.
> The moral truth of "Thou shalt not destroy the universe" is
On 9/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I realize that a language model must encode both the meaning of a text string
> and its representation. This makes lossless compression an inappropriate test
> for evaluating models of visual or auditory perception. The tiny amount of
> releva
RE: [agi] Religion-free technical content
Edward Porter: I don't know about you, but I think there are actually a lot of
very bright people in the interrelated fields of AGI, AI, Cognitive Science,
and Brain science. There are also a lot of very good ideas floating around.
Yes there are bright p
On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
[snipped parts of post agreed with]
> I think with a better understanding of this algorithm, that a visual
> perception knowledge base can be trained in a hierarchical manner, building
> from simple visual patterns to more abstract concepts. Jus
To Mike Dougherty regarding the below comment to my prior post:
I think your notion that post-grads with powerful machines would only
operate in the space of ideas that don't work is unfair.
A lot of post-grads may be drones, but some of them are cranking out some
really good stuff. The article, Learn
--- Mike Dougherty <[EMAIL PROTECTED]> wrote:
> On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The higher levels detect complex objects like airplanes or printed words or
> > faces. We could (lossily) compress images much smaller if we knew how to
> > recognize these features. The id
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not require
> recursive self improvement,
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use it,
> implying it is necessary for human-level AGI
On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> RSI is not necessary for human-level AGI.
How about: RSI will not be possible until human-level AGI.
Specifically, the AGI will need the same skills as its builders with regard to
language understanding, system engineering, and softwar
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to play
> with, things would really start jumping. Within ten years the equivalents
> of such machines could easily be sold for somewhere between $10k and
> $100k, and lots of po
On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The higher levels detect complex objects like airplanes or printed words or
> faces. We could (lossily) compress images much smaller if we knew how to
> recognize these features. The idea would be to compress a movie to a written
> script, the
On Wednesday 03 October 2007 03:47:31 pm, Bob Mottram wrote:
> On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > RSI is not necessary for human-level AGI.
>
> I think it's too early to be able to make a categorical statement of
> this kind. Does not a new born baby recursively impro
On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote:
>
> RSI is only what happens after you get an AGI up to the human level: it
> could then be used [sic] to build a more intelligent version of itself,
> and so on up to some unknown plateau. That plateau is often referred to
>
Again, a well-reasoned response.
With regard to the limitations of AM, I think if the young Doug Lenat and
those of his generation had had 32K-processor Blue Gene/Ls, with 4 TBytes
of RAM, to play with, they would have soon started coming up with things
way, way beyond AM.
In fact, if the average A
--- [EMAIL PROTECTED] wrote:
> Relating to the idea that text compression (as demonstrated by general
> compression algorithms) is a measure of intelligence,
> Claims:
> (1) To understand natural language requires knowledge (CONTEXT) of the
> social world(s) it refers to.
> (2) Communication inclu
Good distinction!
Edward W. Porter
-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:22 PM
To: agi@v2.listbox.com
Subject: RE: [agi] RSI
Edward W. Porter writes:
> As I say, what is, and is not, RSI would appear to be a matter of
> de
On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> RSI is not necessary for human-level AGI.
I think it's too early to be able to make a categorical statement of
this kind. Does not a new born baby recursively improve its thought
processes until it reaches "human level" ?
I wrote:
> If we do not give arbitrary access to the mind model itself or its
> implementation, it seems safer than if we do -- this limits the
> extent that RSI is possible: the efficiency of the model implementation
> and the capabilities of the model do not change.
An obvious objection to t
Thanks!
It's worthwhile being specific about levels of interpretation in the
discussion of self-modification. I can write self-modifying assembly code
that yet does not change the physical processor, or even its microcode if
it's one of those old architectures. I can write a self-modifying Lisp
Edward W. Porter writes:
> As I say, what is, and is not, RSI would appear to be a matter of
> definition.
> But so far the several people who have gotten back to me, including
> yourself, seem to take the position that that is not the type of recursive
> self improvement they consider to be "RSI." S
--- Mike Dougherty <[EMAIL PROTECTED]> wrote:
> On 10/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > It says a lot about the human visual perception system. This is an extremely
> > lossy function. Video contains only a few bits per second of useful
> > information. The demo is able to re
Edward W. Porter wrote:
From what you say below it would appear human-level AGI would not require
recursive self improvement, because as you appear to define it humans
don't either (i.e., we currently don't artificially substantially expand
the size of our brain).
I wonder what percent of the
From what you say below it would appear human-level AGI would not require
recursive self improvement, because as you appear to define it humans
don't either (i.e., we currently don't artificially substantially expand
the size of our brain).
I wonder what percent of the AGI community would accept
Josh,
Thank you for your reply, copied below. It was, as have been many of
your posts, thoughtful and helpful.
I did have a question about the following section
THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVI
I criticised your original remarks because they demonstrated a complete
lack of understanding of what complex systems actually are. You said
things about complex systems that were, quite frankly, ridiculous:
Turing-machine equivalence, for example, has nothing to do with this.
In your more
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its c
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:
Thank you!
> I have one major question for Josh. You said
>
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
> TO DO, WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
>
So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?
What a stupid question. *Anything* can be ambiguous if you're clueless.
The moral truth of "Thou shalt not destroy the universe" is universal. The
ability to interpret it and apply it
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > I find your argument quotidian and lacking in depth. ...
> What you said above was pure, unalloyed bullshit: an exquisite cocktail
> of complete technical ignorance, patronizing insults and breathtak
... or maybe they can be inferred from texts alone. It all depends on the
learning ability of a particular design, and we as yet have none. Cart
before the horse.
On 10/3/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
> On 03/10/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Given (1), no context-fr
On 03/10/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Given (1), no context-free analysis can understand natural language.
> Given (2), no adaptive agent can learn (proper) understanding of natural
> language given only texts.
> For human-like understanding, an AGI would need to participat
Relating to the idea that text compression (as demonstrated by general
compression algorithms) is a measure of intelligence,
Claims:
(1) To understand natural language requires knowledge (CONTEXT) of the
social world(s) it refers to.
(2) Communication includes (at most) a shadow of the context nece