> From: Tom McCabe [mailto:[EMAIL PROTECTED]
>
> You're missing the point- obviously this specific
> example isn't going to be completely accurate, because
> an AI doesn't require dead organics. And it's not like
> the animals are actively helping us- they just sit
there, growing, until we harvest ...
Matt Mahoney wrote:
I doubt you could model sentence structure usefully with a neural network
capable of only a 200 word vocabulary. By the time children learn to use
complete sentences they already know thousands of words after exposure to
hundreds of megabytes of language. The problem seems ...
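A rough back-of-envelope check of that "hundreds of megabytes" figure, using assumed round numbers for exposure rates rather than anything from the post:

# Crude estimate of a child's total language exposure. The
# 12,000 words/day figure is an assumption (published estimates
# vary widely), as is ~6 bytes per word of written-out text.
words_per_day = 12_000
bytes_per_word = 6
years = 5
total_bytes = words_per_day * bytes_per_word * 365 * years
print(total_bytes / 1e6)  # ~131 MB; same order of magnitude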
Tom McCabe wrote:
From whence do you get the idea that there is no
relationship between the low-level mechanisms and the
overall behavior? Even if the relationship is
horrendously confusing, it must exist if the entire
thing is to be described as a "system"; if there is no
relationship between the mechanisms and the ...
If such neural systems can actually spit out sensible
analyses of natural language, it would obviously be a
huge discovery and could probably be sold to a good
number of people as a commercial product. So why
aren't more people investing in this, if you've
already got working software that just needs ...
I was referring to Matt Mahoney, who said that you
could formally prove intelligence's unpredictability
and then cited a paper proving it so long as
"intelligence" really meant "algorithmic complexity".
To quote:
"We cannot rule out this possibility because a lesser
intelligence cannot predict what ..."
Because such an algorithm would have a large
algorithmic complexity and yet be completely
unintelligent. And you could produce them with the
push of a sufficiently large button.
- Tom
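Tom's point can be made concrete in a few lines of Python (a hypothetical illustration, not code from the thread): a random byte string is incompressible with overwhelming probability, so its algorithmic complexity is near-maximal, yet as a "program" it does nothing.

import os
import zlib

# A random 10 KB string: with overwhelming probability it has no
# description much shorter than itself, so its Kolmogorov
# complexity is close to its raw length.
blob = os.urandom(10_000)

# Kolmogorov complexity is uncomputable; a general-purpose
# compressor gives a crude upper-bound proxy. Random bytes
# should not shrink at all (the output may even grow slightly).
print(len(blob), len(zlib.compress(blob, 9)))

# High algorithmic complexity, zero intelligence: interpreted as
# a program, the blob encodes no useful behavior whatsoever.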
--- Shane Legg <[EMAIL PROTECTED]> wrote:
> Tom,
>
> I'm sure any computer scientist worth their salt could ...
Thank you for that. It would be an interesting problem
to build a "box" AGI without morality, which
paperclips everything within a given radius of some
fixed position and then stops without disturbing the
matter outside. It would obviously be far simpler to
build such an AGI than a true FAI, and it ...
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > From: Tom McCabe [mailto:[EMAIL PROTECTED]
> > > The AGI is going to have to embed itself into
> some
> > > organizational
> > > bureaucracy in order to survive. It'll appear
> > > friendly to individual humans
> > > but to society it will need to get itself fed, kind
> > > of like a queen ant, and we are ...
--- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> On Mon, May 14, 2007 at 08:21:45PM -0700, Tom McCabe
> wrote:
>
> > Hmmm, this is true. However, if these techniques were
> > powerful enough to design new, useful AI algorithms,
> > why is writing algorithms almost universally done by
> > programmers ...
A simple (greedy compression) probabilistic inference algorithm for
determining context-relevant mutual information that requires O(n log n)
connections and a similar time complexity for the mutual-information
calculations, where n is the length of the phrase.
It's just an inference algorithm, no ...
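The algorithm itself isn't spelled out in the post, so the following is only a naive point of comparison, not the poster's method: a brute-force pointwise-mutual-information pass over word pairs within a window (this sketch is O(n^2) per phrase, not the O(n log n) claimed above).

import math
from collections import Counter
from itertools import combinations

def pointwise_mi(phrases, window=3):
    # Count unigrams and within-window word pairs across phrases.
    word_n, pair_n = Counter(), Counter()
    n_words = n_pairs = 0
    for phrase in phrases:
        words = phrase.lower().split()
        word_n.update(words)
        n_words += len(words)
        for i, j in combinations(range(len(words)), 2):
            if j - i <= window:
                pair_n[(words[i], words[j])] += 1
                n_pairs += 1
    # PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) )
    return {
        (x, y): math.log2((nxy / n_pairs) /
                          ((word_n[x] / n_words) * (word_n[y] / n_words)))
        for (x, y), nxy in pair_n.items()
    }

print(pointwise_mi(["the cat sat on the mat",
                    "the dog sat on the rug"]))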
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Tom McCabe <[EMAIL PROTECTED]> wrote:
> >> --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >>> Personally, I would experiment with
> >>> neural language models that I can't currently
> >>> implement because I lack the ...
Matt Mahoney wrote:
--- Tom McCabe <[EMAIL PROTECTED]> wrote:
--- Matt Mahoney <[EMAIL PROTECTED]> wrote:
Personally, I would experiment with
neural language models that I can't currently
implement because I lack the
computing power.
Could you please describe these models?
Essentially models in ...
I have a Ph.D. in Nuclear Physics and I don't understand half of what is said
on this board (as well as the AGI board). I appreciate all simplifications that
anyone cares to make.
Eric B. Ramsay
Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
Shane,
Thank you for being patronizing.
Some of us ...
Matt Mahoney wrote:
Richard,
I looked at your 2006 AGIRI talk, the one I believe you referenced in our
previous discussion on the definition of intelligence,
http://www.agiri.org/forum/index.php?act=ST&f=21&t=137
You use the description "complex adaptive system", which I agree is a
reasonable ...
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Shane Legg wrote:
> > Ben (and others),
> >
> > My impression is that there is a general lack of understanding
> > when it comes to AIXI and related things. It seems that someone
> > who doesn't understand the material makes a statement, which ...
Benjamin Goertzel wrote:
... [snip]
Richard,
While you do have the math background to understand the AIXI material,
plenty of list members don't. I think Shane's less-technical summary may
help those with less math background understand what AIXI and related
ideas are all about ...
Shane,
Thank you for being patronizing.
Some of us do understand the AIXI work in enough depth to make valid
criticism.
The problem is that you do not understand the criticism well enough to
address it.
Richard Loosemore.
Richard,
While you do have the math background to understand the AIXI material ...
Shane Legg wrote:
Ben (and others),
My impression is that there is a general lack of understanding
when it comes to AIXI and related things. It seems that someone
who doesn't understand the material makes a statement, which
others then take as fact, and the cycle repeats.
Part of the problem, I think, is that the ...
On 5/14/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
Hmmm, this is true. However, if these techniques were
powerful enough to design new, useful AI algorithms,
why is writing algorithms almost universally done by
programmers instead of supercomputers, despite the
fact that programmers only work ...
Tom, I think the point is, it seems like you didn't actually
read and understand Shane's definition of intelligence...
ben
On 5/15/07, Shane Legg <[EMAIL PROTECTED]> wrote:
Tom,
I'm sure any computer scientist worth their salt could
use a computer to write up random ten-billion-byte-long
algorithms ...
Tom,
I'm sure any computer scientist worth their salt could
use a computer to write up random ten-billion-byte-long
algorithms that would do exactly nothing. Defining intelligence
that way because it's mathematically neat is just cheating.
Let's assume that you can make a very long program ...
On 15/05/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
We would all like to build a machine smarter than us, yet still be able to
predict what it will do. I don't believe you can have it both ways. And if
you can't predict what a machine will do, then you can't control it. I
believe this is true ...