Nice Occam's Razor argument. I understood it simply because I knew there is
always an infinite number of possible explanations for every observation
that are more complicated than the simplest explanation. So, without a
reason to choose one of those other interpretations, why choose it? You
c
Jim, to address all of your points,
Solomonoff induction claims that the probability of a string x is proportional
to the sum, over all programs M that output x, of the weight 2^-|M|. The sum is
dominated by the shortest such program (whose length is the Kolmogorov
complexity of x), but it is
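The 2^-|M| weighting described above can be sketched numerically. This is a toy illustration only: real Kolmogorov complexity is uncomputable, and the program lengths below are made-up values, assumed purely for the example.

```python
# Toy Solomonoff-style weighting. Suppose three hypothetical programs of
# lengths 10, 15, and 20 bits all output the same string x. Each program M
# contributes weight 2^-|M| to the (unnormalized) probability of x.
lengths = [10, 15, 20]
weights = [2.0 ** -l for l in lengths]
p_x = sum(weights)  # unnormalized probability of x

# The shortest program dominates the sum:
share_shortest = weights[0] / p_x
print(p_x)             # 2^-10 + 2^-15 + 2^-20
print(share_shortest)  # well over 90% of the total weight
```

Each extra bit of program length halves the weight, which is why the shortest program (the Kolmogorov complexity) dominates even though infinitely many longer programs also produce the string.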
On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer wrote:
>
>> There cannot be a one-to-one correspondence between the representation of
>> the shortest program that produces a string and the string that it produces.
>> This means that if the consideration of the hypotheses were to be put into
>> general math
Sounds like everyone would want one, or one AGI could service us all. And
that AGI could do all of the heavy thinking for us. We could become pleasure
seeking, fibrillating blobs of flesh and bone suckling on the electronic
brains of one big giant AGI.
John
From: Matt Mahoney [mailto:matmaho.
An AGI only has to predict your behavior so that it can serve you better by
giving you what you want without you asking for it. It is not a copy of your
mind. It is a program that can call a function that simulates your mind for
some arbitrary purpose determined by its programmer.
-- Matt Maho
To all,
There may be a fundamental misdirection here on this thread, for your
consideration...
There have been some very rare cases where people have lost the use of one
hemisphere of their brains, and then subsequently recovered, usually with
the help of recently-developed clot-removal surgery.
On Fri, Jul 2, 2010 at 2:09 PM, Jim Bromer wrote:
> On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote:
>
>> Jim, what evidence do you have that Occam's Razor or algorithmic
>> information theory is wrong,
>> Also, what does this have to do with Cantor's diagonalization argument?
>> AIT consi
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote:
> Jim, what evidence do you have that Occam's Razor or algorithmic
> information theory is wrong,
> Also, what does this have to do with Cantor's diagonalization argument? AIT
> considers only the countably infinite set of hypotheses.
>
>
> -
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote:
> Jim, what evidence do you have that Occam's Razor ... is wrong, besides
> your own opinions? It is well established that elegant (short) theories are
> preferred in all branches of science because they have greater predictive
> power.
>
>
Occam's Razor is not a provable theory, and I have come across philosophers
of science who also question its value as a scientific heuristic. I can
look for some more thorough presentations, and I am willing to give you some
of my opinions on that question if you want me to. The "evidence" would
I found the answer as given by Legg, *Machine Superintelligence*, p. 72,
copied below. A reward function is used to bypass potential difficulty in
communicating a utility function to the agent.
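The distinction above can be made concrete with a minimal sketch (a hypothetical toy environment, not Legg's formalism): the designer's utility function stays on the environment side, and the agent only ever observes scalar rewards, so the utility function never needs to be communicated to it.

```python
# Toy illustration of the reward-channel idea. The utility function is
# private to the environment/designer; the agent sees only (state, reward).
def utility(state):
    """Designer's goal, never shown to the agent."""
    return 1.0 if state == "goal" else 0.0

def step(state, action):
    """Toy dynamics: moving 'right' reaches the goal, anything else resets."""
    new_state = "goal" if action == "right" else "start"
    return new_state, utility(new_state)  # agent receives a scalar reward

state, total = "start", 0.0
for action in ["left", "right", "right"]:
    state, reward = step(state, action)  # no access to utility() itself
    total += reward
print(total)  # 2.0 — reward accrued without ever seeing the utility function
```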
Joshua
The existence of a goal raises the problem of how the agent knows what the
goal is. One possibil
An AGI may not really think like we do, it may just execute code.
Though I suppose you could program a lot of fuzzy loops and idle
speculation, entertaining possibilities, having human "think" envy...
John
From: Matt Mahoney [mailto:matmaho...@yahoo.com]
Sent: Friday, July 02, 2010 8:
Matt:
AGI is all about building machines that think, so you don't have to.
Matt,
I'm afraid that's equally silly and also shows a similar lack of understanding
of sensors and semiotics.
An AGI robot won't know what it's like to live inside a human skin, and will
have limited understanding of
AGI is all about building machines that think, so you don't have to.
-- Matt Mahoney, matmaho...@yahoo.com
From: Mike Tintner
To: agi
Sent: Fri, July 2, 2010 9:37:51 AM
Subject: Re: [agi] masterpiece on an iPad
that's like saying cartography or
cartoons c
that's like saying cartography or cartoons could be done a lot faster if they
just used cameras - ask Michael to explain what the hand can draw that the
camera can't
From: Matt Mahoney
Sent: Friday, July 02, 2010 2:21 PM
To: agi
Subject: Re: [agi] masterpiece on an iPad
It could be done a
Well, first, you're not dealing with open sets in my broad sense - containing a
potentially unlimited number of different SPECIES of things.
[N.B. Extension to my definitions here - I should have added that all members
of a set have fundamental SIMILARITIES or RELATIONSHIPS - and the set is
c
It could be done a lot faster if the iPad had a camera.
-- Matt Mahoney, matmaho...@yahoo.com
From: Mike Tintner
To: agi
Sent: Fri, July 2, 2010 6:28:58 AM
Subject: [agi] masterpiece on an iPad
http://www.telegraph.co.uk/culture/culturevideo/artvideo/78657
Narrow AI is a term that describes the solution to a problem, not the
problem itself: it is a solution with a narrow scope. General AI, on the other
hand, should have a much larger scope than narrow AI and be able to handle
unforeseen circumstances.
What I don't think you realize is that open sets can be de
http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html
McLuhan argues that touch is the central sense - the one that binds the others.
He may be right. The i-devices integrate touch into intelligence.
-