Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread David Jones
Nice Occam's Razor argument. I understood it simply because I knew there are always an infinite number of possible explanations for every observation that are more complicated than the simplest explanation. So, without a reason to choose one of those other interpretations, why choose it? You c
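A minimal sketch of the selection rule David is describing, with toy hypotheses invented for illustration: every candidate below fits the observations equally well, infinitely many more complicated ones would too, and the razor simply picks the shortest.

    # Illustrative only: Occam-style selection among hypotheses that
    # all explain the data. Hypotheses and their lengths are toy assumptions.
    observations = [2, 4, 6, 8]

    hypotheses = [
        ("n -> 2*n",                    8,  lambda n: 2 * n),
        ("n -> 2*n + 0*n**7",           17, lambda n: 2 * n + 0 * n**7),
        ("n -> 2*n if n < 100 else 13", 26, lambda n: 2 * n if n < 100 else 13),
    ]

    def consistent(hyp):
        _, _, f = hyp
        return all(f(n) == y for n, y in enumerate(observations, start=1))

    # Among equally good explanations, prefer the shortest description.
    best = min((h for h in hypotheses if consistent(h)), key=lambda h: h[1])
    print(best[0])  # n -> 2*n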

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Matt Mahoney
Jim, to address all of your points, Solomonoff induction claims that the probability of a string is proportional to the sum, over all programs M that output the string, of 2^-|M|. The probability is dominated by the shortest program (the Kolmogorov complexity), but it is
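In the usual notation (U a universal prefix machine, |p| the length of program p, K(x) the Kolmogorov complexity of x), the claim Matt is paraphrasing reads:

    M(x) = \sum_{p \,:\, U(p)=x} 2^{-|p|} \approx 2^{-K(x)}, \qquad K(x) = \min\{\,|p| : U(p) = x\,\}

where the approximation holds up to a multiplicative constant, because the shortest program dominates the sum.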

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer wrote: > There cannot be a one-to-one correspondence between the representations of the shortest programs that produce strings and the strings that they produce. > This means that if the consideration of the hypotheses were to be put into general math
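For reference, the standard counting fact from algorithmic information theory that usually enters at this point: short programs are too few to pair off against all strings, by a simple pigeonhole bound:

    \#\{\,p : |p| < n\,\} \le 2^0 + 2^1 + \dots + 2^{n-1} = 2^n - 1 < 2^n = \#\{\,x : |x| = n\,\}

so at every length n at least one string (in fact, most of them) has no program shorter than itself.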

RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
Sounds like everyone would want one; or one AGI could service us all. And that AGI could do all of the heavy thinking for us. We could become pleasure-seeking, fibrillating blobs of flesh and bone suckling on the electronic brains of one big giant AGI. John

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
An AGI only has to predict your behavior so that it can serve you better by giving you what you want without you asking for it. It is not a copy of your mind. It is a program that can call a function that simulates your mind for some arbitrary purpose determined by its programmer. -- Matt Mahoney

Re: [agi] Reward function vs utility

2010-07-02 Thread Steve Richfield
To all, There may be a fundamental misdirection here on this thread, for your consideration... There have been some very rare cases where people have lost the use of one hemisphere of their brains, and then subsequently recovered, usually with the help of recently-developed clot-removal surgery.

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Fri, Jul 2, 2010 at 2:09 PM, Jim Bromer wrote: > On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote: >> Jim, what evidence do you have that Occam's Razor or algorithmic information theory is wrong, >> Also, what does this have to do with Cantor's diagonalization argument? >> AIT consi

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote: > Jim, what evidence do you have that Occam's Razor or algorithmic information theory is wrong, > Also, what does this have to do with Cantor's diagonalization argument? AIT considers only the countably infinite set of hypotheses.

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney wrote: > Jim, what evidence do you have that Occam's Razor ... is wrong, besides your own opinions? It is well established that elegant (short) theories are preferred in all branches of science because they have greater predictive power.

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
Occam's Razor is not a provable theory, and I have come across philosophers of science who also question its value as a scientific heuristic. I can look for some more thorough presentations, and I am willing to give you some of my opinions on that question if you want me to. The "evidence" would

Re: [agi] Reward function vs utility

2010-07-02 Thread Joshua Fox
I found the answer as given by Legg, *Machine Superintelligence*, p. 72, copied below. A reward function is used to bypass the potential difficulty of communicating a utility function to the agent. Joshua. Legg writes: "The existence of a goal raises the problem of how the agent knows what the goal is. One possibil
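A toy sketch of the mechanism Legg describes (the class names and the particular utility are invented here, not taken from the book): the designer's utility function stays on the environment side and is collapsed into a scalar reward, so the goal never has to be stated to the agent.

    # Illustrative sketch, not code from Machine Superintelligence.
    # The utility function is known only to the designer/environment;
    # the agent sees nothing but a scalar reward signal.

    def utility(state):
        # Designer's goal (hypothetical): stay near state 10.
        return -abs(state - 10)

    class Environment:
        def __init__(self):
            self.state = 0
        def step(self, action):
            self.state += action          # actions are +1 or -1
            return utility(self.state)    # utility collapsed to a reward

    class Agent:
        def __init__(self):
            self.last_reward = None
            self.direction = 1
        def act(self, reward):
            # Crude hill-climbing on reward alone: reverse direction
            # whenever the reward gets worse.
            if self.last_reward is not None and reward < self.last_reward:
                self.direction = -self.direction
            self.last_reward = reward
            return self.direction

    env, agent = Environment(), Agent()
    reward = utility(env.state)
    for _ in range(30):
        reward = env.step(agent.act(reward))
    # The agent ends up oscillating around state 10 without ever
    # having been told the utility function.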

RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
An AGI may not really think like we do; it may just execute code. Though I suppose you could program a lot of fuzzy loops and idle speculation, entertaining possibilities, having human "think" envy... John

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
Matt: AGI is all about building machines that think, so you don't have to. Matt, I'm afraid that's equally silly and also shows a similar lack of understanding of sensors and semiotics. An AGI robot won't know what it's like to live inside a human skin, and will have limited understanding of

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
AGI is all about building machines that think, so you don't have to. -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
that's like saying cartography or cartoons could be done a lot faster if they just used cameras - ask Michael to explain what the hand can draw that the camera can't

Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread Mike Tintner
Well, first, you're not dealing with open sets in my broad sense - containing a potentially unlimited number of different SPECIES of things. [N.B. Extension to my definitions here - I should have added that all members of a set have fundamental SIMILARITIES or RELATIONSHIPS - and the set is c

Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
It could be done a lot faster if the iPad had a camera. -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread David Jones
Narrow AI is a term that describes the solution to a problem, not the problem itself. It is a solution with a narrow scope. General AI, on the other hand, should have a much larger scope than narrow AI and be able to handle unforeseen circumstances. What I don't think you realize is that open sets can be de
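One way to make the closed-set vs. open-set contrast concrete (a toy sketch; the example sets and rules are invented for illustration, not David's): a closed set can be handled by enumerating its members, while an open set needs an intensional rule that also covers members never seen before.

    # Toy illustration; the example sets are hypothetical.

    # Closed set: membership decided by exhaustive enumeration.
    PRIMARY_COLORS = {"red", "green", "blue"}
    def is_primary_color(x):
        return x in PRIMARY_COLORS

    # Open set: no finite enumeration suffices, because unforeseen
    # members keep appearing; membership needs a rule (in practice,
    # often a learned classifier) that generalizes.
    def is_even_number(x):
        return isinstance(x, int) and x % 2 == 0

    print(is_primary_color("red"))    # True: found by lookup
    print(is_even_number(1000002))    # True: never enumerated, still handled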

[agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html McLuhan argues that touch is the central sense - the one that binds the others. He may be right. The i-devices integrate touch into intelligence.