Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
's useful to say so and make your assumptions concrete.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
how much processing power you need: if processing is very expensive, it makes less sense to re-run an extensive test suite whenever you make a change.

Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Philip Hunt
2008/12/29 Matt Mahoney : > --- On Mon, 12/29/08, Philip Hunt wrote: > >> Incidentally, reading Matt's posts got me interested in writing a >> compression program using Markov-chain prediction. The prediction bit >> was a piece of piss to write; the compression code is
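
Roughly the sort of thing I mean, as a quick illustrative sketch in Python (the order-2 context and all the names here are just my own choices for the example):

# Count, for each 2-character context, which symbol follows it,
# then predict the most frequently seen successor.
from collections import Counter, defaultdict

def train(text, order=2):
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def predict(model, context):
    counts = model.get(context)
    if not counts:
        return None  # context never seen in training
    return counts.most_common(1)[0][0]

model = train("the cat sat on the mat")
print(predict(model, "th"))  # prints 'e'

Feeding those counts into an arithmetic coder is what would turn the predictor into a compressor.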

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/29 Philip Hunt : > 2008/12/29 Matt Mahoney : >> >> Please remember that I am not proposing compression as a solution to the AGI >> problem. I am proposing it as a measure of progress in an important >> component (prediction). > >[...] > Turning a p

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
at prediction. Whereas all programs that're good at prediction are guaranteed to be good at prediction.

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/28 Philip Hunt : > > Now, consider if I build a program that can predict how some sequences > will continue. For example, given > > ABACADAEA > > it'll predict the next letter is "F", or given: > > 1 2 4 8 16 32 > > it'll predict
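
For the numeric case the idea is easy to sketch in Python (illustrative only; the letter-sequence case obviously needs real pattern induction rather than this hard-coded check):

# Guess the next term by testing two hard-coded hypotheses:
# a constant difference, then a constant ratio.
def predict_next(seq):
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return seq[-1] + diffs[0]       # arithmetic progression
    if all(x != 0 for x in seq[:-1]):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:
            return seq[-1] * ratios[0]  # geometric progression
    return None                          # no simple rule found

print(predict_next([1, 2, 4, 8, 16, 32]))  # prints 64.0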

Re: [agi] Universal intelligence test benchmark

2008-12-28 Thread Philip Hunt
2008/12/27 Matt Mahoney : > --- On Fri, 12/26/08, Philip Hunt wrote: > >> > Humans are very good at predicting sequences of >> > symbols, e.g. the next word in a text stream. >> >> Why not have that as your problem domain, instead of text >> compression?

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
source code.

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
GI could be >> written in about a tenth that, say 75 MB. > > The human genome size has no meaningful relationship to the complexity of > coding AGI. Yes it does -- it's an existence proof that it's possible to do it in 750 MB. > And whatever happened to Machine is Software is
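
(For what it's worth, the 750 MB figure is presumably just the raw information content of the genome: about 3 x 10^9 base pairs at 2 bits per base gives 6 x 10^9 bits, i.e. roughly 750 megabytes.)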

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
changes, your problem domain would be a more useful one. While you're at it you may want to change the size of the "chunks" in each item of prediction, from characters to either strings or s-expressions. Though doing so doesn't fundamentally alter the problem.

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
osen training sets which would bulk up its code and data to many times that (I'm assuming a model where an AI stores the results of learning as additions to its source code). By way of comparison, the Linux kernel is about 60 MB; the Copycat program is about 0.4 MB (not including g

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Philip Hunt
ining intelligence this way. Care to enlighten me?

Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Philip Hunt
to do tasks better than they can (e.g. play chess) and I see no reason why it shouldn't be possible for self-awareness. Indeed it would be rather trivial to give an AGI access to its source code.

Re: [agi] Relevance of SE in AGI

2008-12-21 Thread Philip Hunt
ing serve any use in this field? I've never used formal proofs of correctness of software, so can't comment. I use software testing (unit tests) on pretty much all non-trivial software that I write -- I find doing so makes things much easier.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
eels like to the touch. For me, it was the former. So I don't think touch is clearly more fundamental, in terms of how it interacts with our internal model of the world, than vision is. > Is the reason just that AI researchers spend all day staring at screens and > ignoring their physical

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
re was a difference in capacitance when the wires were further apart or closer together.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
umans -- most (if not all) mammalian species can do it. Until an AI can do this, there's no point in trying to get it to play at making cakes, etc.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
could move around and manipulate a blocks world. My understanding is that all, or nearly all, the difficulty comes in programming it. Which is where AI comes in. > Actually, $$ aside, we don't even **know how** to make a decent humanoid > robot. > > Or, a decently functional m

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
umanoid, since it's more obviously a machine). > On the other hand, making a virtual world such as I envision, is more than a > spare-time project, but not more than the project of making a single > high-quality video game. GTA IV cost $5 million, so we're not talking about pean

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
taste would probably help too.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
ords, will the simulation be deep enough to allow that).

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
ces that are safe to sit on, and others that are too wobbly, even if they look the same. An animal's intuitive physics is a complex system. I expect that in humans a lot of this machinery is re-used to create intelligence. (It may be true, and IMO probably is true, that it's not necessary to re-c

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
there's no reason why a toy domain needs to be anything like a virtual world; it could for example be a "software modality" that can see/understand source code as easily and fluently as humans interpret visual input.) AIUI you're mostly thinking in terms of 2 or 3. Fair comm

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
robably need to train it in the real world (at least some of the time). If you don't care whether your AGI can use a screwdriver, why have one in the virtual world?

Re: [agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Philip Hunt
to occupy. Having said that, I'm not aware that nanotechnology or AI are specifically prohibited by any of the major religions. And if one society forgoes science, they'll just get outcompeted by their neighbours.

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
for laying down long-term memories and for short-term thinking on the order of a few seconds.

Re: [agi] Lamarck Lives!(?)

2008-12-03 Thread Philip Hunt
ently, my understanding is[*] that DNA in various cells in the mammalian immune system does change as the immune system evolves to cope with infectious agents; but these changes aren't passed along to the next generation.) * if there are any molecular biologists reading, feel free to correct

Re: [agi] AIXI

2008-12-01 Thread Philip Hunt
That was helpful. Thanks. 2008/12/1 Matt Mahoney <[EMAIL PROTECTED]>: > --- On Sun, 11/30/08, Philip Hunt <[EMAIL PROTECTED]> wrote: > >> Can someone explain AIXI to me? > > AIXI models an intelligent agent interacting with an environment as a pair of > interact
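
For anyone else trying to get the gist: as I understand Hutter (and I may be garbling his notation), the AIXI agent's action choice can be written as an expectimax over all computable environments, weighted by program length:

a_k := argmax_{a_k} sum_{o_k r_k} ... max_{a_m} sum_{o_m r_m} [r_k + ... + r_m]
       * sum_{q : U(q, a_1..a_m) = o_1 r_1 .. o_m r_m} 2^(-length(q))

where U is a universal Turing machine, q ranges over programs (candidate environments), the a_i are the agent's actions, the o_i and r_i are observations and rewards, and m is the horizon. In words: weight every environment consistent with the history by 2 to the minus its program length, and pick the action that maximises expected total reward under that mixture -- which is also why it's uncomputable in general.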

Re: >> RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Philip Hunt
hey are not two separate theories; they are merely rewordings of the same theory. And choosing between them is arbitrary; you may prefer one to the other because human minds can visualise it more easily, or it's easier to calculate, or you have an aesthetic preference for it.

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
be more useful to >> the advancement of AI, since the Loebner prize is silly. >> >> -- >> Philip Hunt, <[EMAIL PROTECTED]> > > How does that differ from what is generally called "transfer learning"? I don't think it does differ. ("Transfer learnin

[agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Philip Hunt
could be practically written or is it purely a theoretical construct? In short, is there something to AIXI or is it something I can safely ignore?

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
imilar domain). A bit like the Loebner Prize, except that it would be more useful to the advancement of AI, since the Loebner prize is silly.

Re: [agi] DARPA funds using memsistors to model synapses in neuromorphic computing

2008-11-27 Thread Philip Hunt
and > 10^8 seconds in the cortex. 10^8 seconds is 3 years! I think that number's wrong.
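
(For reference, the conversion itself checks out: 10^8 s divided by about 3.15 x 10^7 s per year is roughly 3.2 years.)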