Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
e your hand "wet" with sticky beads etc. > This would require at least a two-factor adhesion-cohesion model. But Ben has a good rejoinder to my comment. -- Nathan Cook --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed:

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
te, even a very fine powder of very low friction feels different to water - how can you capture the sensation of water using beads and blocks of a reasonably large size? -- Nathan Cook

[agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Nathan Cook
~brings/ , http://www.cogsci.rpi.edu/research/rair/projects.php), it seems clear to me that the program is more than an Eliza clone. -- Nathan Cook

Re: [agi] Active Learning

2007-08-02 Thread Nathan Cook
re, I think. No doubt there are all kinds of NDAs, but could you tell us what you think will happen when the engine is successfully connected to Second Life? Do you see this as 'narrow AI' or general? Nathan Cook

Re: [agi] Growing a Brain in Switzerland

2007-04-03 Thread Nathan Cook
On 4/3/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: Growing a Brain in Switzerland MANFRED DWORSCHAK - Der Spiegel (Germany) From their website: "Although we may one day achieve insights into the basic nature of intelligenc

Re: [agi] Sophisticated models of spiking neural networks.

2006-12-26 Thread Nathan Cook
his case - but I see two circles and that the square is more similar to the circle(s) because of its higher number of sides. Therefore the triangle is the "odd one." What rules does an evolving neural net use for determining the pattern, in order to determine the exception to the pattern?
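One way to make that hand-written rule explicit is to score each shape by how far its side count is from the others and flag the farthest as the exception. The feature choice below (number of sides, with a circle treated as a many-sided polygon) is an assumption for illustration; the open question in the post is whether an evolving net would find such a rule on its own.

# Approximate side counts; a circle is treated as a many-sided polygon.
shapes = {"circle_a": 60, "circle_b": 60, "square": 4, "triangle": 3}

def odd_one_out(side_counts):
    # Score each shape by its total distance (in side count) to the others;
    # the shape with the largest total is the exception to the pattern.
    totals = {
        name: sum(abs(n - m) for other, m in side_counts.items() if other != name)
        for name, n in side_counts.items()
    }
    return max(totals, key=totals.get)

print(odd_one_out(shapes))   # prints "triangle"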

Re: [agi] Sophisticated models of spiking neural networks.

2006-12-26 Thread Nathan Cook
rather far-fetched concept, but as you can see, the neurons have to be capable of doing a lot. I think I can justify taking this one of the many options in neural networks, if only because no-one seems to have let the neurons themselves compete before. Nathan Cook
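As a rough illustration of one standard reading of "neurons competing": a winner-take-all scheme among leaky integrate-and-fire units, where the first unit to reach threshold fires and suppresses the rest, might look like the sketch below. This is not necessarily the mechanism meant in the post, and every constant is an arbitrary assumption.

import numpy as np

rng = np.random.default_rng(0)
N = 8               # number of competing neurons
LEAK = 0.9          # membrane leak per time step
THRESHOLD = 1.0     # firing threshold
INHIBITION = 0.5    # how strongly a spike suppresses the other neurons

potentials = np.zeros(N)
for t in range(200):
    # Leaky integration of noisy input.
    potentials = LEAK * potentials + rng.uniform(0.0, 0.2, size=N)
    winner = int(np.argmax(potentials))
    if potentials[winner] >= THRESHOLD:
        # The winning neuron spikes, resets, and inhibits its competitors.
        potentials -= INHIBITION
        potentials[winner] = 0.0
        print(f"t={t}: neuron {winner} fired")
    potentials = np.clip(potentials, 0.0, None)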

[agi] Sophisticated models of spiking neural networks.

2006-12-25 Thread Nathan Cook
o) at once, and even do some form of induction on this information. Nathan Cook

Re: [agi] Failure scenarios

2006-09-25 Thread Nathan Cook
Ben, I take it you're using the word hypergraph in the strict mathematical sense. What do you gain from a hypergraph over an ordinary graph, in terms of representability, say? To return to the topic, didn't Minsky say that 'the trick is that there is no trick'? I doubt there's any single point of failure.
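The usual answer to the representability question is that a hyperedge can join any number of nodes at once, so an n-ary relation does not have to be reified into an extra node the way it does with binary edges. A minimal sketch of that difference (the class names are illustrative, not any particular system's API):

from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:           # ordinary graph edge: exactly two endpoints
    a: str
    b: str

@dataclass(frozen=True)
class HyperEdge:      # hyperedge: a label plus any number of endpoints
    label: str
    nodes: tuple

# Ordinary graph: the ternary fact gives(John, book, Mary) has to be
# reified into an extra event node connected by three binary edges.
graph = {
    Edge("gives_event_1", "John"),
    Edge("gives_event_1", "book"),
    Edge("gives_event_1", "Mary"),
}

# Hypergraph: the same fact is a single hyperedge.
hypergraph = {HyperEdge("gives", ("John", "book", "Mary"))}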

Re: [agi] Failure scenarios

2006-09-25 Thread Nathan Cook
A difficulty that I think few people have ever addressed in a general context is what I would term 'generative power'. This is in contrast to learning ability: It is technically quite easy to create a system that can learn anything you like, so long as you know exactly what it's supposed to learn!
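The contrast can be made concrete with a deliberately degenerate example: a lookup-table "learner" memorizes any mapping you can already specify, yet it has no generative power at all. A minimal sketch, purely illustrative:

class LookupLearner:
    """Memorizes any example it is shown, but never generates anything new."""

    def __init__(self):
        self.table = {}

    def learn(self, example, answer):
        self.table[example] = answer      # "learning" is just storage

    def respond(self, query):
        # Unseen queries simply fail: no generalization, no generation.
        return self.table.get(query)

learner = LookupLearner()
learner.learn("2 + 2", "4")
print(learner.respond("2 + 2"))   # "4"
print(learner.respond("3 + 3"))   # None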