Re: [agi] demos and papers

2004-09-17 Thread Pei Wang
> My suggestion (which applies to all AGI researchers) to assess the
> merits of AGI models is to consider the following 4 points:
> 1) speed
> 2) approximation (=fault tolerance/robustness)
> 3) flexibility
> 4) adaptiveness
> And it seems that speed is the limiting factor with current hardware.
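
The four criteria are only listed in the quoted post, not formalized there. As a purely illustrative sketch in Python (the class name, the 0-to-1 scale, and the example numbers are assumptions added here, not from the thread), they could be recorded as a minimal assessment rubric:

    from dataclasses import dataclass

    @dataclass
    class ModelAssessment:
        """Illustrative rubric for the four criteria named above.

        The 0.0-1.0 scale and the example values are assumptions made
        for illustration only; the post gives no scoring scheme.
        """
        speed: float          # raw performance on current hardware
        approximation: float  # fault tolerance / robustness
        flexibility: float    # range of problems the model can handle
        adaptiveness: float   # ability to improve with experience

    # Hypothetical example echoing the remark that speed is the
    # limiting factor with current hardware.
    example = ModelAssessment(speed=0.3, approximation=0.8,
                              flexibility=0.7, adaptiveness=0.75)
    print(example)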

RE: [agi] Psychometric AI

2004-09-17 Thread J. W. Johnston
I noticed that too. Seems this list doesn't archive attachments (or has a particularly good SPAM filter :-). I don't have the paper posted on any site. Will send you a PDF (748 KB). If others want a copy, let me know via email. Thanks! J. W.

-Original Message-
From: [EMAIL PROTECTED]

Re: [agi] demos and papers

2004-09-17 Thread Yan King Yin
> I just put demos of NARS 4.2 (a Java version and a Prolog version) and
> several recent papers at
> http://www.cogsci.indiana.edu/farg/peiwang/papers.html.
>
> Comments are welcome.
>
> Pei

Hello =) I just took a brief look at your web site and demos. It's good that you have probably the onl

RE: [agi] Psychometric AI

2004-09-17 Thread Peter Voss
I can't find it in the archives. Can you give me a link? Thanks, Peter

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of J. W. Johnston

...As AGI testing and validation goes, some might recall in my IVI Architecture posted here about a year ago, I specified

RE: [agi] Psychometric AI

2004-09-17 Thread J. W. Johnston
I like the gist of it ... though I just did a quick skim of the paper. In particular I like the idea of pushing/orienting AGI systems toward NLU and human standards to promote "usability" (or more properly: our ability to mutually relate). As AGI testing and validation goes, some might recall in my I

RE: [agi] Psychometric AI

2004-09-17 Thread Ben Goertzel
Hi, I don't think that trying to "overfit" one's AGI system to some specific set of tests is a really useful approach. Also, I don't think that intelligence tests, as currently formulated for psychometric testing purposes, form a very natural set of "developmental milestones" for an AGI system.

Re: [agi] Psychometric AI

2004-09-17 Thread Shane
Hi Ben, You think it's a silly approach because...? I'm just about to read their paper and thus I haven't formed an opinion on their approach yet myself. Thanks Shane

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> This may be of interest to someone...
>
> Psychometric AI:
>
> http://www

RE: [agi] Re: AI boxing

2004-09-17 Thread Ben Goertzel
Mentifex, Thanks for the entertaining post! However, I personally consider it a bit of an overreaction ;-) Dubya is not my favorite US President; however, in all probability, who is or isn't the leader of one particular country in 2004 is unlikely to have a large effect on the future of mind in

[agi] Psychometric AI

2004-09-17 Thread Ben Goertzel
This may be of interest to someone... Psychometric AI: http://www.cogsci.rpi.edu/peri/main.html A slightly silly approach, IMO, but it would certainly be a tractable research program to apply NM to these tasks. I'm more interested in the AGI-SIM approach, however... -- Ben

[agi] Re: AI boxing

2004-09-17 Thread Arthur T. Murray
On Fri, 17 Sep 2004, Ben Goertzel wrote:
> [...]
> In short, it really makes no sense to create an AI, allow it to
> indirectly affect human affairs, and then make an absolute decision
> to keep it in a box.
>
> And it also makes no sense to create an AI and not allow it
> to affect human affairs