Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
> Your job is to be diplomatic. Mine is to call a spade a spade. ;-) > > > Richard Loosemore I would rephrase it like this: Your job is to make me look diplomatic ;-p - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.

Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
Richard, I don't think Shane and Marcus's overview of definitions-of-intelligence is "poor quality". I think it is just doing something different than what you think it should be doing. The overview is exactly that: A review of what researchers have said about the definition of intelligence. Th

[agi] Re: [singularity] The establishment line on AGI

2008-01-14 Thread Benjamin Goertzel
> Also, this would involve creating a close-knit community through > conferences, journals, common terminologies/ontologies, email lists, > articles, books, fellowships, collaborations, correspondence, research > institutes, doctoral programs, and other such devices. (Popularization is > not on the

Re: [agi] Ben's Definition of Intelligence

2008-01-12 Thread Benjamin Goertzel
On definitions of intelligence, the canonical reference is http://www.vetta.org/shane/intelligence.html which lists 71 definitions. Apologies if someone already pointed out Shane's page in this thread, I didn't read every message carefully. > An AGI definition of intelligence surely has, by def

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:03 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > All this discussion of building a grammar seems to ignore the obvious fact > that in humans, language learning is a continuous process that does not > require any explicit encoding of rules. I think either your model should > lear

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
Stephen L. Reed > > Artificial Intelligence Researcher > http://texai.org/blog > http://texai.org > 3008 Oak Crest Ave. > Austin, Texas, USA 78704 > 512.791.7860 > > > > - Original Message > From: Benjamin Goertzel <[EMAIL PROTECTED]> > To: a

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
Hi, > Yes, the Texai implementation of Incremental Fluid Construction Grammar > follows the phrase structure approach in which leaf lexical constituents are > grouped into a structure (i.e. construction) hierarchy. Yet, because it is > incremental and thus cognitively plausible, it should scale t

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
On Jan 10, 2008 10:26 AM, William Pearson <[EMAIL PROTECTED]> wrote: > On 10/01/2008, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > > I'll be a lot more interested when people start creating NLP systems > > > that are syntactically and semantically pro

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Benjamin Goertzel
> I'll be a lot more interested when people start creating NLP systems > that are syntactically and semantically processing statements about > words, sentences and other linguistic structures and adding syntactic > and semantic rules based on those sentences. Depending on exactly what you mean by

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Processing a dictionary in a useful way requires quite sophisticated language understanding ability, though. Once you can do that, the hard part of the problem is already solved ;-) Ben On Jan 9, 2008 7:22 PM, William Pearson <[EMAIL PROTECTED]> wrote: > > On 09/01/2008, Benja

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
> > Can you give about ten examples of rules? (That would answer a lot of my > questions above) That would just lead to a really long list of questions that I don't have time to answer right now. In a month or two, we'll write a paper on the rule-encoding approach we're using, and I'll post it

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Note that the RelEx output is already abstracted and "semantified" compared to what comes out of a grammar parser. -- Ben On Jan 9, 2008 5:59 PM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > > > Can you give about ten examples of rule

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
> And how would a young child or foreigner interpret on the Washington > Monument or "shit list"? Both are physical objects and a book *could* be > resting on them. Sorry, my shit list is purely mental in nature ;-) ... at the moment, I maintain a task list but not a shit list... maybe I nee

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
purpose of interacting with a multitude of non-linguists to > extend its linguistic knowledge. > > -Steve > > Stephen L. Reed > > Artificial Intelligence Researcher > http://texai.org/blog > http://texai.org > 3008 Oak Crest Ave. > Austin, Texas, USA 78704 > 512.7

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-09 Thread Benjamin Goertzel
Steve, The output of FCG seems very syntax-ish... Do you have mechanisms in texai for mapping the output of FCG into higher-level, more semantic-ish relationships like the ones used in OpenCyc? As you know better than me, within Cyc they have a large system of rules for mapping the syntactic outp

Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
I'll forward this request to those who will be handling such things... thx ben On Jan 7, 2008 3:35 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > Ben, > > I'm certainly not in position to ask for it, but if it's possible, can > some kind of microphones be used during presentations on agi-08 (if

Re: [agi] Re: AGI-08 - Call for Participation

2008-01-07 Thread Benjamin Goertzel
Nothing of that nature is planned at present ... as we the conference organizers are rather busy with other stuff, we've been pretty much fully whelmed with the organization of the First Life conference... It might be fun to do an in-world AGI meet-up a couple weeks after AGI-08, with an aim of di

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 12:08 PM, David Butler <[EMAIL PROTECTED]> wrote: > Would two AGI's with the same initial learning program, same hardware in a > controlled environment (same access to a specific learning base-something > like an encyclopedia) learn at different rates and excel in different tasks? Y

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 9:12 AM, Mike Tintner <[EMAIL PROTECTED]> wrote: > > > Robert, > > Look, the basic reality is that computers have NOT yet been creative in any > significant way, and have NOT yet achieved AGI - general intelligence, - or > indeed any significant rulebreaking adaptivity; (If you disag

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
If you believe in principle that no digital computer program can ever be creative, then there's no point in me or anyone else rambling on at length about their own particular approach to digital-computer-program creativity... One question I have is whether you would be convinced that digital progr

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
Mike, > The short answer is that I don't believe that computer *programs* can be > creative in the hard sense, because they presuppose a line of enquiry, a > predetermined approach to a problem - ... > But I see no reason why computers couldn't be "briefed" rather than > programmed, and freely ass

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 6, 2008 5:52 PM, a <[EMAIL PROTECTED]> wrote: > Benjamin Goertzel wrote: > > So, is your argument that digital computer programs can never be creative, > > since you have asserted that programmed AI's can never be creative > Hard-wired AI (such as KB, NLP, symbo

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 6, 2008 4:00 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > Ben, > > Sounds like you may have missed the whole point of the test - though I mean > no negative comment by that - it's all a question of communication. > > A *program* is a prior series or set of instructions that shapes and > det

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
I don't really understand what you mean by "programmed" ... nor by "creative". You say that, according to your definitions, a GA is programmed and ergo cannot be creative... How about, for instance, a computer simulation of a human brain? That would be operated via program code, hence it would be

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 5, 2008 10:52 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > I think I've found a simple test of cog. sci. > > I take the basic premise of cog. sci. to be that the human mind - and > therefore its every activity, or sequence of action - is programmed. No. This is one perspective taken by so

Re: [agi] NL interface

2007-12-30 Thread Benjamin Goertzel
Matt, I agree w/ your question... I actually think KB's can be useful in principle, but I think they need to be developed in a pragmatic way, i.e. where each item of knowledge added can be validated via how useful it is for helping a functional intelligent agent to achieve some interesting goals.

Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 8:28 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Benjamin Goertzel wrote: > > I wish you much luck with your own approach And, I would imagine > > that if you create a software framework supporting your own approach > > in a convenient way,

Re: [agi] OpenCog

2007-12-28 Thread Benjamin Goertzel
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: > > OpenCog is definitely a positive thing to happen in the AGI scene. It's > been all vaporware so far. Yes, it's all vaporware so far ;-) On the other hand, the code we hope to release as part of OpenCog actually exists, bu

[agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Re the recent discussion of OpenCog -- this recent post I made to the OpenCog mailing list may perhaps help clarify the intentions underlying the project further. -- Ben -- Forwarded message -- From: Benjamin Goertzel <[EMAIL PROTECTED]> Date: Dec 27, 2007 11:07 AM Subje

Re: [agi] OpenCog

2007-12-27 Thread Benjamin Goertzel
Richard Loosemore wrote: > I am sorry, but I have reservations about the OpenCog project. > > The problem of building an open-source AI needs a framework-level tool > that is specifically designed to allow a wide variety of architectures > to be described and expressed. > > OpenCog, as far as I can see, do

[agi] AGI, NLP, embodiment and gesture

2007-12-26 Thread Benjamin Goertzel
Hi all, Here you'll find a paper http://goertzel.org/new_research/WCCI_AGI.pdf that I've submitted to the WCCI 2008 Special Session on Human-level AI. It tries to summarize the "big picture" about how advanced AI can be achieved via synthesizing NLP and virtual embodiment... The paper refers t

Re: [agi] BMI/BCI Growing Fast

2007-12-26 Thread Benjamin Goertzel
> I think that at first sight this goes to support my position in the original > argument with Ben- namely that there are all kinds of ways to get at or read > minds, and there is now an increasing momentum to do that. Being able to read the stream of subvocalizations coming out from a person's mi

[agi] Mizar translated to TPTP !

2007-12-22 Thread Benjamin Goertzel
For those interested in automated theorem-proving, I'm pleased to announce a major advance in tools has occurred... The Mizar library of formalized math has finally been translated into a sensible format, usable for training automated theorem-proving systems ;-) Josef Urban informed me that
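
For readers unfamiliar with the target format: a TPTP first-order ("fof") statement has the shape `fof(name, role, formula).`. The axiom below is an invented illustration, not an actual line from the translated Mizar library; a minimal Python sketch of splitting out the fields:

```python
import re

# A TPTP first-order formula has the shape: fof(name, role, formula).
# The axiom below is an invented illustration, NOT a real line from
# the translated Mizar library.
line = "fof(subset_refl, axiom, ! [X] : subset(X,X))."

# Split the three top-level fields: name, role, formula body.
m = re.match(r"fof\(\s*(\w+)\s*,\s*(\w+)\s*,\s*(.*)\)\.\s*$", line)
name, role, formula = m.groups()
print(name, role)   # prints: subset_refl axiom
```

A real TPTP parser has to handle nested parentheses, includes, and cnf forms as well; the regex above is only meant to show the surface shape of the format.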

Re: [agi] List of Java AI tools & librarie

2007-12-21 Thread Benjamin Goertzel
On Dec 17, 2007 2:03 PM, Stephen Reed <[EMAIL PROTECTED]> wrote: > I've published a roughly categorized link list of Java AI tools and > libraries, that may be helpful to Java developers here: > > http://texai.org/blog/software-links > > Are there useful Java components that are missing? My colle

Re: [agi] BMI/BCI Growing Fast

2007-12-15 Thread Benjamin Goertzel
I would add that the Chinese universities are extremely eager to recruit Western professors to lead research labs in AI and other areas. Hugo de Garis relocated there a year or so ago, and is quite relieved to be supplied with a bunch of excellent research assistants and loads of computational fire

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
On Dec 14, 2007 3:48 PM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > > > > > Is China pushing its people into being smarter? Are they giving > > > incentives beyond the US-style capitalist reasons for being smart? > > > > > &

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
> > Is China pushing its people into being smarter? Are they giving > incentives beyond the US-style capitalist reasons for being smart? > The incentive is that if you get smart enough, you may figure out a way to get out of China ;-) Thus, they let the top .01% out, so as to keep the rest of th

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
> Bear in mind that science has used very little imagination here to date. > Science only started studying consciousness ten years ago. It still hasn't > started studying "Thought" - the actual contents of consciousness: the > streams of thought inside people's heads. In both cases, the reason has

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike: > Making the general public smarter is not in the best interest of > government, who wants to keep us fat dumb and (relatively) happy > (read: distracted). > > If we're not making people smarter with currently available resources, > why would we invest in research to discover expensive new te

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike wrote: > Personally, my guess is that serious mindreading machines will be a reality > in the not too distant future - before AGI and seriously autonomous mobile > robots. No way. Tell that to the neuroscientists in your local university neuro lab, and they'll get a good laugh ;-) The

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Hi, From Bob Mottram on the AGI list: > However, I'm not expecting to see the widespread cyborgisation of > human society any time soon. As the article suggests the first > generation implants are all devices to fulfill some well defined > medical need, and will have to go through all the usual

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Benjamin Goertzel
Mike, My comment is that this is GREAT research and development, but, for the near and probably medium future is very likely to be about perception and action rather than cognition. I.e., we are sort of on the verge of understanding how to hook up new sensors to the brain, and hook the brain up t

Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Benjamin Goertzel
Mike, in case you're curious I wrote down my theory of emotions here http://www.goertzel.org/dynapsyc/2004/Emotions.htm (an early version of text that later became a chapter in The Hidden Pattern) Among the conclusions my theory of emotions leads to are, as stated there: * AI systems

Re: [agi] The same old explosions?

2007-12-11 Thread Benjamin Goertzel
"Self-organizing complexity" and "computational complexity" are quite separate technical uses of the word "complexity", though I do think there are subtle relationships. As an example of a relationship btw the two kinds of complexity, look at Crutchfield's work on using formal languages to model t
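
As an illustration of the dynamical-systems sense of complexity (not Crutchfield's actual formal-language construction, just a toy in the same spirit), the logistic map's symbolic dynamics can be crudely quantified by counting distinct symbol words in the periodic versus chaotic regimes:

```python
# Illustrative only: the logistic map is a standard toy example of
# dynamical complexity; this is not Crutchfield's actual construction,
# just a sketch of the kind of system his formal-language work analyzes.
def logistic_orbit(r, x0=0.4, n=2000, discard=500):
    """Iterate x -> r*x*(1-x) and return the post-transient orbit."""
    x = x0
    orbit = []
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= discard:
            orbit.append(x)
    return orbit

# Symbolic dynamics: binarize the orbit (x > 0.5 -> '1'), then count
# distinct length-4 words as a crude proxy for structural complexity.
def word_count(r, length=4):
    symbols = "".join("1" if x > 0.5 else "0" for x in logistic_orbit(r))
    words = {symbols[i:i + length] for i in range(len(symbols) - length)}
    return len(words)

print(word_count(3.2))   # periodic regime: very few distinct words
print(word_count(3.9))   # chaotic regime: many more distinct words
```

Counting distinct words is of course a far blunter instrument than an epsilon-machine reconstruction, but it shows the basic move of translating a dynamical system into a formal-language object.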

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
> So I reckon roboticists ARE actually focussed on an AGI challenge - whereas, > as I've pointed out before, there is nothing comparable in pure AGI. To my knowledge, none of the work on the ICRA Robotic Challenge is at this point taking a strong AGI approach >And > with all those millions of inv

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
> Yes I expect to see more narrow AI robotics in future, but as time > goes on there will be pressures to consolidate multiple abilities into > a single machine. Ergonomics dictates that people will only accept a > limited number of mobile robots in their homes or work spaces. > Physical space is

Re: [agi] AGI communities and support

2007-12-08 Thread Benjamin Goertzel
> Thanks Bob. But I meant, it looks more likely that robots will achieve - and > have already taken the first concrete steps to achieve - the goals of AGI - > the capacity to learn a range of abilities and activities. Can you point to any single robot that has demonstrated the capability to learn

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
> Clearly the brain works VASTLY differently and more efficiently than current > computers - are you seriously disputing that? It is very clear that in many respects the brain is much less efficient than current digital computers and software. It is more energy-efficient by and large, as Read Mon

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 10:21 AM, Bob Mottram <[EMAIL PROTECTED]> wrote: > If I had 100 of the highest specification PCs on my desktop today (and > it would be a big desk!) linked via a high speed network this wouldn't > help me all that much. Provided that I had the right knowledge I > think I could produ

Re: [agi] None of you seem to be able ...

2007-12-07 Thread Benjamin Goertzel
On Dec 6, 2007 8:06 PM, Ed Porter <[EMAIL PROTECTED]> wrote: > Ben, > > To the extent it is not proprietary, could you please list some of the types > of parameters that have to be tuned, and the types, if any, of > Loosemore-type complexity problems you envision in Novamente or have > experienced

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 7:09 AM, Mike Tintner <[EMAIL PROTECTED]> wrote: > > > > Matt: AGI research needs > >>> special hardware with massive computational capabilities. > > > > Could you give an example or two of the kind of problems that your AGI > system(s) will need such massive capabilities to solve? I

Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
> Show me ONE other example of the reverse engineering of a system in > which the low level mechanisms show as many complexity-generating > characteristics as are found in the case of intelligent systems, and I > will gladly learn from the experience of the team that did the job. > > I do not belie

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
> Conclusion: there is a danger that the complexity that even Ben agrees > must be present in AGI systems will have a significant impact on our > efforts to build them. But the only response to this danger at the > moment is the bare statement made by people like Ben that "I do not > think that t

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
There is no doubt that complexity, in the sense typically used in dynamical-systems-theory, presents a major issue for AGI systems. Any AGI system with real potential is bound to have a lot of parameters with complex interdependencies between them, and tuning these parameters is going to be a majo

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
On Dec 5, 2007 6:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > > Ben: To publish your ideas > > in academic journals, you need to ground them in the existing research > > literature, > > not in your own personal introspective observations. > > Big mistake. Think what would have happened if Freu

Re: [agi] None of you seem to be able ...

2007-12-05 Thread Benjamin Goertzel
Tintner wrote: > Your paper represents almost a literal application of the idea that > creativity is ingenious/lateral. Hey it's no trick to be just > ingenious/lateral or fantastic. Ah ... before creativity was what was lacking. But now you're shifting arguments and it's something else that is l

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
OK, understood... On Dec 4, 2007 9:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > > Benjamin Goertzel wrote: > >> Thus: building a NL parser, no matter how good it is, is of no use > >> whatsoever unless it can be shown to emerge from (or at least fit with)

[agi] Re: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Benjamin Goertzel
> What makes anyone think OpenCog will be different? Is it more > understandable? Will there be long-term aficionados who write > books on how to build systems in OpenCog? Will the developers > have experience, or just adolescent enthusiasm? I'm watching > the experiment to find out. Well, Ope

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard, Well, I'm really sorry to have offended you so much, but you seem to be a mighty easy guy to offend! I know I can be pretty offensive at times; but this time, I wasn't even trying ;-) > The argument I presented was not a "conjectural assertion", it made the > following coherent case: >

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
> Thus: building a NL parser, no matter how good it is, is of no use > whatsoever unless it can be shown to emerge from (or at least fit with) > a learning mechanism that allows the system itself to generate its own > understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A > MECHAN

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Benjamin Goertzel wrote: > [snip] > > And neither you nor anyone else has ever made a cogent argument that > > emulating the brain is the ONLY route to creating powerful AGI. The closest > > thing

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
> More generally, I don't perceive any readiness to recognize that the brain > has the answers to all the many unsolved problems of AGI - Obviously the brain contains answers to many of the unsolved problems of AGI (not all -- e.g. not the problem of how to create a stable goal system under recu

Re: FW: [agi] AGI DARPA-style

2007-11-30 Thread Benjamin Goertzel
Yeah, I've been following that for a while. There are some very smart people involved, and it's quite possible they'll make a useful software tool, but I don't feel they have a really viable unified cognitive architecture. It's the sort of architecture where different components are written in di

Re: [agi] Funding AGI research

2007-11-30 Thread Benjamin Goertzel
On Nov 30, 2007 7:57 AM, Mike Tintner <[EMAIL PROTECTED]> wrote: > Ben: It seems to take tots a damn lot of trials to learn basic skills > > Sure. My point is partly that human learning must be pretty quantifiable in > terms of number of times a given action is practised, Definitely NOT ... it's v

Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 30, 2007 12:03 AM, Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Benjamin, > > >> That proves my point [that AGI project can be successfully split > >> into smaller narrow AI subprojects], right? > > > Yes, but it's a largely irrelevant point. Because building a narrow-AI > > system in an AGI

Re: Re[12]: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
> So far only researchers/developers who picked narrow-AI approach > accomplished something useful for AGI. > E.g.: Google, computer languages, network protocols, databases. These are tools that are useful for AGI R&D but so are computer monitors, silicon chips, and desk chairs. Being a useful to

Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
On Nov 29, 2007 11:35 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > Presumably, human learning isn't that slow though - if you simply count the > number of attempts made before any given movement is mastered at a basic > level (.e.g crawling/ walking/ grasping/ tennis forehand etc)? My guess > woul

Re: [agi] Funding AGI research

2007-11-29 Thread Benjamin Goertzel
> > [What related principles govern the Novamente figure's trial-and-error > learning of how to pick up a ball?] Pure trial-and-error learning is really slow though... we are now relying on a combination of -- reinforcement from a teacher -- imitation of others' behavior -- trial and error -- a
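
The combination mentioned in that snippet (teacher reinforcement, imitation, trial and error) can be caricatured in a few lines; all names, weights, and probabilities below are invented for illustration and have nothing to do with Novamente's actual mechanisms:

```python
import random

random.seed(0)

# Toy sketch of blending three learning signals -- teacher reinforcement,
# imitation of a demonstrator, and pure trial and error -- on a one-step
# "pick the right action" task. Everything here is invented for
# illustration; this is NOT Novamente's actual learning mechanism.
ACTIONS = ["reach", "grasp", "lift"]
TARGET = "grasp"                      # the action that actually succeeds

value = {a: 0.0 for a in ACTIONS}     # learned action values

def demonstrator():
    """Imitation source: usually demonstrates the right action."""
    return TARGET if random.random() < 0.8 else random.choice(ACTIONS)

for step in range(200):
    if random.random() < 0.3:         # sometimes imitate the demonstrator
        action = demonstrator()
    elif random.random() < 0.2:       # sometimes explore (trial and error)
        action = random.choice(ACTIONS)
    else:                             # otherwise exploit current values
        action = max(ACTIONS, key=value.get)
    reward = 1.0 if action == TARGET else -0.1   # teacher reinforcement
    value[action] += 0.1 * (reward - value[action])

print(max(ACTIONS, key=value.get))    # prints: grasp
```

The point of the sketch is only that each extra signal (imitation, teacher feedback) sharply cuts the number of trials pure exploration would need.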

Re: Re[10]: [agi] Funding AGI research

2007-11-28 Thread Benjamin Goertzel
> ED>I must admit, I have never heard cortical column described as > containing 10^5 neurons. The figure I have commonly seen is 10^2 neurons > for a cortical column, although my understanding is that the actual number > could be either less or more. I guess the 10^5 figure would relate to >

Re: Re[8]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
> My claim is that it's possible [and necessary] to split the massive amount > of work that has to be done for AGI into smaller narrow AI chunks in > such a way that every narrow AI chunk has its own business meaning > and can pay for itself. You have not addressed my claim, which has massive evidenc

Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
> > Nearly any AGI component can be used within a narrow AI, > > That proves my point [that AGI project can be successfully split > into smaller narrow AI subprojects], right? Yes, but it's a largely irrelevant point. Because building a narrow-AI system in an AGI-compatible way is HARDER than bui

Re: Re[4]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
> > Still, this is the most > > resource-intensive part of > > the Novamente system (the part that's most likely to require > > supercomputers to > > achieve human-level AI). > > > Why is it the most resource-intensive? Is it the evolutionary computational > cost? Is this where MOSES is used? Corr

Re: Re[4]: [agi] Funding AGI research

2007-11-26 Thread Benjamin Goertzel
Well, there is a discipline of computer science devoted to automatic programming, i.e. synthesizing software based on specifications of desired functionality. State of the art is: -- Just barely, researchers have recently gotten automated program learning to synthesize an n log n sorting algorithm
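
To make the program-synthesis setting concrete: one standard setup scores candidate sort routines on correctness plus comparison count, so that an n log n candidate out-scores a quadratic one. The candidates below are hand-written stand-ins for programs a synthesizer (MOSES-style program evolution, say) would generate, and the scoring scheme is invented for illustration:

```python
import random

random.seed(1)

def count_comparisons(sort_fn, data):
    """Run sort_fn on wrapped data, counting every '<' comparison."""
    counter = {"n": 0}
    class Key:
        def __init__(self, v): self.v = v
        def __lt__(self, other):
            counter["n"] += 1
            return self.v < other.v
    result = [k.v for k in sort_fn([Key(v) for v in data])]
    return result, counter["n"]

def fitness(sort_fn, trials=5, size=64):
    """Reward correct sorts; among correct ones, fewer comparisons wins."""
    score = 0.0
    for _ in range(trials):
        data = [random.random() for _ in range(size)]
        out, comps = count_comparisons(sort_fn, data)
        if out == sorted(data):
            score += 1.0 / (1.0 + comps)
    return score

def bubble(xs):                      # O(n^2) candidate
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j + 1] < xs[j]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):                  # O(n log n) candidate
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    a, b = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while a and b:
        out.append(a.pop(0) if a[0] < b[0] else b.pop(0))
    return out + a + b

# The n log n candidate should score higher than the quadratic one.
print(fitness(merge_sort) > fitness(bubble))   # prints: True
```

The hard part of real synthesis is of course generating the candidates, not scoring them; this only shows why "n log n" can be made visible to a fitness function at all.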

Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Cassimatis's system is an interesting research system ... it doesn't yet have lotsa demonstrated practical functionality, if that's what you mean by "work"... He wants to take a bunch of disparately-functioning agents, and hook them together into a common framework using a common logical interling

Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Linas: > I find it telling that no one is saying "I've got the code, I just need to > scale it up > 1000-fold to make it impressive ..." Yes, that's an accurate comment. Novamente will hopefully reach that point in a few years. For now, we will need (and use) a lotta machines for commercial prod

Re: Re[8]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
> > > > Could you describe a piece of technology that simultaneously: > - Is required for AGI. > - Cannot be required part of any useful narrow AI. > The key to your statement is the word "required" Nearly any AGI component can be used within a narrow AI, but, the problem is, it's usually a bunch

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
On Nov 20, 2007 11:22 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Jiri, > > > AGI is IMO possible now but requires very different approach than narrow > AI. > > AGI requires properly tuning some existing narrow AI technologies, > combining them together and maybe adding a couple more. > > That's ma

Re: Re[6]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
> > > > No. > My point is that massive funding without having a prototype prior to > funding is worthless most of the time. > If a prototype cannot be created at reasonably low cost, then a fully working > product most likely cannot be created even with massive funding. > Well, this seems to dissol

Re: Re[4]: [agi] Funding AGI research

2007-11-20 Thread Benjamin Goertzel
> > > > Are you asking for success stories regarding research funding in any > domain, > > or regarding research funding in AGI? > > Any domain, please. > OK, so your suggestion is that research funding, in itself, is worthless in any domain? I don't really have time to pursue this kind of silly

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 11:24 PM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > There are a lot of worthwhile points in your post, and a number of things > I don't fully agree with, but I don't have time to argue them all right > now... > > Instead I'll just pic

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
J. Andrew Rogers <[EMAIL PROTECTED]> wrote: > > On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote: > > > > Navigating complex social and business situations requires a quite > > different set of capabilities than creating AGI. Potentially they > > c

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
James, I really don't think that these statements > > On the other hand, if you have the mad computer science skills > required to produce AGI, maybe your time would be better spent > solving one of the myriad of other important problems in computer > science so that you can have both the quick m

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Hi, > > The majority of VC's do, as you say, want a technology that is sewn up, > from the point of view of technical feasibility. But this is not always > true. There is always a gray area at the fringe of feasibility where > the last set of questions has not been *fully* answered before money

Re: [agi] Multi-agent learning

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 6:45 PM, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote: > Ben, > > Have you already considered what form of "multi-agent epistemic logic" > (or whatever extension to PLN) Novamente will use to merge knowledge > from different avatars? Well, standard PLN handles this in principle via

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Hmmm... IMO, there was really nothing conceptually new in Hawkins' HTM, he just did a really good job of expressing some ideas that were already there in the literature. Which is an important and valuable thing, but doesn't really make his HTM model a good example of profound creativity. To me a

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
> I have not heard a *creative* new idea > here that directly addresses and shows the power to solve even in part the > problem of creating general intelligence. To be quite frank, the most creative and original ideas inside the Novamente design are quite technical. I suspect you don't have th

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
> Proactively minimizing risk in as > many areas as possible make a venture much more salable, but most AI > ventures tend to be very apparently risky at many levels that have no > relation to the AI research per se and the inability of these > ventures to minimize all that unnecessary risk is a g

Re: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
Novamente as a whole is definitely a research project, albeit one with a very well fleshed out research plan. I have a strong hypothesis about how the project will come out, and arguments in favor of this hypothesis; but I don't have the level of confidence I'd have in, say, the stability of a bri

Re: Re[2]: [agi] Funding AGI research

2007-11-18 Thread Benjamin Goertzel
On Nov 18, 2007 12:50 AM, Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Benjamin, > > Do you have any success stories of such research funding in the last > 20 years? > Something that resulted in useful accomplishments. Are you asking for success stories regarding research funding in any domain, o

Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
Richard, Though we have theoretical disagreements, I largely agree with your analysis of the value of prototypes for AGI. Experience has shown repeatedly that prototypes displaying "apparently intelligent behavior" in various domains are very frequently dead-ends, because they embody various sort

Re: [agi] Funding AGI research

2007-11-17 Thread Benjamin Goertzel
On Nov 17, 2007 1:08 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Jiri, > > Give $1 for the research to who? > Research team can easily eat millions $$$ without producing any useful > results. > If you just randomly pick researchers for investment, your chances to > get any useful outcome from

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
cellular medium in ways no one understands really.. etc. etc. etc. ;-) ben On Nov 15, 2007 10:07 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote: > > On Nov 15, 2007 8:57 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
On Nov 15, 2007 8:57 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote: > > non-brain-based AGI. After all it's not like we know how real > > chemistry gives rise to real biology yet --- the dynamics underlying >

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
I think that linguistic interaction with human beings is going to be what lifts Second Life proto-AGI's beyond the glass ceiling... Our first SL agents won't have language generation or language learning capability, but I think that introducing it is really essential, esp. given the limitations of

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Benjamin Goertzel
About PolyWorld and Alife in general... I remember playing with PolyWorld 10 years ago or so. And, I had a grad student at Uni. of Western Australia build a similar system, back in my Perth days... (it was called SEE, for Simple Evolving Ecology. We never published anything on it, as I left A

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > RL:In order to completely ground the system, you need to let the system > build its own symbols Correct. Novamente is designed to be able to build its own symbols. What is built in are mechanisms for building symbols, and for

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard, > > So here I am, looking at this situation, and I see: > > AGI system interpretation (implicit in system use of it) > Human programmer interpretation > > and I ask myself which one of these is the real interpretation? > > It matters, because they do not necessarily match up.

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi, > > > No: the real concept of "lack of grounding" is nothing so simple as the > way you are using the word "grounding". > > Lack of grounding makes an AGI fall flat on its face and not work. > > I can't summarize the grounding literature in one post. (Though, heck, > I have actually tried t

Re: [agi] advice-level dev collaboration

2007-11-13 Thread Benjamin Goertzel
Correct: technical discussions of current AGI projects are apropos for this list -- Ben G On Nov 13, 2007 6:12 PM, Benjamin Johnston <[EMAIL PROTECTED]> wrote: > > Hi Jiri, > > The "[agi]" list is billed as being for "more technical discussions > about current AGI projects". I joined this partic

Re: [agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Bob, The two biologists I know who are deep into mind uploading (Randal Koene and Todd Huffman) both agree with your basic assessment, I believe... ben g On Nov 13, 2007 4:37 PM, Bob Mottram <[EMAIL PROTECTED]> wrote: > > > It seems quite possible that what we need is a detailed map of every >
