[agi] The same old explosions?

2007-12-11 Thread Mike Tintner
Essentially, Richard & others are replaying the same old problems of computational explosions - see "computational complexity" in this history of cog. sci. review - no? "Mechanical Mind," Gilbert Harman. Mind as Machine: A History of Cognitive Science, Margaret A. Boden. Two volumes, xlviii + 1631

Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore
Mike Tintner wrote: Essentially, Richard & others are replaying the same old problems of computational explosions - see "computational complexity" in this history of cog. sci. review - no? No: this is a misunderstanding of "complexity" unfortunately (cf the footnote on p1 of my AGIRI paper):

RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
Mike: MIKE TINTNER#> Science's autistic, emotionally deprived, insanely rational nature in front of the supernatural (if it exists), and indeed the whole world, needs analysing just as much as the overemotional, underrational fantasies of the religious about the supernatural. ED PORTER#>

RE: [agi] news bit: "DRAM Appliance" 10TB extended memory in one rack

2007-12-11 Thread Ed Porter
Dave, Such large memories are cool, but of course AGI requires a lot of processing power to go with the memory. The price cited at the bottom of the article below for 120GB wasn't that much less than the price for which one could buy 128GB with 8 quad-core Opterons in one of the links you sent me

Re: [agi] The same old explosions?

2007-12-11 Thread Mike Tintner
Thanks. But one way and another, although there are different variations, cog sci and AI have been obsessed with computational explosions? Ultimately, it seems to me, these are all the problems of algorithms - of a rigid, rational approach and system - which inevitably get stuck in dealing with

Re: [agi] AGI and Deity

2007-12-11 Thread Mark Waser
Hey Ben, Any chance of instituting some sort of moderation on this list? - Original Message - From: "Ed Porter" <[EMAIL PROTECTED]> Subject: RE: [agi] AGI and Deity

Re: [agi] The same old explosions?

2007-12-11 Thread Benjamin Goertzel
"Self-organizing complexity" and "computational complexity" are quite separate technical uses of the word "complexity", though I do think there are subtle relationships. As an example of a relationship btw the two kinds of complexity, look at Crutchfield's work on using formal languages to model t

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
*I just want to jump in here and say I appreciate the content of this post as opposed to many of the posts of late which were just name calling and bickering... hope to see more content instead.* Richard Loosemore <[EMAIL PROTECTED]> wrote: Ed Porter wrote: > Jean-Paul, > > Although complexity

Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore
Mike, You are talking about two different occurrences of a computational explosion here, so we need to distinguish them. One is a computational explosion that occurs at design time: this is when a researcher gets an algorithm to do something on a "toy" problem, but then they figure out ho
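[A hedged illustration - my toy example, not Richard's - of the design-time explosion he describes: an exhaustive-search algorithm looks perfectly workable on a small problem, but the number of states it visits grows as 2^n, so the same design cannot simply be scaled up.]

    # Design-time computational explosion: brute-force search over all boolean
    # assignments works on a "toy" problem but visits 2^n states as n grows.

    from itertools import product

    def brute_force_satisfy(n_vars, constraint):
        """Try every boolean assignment; return the first satisfying one."""
        tried = 0
        for assignment in product([False, True], repeat=n_vars):
            tried += 1
            if constraint(assignment):
                return assignment, tried
        return None, tried

    # Toy constraint: all variables must be true.  That assignment happens to
    # be the last one enumerated, so the search visits all 2^n states.
    constraint = lambda a: all(a)
    for n in (4, 10, 20):
        _, tried = brute_force_satisfy(n, constraint)
        print(f"n={n}: examined {tried} of {2**n} assignments")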

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
>However, part of the key to intelligence is **self-tuning**. >I believe that if an AGI system is built the right way, it can effectively >tune its own parameters, hence adaptively managing its own complexity. I agree with Ben here, isn't one of the core concepts of AGI the ability to modify its

Re: [agi] The same old explosions?

2007-12-11 Thread Richard Loosemore
Benjamin Goertzel wrote: "Self-organizing complexity" and "computational complexity" are quite separate technical uses of the word "complexity", though I do think there are subtle relationships. As an example of a relationship btw the two kinds of complexity, look at Crutchfield's work on using

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread James Ratcliff
Here's a basic abstract I did last year I think: http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf Would like to work with others on a full-fledged Representation system that could use these kinds of techniques. I hacked this together by myself, so I know a real team could

RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
From: Joshua Cowan [mailto:[EMAIL PROTECTED] > > It's interesting that the "field" of memetics is moribund (ex. the > Journal > of Memetics hasn't published in two years) but the meme of memetics is > alive > and well. I wonder, do any of the AGI researchers find the concept of > Memes > useful in

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore
James Ratcliff wrote: >However, part of the key to intelligence is **self-tuning**. >I believe that if an AGI system is built the right way, it can effectively >tune its own parameters, hence adaptively managing its own complexity. I agree with Ben here, isnt one of the core concepts of AGI

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-11 Thread James Ratcliff
Irrationality is used to describe thinking and actions which are, or appear to be, less useful or logical than the other alternatives, and rational would be the opposite of that. This line of thinking is more concerned with the behaviour of the entities, which requires Goal orienting and othe

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Mike Tintner
Richard:> Suppose, further, that the only AGI systems that really do work are ones in which the symbols never use "truth values" but use other stuff (for which there is no interpretation) and that the thing we call a "truth value" is actually the result of an operator that can be applied to a

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread Ed Porter
James, I read your paper. Your project seems right on the mark. It provides a domain-limited example of the general type of learning algorithm that will probably be the central learning algorithm of AGI, i.e., finding patterns, and hierarchies of patterns in the AGI's experience in a largely
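[A minimal sketch - my own illustration, not taken from James's abstract - of one simple way "hierarchies of patterns" can be learned from experience: repeatedly find the most frequent adjacent pair of symbols and replace it with a new symbol, so that later patterns are built out of earlier ones.]

    # Greedy hierarchical pattern learning (byte-pair-encoding-style chunking):
    # each new symbol names a pattern made of previously learned symbols.

    from collections import Counter

    def learn_hierarchy(tokens, n_chunks=5):
        """Greedily merge the most frequent adjacent pair into a new symbol."""
        grammar = {}                                   # new symbol -> (left, right)
        for i in range(n_chunks):
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < 2:
                break                                  # nothing repeats any more
            new_sym = f"<{i}>"
            grammar[new_sym] = (a, b)
            merged, j = [], 0
            while j < len(tokens):
                if j + 1 < len(tokens) and (tokens[j], tokens[j + 1]) == (a, b):
                    merged.append(new_sym)
                    j += 2
                else:
                    merged.append(tokens[j])
                    j += 1
            tokens = merged
        return grammar, tokens

    grammar, compressed = learn_hierarchy(list("the cat sat on the mat"))
    print(grammar)       # later symbols are built from earlier ones: a hierarchy
    print(compressed)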

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
James: Either of these systems described will have a Complexity Problem; any AGI will, because it is a very complex system. System 1 I don't believe is strictly practical, as few Truth values can be stored locally directly to the frame. More realistic is that there may be a temporary value such as:

RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
> From: Charles D Hixson [mailto:[EMAIL PROTECTED] > The evidence in favor of an external god of any traditional form is, > frankly, a bit worse than unimpressive. It's lots worse. This doesn't > mean that gods don't exist, merely that they (probably) don't exist in > the hardware of the universe.

RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John, You implied there "might be a very extremely efficient way of conquering certain cognitive engineering issues" by using religion in AGIs. Obviously any powerful AGI that deals with a complex and uncertain world like ours would have to have belief systems, but it is not clear to me there wou

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-11 Thread Matt Mahoney
--- Jean-Paul Van Belle <[EMAIL PROTECTED]> wrote: > Hi Matt, Wonderful idea, now it will even show the typical human trait of > lying...when i ask it "do you still love me?" most answers in its database > will have Yes as an answer but when i ask it 'what's my name?' it'll call > me John? My pr

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore
Mike Tintner wrote: Richard:> Suppose, further, that the only AGI systems that really do work are ones in which the symbols never use "truth values" but use other stuff (for which there is no interpretation) and that the thing we call a "truth value" is actually the result of an operator that

Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Why do you need image recognition in your AGI prototype? > You can feed it with text. Then AGI would simply parse text [and > optionally - Google it]. > > No need for massive computational capabilities. Not when you can just use Google's 10^6 CPU cl

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore
Well, this wasn't quite what I was pointing to: there will always be a need for parameter tuning. That goes without saying. The point was that if an AGI developer were to commit to system 1, they would never get to the (hypothetical) system 2 by anything as trivial as parameter tuni

An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > Is an AGI really going to feel pain or is it just going to be some numbers? > I guess that doesn't have a simple answer. The pain has to be engineered > well for it to REALLY understand it. An agent capable of reinforcement learning has an upper bou

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore
Matt Mahoney wrote: --- "John G. Rose" <[EMAIL PROTECTED]> wrote: Is an AGI really going to feel pain or is it just going to be some numbers? I guess that doesn't have a simple answer. The pain has to be engineered well for it to REALLY understand it. An agent capable of reinforcement learnin

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
What I don't see, then, is anywhere where System 2 (a neural net?) is better than System 1, or where it avoids the complexity issues. I don't have a goal of system 2 from system 1 yet. James Richard Loosemore <[EMAIL PROTECTED]> wrote: Well, this wasn't quite what I was pointing to: there wil

RE: [agi] AGI and Deity

2007-12-11 Thread Matt Mahoney
What do you call the computer that simulates what you perceive to be the universe? -- Matt Mahoney, [EMAIL PROTECTED]

RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
Ed, It's a very complicated subject and requires a certain theoretical mental background and a somewhat unbiased mindset. Though a biased mindset, for example a person who is religious, could use the theory to propel their religion into post-humanity - maybe a good idea to help preserve humanity -

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > I have to say that this is only one interpretation of what it would mean > for an AGI to experience something, and I for one believe it has no > validity at all. It is purely a numeric calculation that makes no > reference to what "pain" (or an

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Monday 10 December 2007, Matt Mahoney wrote: > > The worst case scenario is that AI wipes out all life on earth, and > > then itself, although I believe at least the AI is likely to survive. > > http://lifeboat.com/ex/ai.shield SIAI has not yet so

RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John, For a reply of its short length, given the subject, it was quite helpful in letting me know the type of things you were talking about. Thank you. Ed Porter

Re: [agi] Worst case scenario

2007-12-11 Thread Bob Mottram
On 11/12/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > http://lifeboat.com/ex/ai.shield That's quite amusing. Safeguarding humanity against dancing robots. I don't believe that technology is something you can run away from, in a space lifeboat or any other sort of refuge. You just have to t

[agi] AGI-08 - Call for Participation

2007-12-11 Thread Bruce Klein
The First Conference on Artificial General Intelligence (AGI-08) March 1-3, 2008 at Memphis, Tennessee, USA Early Registration Deadline: January 31, 2008 Conference Website: http://www.agi-08.org Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI --- t

Re: [agi] Worst case scenario

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Matt Mahoney wrote: > --- Bryan Bishop <[EMAIL PROTECTED]> wrote: > > Re: how much computing power is needed for ai. My worst-case > > scenario accounts for nearly any finite computing power, via the > > production of semiconductant silicon wafer tech. > > A human brain

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-11 Thread Vladimir Nesov
On Dec 11, 2007 7:26 PM, James Ratcliff <[EMAIL PROTECTED]> wrote: > Here's a basic abstract I did last year I think: > > http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf > > Would like to work with others on a full fledged Reprensentation system that > could use these kind of te

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: I have to say that this is only one interpretation of what it would mean for an AGI to experience something, and I for one believe it has no validity at all. It is purely a numeric calculation that makes no reference to what

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore
James Ratcliff wrote: What I dont see then, is anywhere where System 2 ( a neural net?) is better than system 1, or where it avoids the complexity issues. I was just giving an example of the degree of flexibility required - the exact details of this example are not important. My point was th

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bob Mottram <[EMAIL PROTECTED]> wrote: > > SIAI has not yet solved the friendliness problem. > > I've always had problems with the concept of "friendliness" spoken > about by folks from SIAI. It seems like a very ill-defined concept. > What does "friendly to humanity" really mean? It seems t

Re: [agi] AGI and Deity

2007-12-11 Thread Charles D Hixson
John G. Rose wrote: From: Charles D Hixson [mailto:[EMAIL PROTECTED] The evidence in favor of an external god of any traditional form is, frankly, a bit worse than unimpressive. It's lots worse. This doesn't mean that gods don't exist, merely that they (probably) don't exist in the hardware of th

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Tuesday 11 December 2007, Matt Mahoney wrote: > > --- Bryan Bishop <[EMAIL PROTECTED]> wrote: > > > Re: how much computing power is needed for ai. My worst-case > > > scenario accounts for nearly any finite computing power, via the > > > production

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > >> I have to say that this is only one interpretation of what it would mean > >> for an AGI to experience something, and I for one believe it has no > >> validity at all.

[agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
http://blog.pmarca.com/2007/12/checking-in-on.html === If CyberLover works as described, it will qualify as one of the first computer programs ever written that is actually passing the Turing Test. ===

Re: [agi] CyberLover passing Turing Test

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Dennis Gorelik wrote: > If CyberLover works as described, it will qualify as one of the first > computer programs ever written that is actually passing the Turing > Test. I thought the Turing Test involved fooling/convincing judges, not clueless men hoping to get some

RE: [agi] AGI-08 - Call for Participation

2007-12-11 Thread Ed Porter
Bruce, The following is a good idea: "Different from conventional conferences, AGI-08 is planned to be intensively discussion oriented. All the research papers accepted for publication in the Proceedings (49 papers total) will be available in advance online, so that attendees may arrive prepared

Re[4]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Dennis Gorelik
Matt, >> You can feed it with text. Then AGI would simply parse text [and >> optionally - Google it]. >> >> No need for massive computational capabilities. > Not when you can just use Google's 10^6 CPU cluster and its database with 10^9 > human contributors. That's one of my points: our current

Re[2]: [agi] CyberLover passing Turing Test

2007-12-11 Thread Dennis Gorelik
Bryan, >> If CyberLover works as described, it will qualify as one of the first >> computer programs ever written that is actually passing the Turing >> Test. > I thought the Turing Test involved fooling/convincing judges, not > clueless men hoping to get some action? To my taste, testing with