Re[2]: [agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
John, > Example - When we create software applications we use compilers. When the > applications get more complex we have to improve the compilers (otherwise > AutoCad 2007 could be built with QBasic). For AGI do we need to improve the > compilers to the point where they actually write the source

Re[2]: [agi] Lets count neurons

2007-11-30 Thread Dennis Gorelik
Matt, > Using pointers saves memory but sacrifices speed. Random memory access is > slow due to cache misses. By using a matrix, you can perform vector > operations very fast in parallel using SSE2 instructions on modern processors, > or a GPU. I doubt it. http://en.wikipedia.org/wiki/SSE2 - do
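To make the scattered-access vs. contiguous-matrix point concrete, here is a minimal sketch (my own illustration, not from the thread): the same dot product computed once through a random gather and once over contiguous memory, with NumPy assumed as a stand-in for SSE2-style vector code.

```python
# Minimal sketch (my own illustration): the same dot product computed once via a
# random gather (pointer-like scattered access) and once over contiguous memory.
# NumPy is assumed here as a stand-in for SSE2-style SIMD vector code.
import time
import numpy as np

N = 10_000_000
weights = np.random.rand(N).astype(np.float32)
activations = np.random.rand(N).astype(np.float32)
order = np.random.permutation(N)  # random indirection, like chasing pointers

t0 = time.time()
scattered = np.dot(weights[order], activations[order])  # gather first: cache-unfriendly
t1 = time.time()
contiguous = np.dot(weights, activations)               # streams contiguous memory: SIMD-friendly
t2 = time.time()

# Both compute the same sum mathematically; only the memory-access pattern differs.
print(f"gather: {t1 - t0:.3f}s   contiguous: {t2 - t1:.3f}s")
```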

Re[2]: [agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
Ed, 1) Human-level AGI with access to the current knowledge base cannot build AGI (humans can't). 2) When AGI is developed, humans will be able to build AGI (by copying successful AGI models). The same goes for human-level AGI -- it will be able to copy a successful AGI model. But that's not exactly self-

RE: Cortical Columns [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Ed Porter
>Vlad: Could you clarify what you mean by additional information here? Ed: Of a very general class that includes what you described in your prior quote: "In my current model there are context-sensitive links between nodes, AND and NOT combinators. Whenever one node is active ('link origin'
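Since the quoted model description is cut off, the following is only a guessed sketch of what node links with AND and NOT combinators might look like; none of the names or behavior here come from Nesov's actual system.

```python
# Guessed illustration only -- not Nesov's actual model. An AND combinator fires its
# target when all listed inputs are active; a NOT clause vetoes it when its input is active.
from typing import Dict, List, Set

def propagate(combinators: Dict[str, Dict[str, List[str]]], active: Set[str]) -> Set[str]:
    """One propagation step: a target node becomes active if its combinator is satisfied."""
    result = set(active)
    for target, spec in combinators.items():
        ands = spec.get("and", [])
        nots = spec.get("not", [])
        if ands and all(a in active for a in ands) and not any(n in active for n in nots):
            result.add(target)
    return result

combinators = {
    "C": {"and": ["A", "B"]},           # C needs both A and B
    "D": {"and": ["A"], "not": ["B"]},  # D needs A, but is blocked by B
}
print(propagate(combinators, {"A", "B"}))  # -> {'A', 'B', 'C'}
```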

Re: [agi] Funding AGI research

2007-11-30 Thread Mike Tintner
Ben, A further thought occurs to me about how learning derives from practice - & I wonder whether you've thought about this. Basically, in humans and animals, the kinds of learning we've been talking about - learning to crawl, manipulate, walk, talk, play tennis forehands or pianos - involve

Re: Cortical Columns [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Vladimir Nesov
On Dec 1, 2007 1:59 AM, Ed Porter <[EMAIL PROTECTED]> wrote: > Vladimir, > > I thought some additional information would be required to separate somewhat > similar, but different cases, as your system contains. Edward, Could you clarify what you mean by additional information here? Do you mean th

RE: Cortical Columns [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Ed Porter
Vladimir, I thought some additional information would be required to separate somewhat similar, but different, cases, as your system contains. I assume Valiant is not an idiot, so he has some half-way reasonable explanation for his method of Hebbian learning without full connection, but in the AGI

RE: [agi] Self-building AGI

2007-11-30 Thread John G. Rose
> From: Dennis Gorelik [mailto:[EMAIL PROTECTED] > John, > > >> Note, that compiler doesn't build application. > >> Programmer does (using compiler as a tool). > > > Very true. So then, is the programmer + compiler more complex than the > AGI > > ever will be? > > No. > I don't even see how it r

[agi] Critical modules for AGI

2007-11-30 Thread Dennis Gorelik
Bob, Yes, losing useful modules degrades intelligence, but the system can still be intelligent without most of those modules. A good example: blind and deaf people. Besides, such modules can be replaced by external tools. I'd say that the critical modules for AGI are: - Super Goals (permanent). - Sub Goal
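As a purely illustrative sketch of the Super Goal / Sub Goal split described above (the class and field names are my own assumptions, not Dennis's design):

```python
# Illustrative sketch only -- class and field names are my assumptions, not Dennis's design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    description: str
    permanent: bool = False            # Super Goals are permanent; Sub Goals are not
    subgoals: List["Goal"] = field(default_factory=list)

    def spawn_subgoal(self, description: str) -> "Goal":
        """Derive a transient Sub Goal that serves this goal."""
        sub = Goal(description, permanent=False)
        self.subgoals.append(sub)
        return sub

# Usage: one permanent Super Goal with transient Sub Goals hanging off it.
super_goal = Goal("keep the operator satisfied", permanent=True)
super_goal.spawn_subgoal("answer the current question")
print(super_goal)
```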

Re[14]: [agi] Funding AGI research

2007-11-30 Thread Dennis Gorelik
Benjamin, > Obviously, most researchers who have developed useful narrow-AI > components have not gotten rich from it. My example is the Google founders, who developed a narrow-AI component (Google). What is your example of useful narrow-AI component developers who have not gotten rich from it? > Th

[agi] Self-building AGI

2007-11-30 Thread Dennis Gorelik
John, >> Note, that compiler doesn't build application. >> Programmer does (using compiler as a tool). > Very true. So then, is the programmer + compiler more complex than the AGI > ever will be? No. I don't even see how it relates to what I wrote above ... > Or at some point does the AGI build

Re[14]: [agi] Funding AGI research

2007-11-30 Thread Dennis Gorelik
Benjamin, >> E.g.: Google, computer languages, network protocols, databases. > These are tools that are useful for AGI R&D but so are computer > monitors, silicon chips, and desk chairs. 1) Yes, creating monitors contributed a lot to AGI too. 2) The technologies that I mentioned above are useful o

Re: [agi] Funding AGI research

2007-11-30 Thread Vladimir Nesov
Ben, What about adults? Things like language learning and the dynamics of memorizing. The feature that spaced repetition ( http://en.wikipedia.org/wiki/Spaced_repetition ) tries to hijack looks to me like a good candidate for memorization heuristics. On Nov 30, 2007 4:45 PM, Benjamin Goertzel <[EMAIL PROTECTED]
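A minimal sketch of the spaced-repetition idea mentioned above -- intervals stretch after each successful recall and reset after a lapse. The growth factor below is an assumption for illustration, not the actual SuperMemo/SM-2 schedule.

```python
# Simplified sketch of spaced repetition. The 2.5x growth factor is an assumption for
# illustration, not the actual SuperMemo/SM-2 schedule.
def next_interval(previous_days: float, recalled: bool, growth: float = 2.5) -> float:
    """Return the next review interval in days."""
    if not recalled:
        return 1.0                              # lapse: fall back to a short interval
    return max(1.0, previous_days * growth)     # success: stretch the interval

interval = 1.0
for review, recalled in enumerate([True, True, False, True], start=1):
    interval = next_interval(interval, recalled)
    print(f"after review {review}: next review in {interval:.1f} day(s)")
```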

Re: Cortical Columns [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Vladimir Nesov
On Nov 30, 2007 9:58 PM, Ed Porter <[EMAIL PROTECTED]> wrote: > Vladimir Nesov There are no well-articulated theories here. I guess that > columns are induction chips: they have potential all-to-all connectivity, so > they can learn the rule in form 'after this signal comes that signal' for > a

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread John G. Rose
Ed, That is probably a good rough estimate. There are more headers for the more frequently transmitted smaller messages, but a 16-byte header may be a bit large. Here is a speedtest link - http://www.speedtest.net/ My Comcast cable from Denver to NYC tests at 3537 kb/sec DL and 1588 kb/sec UL mu

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Ed Porter
John, Thanks. I guess that means an AGI-at-home system could be both uploading and receiving about 27 1K msgs/sec if it wasn't being used for anything else and the networks weren't backed up in its neck of the woods. Presumably the number for, say, 128-byte messages would be, say, roughly, 8 tim
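A worked version of the message-rate arithmetic in this exchange is sketched below. It assumes "kb/sec" means kilobits per second and that every message carries the 16-byte header discussed earlier; the exact rates depend on the link and on real protocol overhead, so treat the numbers as illustrative only.

```python
# Back-of-envelope sketch of the message-rate arithmetic in this exchange.
# Assumptions: "kb/sec" means kilobits per second and every message carries the
# 16-byte header mentioned above; real protocol overhead will differ.
def msgs_per_sec(link_kbit_per_sec: float, payload_bytes: int, header_bytes: int = 16) -> float:
    bytes_per_sec = link_kbit_per_sec * 1000 / 8
    return bytes_per_sec / (payload_bytes + header_bytes)

uplink_kbit = 1588  # the UL figure from the speedtest quoted earlier
for payload in (1024, 128):
    print(f"{payload}-byte payload: ~{msgs_per_sec(uplink_kbit, payload):.0f} msg/sec")

# Ignoring headers, 1024/128 = 8 -- the "roughly 8 times" factor above;
# with a 16-byte header the ratio is (1024 + 16) / (128 + 16), about 7.2.
```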

RE: Cortical Columns [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Ed Porter
Vladimir Nesov: There are no well-articulated theories here. I guess that columns are induction chips: they have potential all-to-all connectivity, so they can learn the rule in the form 'after this signal comes that signal' for any two signals in the column. Ed: how does the induction chip avoid cr
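To make "after this signal comes that signal" concrete, here is a minimal sketch of an all-to-all induction column that simply tallies ordered co-occurrences between its inputs. It is my own illustration of the idea being discussed, not a model of real cortical columns or of Valiant's scheme.

```python
# Illustrative sketch only: an all-to-all "induction chip" that tallies how often
# signal b follows signal a within a column. Not a claim about real cortical columns.
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

class InductionColumn:
    def __init__(self) -> None:
        # counts[(a, b)] = number of times b was observed immediately after a
        self.counts: Dict[Tuple[str, str], int] = defaultdict(int)

    def observe(self, signals: List[str]) -> None:
        """Update pairwise 'after a comes b' counts from a temporal sequence of signals."""
        for a, b in zip(signals, signals[1:]):
            self.counts[(a, b)] += 1

    def predict(self, a: str) -> Optional[str]:
        """Return the signal that has most often followed a, if any has been seen."""
        followers = {b: n for (x, b), n in self.counts.items() if x == a}
        return max(followers, key=followers.get) if followers else None

col = InductionColumn()
col.observe(["bell", "food", "bell", "food", "bell", "light"])
print(col.predict("bell"))  # -> 'food' (seen twice, versus 'light' once)
```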

Re: [agi] Where are the women?

2007-11-30 Thread James Ratcliff
Yeah, I couldn't resist. Thanks for the video though, I hadn't seen that one; it was well done. James Matt Mahoney <[EMAIL PROTECTED]> wrote: --- BillK wrote: > On Nov 30, 2007 2:37 PM, James Ratcliff wrote: > > More Women: > > > > Kokoro (image attached) > > > > > So that's what a woman is! I

RE: FW: [agi] AGI DARPA-style

2007-11-30 Thread Ed Porter
Ben, From reading the list of publications, it looks just like you said. Most of its individual papers relate to how to make computers gain a particular competency that we hope our systems would learn largely automatically. But some of the black boxes sound quite interesting. Like one of the p

Re: [agi] Where are the women?

2007-11-30 Thread Matt Mahoney
--- BillK <[EMAIL PROTECTED]> wrote: > On Nov 30, 2007 2:37 PM, James Ratcliff wrote: > > More Women: > > > > Kokoro (image attached) > > > > > So that's what a woman is! I wondered.. Wrong. http://www.youtube.com/watch?v=N7mZStNNN7g -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Lets count neurons

2007-11-30 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Matt, > > > > And some of the Blue Brain research suggests it is even worse. A mouse > > cortical column of 10^5 neurons is about 10% connected, > > What does 10% connected mean? > How many connections does the average mouse neuron have? > 1? A
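The question above can be answered with a quick worked calculation, under the assumption that "10% connected" means each neuron links to roughly 10% of the other neurons in the 10^5-neuron column:

```python
# Worked arithmetic for the question above, assuming "10% connected" means each
# neuron connects to roughly 10% of the other neurons in the column.
neurons = 10**5
connectivity = 0.10

connections_per_neuron = connectivity * (neurons - 1)   # ~10,000 links per neuron
total_connections = neurons * connections_per_neuron    # ~10^9 connections in one column

print(f"per neuron: ~{connections_per_neuron:,.0f}, column total: ~{total_connections:,.0f}")
```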

Re: FW: [agi] AGI DARPA-style

2007-11-30 Thread Benjamin Goertzel
Yeah, I've been following that for a while. There are some very smart people involved, and it's quite possible they'll make a useful software tool, but I don't feel they have a really viable unified cognitive architecture. It's the sort of architecture where different components are written in di

FW: [agi] AGI DARPA-style

2007-11-30 Thread Ed Porter
Also check out http://caloproject.sri.com/publications/ for a list of CALO-related publications. Ed Porter -Original Message- From: Ed Porter [mailto:[EMAIL PROTECTED] Sent: Friday, November 30, 2007 12:58 PM To: 'agi@v2.listbox.com' Subject: RE: [agi] AGI DARPA-style Check out AGI DARPA-s

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Mike Tintner
RL:However, I have previously written a good deal about the design of different types of motivation system, and my understanding of the likely situation is that by the time we had gotten the AGI working, its motivations would have been arranged in such a way that it would *want* to be extremely co

RE: [agi] AGI DARPA-style

2007-11-30 Thread Ed Porter
Check out AGI DARPA-style: "Software That Learns from Users -- A massive AI project called CALO could revolutionize machine learning" at http://www.technologyreview.com/Infotech/19782/?a=f Ed Porter

RE: [agi] Self-building AGI

2007-11-30 Thread Ed Porter
Computers are currently designed by human-level intelligences, so presumably they could be designed by human-level AGIs. (Which, if they were human-level in the tasks that are currently hard for computers, means they could be millions of times faster than humans for tasks at which computers already

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread John G. Rose
Hi Ed, If the peer is not running other apps utilizing the network, it could do the same. Typically a peer first needs to locate other peers. There may be servers involved, but these are just for the few bytes transmitted for public IP address discovery, as many (or most) peers reside hidden behind NA

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Ed Porter
John, Thanks. Can P2P transmission match the same roughly 27 1K msg/sec rate as the client-to-server upload you described? Ed Porter -Original Message- From: John G. Rose [mailto:[EMAIL PROTECTED] Sent: Thursday, November 29, 2007 11:40 PM To: agi@v2.listbox.com Subject: RE: Hacker in

Re: Re[10]: [agi] Funding AGI research

2007-11-30 Thread James Ratcliff
The overall architecture is what is needed, the glue to hold the modules together. Lots of talk has been going on about the Narrow AI pieces being used to build a complete AGI. We will not be able to use much of any of the pieces directly in the core AGI, unless as Ben says they are modeled for

Re: [agi] Funding AGI research

2007-11-30 Thread Richard Loosemore
Benjamin Goertzel wrote: On Nov 30, 2007 7:57 AM, Mike Tintner <[EMAIL PROTECTED]> wrote: Ben: It seems to take tots a damn lot of trials to learn basic skills Sure. My point is partly that human learning must be pretty quantifiable in terms of number of times a given action is practised, Def

Re: [agi] Funding AGI research

2007-11-30 Thread Richard Loosemore
Dennis Gorelik wrote: Richard, Not collecting the *content* of the AGI's knowledge, collecting data about the relationship between low-level mechanisms and high-level behavior (within a well-defined context). So your approach is to reverse-engineer the human brain? To reverse-engineer the co

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-11-30 Thread Richard Loosemore
Charles D Hixson wrote: Ed Porter wrote: Richard, Since hacking is a fairly big, organized crime supported, business in eastern Europe and Russia, since the potential rewards for it relative to most jobs in those countries can be huge, and since Russia has a tradition of excellence in math an

Re: [agi] Where are the women?

2007-11-30 Thread BillK
On Nov 30, 2007 2:37 PM, James Ratcliff wrote: > More Women: > > Kokoro (image attached) > So that's what a woman is! I wondered.. BillK

Re: [agi] Funding AGI research

2007-11-30 Thread Benjamin Goertzel
On Nov 30, 2007 7:57 AM, Mike Tintner <[EMAIL PROTECTED]> wrote: > Ben: It seems to take tots a damn lot of trials to learn basic skills > > Sure. My point is partly that human learning must be pretty quantifiable in > terms of number of times a given action is practised, Definitely NOT ... it's v

Re: [agi] Funding AGI research

2007-11-30 Thread Mike Tintner
Ben: It seems to take tots a damn lot of trials to learn basic skills Sure. My point is partly that human learning must be pretty quantifiable in terms of number of times a given action is practised, & I wonder whether anyone's counting. I know Anders Ericsson & other expert psychologists quan

Re: [agi] Lets count neurons

2007-11-30 Thread Bob Mottram
On 30/11/2007, Dennis Gorelik <[EMAIL PROTECTED]> wrote: > For example, a mouse has strong image and sound recognition abilities. > AGI doesn't require that. > A mouse has to manage its muscles at a very high pace. > AGI doesn't need that. I'm not convinced that it is yet possible to make categorical a

Re: [agi] Lets count neurons

2007-11-30 Thread Bob Mottram
If you want to see what cortical columns actually look like, see http://brainmaps.org In some of these images you can clearly see what appear to be neurons stacked on top of each other, perpendicular to the cortical plane. However, unlike the architecture of a computer, there are no clear divisions