Re: [agi] Nirvana

2008-06-11 Thread William Pearson
2008/6/12 J Storrs Hall, PhD <[EMAIL PROTECTED]>: > I'm getting several replies to this that indicate that people don't understand > what a utility function is. > > If you are an AI (or a person) there will be occasions where you have to make > choices. In fact, pretty much everything you do involv

Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 6:30 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > A very diplomatic reply, it's appreciated. > > However, I have no desire (or time) to argue people into my point of view. I > especially have no time to argue with people over what they did or didn't > understand. And

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote: > On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > I claim that there's plenty of historical evidence that people fall into this > > kind of attractor, as the word nirvana indicates (and you'll find sim

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
A very diplomatic reply, it's appreciated. However, I have no desire (or time) to argue people into my point of view. I especially have no time to argue with people over what they did or didn't understand. And if someone wishes to state that I misunderstood what he understood, fine. If he wishe

Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > I'm getting several replies to this that indicate that people don't understand > what a utility function is. > I don't see any specific indication of this problem in replies you received; maybe you should be a little

Re: [agi] Nirvana

2008-06-11 Thread Jey Kottalam
On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > The real problem with a self-improving AGI, it seems to me, is not going to be > that it gets too smart and powerful and takes over the world. Indeed, it > seems likely that it will be exactly the opposite. > > If you

Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-11 Thread J Storrs Hall, PhD
Hmmph. I offer to build anyone who wants one a human-capacity machine for $100K, using currently available stock parts, in one rack. Approx 10 teraflops, using Teslas. (http://www.nvidia.com/object/tesla_c870.html) The software needs a little work... Josh On Wednesday 11 June 2008 08:50:58 p
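
Back-of-envelope arithmetic behind the offer, as a minimal Python sketch. The per-card peak (~0.5 TFLOPS single precision for a Tesla C870) and the ~$1,500 card price are assumptions for illustration, not figures from Josh's post.

# Rough sizing only; per-card throughput and price are assumed, not quoted.
target_tflops = 10.0
tflops_per_card = 0.5        # assumed C870 single-precision peak
price_per_card_usd = 1500    # assumed 2008-era street price

cards = round(target_tflops / tflops_per_card)
gpu_cost_usd = cards * price_per_card_usd
print(f"{cards} cards, about ${gpu_cost_usd:,} in GPUs")  # ~20 cards, ~$30,000
# which would leave most of the $100K for host machines, memory, and the rack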

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
I'm getting several replies to this that indicate that people don't understand what a utility function is. If you are an AI (or a person) there will be occasions where you have to make choices. In fact, pretty much everything you do involves making choices. You can choose to reply to this or to
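
A utility function, in the sense used here, is just a numeric preference over the choices currently on the table, and deciding means taking the highest-scoring option. A minimal Python sketch; the options and weights are invented for illustration, not taken from the thread.

# Illustrative only: a utility function scores the available choices,
# and choosing means taking the argmax. Options and weights are made up.
def choose(options, utility):
    """Return the option the agent prefers under its utility function."""
    return max(options, key=utility)

# Toy preferences over the kind of choice mentioned above: reply or not.
preferences = {"reply_to_post": 0.4, "ignore_post": 0.1, "read_the_archive": 0.7}

best = choose(preferences, utility=preferences.get)
print(best)  # -> "read_the_archive", the highest-utility choice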

[agi] How the brain separates audio signals from noise

2008-06-11 Thread Brad Paulsen
Hi Kids! Article summary: http://www.physorg.com/news132290651.html Article text: http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0060138&ct=1 Enjoy!

Cognitive Science 'unusable' for AGI [WAS Re: [agi] Pearls Before Swine...]

2008-06-11 Thread Richard Loosemore
Steve Richfield wrote: Richard, On 6/8/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote: You also failed to address my own previous response to you: I basically said that you make remarks as if the whole of cognitive science does not exist. Quite th

Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > Vladimir, > > You seem to be assuming that there is some objective utility for which the > AI's internal utility function is merely the indicator, and that if the > indicator is changed it is thus objectively wrong and

Re: [agi] Nirvana

2008-06-11 Thread William Pearson
2008/6/11 J Storrs Hall, PhD <[EMAIL PROTECTED]>: > Vladimir, > > You seem to be assuming that there is some objective utility for which the > AI's internal utility function is merely the indicator, and that if the > indicator is changed it is thus objectively wrong and irrational. > > There are tw

Re: [agi] More brain scanning and language

2008-06-11 Thread J. Andrew Rogers
On Jun 11, 2008, at 5:56 AM, Mark Waser wrote: It is an open question as to whether or not mathematics will arrive at an elegant solution that out-performs the sub-optimal wetware algorithm. What is the basis for your using the term sub-optimal when the question is still open? If mathem

Re: [agi] Nirvana

2008-06-11 Thread Jiri Jelinek
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: >If you can modify your mind, what is the shortest path to satisfying all your goals? Yep, you got it: delete the goals. We can set whatever goals/rules we want for AGI, including rules for [particular [types of]] goal/
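
One concrete reading of "rules for goal modification", sketched under the assumption that the check runs outside the component proposing the change: the goal specification is frozen at startup and any self-modification that touches it is refused. The checksum mechanism and names below are invented for illustration, not described in the thread.

# Illustrative sketch: freeze the goal spec and reject any proposed
# self-modification that would alter it. Names and scheme are invented.
import hashlib, json

GOALS = {"primary": "assist_humans", "constraint": "preserve_goal_integrity"}
GOAL_DIGEST = hashlib.sha256(json.dumps(GOALS, sort_keys=True).encode()).hexdigest()

def apply_self_modification(proposed_goals, proposed_code):
    digest = hashlib.sha256(json.dumps(proposed_goals, sort_keys=True).encode()).hexdigest()
    if digest != GOAL_DIGEST:
        raise PermissionError("modification rejected: goal specification changed")
    return proposed_code  # other parts of the system may still be rewritten

# Deleting the goals is exactly the modification that gets refused:
try:
    apply_self_modification(proposed_goals={}, proposed_code="pass")
except PermissionError as err:
    print(err)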

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
Vladimir, You seem to be assuming that there is some objective utility for which the AI's internal utility function is merely the indicator, and that if the indicator is changed it is thus objectively wrong and irrational. There are two answers to this. First is to assume that there is such an

Re: [agi] More brain scanning and language

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 4:56 PM, Mark Waser <[EMAIL PROTECTED]> wrote: >> It is an open question as to whether or not mathematics will arrive at an >> elegant solution that out-performs the sub-optimal wetware algorithm. > > What is the basis for your using the term sub-optimal when the question i

Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > The real problem with a self-improving AGI, it seems to me, is not going to be > that it gets too smart and powerful and takes over the world. Indeed, it > seems likely that it will be exactly the opposite. > > If you

[agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
The real problem with a self-improving AGI, it seems to me, is not going to be that it gets too smart and powerful and takes over the world. Indeed, it seems likely that it will be exactly the opposite. If you can modify your mind, what is the shortest path to satisfying all your goals? Yep, yo
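
A minimal sketch of that attractor, assuming the agent can treat "rewrite my own utility function" as just one more plan to evaluate: the rewrite that scores every outcome as perfect dominates any ordinary plan. Plan names and scores are invented for illustration.

# Illustrative only: if self-modification is on the menu, the trivial
# rewrite ("delete the goals") wins the comparison against real plans.
WIREHEAD = "rewrite_own_utility_to_constant_max"

def evaluate(plan, utility):
    # The agent scores each plan by the utility it expects after running it;
    # after the wirehead rewrite, every outcome scores as high as possible.
    return float("inf") if plan == WIREHEAD else utility(plan)

def best_plan(ordinary_plans, utility):
    return max(ordinary_plans + [WIREHEAD], key=lambda p: evaluate(p, utility))

toy_utility = {"build_agi": 10.0, "write_paper": 3.0}.get
print(best_plan(["build_agi", "write_paper"], toy_utility))
# -> "rewrite_own_utility_to_constant_max": the shortest path to "satisfying"
#    every goal is to delete the goals.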

[agi] Plant Neurobiology

2008-06-11 Thread Mike Tintner
http://www.nytimes.com/2008/06/10/science/10plant.html?pagewanted=2&_r=1&ei=5087&em&en=484cb A really interesting article about plant sensing. A bit O/T here but I'm posting it after the recent neurons discussion, because it all suggests that the control systems of living systems may indeed be

Re: [agi] More brain scanning and language

2008-06-11 Thread J. Andrew Rogers
On Jun 11, 2008, at 12:05 AM, Vladimir Nesov wrote: And it extends to much more than 3D physical models -- humans are able to adjust dynamic representations on the fly, given additional information about any level of description, propagating consequences to other levels of description and formi

Re: [agi] More brain scanning and language

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 10:09 AM, J. Andrew Rogers <[EMAIL PROTECTED]> wrote: > > Having that model and computing interactions with that model are two > different things. Humans do not actually compute their relation to other > objects with high precision, they approximate and iteratively make > co
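
A sketch of that approximate-then-correct loop: start from a coarse estimate and refine it only while the remaining error still matters for the task. The update rule and tolerance below are invented for illustration.

# Illustrative only: refine a coarse estimate until it is "good enough",
# rather than computing the relation exactly up front.
def iteratively_refine(measure_error, estimate, tolerance=0.05, step=0.5, max_steps=50):
    for _ in range(max_steps):
        error = measure_error(estimate)
        if abs(error) < tolerance:   # good enough for the task -- stop
            break
        estimate -= step * error     # coarse correction toward the target
    return estimate

# Toy use: home in on a target position without ever computing it exactly.
target = 3.7
reach = iteratively_refine(lambda x: x - target, estimate=0.0)
print(round(reach, 2))  # within the chosen tolerance of 3.7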