Re: [agi] AGI and Deity

2007-12-10 Thread Stan Nilsen
sion, just want to say hi! Hope to have my website up by the end of this week. The thrust of the website is that STRONG AI might not be that strong. And, BTW, I have notes about a write-up on "Will a Strong AI pray?" I've enjoyed the education I'm getting here. Only been a few w

Re: [agi] AGI and Deity

2007-12-14 Thread Stan Nilsen
Greetings Jiri, A few minutes ago I uploaded my website - http://www.footnotestrongai.com The write-up on a praying AI is amongst the "Articles", and can be found under "Just for Fun". I'll look at the link you've suggested below. Stan Jiri Jelinek wrote: Stan, there are believers o

Re: [agi] AGI and Deity

2007-12-19 Thread Stan Nilsen
e strong. Ed Porter -Original Message----- From: Stan Nilsen [mailto:[EMAIL PROTECTED] Sent: Monday, December 10, 2007 5:49 PM To: agi@v2.listbox.com Subject: Re: [agi] AGI and Deity Lest a future AGI scan these communications in developing its attitude about God, for the record th

Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen
o be fair, I only had time to skim your web site. Perhaps I am missing something, but it seems your case against strong AGI does not address the obvious argument for the possibility of strong AGI I have made above. Ed Porter -Original Message- From: Stan Nilsen [mailto:[EMAIL PROTEC

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Stan Nilsen
t; or benefit of this? I can see benefit from allowing us our own thoughts as follows: The super intelligent gives us opportunity to produce "reward" where there was none. The net effect is to produce more benefit from the universe. Stan Matt Mahoney wrote: --- Stan Nilsen <[

Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen
d? Stan j.k. wrote: On 12/20/2007 09:18 AM, Stan Nilsen wrote: I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the "level" of intelligence, but rather what ca

Re: [agi] AGI and Deity

2007-12-21 Thread Stan Nilsen
the result of the intelligence functions. Isn't the essence of the intelligence functions how one categorizes and simplifies the issues? Guess we'll wait for the arrival of this new life form before we learn what we would "commit our lives to" if we were smarter. Stan j.k. wrot

Re: [agi] AGI and Deity

2007-12-26 Thread Stan Nilsen
Samantha Atkins wrote: On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote: Ed, I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the "level" of intelligence, but rat

Re: [agi] AGI and Deity

2007-12-29 Thread Stan Nilsen
ht offer "explanations" given the situation. Pretty painless, easy read. I find the values-based nature of our world highly relevant to the concept of an emerging "super brain" that will make super decisions. Stan Nilsen Samantha Atkins wrote: On Dec 26, 2007, at 7

[agi] Re: [agi] Comments on Pei Wang's "What Do You Mean by “AI”?"

2008-01-15 Thread Stan Nilsen
suggestion that while we can't agree on a single definition of intelligence, we could probably agree on what "factors" will contribute to "greater" intelligence. 3. an observation that AGI will be reached by narrow AI techniques Stan Nilsen

Re: [agi] AGI and Deity

2008-01-16 Thread Stan Nilsen
r it all the time. Any machine we create that has answers without the reasoning is very scary. And maybe more than scary if it is optimized to offer reasoning that people will buy, especially the line "trust me." James Ratcliff Stan Nilsen <[EMAIL PROTECTED]>

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Stan Nilsen
's will outsmart us. Does that mean that "criminals" will ultimately be smarter than non-criminals? Maybe the AI's of the future will want an even playing field and be motivated to enforce laws. I see Richard's design as easily being able to implement risk factors that cou

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Mark Waser <[EMAIL PROTECTED]> wrote: How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good i

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Stan Nilsen <[EMAIL PROTECTED]> wrote: Reprogramming humans doesn't appear to be an option. We do it all the time. It is called "school". I might be tempted to call this "manipulation" rather than programming. The results of s

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Stan Nilsen
Mark Waser wrote: Part 4. ... Eventually, you're going to get down to "Don't mess with anyone's goals", be forced to add the clause "unless absolutely necessary", and then have to fight over what "when absolutely necessary" means. But what we've got here is what I would call the goal of a F

Re: [agi] How general can be and should be AGI?

2008-04-29 Thread Stan Nilsen
Mike, I derived a few things from your response - even enjoyed it. One point passed over too quickly was the question of "How knowable is the world?" I take this to be a rhetorical question meant to suggest that we need all of it to be considered intelligent. This suggestion seems to be ech

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-08 Thread Stan Nilsen
Steve, I suspect I'll regret asking, but... Does this rational belief make a difference to intelligence? (For the moment confining the idea of intelligence to making good choices.) If the AGI rationalized the existence of a higher power, what ultimate bad choice do you see as a result? (I'v

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stan Nilsen
Matt, You asked "What would be a good test for understanding an algorithm?" Thanks for posing this question. It has been a good exercise. Assuming that the key word here is "understanding" rather than algorithm, I submit: a test of understanding is whether one can give a correct *explanation* f

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
ther effects which always follow. If we looked at any one effect, we would easily predict that it "happens" because it always happens when the device is triggered. We wouldn't need to "understand" the entire process to correctly predict. Russell Wallace wrote: On

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
Jim Bromer wrote: --- I think it is important to note that understanding a subject does not mean that you understand everything about the subject. This is not a reasonable proposal. I think Stan is saying that understanding an algorithm is giving an *

Re: [agi] Defining "understanding" (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen
Matt Mahoney wrote: Remember that the goal is to test for "understanding" in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to understand a string of bits? Have you considered testing intelligent agents by simpl

Re: [agi] Defining "understanding" (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen
Matt Mahoney wrote: --- Stan Nilsen <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: Remember that the goal is to test for "understanding" in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to und

Re: [agi] Understanding a sick puppy

2008-05-15 Thread Stan Nilsen
There is something called "Evidence Based Medicine" that is in the works. In the book "Super Crunchers", Ian Ayres devotes a chapter (4) to such systems and the reaction of doctors. Diagnostics by examination of huge databases is evidently pretty far along. The book points out that it is the e

Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Stan Nilsen
Mike Tintner wrote: Matthias: I think it is extremely important, that we give an AGI no bias about space and time as we seem to have. Well, I (& possibly Ben) have been talking about an entity that is in many places at once - not in NO place. I have no idea how you would swing that - other th

[agi] AGI Light Humor - first words

2008-11-18 Thread Stan Nilsen
First words to come from the brand new AGI? Hello World or Gotta paper clip? What's the meaning of life? Am I really conscious? Where am I? I come from a dysfunctional family.