sion, just want to say hi!
Hope to have my website up by end of this week. The thrust of the
website is that STRONG AI might not be that strong. And, BTW I have
notes for a write-up on "Will a Strong AI pray?"
I've enjoyed the education I'm getting here. Only been a few w
Greetings Jirih,
A few minutes ago I uploaded my website - http://www.footnotestrongai.com
The write-up on a praying AI is among the "Articles" and can be found
under "Just for Fun". I'll look at the link you've suggested below.
Stan
Jiri Jelinek wrote:
Stan,
there are believers o
e strong.
Ed Porter
-----Original Message-----
From: Stan Nilsen [mailto:[EMAIL PROTECTED]
Sent: Monday, December 10, 2007 5:49 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity
Lest a future AGI scan these communications in developing its attitude
about God, for the record th
To be fair, I only had time to skim your website. Perhaps I am missing
something, but it seems your case against strong AGI does not address the
obvious argument for the possibility of strong AGI I have made above.
Ed Porter
-----Original Message-----
From: Stan Nilsen [mailto:[EMAIL PROTEC
t; or benefit of this?
I can see benefit from allowing us our own thoughts as follows: The
super intelligent gives us opportunity to produce "reward" where there
was none. The net effect is to produce more benefit from the universe.
Stan
Matt Mahoney wrote:
--- Stan Nilsen <[
d?
Stan
j.k. wrote:
On 12/20/2007 09:18 AM, Stan Nilsen wrote:
I agree that machines will be faster and may have something equivalent
to the trillions of synapses in the human brain.
It isn't the modeling device that limits the "level" of intelligence,
but rather what ca
the result of the intelligence functions. Isn't the essence of the
intelligence functions how one categorizes and simplifies the issues?
Guess we'll wait for the arrival of this new life form before we learn
what we would "commit our lives to" if we were smarter.
Stan
j.k. wrot
Samantha Atkins wrote:
On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote:
Ed,
I agree that machines will be faster and may have something equivalent
to the trillions of synapses in the human brain.
It isn't the modeling device that limits the "level" of intelligence,
but rat
ht offer
"explanations" given the situation. Pretty painless, easy read.
I find the values based nature of our world highly relevant to the
concept of an emerging "super brain" that will make super decisions.
Stan Nilsen
Samantha Atkins wrote:
On Dec 26, 2007, at 7
suggestion that while we can't agree on a single definition of
intelligence, we could probably agree on what "factors" will contribute
to "greater" intelligence.
3. an observation that AGI will be reached by narrow AI techniques
Stan Nilsen
r it all the time.
Any machine we create that has answers without the reasoning is very scary,
and maybe more than scary if it is optimized to offer reasoning that
people will buy, especially the line "trust me."
James Ratcliff
Stan Nilsen <[EMAIL PROTECTED]> wrote:
's
will outsmart us. Does that mean that "criminals" will ultimately be
smarter than non-criminals? Maybe the AI's of the future will want an
even playing field and be motivated to enforce laws.
I see Richard's design as easily being able to implement risk factors
that cou
Matt Mahoney wrote:
--- Mark Waser <[EMAIL PROTECTED]> wrote:
How do you propose to make humans Friendly? I assume this would also have
the
effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is
defining/describing Friendliness and why it might be a good i
Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:
Reprogramming humans doesn't appear to be an option.
We do it all the time. It is called "school".
I might be tempted to call this "manipulation" rather than programming.
The results of s
Mark Waser wrote:
Part 4.
... Eventually, you're going to get down to "Don't mess with
anyone's goals", be forced to add the clause "unless absolutely
necessary", and then have to fight over what "when absolutely necessary"
means. But what we've got here is what I would call the goal of a
F
Mike,
I derived a few things from your response - even enjoyed it. One point
passed over too quickly was the question of "How knowable is the world?"
I take this to be a rhetorical question meant to suggest that we need
all of it to be considered intelligent. This suggestion seems to be
ech
Steve,
I suspect I'll regret asking, but...
Does this rational belief make a difference to intelligence? (For the
moment confining the idea of intelligence to making good choices.)
If the AGI rationalized the existence of a higher power, what ultimate
bad choice do you see as a result? (I'v
Matt,
You asked "What would be a good test for understanding an algorithm?"
Thanks for posing this question. It has been a good exercise. Assuming
that the key word here is "understanding" rather than "algorithm", I submit -
A test of understanding is if one can give a correct *explanation* f
ther
effects which always follow. If we looked at any one effect, we would
easily predict that it "happens" because it always happens when the
device is triggered. We wouldn't need to "understand" the entire
process to correctly predict.
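The prediction-based test described above can be sketched in a few lines of Python. This is only an illustration of the idea, not anyone's actual proposal from the thread; all names here (memorizing_agent, explaining_agent, understands) are hypothetical, and sorting stands in for an arbitrary algorithm.

```python
# Sketch: "understanding" an algorithm as the ability to predict its
# effects on cases never seen before, rather than recalling outcomes.

def memorizing_agent(seen):
    """Agent that only recalls outcomes it has already observed."""
    return lambda xs: seen.get(tuple(xs))  # returns None on novel input

def explaining_agent():
    """Agent that models the process itself (here: sorting)."""
    return lambda xs: sorted(xs)

def understands(agent, novel_cases):
    """Pass the test only if every prediction on unseen input is correct."""
    return all(agent(xs) == sorted(xs) for xs in novel_cases)

seen = {(3, 1, 2): [1, 2, 3]}          # the one case the memorizer has seen
novel = [[5, 4], [9, 7, 8]]            # cases neither agent has seen

memorizer_passes = understands(memorizing_agent(seen), novel)
explainer_passes = understands(explaining_agent(), novel)
print(memorizer_passes, explainer_passes)  # False True
```

The memorizer fails because recalling past effects says nothing about novel triggers, which matches the point above: correct prediction across cases one has never observed is the mark of understanding the process, not just its history.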
Russell Wallace wrote:
On
Jim Bromer wrote:
---
I think it is important to note that understanding a subject does not
mean that you understand everything about the subject. This is not a
reasonable proposal. I think Stan is saying that understanding an
algorithm is giving an *
Matt Mahoney wrote:
Remember that the goal is to test for "understanding" in intelligent
agents that are not necessarily human. What does it mean for a machine to
understand something? What does it mean to understand a string of bits?
Have you considered testing intelligent agents by simpl
Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
Remember that the goal is to test for "understanding" in intelligent
agents that are not necessarily human. What does it mean for a
machine to
understand something? What does it mean to und
There is something called "Evidence Based Medicine" that is in the
works. In the book "Super Crunchers" Ian Ayres devotes a chapter (4) to
such systems and the reaction of doctors.
Diagnostics by examination of huge databases is evidently pretty far
along. The book points out that it is the e
Mike Tintner wrote:
Matthias: I think it is extremely important that we give an AGI no bias
about space and time, as we seem to have.
Well, I (& possibly Ben) have been talking about an entity that is in
many places at once - not in NO place. I have no idea how you would
swing that - other th
First words to come from the brand new AGI?
Hello World
or
Gotta paper clip?
What's the meaning of life?
Am I really conscious?
Where am I?
I come from a dysfunctional family.