On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote:
> Ed,
> I agree that machines will be faster and may have something
> equivalent to the trillions of synapses in the human brain.
> It isn't the modeling device that limits the "level" of
> intelligence, but rather what can be effectively modeled.
> "Effectively" meaning what can be used in a real-time "judgment"
> system.
> Probability is the best we can do for many parts of the model. This
> may give us decent models but leave us short of "super" intelligence.
In what way? The limits of human probability computation to form
accurate opinions are rather well documented. Why wouldn't a mind
that could compute millions of times more quickly and with far greater
accuracy be able to form much more complex models that were far better
at predicting future events and explaining those aspects of reality
which are its inputs? Again we need to get beyond the [likely
religion-instilled] notion that only "absolute knowledge" is real (or
"super") knowledge.
> Deeper thinking - that means considering more options, doesn't it?
> If so, does extra thinking provide benefit if the evaluation system
> is only at level X?
What does this mean? How would you separate "thinking" from the
"evaluation system"? What sort of "evaluation system" do you believe
can actually exist in reality that has characteristics different from
those you appear to consider woefully limited?
Yes, "faster" is better than slower, unless you don't have all the
information yet. A premature answer could be a jump to conclusion
that we regret in the near future. Again, knowing when to act is
part of being intelligent. Future intelligences may value high
speed response because it is measurable - it's harder to measure the
quality of the performance. This could be problematic for AI's.
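The "when to act" trade-off does have a standard formalization: the
expected value of perfect information (EVPI) — wait for more data only
when the information is worth more than the delay costs. A minimal
sketch, with payoffs and probabilities invented for illustration:

```python
def evpi(p_good, payoff_good, payoff_bad, payoff_safe=0.0):
    """Expected value of perfect information for a two-state, two-action choice."""
    # Act now: take the action with the best expected payoff under uncertainty.
    act_now = max(p_good * payoff_good + (1 - p_good) * payoff_bad, payoff_safe)
    # Act after perfect information: choose optimally in each state, then average.
    informed = (p_good * max(payoff_good, payoff_safe)
                + (1 - p_good) * max(payoff_bad, payoff_safe))
    return informed - act_now

# Risky option pays 10 in the good state, -10 in the bad; a safe option pays 0.
print(evpi(p_good=0.3, payoff_good=10, payoff_bad=-10))  # 3.0
```

Here waiting is rational only if gathering the information costs less
than 3.0 in payoff terms; otherwise acting immediately is the
intelligent move.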
> Beliefs also operate in the models. I can imagine an intelligent
> machine choosing not to trust humans. Is this intelligent?
If they have no more clarity than is exhibited here then yes, that is
probably an intelligent decision.
- samantha
-----
This list is sponsored by AGIRI: http://www.agiri.org/email