Matt,

Thanks for the links you sent earlier. I especially like the paper by Legg and Hutter on measuring machine intelligence. The other paper I find difficult; it's probably deeper than I am.
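
For reference, the core definition from that paper, as I read it (my transcription from memory, so check me on the notation):

  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

In words: the universal intelligence of an agent \pi is its expected total reward V_\mu^\pi summed over every computable environment \mu in the class E, each weighted by 2^{-K(\mu)}, where K is Kolmogorov complexity - so performance in simple environments counts the most.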

I'll comment on two things:

1) The response "Intelligence has nothing to do with subservience to humans" seems to miss the point of the original comment. The original word was "trust." Why would the higher intelligence interpret trust as subservience? It is also worth noting that we wouldn't really know if there was a lack of trust, as the AI would probably be silent about it. The result could be a needless discounting of anything we attempt to offer.

2) In the earlier note, the comment was made that the higher intelligence would control our thoughts. I suspect this was in jest, but if not, what would be the "reward" or benefit of doing so? I can see a benefit in allowing us our own thoughts: the superintelligence gives us the opportunity to produce "reward" where there was none. The net effect is to extract more benefit from the universe.

Stan



Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

> Ed,
>
> I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain.
>
> It isn't the modeling device that limits the "level" of intelligence, but rather what can be effectively modeled. By "effectively" I mean what can be used in a real-time "judgment" system.
>
> Probability is the best we can do for many parts of the model. This may give us decent models but leave us short of "super" intelligence.
>
> Deeper thinking - that means considering more options, doesn't it? If so, does extra thinking provide benefit if the evaluation system is only at level X?
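>
> To make "level X" concrete, here is a toy sketch (my own illustration, not from any of the papers) in Python, where the evaluator is simply modeled as a noisy one. Considering more options while judging them with the same noisy evaluator doesn't improve the judgment; it mostly inflates the score of whichever option happens to look best:
>
>     import random
>
>     def noisy_eval(true_value, noise):
>         # A "level X" evaluator: right on average, off by up to +/- noise.
>         return true_value + random.uniform(-noise, noise)
>
>     def best_estimate(n_options, noise, trials=10000):
>         # Average estimated value of the best-looking of n options that
>         # are all, in truth, equally good (true value 0).
>         total = 0.0
>         for _ in range(trials):
>             total += max(noisy_eval(0.0, noise) for _ in range(n_options))
>         return total / trials
>
>     for n in (1, 2, 8, 32):
>         print(n, round(best_estimate(n, noise=1.0), 3))
>
> The printed estimate climbs toward +1.0 as n grows, even though no option is truly better than another. More options considered with the same evaluator just relocates the error.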

Yes, "faster" is better than slower, unless you don't have all the information yet. A premature answer could be a jump to conclusion that we regret in the near future. Again, knowing when to act is part of being intelligent. Future intelligences may value high speed response because it is measurable - it's harder to measure the quality of the performance. This could be problematic for AI's.

Humans are not capable of devising an IQ test with a scale that goes much
above 200.  That doesn't mean that higher intelligence is not possible, just
that we would not recognize it.

Consider a problem that neither humans nor machines can solve now, such as
writing complex software systems that work correctly.  Yet in an environment
where self improving agents compete for computing resources, that is exactly
the problem they need to solve to reproduce more successfully than their
competition.  A more intelligent agent will be more successful at earning
money to buy computing power, at designing faster computers, at using existing
resources more efficiently, at exploiting software bugs in competitors to
steal resources, at defending against attackers, at convincing humans to give
them computing power by providing useful services, charisma, deceit, or
extortion, and at other methods we haven't even thought of yet.

> Beliefs also operate in the models. I can imagine an intelligent machine choosing not to trust humans. Is this intelligent?

Yes.  Intelligence has nothing to do with subservience to humans.


-- Matt Mahoney, [EMAIL PROTECTED]
