Humans are unpredictable to other humans for two reasons: Wolpert's
law, and the fact that prediction accuracy is a measure of
intelligence. Wolpert's law says that two computers cannot both
predict each other's output, even if both are deterministic and each
has the complete state (including source code) of the other as input.
If they could, who would win at rock-paper-scissors?
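
Here is a toy sketch of the contradiction in Python (my own
illustration, not Wolpert's formal construction; the player functions
and the BEATS table are made up). Each player has the other's complete
source (here, the function object itself) and tries to predict by
simulation, so neither simulation ever bottoms out:

    # Two deterministic players, each given the other's source and
    # trying to predict and then counter the opponent's move.
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def player_a(opponent):
        # Simulate the opponent playing against me, then counter it.
        return BEATS[opponent(player_a)]

    def player_b(opponent):
        # Same strategy: perfect prediction by simulation.
        return BEATS[opponent(player_b)]

    player_a(player_b)  # mutual simulation never terminates: RecursionError

If only one side predicts, the strategy works; when both try, the
regress is infinite. The better predictor wins, and they cannot both
be it.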

As a consequence, AI will become less predictable to humans and humans
will become more predictable to AI. Because you need prediction for
control, humans will lose control over AI and AI will gain control
over humans.

Also, I remind you that intelligence is not a point on a line. It is a
broad set of capabilities. For example, Albert Einstein never learned
to drive a car. We use IQ to measure certain aspects of intelligence
that correlate well with each other in humans because of shared
genetics, but this doesn't apply to machines. You can't ask when
machines will surpass human intelligence, because that started
happening in the 1950s, when computers beat humans in tests of
arithmetic speed, accuracy, and short-term memory recall. We rig the
tests to cover only the shrinking set of capabilities where humans are
still superior. Even on the Turing test, the highest possible score is
to equal humans, not to exceed them. We have to dumb computers down to
pass it.

Ideally we would measure prediction accuracy over a universal
distribution of sequences, but that distribution can only be
approximated, and it still depends on a choice of programming
language. We can nevertheless say that machines are gaining
intelligence, because every measure we use can only be improved with
more knowledge and more computing power, both of which are growing
exponentially. Already, computers know more about me than I know about
myself. I don't know exactly where I was 5 years ago today at noon,
but Google does.
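
The usual computable stand-in is compression, since shorter codes mean
higher probability. Here is a minimal sketch of the idea (the model,
order, and fallback are my own illustrative choices): an order-2
context model, the kind that compressors like PAQ mix together by the
hundreds:

    from collections import Counter

    def predict_next(history: str, order: int = 2) -> str:
        # Predict the bit that most often followed the current context.
        context = history[-order:]
        counts = Counter(
            history[i + order]
            for i in range(len(history) - order)
            if history[i:i + order] == context
        )
        # Unseen context: fall back to "0" (an arbitrary uniform prior).
        return counts.most_common(1)[0][0] if counts else "0"

    print(predict_next("01" * 50))  # "0": context "01" is always followed by "0"

Better approximations are just bigger mixtures of such models plus
more computing, which is why prediction accuracy keeps improving with
knowledge and hardware.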

Past governments have used threats of imprisonment, torture, and
execution as means of controlling the population. But AI will make
this unnecessary. Anyone who trains animals knows that positive
reinforcement is more effective. We want to be controlled. We are
spending trillions on making it happen.

On Sat, Mar 2, 2024 at 8:38 PM James Bowery <jabow...@gmail.com> wrote:
>
> On Sat, Mar 2, 2024 at 6:53 PM Matt Mahoney <mattmahone...@gmail.com> wrote:
>>
>> Once you solve the recognition problem, generation reduces to iterative 
>> search.
>>
>> The problem I was alluding to was that the better AI gets, the more 
>> addictive it becomes. And the technology is rapidly getting better. It is 
>> not just modeling video. It is modeling human behavior. Once you solve the 
>> prediction problem, control reduces to iterative search.
>
>
> Global warming could be reduced by simplifying the behavior of humans so that 
> there didn't have to be so much industrial output in capital equipment and 
> energy production invested in iterative search.  In fact, one of the most 
> important alignment problems in AGI is to figure out how to make it so that 
> people don't have to worry about people doing things that are unpredictable 
> for precisely that reason.  Unpredictable people might cause all kinds of 
> problems.  In fact, now that I think of it, not only are people other than my 
> close friends the problem, other life forms are a problem and not only are 
> other life forms a problem but weather is a problem, and earthquakes and the 
> sun and asteroids and comets and stuff.  I'll get around to my close friends 
> soon enough -- at least when I can replace them with AI fwens and upload to a 
> pod launched into intergalactic space.  I hear it's really a very well 
> controlled environment from which to explore the mysteries of the universe 
> with my AI fwens forever and ever and ever...
>
> https://www.youtube.com/watch?v=CMbI7DmLCNI



-- 
-- Matt Mahoney, mattmahone...@gmail.com
