I once thought I had a sure-fire way to make games between humans and computers 
fairer. Start with a large set of chess-like games that use different boards, 
different pieces, different rules. Enumerate the games so that each one 
corresponds to an n-digit binary numeral (for large n). Then make a "super game" 
in which the players start by constructing an n-digit binary numeral, taking turns 
specifying its n binary digits one at a time. The super game would 
continue by playing the chess-like game that corresponds to the constructed numeral.
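
A rough Python sketch of the digit-selection phase might look like this (the 
GAMES table and the players' choose_digit method are hypothetical names, just 
to make the idea concrete):

# Sketch only: assumes a hypothetical GAMES table mapping each n-bit
# index to one chess-like variant, and player objects that expose a
# hypothetical choose_digit method.

def build_game_index(n, players):
    """Players alternate turns; on each turn the mover fixes one of the
    n binary digits that has not been set yet."""
    digits = [None] * n
    turn = 0
    while None in digits:
        mover = players[turn % 2]
        # The mover picks an unset position and a bit value (0 or 1).
        position, bit = mover.choose_digit(list(digits))
        if digits[position] is not None or bit not in (0, 1):
            raise ValueError("illegal digit choice")
        digits[position] = bit
        turn += 1
    # Read the completed digit string as an integer index into GAMES.
    return int("".join(str(d) for d in digits), 2)

# The super game would then continue by playing GAMES[build_game_index(n, players)],
# the chess-like variant selected by the numeral the two players built together.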


In a super game between a human and a computer, the computer would not have 
access to all the insights into the nature of chess that humans have 
established over hundreds of years of playing chess and which chess-playing 
computers use to defeat humans.  Of course, the human player would also be
deprived of all the years of research into chess, but humans can use their 
marvelous intuition to figure out a reasonable set of strategies even for a 
game they haven't studied before. The computer, without a reasonable set of 
strategies, would (I assumed) find little benefit from its massive computing 
power.


The new AlphaZero game-playing computer refutes my idea.

________________________________
From: Friam <friam-boun...@redfish.com> on behalf of Rich Murray 
<rmfor...@gmail.com>
Sent: Monday, December 11, 2017 12:16:26 AM
To: Rich Murray
Subject: [FRIAM] Google self-evolving AlphaZero artificial intelligence program 
mastered chess from scratch in 4 hours: Rich Murray 2017.12.10



https://futurism.com/4-hours-googles-ai-mastered-chess-knowledge-history/

Chess isn’t an easy game, by human standards. But for an artificial 
intelligence powered by a formidable, almost alien mindset, the trivial 
diversion can be mastered in a few spare hours.

In a new paper, Google researchers detail how their latest AI evolution, 
AlphaZero, developed “superhuman performance” in chess, taking just four hours 
to learn the rules before obliterating the world champion chess program, 
Stockfish.

In other words, all of humanity’s chess knowledge – and beyond – was absorbed 
and surpassed by an AI in about as long as it takes to drive from New York City 
to Washington, DC.

After being programmed with only the rules of chess (no strategies), in just 
four hours AlphaZero had mastered the game to the extent it was able to best 
the highest-rated chess-playing program, Stockfish.

In a series of 100 games against Stockfish, AlphaZero won 25 games while 
playing as white (with first mover advantage), and picked up three games 
playing as black.
The rest of the contests were draws, with Stockfish recording no wins and 
AlphaZero no losses.

