On Thu, Dec 26, 2024 at 11:02 PM PGC <[email protected]> wrote:

> *> I’d note first that your analogy with chess and engine moves ignores an
> asymmetry: a grandmaster can often sense the “non-human” quality of an
> engine’s play,*
>

*Superhuman would be a better word for that than nonhuman.*


> *> whereas engines are not equipped to detect distinctly human patterns
> unless explicitly trained for that. That’s why a GM plus an engine can
> typically spot purely artificial play in a way the engine itself cannot
> reciprocate.*
>

*Nope. When AlphaGo beat Lee Sedol, the best human Go player in the world,
its winning move was move 37. It was such an unusual move that many Go
experts at first thought it was a huge blunder; even the people who wrote
AlphaGo were worried, so they checked their readouts. Whenever AlphaGo
makes a move it automatically estimates the likelihood that a human would
have played it, and they found that AlphaGo thought there was only one
chance in 10,000 of a human making such a move. Today it is generally
agreed among Go experts that move 37 was one of, if not the, most brilliant
and creative moves in the entire history of the game. AlphaGo knew it was
making what you would call a nonhuman move and what everybody else would
call a superhuman move.*
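(The mechanism behind that 1-in-10,000 figure is simple to sketch: AlphaGo's policy network, trained on human games, assigns a probability to every legal move, so checking the probability of the move the search actually chose tells you how "human" it looks. Below is a minimal illustrative sketch in Python; the move names and scores are hypothetical, not AlphaGo's actual internals.)

```python
import math

def policy_probs(logits):
    """Convert raw policy-network scores into a probability
    distribution over legal moves (a standard softmax)."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {mv: math.exp(s - m) for mv, s in logits.items()}
    z = sum(exps.values())
    return {mv: e / z for mv, e in exps.items()}

def human_likelihood(logits, chosen_move):
    """Probability the human-trained policy assigns to the move the
    search actually chose; a tiny value flags a 'non-human' move."""
    return policy_probs(logits)[chosen_move]

# Hypothetical scores: the search prefers 'shoulder_hit' even though
# the human-imitating policy considers it extremely unlikely.
scores = {"standard_reply": 9.0, "solid_extension": 8.5, "shoulder_hit": -0.5}
print(human_likelihood(scores, "shoulder_hit"))  # a very small probability
```

A grandmaster plus an engine does the same comparison by hand; the point is that a system with a human-trained policy head can make it automatically.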

> Even then, the very subtleties grandmasters notice—i*ntuitive
> plausibility, certain psychological hallmarks—are not easily reduced to a
> static dataset of “human moves.”*
>

*AlphaZero is better at chess and Go than AlphaGo (and better at any
two-player zero-sum game of perfect information), and it contains NO
dataset of human moves, static or otherwise. NOR DOES IT NEED ONE.*

*> **Chess self-play is btw trivial to scale because it operates in a
> closed domain with clear rules.*
>
*There are clear rules about what moves in chess and Go are legal,
but there are no clear rules about what moves are good.*


> *> You can’t replicate that level of synthetic data generation in, say,
> urban traffic,*
>

*The question is moot. Tesla has about 4 million cars on the road and has
been collecting data from them since 2015, so by now it has billions of
hours of real traffic data.*

*> nuclear power plants, surgery, or modern warfare.*
>
*Both humans and AIs find nuclear reactor simulators and war games to be
very useful, but I grant you that, at least right now, humans have more
experience with surgery than AIs. Today surgery and nursing care are the
only areas of medicine in which humans still have an edge over machines. We
already know that o1-preview is far better than human doctors at
diagnosis; I can only imagine how good o3 will be.*

*> The bait was luring you to clarify your stance that “a problem is solved
> or it isn’t” while simultaneously implying that AI failure is more
> tolerable than human failure. That is self-contradictory.*
>
*Yes, that certainly would be self-contradictory IF I had said that an AI
error is less serious than a human error, BUT I did not. I did not imply it
either, although you may have inferred it when I said that AIs are
constantly getting smarter and thus are constantly producing fewer errors,
while human beings are not getting smarter.*

   *John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>*

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0VZR90jMb8Xf0PGRMdo-atGqeQaubbJi5nV3b21KikRw%40mail.gmail.com.
