It's in EXPTIME, with board size as the parameter.

s.


----- Original Message ----
From: Don Dailey <[EMAIL PROTECTED]>
To: computer-go <computer-go@computer-go.org>
Sent: Thursday, November 22, 2007 12:35:37 PM
Subject: [computer-go] The global search myth


Instead of responding individually, I would like to address a myth in one
message. The myth is that any kind of global search, whether it be
alpha/beta or the currently explored best-first search methods, is a
foolish approach, and that there is some kind of constant-time, elegant
algorithm for playing really strong Go that doesn't require significant
computer resources.

Time and time again this myth resurfaces in computer games. The idea is
that one extra ply of search should be pretty much worthless (the myth),
and that because search is an exponential-time algorithm it should be
considered useless.

This argument actually appeals to the human brain - it seems like a
"reasonable" conclusion if you look at it superficially. 

The idea that this exponential-time algorithm is "useless" is based on the
notion that 1 ply of search is "useless", which is a myth. If a 1-ply
search were useless, then it certainly would not be productive to search
2 ply. But this is CLEARLY not the case.
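
To see how little mystery there is in "one extra ply," here is a minimal
Python sketch of a depth-limited game-tree search (plain negamax over a
made-up toy game; the 3-move tree and the fake evaluation are purely
illustrative assumptions, not anyone's actual engine). Going from a 1-ply
to a 2-ply search is literally just calling the same function with
depth + 1:

    import random

    # Toy game tree: a node is just an integer seed; each node has 3
    # children and leaves get a fake static evaluation. Illustrative only.

    def children(node):
        rng = random.Random(node)
        return [rng.randrange(10**9) for _ in range(3)]

    def evaluate(node):
        return random.Random(node).uniform(-1.0, 1.0)   # stand-in evaluation

    def negamax(node, depth):
        # Value of `node` for the side to move, searching `depth` plies.
        if depth == 0:
            return evaluate(node)
        return max(-negamax(child, depth - 1) for child in children(node))

    # "One extra ply" is nothing more than depth + 1.
    for depth in range(1, 6):
        print(depth, round(negamax(42, depth), 3))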

This is one of many things in life that people refuse to believe -
regardless of the evidence. It's almost like politics - no matter what
world leaders do and how corrupt the system is, most people ignore the
facts and believe that whoever their favorite politician is must be a
good man. I think you could commit murder in broad daylight with 1000
people watching you, and if people believe you are not the type to do
this, or if you spin it right, they will believe you didn't do it - even
if they saw it with their own eyes.

I talked to an old friend of mine several weeks ago about computer chess
- he is an international chess master and my former partner in computer
chess. He has continued to keep up with computer chess; I have not. He
told me that people are "surprised" that the ELO strength curve has
remained more or less linear, despite the fact that chess programs are
now better than people. This myth never quite died EVEN in computer
chess. People always "expected" that it was a local phenomenon, and at
any particular stage in history there were many who sincerely believed
that we were at the point in the curve where further improvement would
stop.

I am probably older than most here on this group - I'm an old-timer in a
way. When I was in high school, nobody even had a calculator, and nobody
had a PC in their house. I wish there were a way I could give you some
historical perspective on this. It was absolutely clear to everyone back
then that computers would NEVER reach expert strength (2000 ELO), and
master was completely out of the question. The best computers were pretty
bad, perhaps 1400 ELO, and when the first micro-computers came out, you
could own a chess program that played about 1000 ELO. A beginner could
beat it. It really was quite a leap to believe that 2000 ELO would
someday be attainable.

Back then the ELO curve was not understood. The basic formula was
eventually found: 1 extra ply added roughly 250 ELO rating points to a
program. It seems odd now, but this was actually not studied or
understood for a good little while. The reason it was not understood is
that it seemed like a totally stupid idea. Why would anyone go to the
trouble to see what 1 ply was worth when it was clear that it was worth
nothing?
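
As a rough worked example of what 250 ELO per ply means, here is a tiny
sketch using the standard ELO expected-score formula (the 250-points-per-
ply figure is the approximation quoted above, not an exact law, and the
loop is purely illustrative):

    # Expected score of the stronger side under the standard ELO model:
    #   E = 1 / (1 + 10^(-diff/400))
    # Assuming, as quoted above, roughly 250 ELO per additional ply.

    def expected_score(elo_diff):
        return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

    for extra_plies in range(1, 5):
        diff = 250 * extra_plies
        print(f"+{extra_plies} ply ~ +{diff} ELO -> expected score "
              f"{expected_score(diff):.2f}")

So a single additional ply, at roughly +250 ELO, already corresponds to
scoring about 0.81 against the shallower searcher - hardly "worthless".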

Instead, people focused on highly selective searches. In order to play
strong chess it was clear that computers would have to look 20 or 30 ply
ahead (because it was "understood" that this would simulate "long term
planning"). "Long term planning" was a term that was bandied about a LOT
in those days. It was the clearly understood reason why computers would
never play beyond 1600 or 1700 ELO, because that is the level at which
humans start thinking in terms of "long term planning." If you didn't
imagine the end-game during the opening, obviously you couldn't play
beyond the level of a patzer.

Back in those days, we knew very little and we thought in black and
white. You either do long term planning or you don't. In the game of
chess there are "tactics" and there is "strategy," and the two don't mix
at all. These naive ideas were transferred to programs - it was said that
programs understand "tactics" but not "strategy." Tactics were about
search, and strategy was about the "evaluation function," which in
people's minds had nothing to do with the search.

To complicate the naive ideas, people believed that no amount of search
would give a program positional understanding, but they didn't believe
the converse - that no amount of positional understanding could give you
good tactics. They believed a magically wonderful evaluation function was
the course to pursue.

So two camps of believers emerged. They were given dubious titles like
the "brute force" camp. The term "brute force" makes it sound like they
were a bunch of thugs with no imagination - a group of simpletons. The
other camp was more idealistic: they believed the path to nirvana
involved programs that did not search (or did very little), and that the
whole process would be bolstered by the incredibly sage-like wisdom of a
truly powerful, all-knowing AI built into a single module called the
evaluation function. They believed that chess was so simple that all the
relevant knowledge could make search a moot point.

At some point, some incredibly foolish programmers at Northwestern
University built a program and decided that it was too hard to do
selectivity. They kept noticing that when they threw out moves, they
would accidentally throw out good moves, because a lot of terrible-
looking moves are actually good moves. So in their ignorance they built a
program that didn't throw out moves and decided to live with the
consequences - a program that couldn't look very far ahead and thus would
be totally incapable of "long term planning." Of course, the selective
searchers back then did not do anything that could pass for long term
planning either - but that's beside the point - at least they knew what
the one true way was and remained loyal to a just and righteous cause.

The big surprise was that this program dominated computer chess for a
period of time - until everyone got wise to their foolishness and started
copying them. Thus was born the age of "brute force" chess.

It must have been around this time that the ELO curve started being
understood. A lot of people wondered at the magic - how could 1 extra ply
be of any measurable benefit? Surely, one additional ply could not
possibly find a brilliant checkmate? It could not see a clever tactical
line, and it certainly would not suddenly make a program capable of "long
term planning." But the evidence was right in front of their faces.

Since this was obviously not believable, a lot of people chose not to
believe it. The most common way to explain it away was to do what Albert
Einstein did - add a cosmological constant to the formula to bend the
facts to your belief system. Since this obviously flew in the face of
logic, and yet the evidence stared them in the face, the conclusion
voiced by many was that this strength curve is caused by the nature of
chess. The argument goes like this:

    1. Up to 6 ply, it's easy to lose games due to silly blunders and
       tactical shots.

    2. Beyond 6 ply, tactics have little to do with it, and long term
       planning "kicks in".

In other words, you can get to 1800 ELO or so by not making blunders -
if you can see 6 ply you will never blunder. But 7 ply or more won't help
much at all - unless you can think 30 or 40 moves ahead. I would love to
see what kind of curve this would produce.
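
Purely for illustration, here is a sketch of the rating-versus-depth
curve that this belief would produce, next to the roughly linear
~250-ELO-per-ply behaviour actually observed (the 300-ELO baseline and
all the numbers are arbitrary assumptions, not measurements):

    # Hypothetical curves only: the "plateau at 6 ply" belief vs. the
    # roughly linear ~250 ELO/ply behaviour described above.

    BASE = 300        # arbitrary baseline rating for a trivial searcher
    PER_PLY = 250

    def plateau_belief(depth):
        # Rating grows until 6 ply ("no more blunders"), then stops.
        return BASE + PER_PLY * min(depth, 6)

    def observed_linear(depth):
        return BASE + PER_PLY * depth

    print("depth  plateau  linear")
    for d in range(1, 11):
        print(f"{d:5d}  {plateau_belief(d):7d}  {observed_linear(d):6d}")

Under the "plateau" belief the curve would flatten out around 1800; in
practice nothing of the sort was observed.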

This reasoning was completely contrived and total crap. As it turned out,
the gain per ply stayed nearly constant: 7 ply played much better than 6
ply, and 8 ply played much better than 7 ply. There is no sudden
transition from "tactics" to "positional play" - this was just a device
created by humans to try to understand how it works. In fact, the terms
"tactics" and "strategy" (or "positional play") are just words that have
no precise meaning. A computer doesn't care what you call it; its only
goal is to find a forced pathway to the best possible position.

You might say the unimaginative brute forcers won the first battle -
probably just a lucky sucker punch. But the debate never stopped, and so
it raged on. To this day there are arguments and disputes on the computer
chess newsgroups where you can easily identify the "brute forcers" and
the advocates pushing for smarter programs. Both camps have toned down
the rhetoric because extremists on both sides were forced to face certain
facts. The best programs in the world HAD to have good evaluation
functions (which deflates another myth - that chess programs are nothing
more than a simplistic evaluation function thrown together from a few
terms, and that all the effort is in the search). But the best programs
had to have very good searches too. Unfortunately, there is enough truth
in both camps to keep the arguments going forever.

The GO players are now having this same battle. There are some who will
refuse to recognize the value of search, whatever evidence becomes
available, and will believe that there is a better approach.

I don't believe this is really about search, however. I think these
people don't want to believe that ANY computationally expensive approach
is feasible. You will notice that they harp on "elegance" and beauty. The
word "smart" will be used a lot. So what they believe is that this is not
a hard problem. You will also notice that they abhor the idea of a
scalable program, so they must be thinking there is an incredibly elegant
and refined solution that exposes GO as a simple problem, not a difficult
one. If an exponential-time algorithm is out, then they must believe GO
is not an exponential problem.

Maybe they are correct? In some sense, Go is simple - the rules are very
simple, almost mathematically simple. Perhaps there is a way to solve the
equation? The variable in this equation is a board state - you apply a
bunch of mathematical transformations to the board state and it spits out
the best move. Perhaps you could run this formula on a programmable
calculator, or even compute it by hand.

It's interesting to me that the Kolmogorov complexity of Go and chess is
very tiny. A 19x19 Go database that contained all the answers would not
fit in the universe, even if the universe were packed with memory chips.
But that database can be expressed by a short little program. Can it also
be expressed by a program that has a short running time AND a small size?
I think this is the crux of the matter. If the answer is yes, then Dave
Dyer and others are correct - a search-based approach is foolhardy. As a
general statement, I think you can say that search enthusiasts (or "brute
force" advocates, if you want to belittle them) believe GO is an NP-hard
game, and the knowledge-based advocates believe there is a simple
fixed-time "intelligent" approach that will crack this nut.
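
To make the "short program, enormous running time" point concrete, here
is a minimal sketch of what such a program looks like in outline: an
exhaustive game-tree solver that in principle encodes perfect play for an
entire game. It is demonstrated on a toy game (Nim) rather than Go,
purely so it actually runs - for 19x19 Go the same handful of lines would
need a rules implementation and a running time astronomically beyond any
hardware:

    # A complete "database" of perfect play, expressed as a tiny program:
    # exhaustive search to the end of the game. Short to write down;
    # utterly infeasible to run on a game the size of Go.

    from functools import lru_cache

    def moves(heaps):
        # Nim: take one or more stones from a single heap.
        for i, h in enumerate(heaps):
            for take in range(1, h + 1):
                yield heaps[:i] + (h - take,) + heaps[i + 1:]

    @lru_cache(maxsize=None)
    def wins(heaps):
        # True if the player to move can force a win (last stone wins).
        return any(not wins(nxt) for nxt in moves(heaps))

    def best_move(heaps):
        for nxt in moves(heaps):
            if not wins(nxt):
                return nxt                   # a winning reply, if one exists
        return next(moves(heaps), None)      # losing position: any legal move

    print(wins((3, 4, 5)), best_move((3, 4, 5)))

The program that "contains" every answer is a couple of dozen lines; the
cost is entirely in the running time, which is exactly the trade-off in
question.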

- Don

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/