Re: [computer-go] Neural networks

2009-11-03 Thread Richard Brown
On Tue, Nov 3, 2009 at 6:43 AM, Willemien wilem...@googlemail.com wrote:
 I disagree with the point that MCTS is a neural network,

 In my opinion (and I may be completely off target) one of the essences
 of neural networks is that the program changes/learns from the games
 it has played.

I think that you are right; the learning associated with artificial neural nets
is likely to be from games the program has played, or from games it has
observed.  In some literature, this is called "supervised learning", or
"learning with a teacher", wherein the training data is not at all random,
but comprises loads of very specific examples of actual behavior on the
board.  (Of course, deciding what features of those actual behaviors are
to be measured, or encoded into a pattern, is a very tricky problem.)

 MCTS doesn't have that result; the improvement is only in-game.
 The program doesn't learn not to make the same mistake again; with
 MCTS the mistake is hopefully avoided.

MCTS seems to be (I'm sure someone will correct me if I'm wrong)
reinforcement learning, which differs from supervised learning
in that correct input/output pairs are never explicitly shown to the
classifier.

Just tons of nearly-random trials, yes?

According to  http://en.wikipedia.org/wiki/Reinforcement_learning ,  there
is a focus on on-line performance, which involves finding a balance between
exploration (of uncharted territory) and exploitation (of current knowledge).
The exploration vs. exploitation trade-off in reinforcement learning has been
mostly studied through the multi-armed bandit problem.
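
[For the code-minded: a toy sketch of that balance in Python, using the UCB1
rule that UCT borrows from the bandit literature.  The arm payouts and the
number of trials are made up purely for illustration.]

    import math, random

    # Toy multi-armed bandit: each "arm" pays off with some hidden probability.
    # UCB1 picks the arm with the best observed mean plus an exploration bonus,
    # which is the same exploration/exploitation balance discussed above.
    hidden_payoff = [0.40, 0.55, 0.60]        # made-up arm probabilities
    pulls = [0] * len(hidden_payoff)
    wins  = [0.0] * len(hidden_payoff)

    def ucb1(i, total):
        if pulls[i] == 0:
            return float('inf')               # try every arm at least once
        return wins[i] / pulls[i] + math.sqrt(2.0 * math.log(total) / pulls[i])

    for t in range(1, 10001):
        arm = max(range(len(hidden_payoff)), key=lambda i: ucb1(i, t))
        pulls[arm] += 1
        wins[arm]  += 1.0 if random.random() < hidden_payoff[arm] else 0.0

    print(pulls)    # the best arm (index 2) should dominate the pull counts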

So, the two methods are both classifiers of data points, in some sense.

The fundamental difference between them, IMHO, for the purposes of
go-programming, is that artificial neural nets may learn from data that is
harvested (and preserved, at least for the duration of the training), while
Monte-Carlo methods learn from (mostly random) unsupervised trial-and-error.

Further, the tuning of the weights in an artificial neural net may be performed
off-line, with data from thousands of games, while MCTS performs its work
on-line, with a branch of the game-tree that begins only from the
current position.

Are both methods classifiers?  Sure.  But neural net training is done through
observation/measurement/feature-extraction (offline, from thousands of games).
That's a much different critter, at the core, from the environment/action/reward
scenario of MCTS (online, from the current position).

-- 
All computer programs are identical.  After all, it's just ones and zeroes,
and a paper tape of arbitrary length.


Re: [computer-go] COGS bug in Ko detection?

2009-04-14 Thread Richard Brown
2009/4/14 Brian Sheppard sheppar...@aol.com:
 But in game 739216 the stones are the same, but the other color is moving.
 That can't be a repetition...

Well, that's what distinguishes _positional_ superko from _situational_
superko.  See  http://senseis.xmp.net/?Superko .

As Jason House wrote,
 That sounds like a classic _positional_ super ko violation. Any board
 repetition is a ko violation, regardless of the player to play.

_Regardless_ of the player to play.  [Emphasis mine.]

Now, one might have _philosophical_ disagreement about whether
that's the way a server should implement a prohibition on cycles,
or about whether that's what the framers intended.  And I might
even agree with you that _situational_ superko is superior in that
regard.  Under situational superko, your example, as you say,
_can't_ be a repetition.  [So, one might well ask, what is the reason
for prohibiting it?]

But whether we like it or not, that is how the server authors have
chosen to implement the prohibition (even though allowing the
_other_ player to play in the position would not really create a
_cycle_, in the sense of a directed acyclic graph).

[See  http://en.wikipedia.org/wiki/Directed_acyclic_graph.]

Situational superko can be defined in terms of not permitting a
cycle in the game-tree, thus always preserving its acyclic nature.
[Positional superko, IMHO, has no such elegant rationale.]

But positional superko is what both KGS and CGOS implement,
and we have to live with that, at least until they see the light.
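
[A toy sketch of the distinction, in Python -- the class and helper names are
mine, purely for illustration, not how KGS or CGOS actually implement it:]

    # Positional superko: a move is illegal if it recreates any earlier
    # whole-board position, regardless of whose turn it is.
    # Situational superko: the repeated position must also have the same
    # player to move before it counts as a violation.
    class SuperkoChecker:
        def __init__(self, situational=False):
            self.situational = situational
            self.seen = set()

        def _key(self, board, to_move):
            # board is assumed to be a tuple of intersection colours,
            # e.g. ('B', 'W', '.', ...), which hashes directly.
            if self.situational:
                return (board, to_move)   # include the player to move
            return board                  # the position alone

        def record(self, board, to_move):
            self.seen.add(self._key(board, to_move))

        def violates(self, board, to_move):
            return self._key(board, to_move) in self.seen

In Brian's game, the same stones with the _other_ colour to move would trip
the positional checker but not the situational one.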

-- 
Rich


Re: [computer-go] COGS bug in Ko detection?

2009-04-14 Thread Richard Brown
On Tue, Apr 14, 2009 at 10:51 AM, Robert Jasiek jas...@snafu.de wrote:
 Richard Brown wrote:
 Positional superko, IMHO, has no such elegant rationale.

 It is a ko rule that depends on only what one can see on the board.
 Elegant.

And what is the _reason_ to leave out the information of whose turn it is?
Elegant, but not a _rationale_.

 It is a ko rule that depends on one type of information only: The colour
 of each intersection.
 Elegant.

And what is the _reason_ to leave out the very pertinent information
of whose turn it is?
No _rationale_ for that.

 It is the superko rule that has the minimal number of conditions.
 Elegant.

And... Oh, never mind, of course you must be right.  You're the expert.

 --
 robert jasiek



[computer-go] The Enemy's Key Point Is My Own

2008-10-28 Thread Richard Brown
"The enemy's key point is my own" is often invoked, for example, as a
reason to occupy the central point of a _nakade_ shape, or to play a
double sente point, or to make an extension that would also be an
extension for the opponent.

I would like now to talk about it in the context of the potential
modification of program behavior, or style, via opponent modeling.

There's another oft-repeated maxim which states that there are as many
styles of play as there are go-players.  If true, this adage implies a
continuum along which each player may find himself or herself.

Or itself.

One measure of playing style is the degree to which the bulk of a given
player's selected behaviors display a concern with one's own position,
or a concern with the position of the enemy.

That is to say, or rather to ask:  Of the moves in which a player most
often chooses to indulge, do they exhibit what might be called
a defensive character, seeking to shore up or expand one's own
position, or do they instead more often exhibit a combative
character, seeking to destroy or diminish that of the other?

Of course all players indulge in both types of behavior, but we all
know that there are some players who prefer to build, and some who
prefer to destroy, when they are given a choice.

Now, for each potential behavior on the board -- even those acts in
which we would never choose to indulge -- suppose that we already have
a pair of numbers, in this case percentages, where one represents the
likelihood (or our belief about the probability) that such a point is
advantageous for us to play, and where another such number represents,
conversely, the likelihood (or our belief) that the same point (were we
to pass and it be selected by the foe) would be advantageous for the foe.

How we have arrived at these probabilities may be important,
certainly, but it's beyond the scope of this message.  Let's
just say that we already have them.

Remembering that frequently-quoted proverb, namely:  "The enemy's key
point is my own," we may wish to consider giving some weight to the
notion of playing on those points where we believe it would be best for
our opponent to place a stone.

For example, suppose that it is our belief that a specific move has a
15% chance of being our best move, and say, a 35% chance of being best
for the foe.

If we give equal weight to these percentages, we may simply average
them, or split the difference, arriving at a value of 25%, if we wish
to take advantage of the notion embodied in the proverb, as well as the
notion of merely looking for plays that will be advantageous to our own
position.  Note that this average is given by:

( 0.5 * 0.15 ) + ( 0.5 * 0.35 ) = 0.25

where the weights are equal, that is, fifty-fifty.

However, IRL (in real life) each go player in the world is not only
likely to -- but in practice _does_ -- give a different weight, either
more or less, to each notion, as his character, or even his whim, dictates.

At the one extreme we may have a player who is defensive, pacifistic,
self-obsessed, overly-concerned about his own stones, to the point of
ignoring the other entirely.  Such a player is oblivious to the foe's
plans and position, trying always to take one's own best point.  Such
a player completely disregards the proverb:  "The enemy's key point is
my own."

At the other extreme we have the player who is aggressive, combative,
entirely obsessed with the other, overly-concerned about the opponent's
stones, to the point of ignoring his own best interests entirely.  Such
a player is oblivious to his own plans and position, trying always to
take the foe's best point.  This player has _pathologically_ taken to
heart the proverb:  "_Only_ the enemy's key point is my own."

Both styles are pathological, and somewhere between these two extremes
lies _every_ go-player, it seems.

Thus, a moderately-defensive player might be likely, instead of as in
the above example, to give more weight to the 15% chance (his own) and
less to the 35% chance (the foe's).

I use the term chance loosely here, as we all know that go, being a
game of perfect information, does not depend on chance.  [Whatever that is!]

Yet if a certain player were known to be, in general, say, about 63%
likely to be aggressive (and 37% defensive), we might then calculate
that his chance of selecting the above specific point is given by:

( 0.63 * 0.15 ) + ( 0.37 * 0.35 ) = 0.224

because such a player gives more weight (63%) to playing _our_ moves,
and less (37%) to playing his own.
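
[In code, for concreteness -- a toy Python sketch using the made-up figures
from this example; the function name is mine:]

    # A move's appeal as a weighted blend of "good for one's own position"
    # and "good for the enemy", per the proverb.
    def blended_appeal(weight_enemy, p_mine, p_enemy):
        # weight_enemy is the share given to the enemy's key point;
        # the remainder goes to one's own.
        return weight_enemy * p_enemy + (1.0 - weight_enemy) * p_mine

    # Equal weight to both notions, from our own point of view:
    print(blended_appeal(0.5, 0.15, 0.35))     # 0.25

    # Modelling a foe who is 63% aggressive: from his point of view, *our*
    # 15% point is the enemy's key point, and the 35% point is his own:
    print(blended_appeal(0.63, 0.35, 0.15))    # 0.224

    # (Both results up to the usual floating-point rounding.)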

In this particular example, there is not so much difference between our
original 50/50 estimate of the chance that this is a good move, and the
new, 63/37 estimate.

However, as the degree of aggressiveness of the player approaches one
or the other of the extremes, and also as our _a_priori_ beliefs vary
about the chances of a particular point being a good one for one or the
other of us, there are cases where the deviation from the 

Re: [computer-go] Programmers representative (ICGA)

2008-05-19 Thread Richard Brown
On Mon, May 19, 2008 at 9:30 AM, Jason House
[EMAIL PROTECTED] wrote:

 On May 19, 2008, at 10:09 AM, Rémi Coulom [EMAIL PROTECTED] wrote:

  [ICGA...]

 So I am your representative, and any question or suggestion is welcome.

 I don't know what that means...

Considering the source, my guess is that it's none of:
 International Chewing Gum Association
 Islamic Center of Greater Austin
 Informed Comment:  Global Affairs
 Idaho Career Guidance Association
 Iowa Corn Growers Association
 Independent Craft Galleries Association

I've narrowed it down to two likely candidates:
 International Conference on Genetic Algorithms
or:
 International Computer Games Association
the latter being the very first hit, if one Googles (and I know I do!)
for ICGA (doing which took a lot less time than composing a reply).


Re: [computer-go] Micro-Matrix GO Machine

2007-11-30 Thread Richard Brown
On Nov 30, 2007 9:00 AM, Ben Lambrechts [EMAIL PROTECTED] wrote:
 You find it in http://daogo.org/download/computer_go_02.pdf page 27

I was a subscriber to this journal.  When I read this piece back in
1987, I had assumed that it was humor; a joke.

The article provides a number of subtle clues to that effect.
(Humidity, orange-juice cans plus miles of wire, the price, 243 lines
of C code, and especially "No-Yoke Importers.")

I believe that if you re-read the article while entertaining the
assumption that it is a farcical satire, you may similarly become
convinced that it was an attempt at levity.

The picture on the cover, however, is not a joke; Leibniz did write an
article on go that appeared in a scholarly journal in 1710.  The
original Latin (in which it was fashionable for scholars to write,
back then) as well as a few translations may be found at
http://www.gozillago.net/Leibnitz/Leibnitz.html  by those who may
have an interest in such things.
-- 
Rich


Re: [computer-go] U. of Alberta bots vs. the Poker pros

2007-07-26 Thread Richard Brown

On 7/26/07, chrilly [EMAIL PROTECTED] wrote:

This is a remarkable result. I think poker is more difficult than Go and of
course chess. My hypothesis (it's just a hypothesis) for the success is:
There is someone - Dave Billings - who worked for many years very
persistently on the topic. And he is able to motivate a lot of other good
people to go along with him. And he probably also gets a lot of support from
his boss, J. Schaeffer. And of course, there is some prospect to win fame and
money.


I think you mean Darse Billings.  Playing internet poker used to be
accomplished only by using <shudder> IRC </shudder>, and Darse
was one of the first to automate IRC poker, at first with aliases that
folks just shared with each other, informally, then later with GUI
front-ends, but the back-end was still IRC.

I had the privilege of being on the IRC poker server when Poki made
its first appearance.  Poki was the first incarnation of Darse's poker
bot, and it not only played a respectable game of Texas Hold 'Em,
it would also respond to chat requests such as "Poki, quote Steve",
whereupon it would reproduce a joke by comedian Steven Wright.

Darse was not only smart, he was very friendly and helpful to folks,
even those who, although they lacked computer knowledge, wanted
to play online poker.

Such was the infancy of internet poker.

Then along came the world-wide web, and now there are dozens,
if not hundreds, of poker servers and  their associated clients; it
has become a multimillion-dollar industry.


The conditions for solving a problem are always at least as important as
the problem itself. Maybe the conditions in Poker are better than in Go.


There is certainly more money to be made in poker than in go.

--
Rich


Re: [computer-go] Neural Networks

2007-07-20 Thread Richard Brown

On 7/20/07, Joshua Shriver [EMAIL PROTECTED] wrote:

Anyone recommend a good book on programming Neural Networks in C or C++?

Been digging around the net for a while and haven't come up with
anything other than an encyclopedia-like definition/writeup. No
examples or tutorials.


There are some C programs at
http://www.neural-networks-at-your-fingertips.com/
that you may find useful.  [Reading this code to see what it does
helped me to learn!]

Software that can be downloaded from
http://www.dontveter.com/nnsoft/nnsoft.html
is similarly useful, as is Don Tveter's book "The Pattern Recognition Basis of
Artificial Intelligence", to which that software is a companion.
[What I like about Tveter's book, and software, is that it seems to be
written more from an engineering perspective than a theoretical one.  As
you have found, there is no shortage of theory on the web, but somewhat of
a dearth of practical information.]

There is also a repository of free AI software (including some for
neural networks) at
http://www.cs.cmu.edu/Groups/AI/html/rep_info/intro.html .

Also, of course, you might wish to browse the archives of the Usenet newsgroup,
available at  http://groups.google.com/group/comp.ai.neural-nets  or from your
friendly neighborhood NNTP server.  The FAQ list for that group, in
particular, contains a lot of good links.

Hope this helps.

--
Rich


Re: [computer-go] Re: Why are different rule sets?

2007-07-12 Thread Richard Brown

On 7/12/07, Chris Fant [EMAIL PROTECTED] wrote:


No, gomputers are real:

http://www.google.com/search?q=gomputer


Maybe you were joking, but did you notice that one of the hits
from that search was a URL where the spelling was not only
used _intentionally_, but also -- in a remarkable occurrence of
serendipity and relevance to this list -- used to describe a
computer-go project?

The page describes the efforts of a group at the University of
Paderborn Center for Parallel Computing to develop a go program
on a cluster of FPGAs.  Hence "GOmputer":

http://wwwcs.uni-paderborn.de/pc2/index.php?id=191

[Pipe it through http://translate.google.com/ if you don't read German.]


Re: [computer-go] Explanation to MoGo paper wanted.

2007-07-11 Thread Richard Brown

On 7/11/07, Don Dailey [EMAIL PROTECTED] wrote:


The dirty hack I'm referring to is the robotic way this is implemented
in programs, not how it's done in humans.  With a pattern based program
you essentially specify everything and the program is not a participant
in the process.   It comes down to a list of do's and don'ts and if we
can claim that knowledge was imparted it might be true, but no wisdom or
understanding was.


I'm compelled to point out that neural nets, _trained_ on patterns, which
patterns themselves are then discarded, have the ability to recognize
novel patterns, ones which have never been previously seen, let alone
stored.  The list of do's and don'ts has been discarded, and what to do
or not do, in a situation that may never have been seen before, is inferred,
not looked up in a library of rules.

So, it is not true that with a pattern-based program "you essentially specify
everything."  At least, not if you have thrown the patterns away, and
have substituted multilayer feedforward networks for that _training_data_.
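
[A toy illustration in Python -- a single-layer perceptron rather than the
multilayer nets discussed here, and the feature vectors and labels are
invented, but the point is the same: the training patterns are thrown away,
and a never-seen input is still classified by inference, not lookup.]

    # Learn a weight vector from a handful of labelled "pattern" vectors,
    # discard the patterns, then classify a vector that was never stored.
    training = [
        ([1.0, 0.9, 0.1], +1),   # made-up patterns of one kind
        ([0.9, 1.0, 0.0], +1),
        ([0.1, 0.0, 1.0], -1),   # made-up patterns of the other kind
        ([0.0, 0.2, 0.9], -1),
    ]

    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(100):                      # simple perceptron updates
        for x, label in training:
            out = sum(wi * xi for wi, xi in zip(w, x)) + b
            if label * out <= 0:              # misclassified: nudge the weights
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label

    del training                              # the patterns are now discarded...

    novel = [0.8, 0.8, 0.2]                   # ...yet a never-seen input
    score = sum(wi * xi for wi, xi in zip(w, novel)) + b
    print('+1' if score > 0 else '-1')        # ...is still classified (+1 here)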


UCT simulates understanding and wisdom,  patterns just simulates
knowledge.


This is a very strong assertion.  We eagerly await the proof.  :-)

I can just as easily assert:

Trained neural nets simulate understanding and wisdom.  (A static
pattern library merely simulates knowledge, I agree.)


Again, this is largely philosophical because even UCT programs are
robots just following instructions.   It's all about what you are trying
to simulate and why it's called AI.  I think UCT tries to simulate
understanding to a much greater extent than raw patterns in a
conventional program.


Than raw patterns, yes.  Trained neural nets, too, try to simulate
understanding to a much greater extent than do raw patterns.

Of course Don is right, it boils down to philosophy.  And while we're
on that topic, ...

I regret some of the terms that have come into use with regard to AI,
due to the (misguided, in my humble opinion) philosophy of some.

The very name "artificial intelligence" bothers me; AI programs are neither.

When humans run certain computer programs, the programs may seem
intelligent enough to perform other tasks.  By the implied reasoning, taken
to its logical conclusion, a hammer is intelligent enough to drive a nail.

The military has its so-called "smart bombs," but in truth, machines and
algorithms are no more intelligent than hammers.

By a similar token, "pattern recognition" bothers me.  Machines and
algorithms don't recognize anything, ever.  That's anthropomorphism.

A somewhat better term is "pattern classification," but machines don't really
classify anything, either.  It is we humans who classify, _using_ the machines.

It's like saying that the hammer drives the nail, when in fact it is the human
who does so, _using_ the hammer.

And there is nothing particularly neural about neural networks, other
than their origins.  (True, they were first invented -- discovered, really -- by
someone who was trying to simulate a neuron, but they are much more
general than that.)  I prefer the term multilayer feedforward network for
the type of neural net commonly used in many domains.  (And now in go!)

This sort of semantic nitpicking may seem too severe.  However, it keeps me
from falling into the camp of those who believe that machines will one day
literally become intelligent, develop self-awareness, and achieve consciousness.

Ain't gonna happen.
--
Rich

P.S. -- I hated the movie AI.


Re: [computer-go] Explanation to MoGo paper wanted.

2007-07-10 Thread Richard Brown

On 7/10/07, Chris Fant [EMAIL PROTECTED] wrote:

 Nonetheless, a program that could not only play a decent game of go, but
 somehow emulate the _style_ of a given professional would be of interest,
 would it not?

Is this the case in chess?  If so, I've never heard of it.


I don't think that it is (but I don't know much about computer chess).

For a machine to learn the _style_ of anything whatsoever, by my reckoning,
is a rather difficult task.

As an example, I was once privileged to attend a talk by Donald Knuth in which
he described a somewhat difficult task that he was working on, and challenged
us to think about, namely, to teach a machine to recognize the _style_ of some
arbitrary font.

Far more difficult than mere OCR (optical character recognition), wherein one
already possesses the entire set of alphanumeric characters and symbols of a
particular font, this task was something like the following:

Given only an uppercase 'B', the numeral '4', and a lowercase 'a':

 Reproduce the entire font.

As you might well imagine, doing that could prove a bit trickier than OCR.

It's almost akin to reading the mind of a calligrapher:  What strokes would be
used to create a '7', an 'f', or an ampersand ('&'), given that we know only
the three characters above?  At what point do we think we have the right answer
to such questions?  If we think that we are finished, and then compare the font
that we have created against the actual font, then have we failed if it
turns out that there are differences?  That is, to what degree must our created
font match _exactly_ the actual font?  Pixel for pixel?  Or is there a degree
of leeway, within which we may be satisfied that we have succeeded?

In a similar way, being able to recognize the _style_ of some particular pro
go player is a bit trickier than merely creating a program that plays.

It's a different problem altogether.

Just as Knuth's problem is harder than OCR, so too is capturing a pro's style
a greater challenge than creating a go program.

[Disclaimer:  I've forgotten the exact details of Knuth's challenge.  He had
determined that there were three or four characters that had the necessary
and sufficient details (loops, serifs, horizontals, verticals, diagonals, etc.)
to permit recreating the entire font, for most fonts anyway.  I don't remember
which characters, nor how many, although I'm sure it was either three or four.]

--
Rich


Re: [computer-go] Progressive unpruning in Mango 19x19

2007-05-25 Thread Richard Brown

Nick Wedd wrote:


I prefer "unprune" to "graft".

"Graft" implies adding something to a tree which does not naturally
belong there.


Not naturally?

Consider a tree, to which you, the tree surgeon, have taken a pair of shears,
and lopped off a branch.  What has been pruned, has been pruned.


Q.  By what method will you now re-attach that branch to the tree?

A.  By grafting.


Unprune suggests that there is a branch which was 
implicitly there all along, you earlier decided not to consider it, but 
you have now reversed that decision.


Just as there was a branch, both implicitly and explicitly, that you decided
to lop off with your shears.  Now that you have decided you didn't really want
to lop it off, and reversed your decision, by what method will you re-attach it?

Grafting.

If you want to reject "unprune" because it isn't a word, then use
"grow" or "widen", which suggest adding something which is naturally
part of that tree.


If you want to reject "graft", you'll have to come up with a more convincing
argument.

I assert, further, that the terms "scion" and "stock" could be given explicit
technical definitions in this context.

--
Richard L. Brown Office of Information Services
Senior Unix Sysadmin University of Wisconsin System
 780 Regent St., Rm. 246
[EMAIL PROTECTED]  Madison, WI  53715


Re: [computer-go] UCT article

2007-02-22 Thread Richard Brown

Sylvain Gelly wrote:

Thank you all for your precise answers!

Sylvain

 


p.s. the "find out more" link at the bottom of your page

http://www.inria.fr/futurs/ressources-1/computer-culture/mogo-champion-program-for-go-games
is pointing to the wrong place, isn't it?

What do you mean?  You mean you can't access the page, or the content is
not informative, not relevant, not interesting?


This is the text:

The Mogo program was developed during Yizao Wang’s (a student from the Ecole Polytechnique, winner of the research centre’s 
prize attributed to the best intern for his work on Mogo) internship with collaboration from the TAO project-team (INRIA) and 
the CMAP (Ecole Polytechnique). The work was jointly conducted by Yizao Wang and Sylvain Gelly, a PhD student from the TAO 
project. Olivier Teytaud (INRIA), Rémi Munos (INRIA) and Pierre-Arnaud Coquelin, PhD student at CMAP supervised the project. 
Rémi Coulom (INRIA) provided development support and raised awareness of many of the ideas behind MoGo's success.

 find out more

But, when I click on the "find out more" link, it takes me to
http://cgos.boardspace.net/ !!

Surely that is not what you intended.


--
Rich


Re: [computer-go] UCT article

2007-02-21 Thread Richard Brown

Sylvain Gelly wrote:


my favorite line:

"In Go all marbles are identical..."

My English prevents me from understanding the subtlety here.
Is there any relation to the "type of stone" meaning of marble?


No, not really.

Here the meaning of marbles is that of children's toys, small
spherical objects propelled at one another by means of the thumb.

Children -- usually just the boys -- attempt to hit each other's
marbles, or to knock them out of a circle drawn in the dirt.

Sometimes the kids play for keeps and acquire the marbles they hit.

Metaphorically, someone engaged in large-stakes gambling is sometimes
said to be playing "for all the marbles."

And to "lose one's marbles" is a humorous way of saying to go insane.

Actually, America's founding fathers, including Benjamin Franklin,
Thomas Jefferson, and George Washington, were avid marbles-players,
even as grown-ups (adults).

[I have conjectured that this was because billiards tables were
difficult to obtain in eighteenth-century America.]

--
Rich

p.s. the "find out more" link at the bottom of your page
http://www.inria.fr/futurs/ressources-1/computer-culture/mogo-champion-program-for-go-games
is pointing to the wrong place, isn't it?


Re: [computer-go] Big board

2007-02-20 Thread Richard Brown

Chris Fant wrote:

Here is a completed game of Go between two random players... on a very
large board.

For ascetics, the eyes have been filled after both players passed.


I think you mean aesthetics.  Ascetics are guys who torture themselves,
and deny themselves pleasure, in a struggle to attain enlightenment.

Hmmm...  On second thought, considering the proclivities of this list's
readership, maybe that is what you meant!

--
Rich


Re: [computer-go] Is skill transitive? No.

2007-01-31 Thread Richard Brown

Vlad Dumitrescu wrote:


Unfortunately, having more than one dimension makes comparisons
impossible - if an ordering relation is defined over the domain, then
this domain is one-dimensional with regard to that relation.

In other words, one can't compare vectors, just scalars. So the
multi-dimensional strength vector has to be turned into a scalar (by
for example a weighted sum) and we're back where we started...


While that is true, as stated, it is also the case that this is exactly
the sort of thing at which artificial neural networks excel.

Given multiple inputs (a d-dimensional vector), the squashing functions
in the layers of the network in fact reduce the output to a single number.

Neural networks are excruciatingly well-documented, as is their use in
a wide variety of domains, even on the web.  [By that I mean, you needn't
buy a book to discover how powerful multi-layer perceptrons can be.]

So, while it's correct that "one can't compare vectors, just scalars,"
one can compare the _output_ of one vector-massaged-by-a-neural-net
against the _output_ of another vector-massaged-by-a-neural-net.

I think Vlad knows this, of course, as he said, 'the multi-dimensional
strength vector has to be turned into a scalar'.  [I'm just here to
clarify one common method of doing that.]
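
[A toy Python sketch of that reduction -- the weights here are random
placeholders rather than trained ones, and the "strength vectors" are
invented, but it shows a d-dimensional input being squashed to one number
that _can_ be compared:]

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))     # the squashing function

    # Forward pass of a tiny multi-layer perceptron: d inputs, one hidden
    # layer, a single squashed output.
    def mlp_scalar(vector, hidden_size=4, seed=0):
        rng = random.Random(seed)
        d = len(vector)
        w1 = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden_size)]
        w2 = [rng.uniform(-1, 1) for _ in range(hidden_size)]
        hidden = [sigmoid(sum(w * x for w, x in zip(row, vector))) for row in w1]
        return sigmoid(sum(w * h for w, h in zip(w2, hidden)))

    # Two hypothetical players' multi-dimensional strength vectors, each
    # reduced to a scalar; the scalars can be compared directly.
    print(mlp_scalar([0.7, 0.2, 0.9]), mlp_scalar([0.3, 0.8, 0.1]))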

The tough part is deciding what to measure, in your original vector.

I'm sure I don't know what variables one would use, but I agree with
Don:  "the 2 numbers together would predict your chances of beating
another (2 dim) player more accurately than a 1 dimension system could."

I further agree:  "And of course you could extend this."  Which is to say,
there is no reason to stop at two.  [Although there is something called
the "curse of dimensionality," which argues against making the dimensionality
too large:  see  http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-13.html .]

In practice, the feature-extraction phase is crucial.  Deciding what to
measure, finding the right number of items to measure, scaling them properly
(e.g., multiplying a vector element by a constant), not measuring irrelevant
items:  all these things are exceedingly important (and often difficult) when
constructing a useful, effective neural net.

So, I'm stumped.

In theory though, if one measures the right variables, and collects
enough data (sometimes a surprisingly small amount is sufficient!) a
neural network could be trained to recognize and predict the strength
of go-players; witness the (excruciatingly well-documented!) success
of neural networks in a wide variety of domains.

--
Rich