> ladders, not just liberties. In that case, yes! If you outright tell the
> neural net as an input whether each ladder works or not (doing a short
> tactical search to determine this), or something equivalent to it, then the
> net will definitely make use of that information, ...
Each convolution
> Blog post:
> https://blog.janestreet.com/accelerating-self-play-learning-in-go/
> Paper: https://arxiv.org/abs/1902.10565
I read the paper, and really enjoyed it: lots of different ideas being
tried. I was especially satisfied to see figure 12 and the big
difference that giving some go features made.
> I also think that what makes real go that hard is ko, but you've shown that
> it's
> equivalent to ladder, which frankly baffles me. I'd love to understand that.
Just different definitions of "hard"? Ko is still way harder (more
confusing, harder to discover a winning move when one exists) than ladders.
> but then it does not make sense to call that algorithm "rollout".
>
> In general: when introducing a new name, care should
> be taken that the name describes properly what is going on.
Speaking of which, why did people start calling them rollouts instead of
playouts?
Darren
P.S. And don't get
> Weights_31_3200 is 20 layers of 192, 3200 board evaluations per move
> (no random playout). But it still has difficulties with very long
> strings. My next network will be 40 layers of 256, like Master.
"long strings" here means solidly connected stones?
The 192 vs. 256 is the number of 3x3 convolution filters in each layer?
>> One of the changes they made (bottom of p.3) was to continuously
>> update the neural net, rather than require a new network to beat
>> it 55% of the time to be used. (That struck me as strange at the
>> time, when reading the AlphaGoZero paper - why not just >50%?)
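For concreteness, the 55% gate being discussed is just this (a sketch; the function name and the strict comparison are my own assumptions):

```python
def should_promote(wins, games, threshold=0.55):
    """Gating as described above: adopt the candidate network only if
    it beats the current best in more than `threshold` of the
    evaluation games. Continuous update is the no-gate degenerate case."""
    return games > 0 and wins / games > threshold

assert should_promote(56, 100)
assert not should_promote(55, 100)   # exactly 55% does not clear the bar
```

Setting the threshold to 0.50 (or dropping the gate entirely, as in AlphaZero) only changes that one constant.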
Gian wrote:
> I read that a
> Mastering Chess and Shogi by Self-Play with a General Reinforcement
> Learning Algorithm
> https://arxiv.org/pdf/1712.01815.pdf
One of the changes they made (bottom of p.3) was to continuously update
the neural net, rather than require a new network to beat it 55% of the
time to be used. (That s
high-end PC 20 years apart.
https://en.wikipedia.org/wiki/History_of_supercomputing#Historical_TOP500_table
--
Darren Cook, Software Researcher/Developer
My New Book: Practical Machine Learning with H2O:
http://shop.oreilly.com/product/0636920053170.do
> Would it typically help or disrupt to start
> instead with values that are non-random?
> What I have in mind concretely:
Can I correctly rephrase your question as: if you take a well-trained
komi 7.5 network, then give it komi 5.5 training data, will it adapt
quickly, or would it be faster/bette
> Zero was reportedly very strong with 4 TPU. If we say 1 TPU = 1 GTX 1080
> Ti...
4 TPU is 180 TFLOPS, or 45 TFLOPS each [1].
A GTX 1080 Ti is 11.3 TFLOPS [2], or 9 TFLOPS for the normal 1080.
So 4 TPUs are more like 15-20 times faster than a high-end gaming notebook.
(I'm being pedantic; I expect
> You make me really curious, what is a Keras model ?
When I was a lad, you had to bike 3 miles (uphill in both directions) to
the library to satisfy curiosity. Nowadays you just type "keras" into
Google ;-)
https://keras.io/
Darren
___
Computer-go mai
> Since AlphaGo, almost all academic organizations have
> stopped development but, ...
In Japan, or globally? Either way, what domain(s)/problem(s) have they
switched into studying?
Darren
> What do you want evaluate the software for ? corner cases which never
> have happen in a real game ?
If the purpose of this mailing list is a community to work out how to
make a 19x19 go program that can beat any human, then AlphaGo has
finished the job, and we can shut it down.
But this list h
Could we PLEASE take this off-list? If you don't like someone, or what
they post, filter them. If you think someone should be banned, present
your case to the list owner(s).
Darren
> The source of AlphaGo Zero is really of zero interest (pun intended).
The source code is the first-hand account of how it works, whereas an
academic paper is a second-hand account. So, definitely not zero use.
> So yes, the database of 29M self-play games would be immensely more
> valuable. (Pr
how badly Deep Blue had played until
he analyzed the games with modern chess computers. That is an amazing
thing to hear from the mouth of Kasparov! The book he is plugging is
here - I just skimmed the reviews, and it actually sounds rather good:
https://www.amazon.co.uk/Deep-Thinking-Machine-In
sing... but the
real science was known by the 1997 rematch... but AlphaGo is an entirely
different thing. Deep Blue's chess algorithms were good for playing
chess very well. The machine-learning methods AlphaGo uses are
applicable to practically anything."
Agree or disagree?
Darren
> https://en.wikipedia.org/wiki/Brute-force_search explains it as
> "systematically enumerating all possible candidates for the
> solution".
>
> There is nothing systematic about the pseudo random variation
> selection in MCTS;
More semantics, but as it is pseudo-random, isn't that systematic?
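In the reproducible sense, at least: a seeded PRNG is fully deterministic, so the same "random" search can be replayed move for move:

```python
import random

def playout_moves(seed, n=5):
    """Draw n pseudo-random point indices from an independent generator
    with a fixed seed (361 = the number of points on a 19x19 board)."""
    rng = random.Random(seed)
    return [rng.randrange(361) for _ in range(n)]

# Same seed, same "random" choices: the run is exactly reproducible.
assert playout_moves(42) == playout_moves(42)
assert playout_moves(42) != playout_moves(43)
```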
Stone.)
Darren
not just to make a strong Chinese-rules go
program, why not embrace the messiness!
(Japanese rules are not *that* hard. IIRC, Many Faces, and all other
programs, including my own, scored in them, before MCTS took hold and
being able to shave milliseconds off scoring became the main decider
, bikes and wild animals).
Or how about this angle: humans are still better than the programs at
Japanese rules. Therefore this is an interesting area of study.
Darren
> Can you say something more on "Fine Art"?
> From which country is it? Who is Tencent?
Tencent is a very big Chinese Internet company; it is described here as
the largest gaming company in the world:
https://en.wikipedia.org/wiki/Tencent
Darren
> English official page has the info.
> http://www.worldgochampionship.net/english/
Thanks. Is it three hours, with sudden death? It says there is byo-yomi
from 5 minutes left, but didn't mention seconds per move, so is it just
a 300, 299, 298, 297, ... kind of countdown?
Darren
have
a ko, play a ko threat. If you have two 1-eye groups near each
other, join them together. :-)
Okay, those could be considered higher-level concepts, but I still
thought it was impressive to learn to play arcade games with no hints at
all.
Darren
>
> On Sat, Feb 25, 2017 at
,
or was done in parallel with it): https://deepmind.com/research/dqn/
It just learns from trial and error, no expert game records:
http://www.theverge.com/2016/6/9/11893002/google-ai-deepmind-atari-montezumas-revenge
Darren
ex whole board ko fights, obscure under-the-stones tesuji, etc.
I wondered if anyone here had studied those 50 games and found anything
interesting or impressive, along those lines? I.e. if I was going to
look at just one game, which one should it be?
Thanks,
Darren
problem of how to pass a
probability distribution up the tree, and then what to do with it at the
top.
(The presence of the life/death battles means the distribution tends to
have multiple peaks, not be nice and gaussian.)
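One possible way (my own sketch, not from any particular program) to pass the distribution up: mix the children's score histograms weighted by visit count. Collapsing to a mean would throw away exactly the multi-peak shape mentioned above.

```python
from collections import Counter

def merge_up(children):
    """children: list of (visits, dist) pairs, where dist is a Counter
    mapping score -> probability. Returns the visit-weighted mixture
    distribution for the parent node."""
    total = sum(v for v, _ in children)
    mixed = Counter()
    for visits, dist in children:
        for score, p in dist.items():
            mixed[score] += p * visits / total
    return mixed

# A life/death fight: one line says the group lives (+8), another it dies (-12).
parent = merge_up([(300, Counter({+8: 1.0})), (100, Counter({-12: 1.0}))])
assert abs(parent[8] - 0.75) < 1e-9
assert abs(parent[-12] - 0.25) < 1e-9   # both peaks survive the merge
```

At the top of the tree you can then pick by P(score > komi) rather than by expected score.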
Darren
> DeepMind published AlphaGo's selfplay 3 games with comment.
I've just been playing through the AlphaGo-Lee first game. When it shows
a variation, is this what AlphaGo was expecting, i.e. its principal
variation? Or is the follow-up "just" the opinion of the pro commentators?
(E.g. game 1, move 13,
> Any chance someone has put this on Youtube for those of us who primarily
> consume
> videos on phones or tablets (where a 2.0GB is very large to store locally)?
> And
> if so, replying with a link here would be deeply appreciated.
+1. It is actually 3GB, for a 40 minute video! I had to start
>> At 5d KGS, is this the world's strongest MIT/BSD licensed program? ...
>> actually, is there any other MIT/BSD go program out there? (I thought
>> Pachi was, but it is GPLv2)
>
> Huh, that's interesting, because Darkforest seems to have copy-pasted
> the pachi playout policy:
>
> https://githu
> DarkForest Go engine is now public on the Github (pre-trained CNN models are
> also public). Hopefully it will help the community.
>
> https://github.com/facebookresearch/darkforestGo
Ooh, BSD license (i.e. very liberal, no GPL virus). Well done! :-)
At 5d KGS, is this the world's strongest M
> http://itpro.nikkeibp.co.jp/atcl/column/15/061500148/051900060/
> (in Japanese). The performance/watt is about 13 times better,
> a photo in the article shows.
Has anyone found out exactly what the "Other" in the photo is? The
Google blog was also rather vague on this.
(If you didn't click t
> It's be interesting to know what the speedup factor against, say,
> Tesla K40 is.
Or against the P100 chip [1], which claims the same "order of magnitude"
speed-up on neural nets by doing the same thing (half-precision floating
point).
Darren
[1]:
http://nvidianews.nvidia.com/news/nvidia-deliv
> I've implemented the Tromp Taylor algorithm. As a comparison I use
> gnugo.
> Now something odd is happening. If I setup a board of size 11
> (boardsize 11), then put a stone (play b a1) and then ask it to run the
> scoring (final_score), then it takes minutes before it finishes. That
> alone is
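For comparison: Tromp-Taylor counting itself is just one flood fill per empty region, so final_score on an 11x11 board with one stone should be effectively instant. Minutes suggests the cost is somewhere else (e.g. an accidentally exponential reachability search, or scoring inside every playout). A minimal scorer, as a sketch:

```python
def tromp_taylor_score(n, stones):
    """n: board size; stones: dict (x, y) -> 'b' or 'w'.
    Tromp-Taylor area counting: a point counts for a colour if it is a
    stone of that colour, or an empty point reaching only that colour.
    Returns black_area - white_area (add komi separately)."""
    score = {'b': 0, 'w': 0}
    for colour in stones.values():
        score[colour] += 1
    seen = set()
    for x in range(n):
        for y in range(n):
            if (x, y) in stones or (x, y) in seen:
                continue
            # Flood-fill one empty region, noting which colours it touches.
            region, touches, stack = [], set(), [(x, y)]
            seen.add((x, y))
            while stack:
                cx, cy = stack.pop()
                region.append((cx, cy))
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if not (0 <= nx < n and 0 <= ny < n):
                        continue
                    if (nx, ny) in stones:
                        touches.add(stones[(nx, ny)])
                    elif (nx, ny) not in seen:
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            if touches == {'b'}:
                score['b'] += len(region)
            elif touches == {'w'}:
                score['w'] += len(region)
    return score['b'] - score['w']

# The case from the quote: boardsize 11, one black stone at a1.
assert tromp_taylor_score(11, {(0, 0): 'b'}) == 121
```

Each point is visited once, so this is O(n^2) per scoring call.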
Thanks for the very interesting replies, David, and Remi.
No-one is using TensorFlow, then? Any reason not to? (I'm just curious
because there looks to be a good Udacity DNN course
(https://www.udacity.com/course/deep-learning--ud730), which I was
considering, but it is using TensorFlow.)
Remi w
David Fotland wrote:
> There are 12 programs here that have deep neural nets. 2 were not
> qualified for the second day, and six of them made the final 8. Many
> Faces has very basic DNN support, but it’s turned off because it
> isn’t making the program stronger yet. Only Dolburam and Many Faces
> ...
> Pro players who are not familiar with MCTS bot behavior will not see this.
I stand by this:
>> If you want to argue that "their opinion" was wrong because they don't
>> understand the game at the level AlphaGo was playing at, then you can't
>> use their opinion in a positive way either.
> ... we witnessed hundreds of moves vetted by 9dan players, especially
> Michael Redmond's, where each move was vetted.
This is a promising approach. But there were also numerous moves where
the 9-dan pros said that, in *their* opinion, the moves were weak/wrong.
E.g. wasting ko threats for no
> If I remember correctly, it is not browser implementation, but rather a
> frontend. The actual computation runs on server, browser only communicates
> the
> moves and shows the results.
No, a quick test shows once it loads it has not made any server calls.
It has a 14MB file which looks like:
> "sx": 1, "sy": 1, "w": [0.519023, -1.379795, -0.495255, -0.051380,
> -0.466160, -1.380873, -0.630742, -0.174662, -0.743714, -1.288785,
> -0.607110, -0.536119, -0.819585, -0.248130, -0.629681, -0.004683,
> -0.408890, -1.701742, -0.011255, -0.833270, -0.665327
> You can also look at the score differentials. If the game is perfect,
> then the game ends up on 7 points every time. If players made one
> small error (2 points), then the distribution would be much narrower
> than it is.
I was with you up to this point, but players (computer and strong
humans)
> You are right, but from fig 2 of the paper can see, that mc and value
> network should give similar results:
>
> 70% value network should be comparable to 60-65% MC winrate from this
> paper, usually expected around move 140 in a "human expert game" (what
> ever this means in this figure :)
Tha
From Demis Hassabis:
When I say 'thought' and 'realisation' I just mean the output of
#AlphaGo value net. It was around 70% at move 79 and then dived
on move 87
https://twitter.com/demishassabis/status/708934687926804482
Assuming that is an MCTS estimate of winning probability, that 70%
s
Well done, Aja and all the DeepMind team (including all the "backroom
boys" who've given the reliability on the hardware side).
BTW, I've gained great pleasure seeing you sitting there with the union
jack, representing queen and country; you'll probably receive a
knighthood. :-)
> Thanks all. Alp
>> global, more long-term planning. A rumour so far suggests to have used the
>> time for more learning, but I'd be surprised if this should have sufficed.
>
> My personal hypothesis so far is that it might - the REINFORCE might
> scale amazingly well and just continuous application of it...
Agre
> In fact in game 2, white 172 was described [1] as the losing move,
> because it would have started a ko. ...
"would have started a ko" --> "should have instead started a ko"
> I was surprised the Lee Sedol didn't take the game a bit further to probe
> AlphaGo and see how it responded to [...complex kos, complex ko fights,
> complex sekis, complex semeais, ..., multiple connection problems, complex
> life and death problems] as ammunition for his next game.
In fact in
Wow - didn't expect that. Congratulations to the AlphaGo team!
Ingo wrote:
> Similar with CrazyStone. After move 26 CS gave 56 % for AlphaGo
> and never went below this value. Soon later it were 60+ %, and
> never went lower, too.
Did it show jumps at some of the key moves the human experts thoug
Current edition of New Scientist has an article (p.26) by Garry Kasparov
on the AlphaGo vs. Lee Sedol match. (Just a page, no deep analysis;
though the facing page is also interesting: about Facebook applying AI
to map-making.)
Darren
P.S. I think you can view online with a free subscription:
htt
I'm sure quite a few people here have suddenly taken a look at neural
nets the past few months. With hindsight where have you learnt most?
Which is the most useful book you've read? Is there a Udacity (or
similar) course that you recommend? Or perhaps a blog or youtube series
that was so good you w
>> The longest I've been able to find, by more or less random sampling,
>> is only 521 moves,
>
> Found a 582 move 3x3 game...
Again by random sampling?
Are there certain moves(*) that bring games to an end earlier, or
certain moves(*) that make games go on longer? Would weighting them
appropria
> someone cracked Go right before that started. Then I'd have plenty of
> time to pick a new research topic." It looks like AlphaGo has
> provided.
It seems [1] the smart money might be on Lee Sedol:
1. Ke Jie (world champ) – limited strength…but still amazing… Less than
5% chance against Lee Se
> I'd propose these as the major technical points to consider when
> bringing a Go program (or a new one) to an Alpha-Go analog:
> ...
> * Are RL Policy Networks essential? ...
Figure 4b was really interesting (see also Extended Tables 7 and 9): any
2 of their 3 components, on a single machin
white moves as particularly good, e.g.
108, which is also an empty triangle: obviously AlphaGo isn't being held
back by any "good shape" heuristics ;-)
I hope he comments the other four games!
Darren
--
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps w
> If you want to view them in the browser, I've also put them on my blog:
> http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search/
> (scroll down)
Thanks. Has anyone (strong) made commented versions yet? I played
through the first game, but it j
> Google beats Fan Hui, 2 dan pro, 5-0 (19x19, no handicap)!
> ...
> I read the paper...
Is it available online anywhere, or only in Nature?
I just watched the video, which was very professionally done, but didn't
come with the SGFs, information on time limits, number of CPUs, etc.
Aja, David - s
> Attempting to maximize the score is not compatible with being a
> strong engine. If you want a dan level engine it is maximizing
> win-probability.
If you narrow it down such that komi 25.5, 27.5, and 29.5 give a black
win with 63% to 67% probability, but komi 31.5 jumps to black only
winning 4
> I am trying to create a database of games to do some machine-learning
> experiments. My requirements are:
> * that all games be played by the same strong engine on both sides,
> * that all games be played to the bitter end (so everything on the board
> is alive at the end), and
> * that both s
> standard public fixed dataset of Go games, mainly to ease comparison of
> different methods, to make results more reproducible and maybe free the
> authors of the burden of composing a dataset.
Maybe the first question should be whether people want a database of
*positions* or *games*.
I imagine
> If one or two of these cells are outside the board the
> move will count as a pass. If the landing cell is occupied by another
> stone the move is also counted as a pass. Illegal moves are also counted
> as pass moves.
Alternatively, the probability could be adjusted for the number of legal
mov
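Concretely, the adjustment could be to renormalise over the legal set, i.e. sample only among legal candidates instead of converting illegal draws into passes (a sketch; `is_legal` stands in for whatever legality test the variant uses):

```python
import random

def pick_move(candidates, is_legal, rng):
    """Renormalise the move distribution over legal moves: draw only
    from the legal subset, returning 'pass' only when nothing is legal."""
    legal = [m for m in candidates if is_legal(m)]
    return rng.choice(legal) if legal else 'pass'

rng = random.Random(1)
# Toy example: pretend only even-numbered points are legal.
move = pick_move(range(10), lambda m: m % 2 == 0, rng)
assert move in {0, 2, 4, 6, 8}
assert pick_move(range(10), lambda m: False, rng) == 'pass'
```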
> Of course, that's anecdata...anyone is welcome to prove or disprove this
> old claim by analyzing the stats on KGS, or Tygem or wherever else.
Don't forget the distortion due to people knowing the komi, and playing
to win, rather than playing to maximize their score.
Darren
> I have a probability table of all possible moves. What is the
> fastest way to pick with probability, possibly with reducing the
> quality of probability?!
>
> I could not find any discussion on this on computer-go, but probably
> I missed it :(
I may have misunderstood the question, but there
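but in case this is the question that was meant: the standard trick is to build the cumulative sums once and binary-search them, O(log n) per draw (an alias table gets it to O(1) if the setup cost is amortised). A sketch:

```python
import bisect
import itertools
import random

def make_sampler(weights):
    """O(n) setup, O(log n) per draw: binary search on cumulative sums."""
    cum = list(itertools.accumulate(weights))
    total = cum[-1]
    def draw(rng):
        return bisect.bisect_right(cum, rng.random() * total)
    return draw

rng = random.Random(7)
draw = make_sampler([0.0, 5.0, 0.0, 1.0])   # only indices 1 and 3 possible
picks = [draw(rng) for _ in range(1000)]
assert set(picks) <= {1, 3}
assert picks.count(1) > picks.count(3)      # index 1 is 5x likelier
```

The "reduced quality" version in the question might just mean quantising the weights and sampling from a flat index table, trading memory for O(1) draws.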
> I have problems to access the KGS server. My Firefox 40.0.3
> (under Windows 8.1) is even not allowing me to visit the website
> www.gokgs.com.
> Argument: "Diffie-Hellman key is too weak"
Here is how to have Firefox not be so fussy:
http://letusexplain.blogspot.co.uk/2015/08/solved-server-has
> Robert, David Fotland has...
> I find your critique a little painful.
I don't think Robert was critiquing - he was asking for David's
definition of group strength and connection strength.
> the "stupid" monte carlo works so much better.
Does it? I thought "stupid" monte carlo (i.e. light play
> I think you are right, though. In my opinion, calling MCTS "brute
> force" isn't really fair, the brute force portion really doesn't
> work and you need to add a lot of smarts both to the simulations and
> to the way you pick situations to simulate to make things work.
In chess, basic min-max,
> However, i have to admit that in 1979 i was a false prophet when i claimed
> "the brute-force approach is a no-hoper for Go, even if computers become a
> hundred times more powerful than they are now" ...
I think you are okay: at the point where computers were 100 times
quicker than in 1979, mon
> performance at endgames is worse than middle, because IMHO MC
> simulations don't evaluate the values (due to execution speed) of
> yose-moves and play such moves in random orders. Assuming there are 7,
> 3, 1 pts moves left at a end position, for example.
(Sorry for two messages). I just th
> yose-moves and play such moves in random orders. Assuming there are 7,
> 3, 1 pts moves left at a end position, for example. Correct order is
> obviously 7, 3 and 1 (sente gets +5 pts) but all combinations are played
> at the same probability in MC simulations now. The average of the
> sco
> I imagine it would be fairly easy to swap from MCTS to a CGT solver once it
> could be applied.. Or is this not interesting for some reason?
It only becomes usable once the game is pretty much decided. (Though
you can construct artificial positions where it gives you a correct
move that is non-obvious.)
> It is not exactly Go, but i have a monte-carlo tree searcher on the GPU for
> the game of Hex 8x8
> Here is a github link https://github.com/dshawul/GpuHex
The engine looks to be just the middle 450 lines of code; quite compact!
So running playouts on a GPU worked out well?
Would doing the
Steven wrote:
> http://arxiv.org/abs/1412.6564 (nvidia gtx titan black)
> http://arxiv.org/abs/1412.3409 (nvidia gtx 780)
Thanks - I had read those papers but hadn't realized the neural nets
were run on GPUs.
Nikos wrote:
>> https://timdettmers.wordpress.com/2015/03/09/deep-learning-hardware-guid
I wondered if any of the current go programs are using GPUs.
If yes, what is good to look for in a GPU? Links to essential reading on
this topic would be welcome. (*)
If not, is there some hardware breakthrough being waited for, or some
algorithmic one?
Darren
*: After many years of being happy
;action=display;num=1429402345;start=1#1
To my untrained eye it looks like they are all game-specific, rather
than something we could steal from to use in other games and other
domains :-)
Darren
> I disagree with that. Why does it suck?
(Getting a bit OT for computer-go, so I replied off-list; if anyone was
following the conversation, and wants to be CC-ed let me know.)
Darren
> BTW I am a Linux guy true and true since 1994. But I am DAMN tempted
> to write it in C#.
I use mono on linux [1], and c# is an OK language for this kind of
thing. RestSharp is an interesting library for web service *clients*,
but of course you are writing a server.
Lots of C++ programmers on
> I will be willing to welcome players of all strengths, if that is what the
> strong players want...
Winning against a much weaker player does not prove anything, nor teach
you much. (In contrast to losing against a stronger player.)
I wonder if handicaps could be used? E.g. the elite players coul
g the generator each time you
need a new random number?
Darren
> To be honest, what I really want is for it to self-learn,...
I wonder if even the world's most powerful AI (i.e. the human brain)
could self-learn go to, say, strong dan level? I.e. Give a boy genius a
go board, the rules, and two years, but don't give him any books, hints,
or the chance to play
(I didn't see it, but my apologies if someone already posted this.)
Forwarded Message
Subject: Announcement Call for Papers ACG 2015 -- deadline 1 March 2015
Date: Tue, 23 Dec 2014 16:40:17 +0100
Dear all,
With great pleasure we announce the 14th International Conference Advan
> Is "KGS rank" set 9 dan when it plays against Fuego?
Aja replied:
> Yes.
I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the r
On 2014-12-19 15:25, Hiroshi Yamashita wrote:
> Ko fight is weak. Ko threat is simply good pattern move.
I suppose you could train on a subset of data: only positions where
there was a ko-illegal move on the board. Then you could learn ko
threats. And then use this alternative NN when meeting a k
ement from just using a subset of the training data
was one of the most surprising results.
Darren
So NNs good for move candidate generation, MCTS
good for scoring?
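That division of labour is roughly what AlphaGo-style selection does: the net's prior P proposes candidates and the MCTS statistics Q score them. A sketch of the PUCT-style rule (the field names and the c_puct constant are my own choices):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximising Q + U: Q is the MCTS value estimate,
    U is an exploration bonus steered by the neural-net prior P and
    shrinking as the child accumulates visits."""
    total_n = sum(ch["n"] for ch in children) or 1
    def score(ch):
        q = ch["w"] / ch["n"] if ch["n"] else 0.0
        u = c_puct * ch["p"] * math.sqrt(total_n) / (1 + ch["n"])
        return q + u
    return max(children, key=score)

# An unvisited move with a big prior beats a well-explored mediocre one.
a = {"p": 0.6, "n": 0, "w": 0.0}
b = {"p": 0.1, "n": 50, "w": 20.0}
assert puct_select([a, b]) is a
```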
Darren
http://arxiv.org/pdf/1412.3409v1.pdf
>>
>> Their move prediction got 91% winrate against GNU Go and 14%
>> against Fuego in 19x19.
(4.6k) 5000 playouts 10minutes + 30sec byoyomi (x5)
>
> These are last year's data,
>
> AyaMC4 2k (1.4k) 10sec/mov (8cores) 1minute + 15sec byoyomi (x10)
> AyaMC 3k (2.6k) 10sec/mov (8cores) 10minutes + 30sec byoyomi (x5)
> (about 16 playouts/mov)
managing their blitz time foolishly does distort things a bit.
Darren
--
Darren Cook, Software Researcher/Developer
Specializing in intelligent search (in multiple languages), discovery
of context, aiding communication, and basically helping people find
and make good use of their
> 2400 1d mfgo12-610-2c 1d ManyFaces1 (10sec blitz)
> 2500 2d Aya693_1c Zen-4.9-1c 2d Zen19 Zen (15sec/move)
> 2600 3d Fuego-1095-1c
> 2700 4d Zen-4.9-1c
> 2800 5d Zengg9-4x4c
> 2900 6d
>
> Hiroshi Yamashita
[1]: http://www.gokgs.com/
Do any of the strongest MCTS programs have a rank at 9x9 on any major
server? I found the "fuego9" account on KGS but it appears to be
unranked and only playing free games (*). The "ManyFaces" account
appears to play only 19x19.
I know the programs are stronger at 9x9 than 19x19, but I'm trying to
hough increasing the 60% threshold as the
game progresses may make sense).
I'm surprised people are using a simple linear decreasing rule, but very
interested to hear there is a tangible improvement. Perhaps being
adaptive isn't needed?
Darren
> Yes. And while worrying about what happens after a win rate of 97%
> sounds like splitting hairs, I think we're talking about an awkward
> way of measuring something that's of practical interest.
Yes. How can a program be strong enough to win 97%, yet not win 100%?
Over on the fuego list Martin M
ite - http://www.gochildgame.com
>
> Regards,
> gosharplite
>
000 random playouts/second on 9x9 using a single
>>>>> thread on a 32-bit iMac, using the gc compiler, which doesn't do any
>>>>> optimization. I suspect that a board structure that tracked
>>>>> pseudo-liberties could do better.
>>>>>
g
from gogui, I tried each of go_rules japanese chinese and kgs and
gogui reports all the go_param_rules have changed (including super ko rule).
(Also I think someone reported on the fuego list that "uct_param_search
number_playouts 2" stopped giving any advantage after some bug fixes?
mbling over the seeding at the UEC Cup but that is still
quite a difference. Did it have technical problems at Hakone?
Darren
--
Darren Cook, Software Researcher/Developer
http://dcook.org/gobet/ (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http:/
t stronger when using multiple threads. I'm not sure
what the second is doing...
Darren
[1]:http://www.cs.ualberta.ca/TechReports/2009/TR09-09/TR09-09.pdf
less influence). I remember having a really good reason to
want to delay reducing multiple features to a single number, but it is
all a bit fuzzy now.
Does this type of search have a name, and any associated research?
Darren
, is a fine read (a professional writer took
a year off to become a poker pro), and nicely shows the balance between
maths, bluffing and hustling by *professional* gamblers.
Darren
Zen, or are
you trying to run other programs on a cluster too?)
Darren
and Korean
translations because there is so much interest in go in those countries,
but I'd appreciate volunteers for translations to any language. I don't
really have budget, but can offer publicity (e.g. in the Japanese page I
have links to Yasuhiro's company website and his algorithm books).
s not
allowed here.