Looking at the lightest-playout version of my bitmap-based 9x9 program
(source code is somewhere in the archives), I spent an estimated 2% of the
time generating random numbers, so 40% suggests something is not right,
such as re-initializing the generator all the time.
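
A minimal sketch of that pitfall, in illustrative C++ with std::mt19937
rather than the SFMT code my engine actually uses: seed once at startup and
each draw costs only a few cc; construct a fresh generator per draw and you
pay for re-initializing the whole state every time.

  #include <random>

  std::mt19937 rng(12345);               // seeded ONCE at startup

  int pick_fast(int n) {
      return (int)(rng() % n);           // a few cc per draw
  }

  int pick_slow(int n) {
      // Re-initializes the full 624-word state on every call;
      // easily two orders of magnitude slower than pick_fast.
      std::mt19937 fresh(std::random_device{}());
      return (int)(fresh() % n);
  }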

The execution time of my engine breaks down roughly like this on my MacBook:

benchmarking Mersenne Twister (SFMT version [following Agner Fog]; clock cycles per call)
  [BRandom] = 1, [IRandom] = 1, [IRandomX] = 2, [Random] = 2, [FRandom] = 2

benchmarking playouts (clock cycles)
  [game] = 18051 cc, [moves] = 111.07

benchmarking playouts (wall time)
  [game] = 126.108 kpps, [moves] = 111.07

benchmarking board functions (calls per game/clock cycles)
  [select] 111/55, [play] 106/93, [score] 14, [win] 0.4251

benchmarking move generator (calls per game/clock cycles, find+pick)
  [moves] 111
    [random] 111/17+36

This last line breaks down move generation into finding all legal moves
(17cc, using the definition of legal from Gunnar Farneback's Brown program)
and picking one uniformly at random (36cc). The pick uses 1-3 random
numbers at roughly 1cc each. A move takes 55 + 93 = 148cc, so 3cc of random
number generation is at most about 2%.
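
For clarity, here is roughly the shape of that generator, written as
illustrative C++ rather than the bitmap code the engine actually uses
(Board, is_legal and PASS are placeholders):

  #include <random>
  #include <vector>

  extern std::mt19937 rng;                       // seeded once, see above

  int generate_random_move(const Board& b, int color) {
      std::vector<int> legal;
      legal.reserve(81);
      for (int p = 0; p < 81; ++p)               // all 9x9 points
          if (b.is_legal(p, color))              // Brown-style legality: the "find" part (17cc)
              legal.push_back(p);
      if (legal.empty()) return PASS;
      std::uniform_int_distribution<int> d(0, (int)legal.size() - 1);
      return legal[d(rng)];                      // the uniform "pick" part (36cc)
  }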

Things get a bit messier to assess once you consider waterfalling moves,
such as captures, atari extensions, etc.; see the sketch below. In general,
though, going towards heavier playouts will only reduce the relative time
spent on random number generation, because you have to maintain more data
or perform more logic per move. In any case, spending more than a handful
of cc per move on random number generation is not needed, so random number
generation should not be high on your resource-consumption list.
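
The sketch below shows what I mean by waterfalling; the generator names are
made up for illustration. Higher-priority generators are tried first and
the policy falls through to the uniform generator when they produce nothing.
Each stage may consume a random number of its own to choose among its
candidates, which is why the accounting gets messier, but the per-move RNG
cost stays small next to the extra board logic.

  // Illustrative only; capture_move, atari_extension and NO_MOVE are placeholders.
  int generate_heavy_move(const Board& b, int color) {
      int m;
      if ((m = capture_move(b, color))    != NO_MOVE) return m;  // captures first
      if ((m = atari_extension(b, color)) != NO_MOVE) return m;  // then rescue own stones
      return generate_random_move(b, color);                     // fall through to uniform
  }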

For reference, an old profile of a non-bitmap, heavier MCTS player shows
1.1% of the time spent on random number generation.

René

BTW, the average game length using a uniformly random move generator is a
pretty good correctness test.
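
If it helps, the check is as simple as this; play_random_game is a
placeholder that returns the number of moves in one uniformly random
playout:

  double average_game_length(int games) {
      long long total = 0;
      for (int i = 0; i < games; ++i)
          total += play_random_game();
      return (double)total / games;
  }

On 9x9 with Brown-style legality I expect something close to the ~111
moves/game shown in the benchmark above; a value far off usually points at
a legality or scoring bug.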

On Sun, Mar 29, 2015 at 10:16 PM, Petri Pitkanen <petri.t.pitka...@gmail.com> wrote:

> Assuming you are using some sensible OS, there are better ways to profile
> than manual sampling, such as oprofile on Linux. There is a similar tool
> for FreeBSD, I think. No instrumentation, and the sampling is automated.
>
> Petri
>
> 2015-03-30 8:05 GMT+03:00 hughperkins2 <hughperki...@gmail.com>:
>
>> 40% sounds pretty high. Are you sure it's not an artefact of your
>> profiling implementation?
>>
>> I prefer not to instrument, but to sample stack traces. You can do this
>> using gdb by pressing ctrl-c, then typing bt. Do this 10 times, and look
>> for the parts of the stack that occur often.
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
