Hi Nicolas,

First of all, thanks for your time...I do appreciate it. OK, let's see:


On 19/08/12 13:01, nicolas.o...@gmail.com wrote:
> You might use the fact that -Infinity is a neutral element for max.
> (Or the smallest long if you work with longs.)
> Alternatively you can represent your value as either a number or nil,
> nil being the neutral element.

So do you mean wrapping 'max-key' in a fn of my own that can actually take 0 args and return Integer/MIN_VALUE? Something like this:

(defn cbn-max-key
  ([] Integer/MIN_VALUE)                 ; neutral element for the 0-arg case
  ([k x] x)
  ([k x y] (if (> (k x) (k y)) x y))
  ([k x y & more]
   ;; reduce1 is private to clojure.core; plain reduce works here
   (reduce #(max-key k %1 %2) (max-key k x y) more)))
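And if the idea is to feed this to reducers, I suppose the zero arity is what lets it act as the combining fn for fold. Something along these lines is what I imagine (fold-best and the :score key are made-up names, and the sentinel follows your -Infinity suggestion - note the neutral element has to be something k can be applied to, hence a map rather than a bare Integer/MIN_VALUE):

(require '[clojure.core.reducers :as r])

;; sentinel whose :score is below any real score, so it is
;; a neutral element for max-key
(def -inf {:score Double/NEGATIVE_INFINITY})

(defn fold-best
  "Pick the child with the highest :score, in parallel.
  fold only parallelises vectors/maps, so children must be a vector."
  [children]
  (r/fold (fn ([] -inf)                     ; identity element
            ([a b] (max-key :score a b)))   ; combine partial results
          (fn [best child] (max-key :score best child))
          children))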

> Yes, it will be realised, but it does not matter as the children are not
> realised. So just the computation of the next moves is done sequentially.
>
> However, as I understand them, reducers will only help if your
> trees are highly branching. (They work chunk by chunk, not element
> by element. They are tuned for very simple operations on big data.)
> I might be wrong on that, though.

But my tree is very highly branching. It starts off with 20 branches...by level 
4 it reaches 40 branches, perhaps more! That is why I think reducers are my best 
option. I already tried pmapping at the top and using Executors with a fixed 
number of threads, but neither works as expected...reducers are my only option 
at this point...

> If you use laziness for generating the tree (which may or may not be
> the right approach), I would not generate the tree up to a depth; I
> would generate the infinite tree and then use a depth in the searching
> function. That way, if your search function wants to have two more
> levels, it can.
Ha! That is exactly what I tried to do originally. I thought that I could 
'take' as many levels as I wanted, but the thing is 'take' does not work on 
levels of nesting but on lazy sequential elements. In other words, I cannot say 
(take 5 game-tree) and expect it to reach nesting level 5 in the single map. I 
could, as you say, use a depth flag to limit the recursion itself, but since I 
can do (seq (:children tree)) to realise that we've reached the bottom, I don't 
see any difference.
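To make the depth-flag version concrete, here is roughly the shape I have in mind - gen-children stands in for my real move generator and evaluate for the evaluation function, and the node shape {:board ... :children ...} is the one I already use:

(declare gen-children)   ; stand-in for the real move generator

(defn game-tree
  "Builds the whole tree lazily; map never realises a child
  until the search asks for it, so an 'infinite' tree is fine."
  [board]
  {:board    board
   :children (map game-tree (gen-children board))})

(defn score
  "Depth-limited negamax over the lazy tree. The depth lives in the
  search, not in the generation, so asking for two more levels is
  just a bigger argument. Assumes evaluate scores the position from
  the side-to-move's perspective."
  [evaluate depth {:keys [board children]}]
  (if (or (zero? depth) (not (seq children)))
    (evaluate board)
    (apply max (map #(- (score evaluate (dec depth) %)) children))))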

> I would not try to put concurrency in before I understand well the
> performance of my sequential code. If you want to use an
> evolutionary algorithm to learn the best evaluation function, you
> will have a lot of parallelism coming from the evolutionary algorithm
> anyway.

I didn't! I only started thinking about concurrency when I observed horrible 
performance. Yes, the genetic algorithm will run in parallel, but this is many 
levels up... If just searching takes 4 min, there is no way to complete a 
genetic search within my lifetime! Remember, searching occurs on every single 
move - so what if the organisms are trained in parallel? Each organism will 
have to explore many, many moves in its life, so all the parallelism of the 
genetic search will be dominated by searching.
> I would profile the code.

Admittedly, I've not done that...
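I guess something like Criterium would be the place to start (just an example; evaluate and start-board are placeholders, and score/game-tree are from the sketch above):

(require '[criterium.core :as crit])   ; or any profiler / plain (time ...)

(crit/quick-bench (score evaluate 4 (game-tree start-board)))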


> I would try to improve on the algorithm (alpha-beta pruning, a more
> selective approach to unfolding nodes, and so on...)

Alpha-beta is practically the same algorithm, just a bit optimised...I'm not 
going down that road unless I'm absolutely sure that the algorithm is correct 
and works as expected. Off the top of my head, I need 2 more keys in my 
game-tree (:alpha :beta) and a way to keep track of them while nesting. 
However, alpha-beta seems inherently serial because of these 2 extra variables 
that need checking before exploration of the tree continues. So in a way, I 
think reducers would not work with alpha-beta, because if you want to prune the 
tree you have to know what the best move so far is, as opposed to just applying 
an associative fn (like max/max-key) to all nodes. If searching occurs in 
parallel chunks, how will I update and propagate correct values for alpha and 
beta down the tree?
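For the record, written out sequentially, alpha and beta look more like two extra arguments threaded through the recursion than extra keys in the tree - a sketch, with the same assumed evaluate and node shape as above:

(defn alpha-beta
  "Sequential negamax with alpha-beta pruning. alpha/beta are plain
  arguments; no :alpha/:beta keys in the tree are needed."
  [evaluate depth alpha beta {:keys [board children]}]
  (if (or (zero? depth) (not (seq children)))
    (evaluate board)
    (loop [alpha alpha, cs children]
      (if-let [[child & more] (seq cs)]
        (let [v     (- (alpha-beta evaluate (dec depth)
                                   (- beta) (- alpha) child))
              alpha (max alpha v)]
          (if (>= alpha beta)
            alpha          ; cutoff: the remaining siblings are never realised
            (recur alpha more)))
        alpha))))

The question of how those values propagate across parallel chunks still stands, of course.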

> To add concurrency, with the laziness of the tree generation, I think
> it is sufficient to add concurrency in the exploration of the tree.
> You should check the depth before calling a concurrent version.
> (Find a depth that gives a reasonable amount of work. For smaller
> depths, run a sequential version of the algorithm.)

That was exactly my plan!!! I thought of using pmap at the top and map on all 
the remaining levels. So if we start with 20 possible moves (branches), explore 
all 20 using pmap, but after that use 'map'. Those first 20 branches have quite 
a bit of work to do until they reach the bottom. Unfortunately this doesn't 
work: I get very similar performance from pmap and map.
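The depth-switched version you describe would look roughly like this (score-par and par-depth are sketch names; the recursion is the same negamax as above):

(defn score-par
  "Parallel above par-depth, sequential below: pmap fans the big
  subtrees out to threads, small subtrees stay on one thread."
  [evaluate depth par-depth {:keys [board children]}]
  (if (or (zero? depth) (not (seq children)))
    (evaluate board)
    (let [step   #(- (score-par evaluate (dec depth) par-depth %))
          mapper (if (> depth par-depth) pmap map)]
      (apply max (mapper step children)))))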

> I am not a concurrency specialist at all, but I would do something like this:
>   - realise all the children on one level. (You can also implement a
> concurrent version of your valid-moves generation, but it is a
> different problem. And probably useless, because there is already a
> lot of parallelism in exploring the tree.)

Move generation is done using core.logic and has pretty decent performance. I 
can get the moves of any piece in just under 2 ms. No problems there!

>   - create a task per child corresponding to the exploration of said
> child. As the operation is CPU-bound, you want to have these tasks
> executed by a thread pool with a fixed number of threads. (There seem
> to be Clojure libraries for that. Or you can implement this with
> agents.)

I did this as well...I created a fixed thread pool at the top instead of using 
pmap, but nothing exciting happened - same performance as pmap...If I do it 
with agents, I'll never be able to implement alpha-beta pruning, will I? 
Coordination goes out the window with agents, doesn't it?
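For completeness, my fixed-pool attempt looked roughly like this (score is the sequential search from the sketch above; the ^Callable hint is needed because Clojure fns implement both Runnable and Callable):

(import '(java.util.concurrent Callable ExecutorService Executors Future))

(defn search-top-level
  "Farm the top-level children out to a fixed pool and take the best."
  [evaluate depth children]
  (let [^ExecutorService pool
        (Executors/newFixedThreadPool
          (.availableProcessors (Runtime/getRuntime)))]
    (try
      (let [futures (doall                          ; submit before blocking
                      (for [child children]
                        (.submit pool
                                 ^Callable #(- (score evaluate (dec depth) child)))))]
        (apply max (map #(.get ^Future %) futures)))
      (finally (.shutdown pool)))))

(In practice you'd share one pool across calls rather than creating and shutting one down per search, but that's the shape of it.)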


Again thanks a lot for your time and thoughts!

Jim







