Hi,
I wonder if anyone can offer any comments on the following questions.
1. When parallel search processes are created on a remote machine, do
they explore a single 'branch' of the search tree, report the result
back to the main Oz program, and then cease to function, or are they
able to go on exploring further 'branches' if required?
2. When I used parallel search on the n-Queens problem, the trace of
the search reported that certain remote processes were exploring
significantly more nodes than the others. I assume this is a result of
the 'shape' of the search tree for this problem?
3. Also, given that the distribution strategy presumably affects this,
what would be the best way (for this particular problem) to spread the
workload more evenly over the remote processes, if that is even
possible?
For example, the n-Queens problem can easily exhaust all available Oz
virtual memory for very large values of n. It would seem you might be
able to overcome this by splitting the search up over parallel
processes. However, I recently tried this with n=2000 (a ridiculously
big number, I appreciate) using 7 processes (1 local and 6 remote), and
the results showed that 3 of the remote processes explored hardly any
nodes at all, while the rest explored the remainder, with one process
running out of virtual memory.
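(To illustrate what I mean by the imbalance, here is a toy model I put
together, not Mozart's actual engine: if each worker is statically handed
one top-level subtree, and the subtree sizes are skewed the way a
lexicographically labelled n-Queens tree tends to be, with early branches
deep and later branches failing almost immediately, then a few workers
get nearly all the nodes. The depth values below are made up purely for
illustration.)

```python
# Toy model of static top-level work splitting over a skewed search tree.
# Each of 6 workers receives one top-level subtree, modelled here as a
# complete binary subtree of a given depth.  The depths are hypothetical,
# chosen only to mimic a tree where early branches are deep and later
# branches fail almost at once.
def subtree_nodes(depth):
    """Number of nodes in a complete binary subtree of the given depth."""
    return 2 ** (depth + 1) - 1

depths = [20, 18, 10, 3, 2, 1]           # hypothetical per-branch depths
work = [subtree_nodes(d) for d in depths]
total = sum(work)
shares = [f"{100 * w / total:.1f}%" for w in work]
print(shares)  # two workers get ~99.9% of the nodes, the rest almost none
```

With these (invented) depths the first two workers end up with roughly
80% and 20% of all nodes, while the last three explore a negligible
fraction, which matches the pattern I saw in the trace.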
Any help greatly appreciated.
Regards
Mark
--
Mark Richardson
Final year undergraduate
University of Teesside
_________________________________________________________________________________
mozart-users mailing list
[email protected]
http://www.mozart-oz.org/mailman/listinfo/mozart-users