Hi Zheng,
> Let's say I have 100 long-running tasks and 20 nodes.
> I want each of them to take up to 10 tasks. Each of the tasks should be
> taken by one and only one node.
This is exactly what one of our users is using ZooKeeper for. You might want
to make it more general by saying that a directory /tasks/ will hold the list
of tasks that need to be processed (in your case 0-99) - basically storing the
task list in ZooKeeper as well. The clients can then read this list and
try creating ephemeral nodes for tasks in /mytasks/, assigning themselves as
the owners of those tasks.
You should also factor in the task dying or the machine not being able to
start that task. In that case the machine should just remove the ephemeral
node it created and let the other machines take up that task.
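To make the claiming step concrete, here is a minimal Python sketch of the
idea. It is simulated in memory rather than run against a live ZooKeeper
ensemble: the FakeZk class (a name I made up for this sketch) only mimics
ZooKeeper's create-if-absent semantics, where create() fails if the znode
already exists, so exactly one client wins each task.

```python
class FakeZk:
    """In-memory stand-in for ZooKeeper's create-if-absent semantics."""
    def __init__(self):
        self.nodes = {}  # znode path -> data

    def create(self, path, data):
        # Real ZooKeeper's create() fails with NodeExists if the znode exists;
        # we simulate that with a KeyError.
        if path in self.nodes:
            raise KeyError("NodeExists: " + path)
        self.nodes[path] = data

    def delete(self, path):
        # On failure, a machine would delete its ephemeral node to release the task.
        del self.nodes[path]


def claim_tasks(zk, node_name, all_tasks, limit):
    """Try to claim up to `limit` unclaimed tasks; return the ones we got."""
    claimed = []
    for task in all_tasks:
        if len(claimed) >= limit:
            break
        try:
            zk.create("/mytasks/" + task, data=node_name)
            claimed.append(task)
        except KeyError:
            pass  # another node created the znode first; skip this task
    return claimed
```

For example, if two nodes both try to claim from the same task list with a
limit of 5, the create-if-absent semantics guarantee their claimed sets never
overlap - which is the "one and only one node" property you asked about.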

Here is one minor thing that might be useful. One of the ZooKeeper users who
was doing exactly the same thing stored the number of failed attempts to boot
a task as data in that task's znode under /tasks/. This way all the
machines can update this count and alert the admin if a task cannot be
started or worked on by a given number of machines.
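Since several machines may bump the count concurrently, the update should use
ZooKeeper's conditional setData(), which rejects writes made against a stale
version. Below is a hedged sketch of that retry loop, again simulated in
memory: the Znode class and the threshold MAX_FAILURES are assumptions of
mine, standing in for the real znode stat.version and whatever alert policy
you choose.

```python
MAX_FAILURES = 3  # assumed alert threshold, not from the original mail


class Znode:
    """Simulated znode holding a failure count plus a version counter,
    like ZooKeeper's stat.version."""
    def __init__(self, data=0):
        self.data = data
        self.version = 0

    def set(self, data, expected_version):
        # Real ZooKeeper's setData() fails with BadVersion on a stale version;
        # we simulate that with a ValueError.
        if expected_version != self.version:
            raise ValueError("BadVersion")
        self.data = data
        self.version += 1


def record_failure(task_znode):
    """Atomically bump the failure count; report once it crosses the threshold."""
    while True:
        count, version = task_znode.data, task_znode.version
        try:
            task_znode.set(count + 1, version)
            break
        except ValueError:
            continue  # another machine updated concurrently; re-read and retry
    if task_znode.data >= MAX_FAILURES:
        return "alert admin: task failed on %d machines" % task_znode.data
    return None
```

The version check is what makes the count safe: if two machines read count=1
at the same time, only one setData(2, version) succeeds and the loser retries
with the fresh value, so no failure is lost.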

Hope this helps.

Thanks
Mahadev


On 1/23/10 12:58 AM, "Zheng Shao" <zsh...@gmail.com> wrote:

> Let's say I have 100 long-running tasks and 20 nodes.
> I want each of them to take up to 10 tasks. Each of the tasks should be
> taken by one and only one node.
> 
> Will the following solution solve the problem?
> 
> Create a directory "/mytasks" in zookeeper.
> Normally there will be 100 EPHEMERAL children in /mytasks directory,
> named from "0" to "99".
> The data of each will be the name of the node and the process id in
> the node. This data is optional but allow us to do lookup from task to
> node and process id.
> 
> 
> Each node will start 10 processes.
>   Each process will list the directory "/mytasks" with a watcher
>   If trigger by the watcher, we relist the directory.
>   If we found some missing files in the range of "0" to "99", we
> create an EPHEMERAL node with no-overwrite option
>     if the creation is successful, then we disable the watcher and
> start processing the corresponding task (if something goes wrong, just
> kill itself and the node will be gone)
>     if not, we go back to wait for watcher.
> 
> Will this work?
> 
> 
