Yep, that's what I meant when I said that the impact depends on the
operating system. From Giraph's side, we can only guarantee that between
different JVMs the message will hit Netty and hence be passed to the
operating system through a socket. What happens afterwards depends on the
operating system. As you said, on Linux it will most likely go over the
loopback interface (unless the machine is really oddly configured!), but
Giraph cannot know that for sure :).
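
To make the point concrete, here is a tiny self-contained illustration
(plain java.net, nothing Giraph- or Netty-specific): even when both ends
live on the same machine, the bytes go through the kernel's socket layer,
and it is the OS that decides to short-circuit 127.0.0.1 traffic over the
loopback interface.

import java.io.*;
import java.net.*;

public class LoopbackDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                     PrintWriter out =
                         new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo one line back
                } catch (IOException ignored) {
                }
            });
            echo.start();
            // Connecting to 127.0.0.1 still goes through a real socket;
            // on Linux the kernel routes it via lo, never touching a NIC.
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
                out.println("hello over loopback");
                System.out.println("echoed: " + in.readLine());
            }
            echo.join();
        }
    }
}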


On Mon, Feb 4, 2013 at 10:57 PM, Eli Reisman <apache.mail...@gmail.com> wrote:

> I think messages between workers (JVMs) on the same compute node in the
> cluster will communicate over Netty, but on the loopback address. Messages
> between worker tasks on different compute nodes will communicate over the
> network. Messages between "workers and themselves" (i.e. between two data
> partitions in the same worker) are placed directly into the data structures
> concerned, as managed within the same worker task, hopefully in a
> thread-safe manner.
>
> So (as I understand it), regardless of the threads on that JVM/worker
> task, the thing you need to know is: which data partition + worker task
> is this outgoing message coming from, and to which data partition +
> worker task is it going? With that, you can determine how the
> communication is done.
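>
> A hypothetical sketch of that decision, in case it helps (all names are
> made up for illustration, this is not actual Giraph code):
>
> import java.util.*;
> import java.util.concurrent.*;
>
> public class RoutingSketch {
>     static final int MY_WORKER_ID = 0;
>     // partition id -> owning worker id (Giraph maintains such a mapping)
>     static final Map<Integer, Integer> PARTITION_OWNER =
>         Map.of(0, 0, 1, 0, 2, 1);
>     // thread-safe per-partition in-memory message stores
>     static final Map<Integer, Queue<String>> LOCAL_STORE =
>         new ConcurrentHashMap<>();
>
>     static void send(int dstPartition, String msg) {
>         int dstWorker = PARTITION_OWNER.get(dstPartition);
>         if (dstWorker == MY_WORKER_ID) {
>             // same worker task: drop the message straight into the local
>             // data structure -- no serialization, no socket
>             LOCAL_STORE.computeIfAbsent(dstPartition,
>                 p -> new ConcurrentLinkedQueue<>()).add(msg);
>         } else {
>             // different worker task: would be serialized and handed to
>             // Netty; same machine -> loopback, otherwise the real network
>             System.out.println("to worker " + dstWorker + " via Netty: " + msg);
>         }
>     }
>
>     public static void main(String[] args) {
>         send(1, "stays in memory");   // partition 1 is on this worker
>         send(2, "crosses a socket");  // partition 2 is on worker 1
>         System.out.println("local store: " + LOCAL_STORE);
>     }
> }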
>
>
> On Mon, Feb 4, 2013 at 12:30 PM, Alexandros Daglis <alexandros.dag...@epfl.ch> wrote:
>
>> This helps indeed. Thank you Claudio!
>>
>> Alexandros
>>
>>
>> On 4 February 2013 18:48, Claudio Martella <claudio.marte...@gmail.com> wrote:
>>
>>> Giraph runs on a Hadoop cluster as a MapReduce job. Each worker is
>>> therefore a separate task and runs in its own JVM. This means that you
>>> can have two workers running on the same machine, and hence two vertices
>>> hosted by two different workers on the same machine may exchange messages
>>> over the network stack. The practical impact depends on the host
>>> operating system.
>>> On the other hand, two vertices residing on the same worker but being
>>> computed by different compute threads will behave as described in the
>>> previous email. If you want to minimize the first behavior, you should
>>> ensure that a single worker is executed on each machine, and set the
>>> number of compute threads per worker accordingly (see the sketch below).
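>>>
>>> For instance, a minimal sketch (assuming these standard Giraph property
>>> names; double-check them against your version, and note that actual
>>> worker placement is ultimately up to Hadoop's scheduler):
>>>
>>> import org.apache.giraph.conf.GiraphConfiguration;
>>>
>>> public class OneWorkerPerMachine {
>>>     public static GiraphConfiguration configure() {
>>>         GiraphConfiguration conf = new GiraphConfiguration();
>>>         // as many workers as machines, i.e. one worker per machine
>>>         conf.setInt("giraph.minWorkers", 4); // hypothetical 4-machine cluster
>>>         conf.setInt("giraph.maxWorkers", 4);
>>>         // use the cores through threads inside each worker, so that
>>>         // same-machine messages stay in memory instead of crossing a socket
>>>         conf.setInt("giraph.numComputeThreads", 8);
>>>         return conf;
>>>     }
>>> }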
>>>
>>> Hope this helps,
>>> Claudio
>>>
>>>
>>> On Monday, February 4, 2013, Daglis Alexandros wrote:
>>>
>>>> Hello Claudio,
>>>>
>>>> Thank you for your prompt answer!
>>>> So, vertices that belong to the same worker thread do not require
>>>> access to the network in order to exchange messages.
>>>> However, what about *different worker threads* that reside on
>>>> different cores *of the same node*?
>>>>
>>>> Cheers,
>>>> Alexandros
>>>>
>>>>  ------------------------------
>>>> *From:* Claudio Martella [claudio.marte...@gmail.com]
>>>> *Sent:* Monday, February 04, 2013 6:19 PM
>>>> *To:* user@giraph.apache.org
>>>> *Subject:* Re: Inter- and intra-node message passing
>>>>
>>>> Hi Alexandros,
>>>>
>>>> If two vertices are on the same worker, the message does not pass
>>>> through the network but is put directly into the mailbox of the
>>>> destination vertex.
>>>>
>>>> Cheers,
>>>> Claudio
>>>>
>>>>
>>>> On Mon, Feb 4, 2013 at 6:03 PM, Alexandros Daglis <alexandros.dag...@epfl.ch> wrote:
>>>>
>>>>> Hello everybody,
>>>>>
>>>>> I was wondering about the message-passing protocol: is there a
>>>>> difference if two communicating threads are on the same node, as
>>>>> opposed to being on different ones? Is communication achieved through
>>>>> memory whenever the threads are local to the node, or does it always
>>>>> default to the network?
>>>>>
>>>>> I tried to answer the question by going through the code, but I
>>>>> haven't seen any high-level difference in the handling of those two
>>>>> cases. I would appreciate it if someone could give me a hint.
>>>>>
>>>>> Thank you in advance.
>>>>> Alexandros
>>>>>
>>>>
>>>>
>>>>
>>>>  --
>>>>    Claudio Martella
>>>>    claudio.marte...@gmail.com
>>>>
>>>
>>>
>>> --
>>>    Claudio Martella
>>>    claudio.marte...@gmail.com
>>>
>>
>>
>


-- 
   Claudio Martella
   claudio.marte...@gmail.com
