Sorry, I take that back. This does not seem to work either. I am guessing
the worker context is not shared among the workers, so the aggregation
happens locally on each worker.
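For what it's worth, newer Giraph releases address exactly this by
registering aggregators from a MasterCompute, which makes them global
across all workers rather than per-worker. A minimal sketch, assuming the
Giraph 1.0 API (the class name MessageCountMaster is mine, not from this
thread):

import org.apache.giraph.aggregators.LongSumAggregator;
import org.apache.giraph.master.DefaultMasterCompute;

public class MessageCountMaster extends DefaultMasterCompute {
    @Override
    public void initialize() throws InstantiationException,
            IllegalAccessException {
        // Registered on the master, the aggregator is reduced globally.
        registerAggregator("MESSAGE_COUNT", LongSumAggregator.class);
    }
}

In compute(), each vertex would then call
aggregate("MESSAGE_COUNT", new LongWritable(1)) per message sent, read
the total in the next superstep with
LongWritable total = getAggregatedValue("MESSAGE_COUNT"), and the job
would install the class via setMasterComputeClass(MessageCountMaster.class).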


On Fri, Nov 16, 2012 at 1:36 PM, Jonathan Bishop <jbishop....@gmail.com> wrote:

> Hi,
>
> I found a solution for this that seems to work; here it is in case
> someone else finds it useful.
>
> 1) Create a new worker context class that registers a
> LongSumAggregator, and make sure to set your job's worker context to
> this class.
>
> public class MyContext extends WorkerContext {
>     static final String MESSAGE_COUNT = "MESSAGE_COUNT";
>     long messages = 0;
>
>     @Override
>     public void preApplication() throws InstantiationException,
>             IllegalAccessException {
>         registerAggregator(MESSAGE_COUNT, LongSumAggregator.class);
>     }
>
>     @Override
>     public void postApplication() {
>     }
>
>     @Override
>     public void preSuperstep() {
>         // Grab the total accumulated during the superstep that just
>         // finished, then zero the aggregator for the next one.
>         LongSumAggregator ag = (LongSumAggregator) getAggregator(MESSAGE_COUNT);
>         messages = ag.getAggregatedValue().get();
>         ag.setAggregatedValue(new LongWritable(0L));
>     }
>
>     @Override
>     public void postSuperstep() {
>     }
>
>     long getSuperstepMessageCount() {
>         return messages;
>     }
> }
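>
> To set the worker context on the job, something like the following
> should work (a sketch; setWorkerContextClass is the GiraphJob setter I
> believe applies here, and MyVertex is a hypothetical vertex class):
>
> GiraphJob job = new GiraphJob(getConf(), "message-count");
> job.setVertexClass(MyVertex.class);
> // Giraph creates one MyContext instance per worker.
> job.setWorkerContextClass(MyContext.class);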
>
> 2) In compute(), increment the aggregator once for every message sent...
>
> LongSumAggregator messageAggregator =
>     (LongSumAggregator) getAggregator(MyContext.MESSAGE_COUNT);
> // One aggregate() call per message sent (here, one per out-edge).
> for (VertexIndex i : this) {
>     messageAggregator.aggregate(1L);
> }
>
> 3) Retrieve the number of messages that were sent to the current superstep...
>
> long numMessages =
>     ((MyContext) getWorkerContext()).getSuperstepMessageCount();
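>
> For example, this lets each vertex quiesce once message traffic stops
> (voteToHalt() and getSuperstep() are the standard Giraph vertex calls):
>
> // Halt when nothing was sent during the previous superstep.
> if (getSuperstep() > 0 && numMessages == 0) {
>     voteToHalt();
> }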
>
> That's it - let me know if I missed something.
>
> Jon
>
>
>
> On Wed, Nov 14, 2012 at 5:38 PM, Jonathan Bishop <jbishop....@gmail.com> wrote:
>
>> Hi,
>>
>> It would be useful to know, inside compute(), whether any messages are
>> being sent to any vertex during the current superstep.
>>
>> Of course, the number of messages sent to the particular vertex being
>> computed is known. What I need is the sum of messages being sent to all
>> vertices.
>>
>> Thanks,
>>
>> Jon
>>
>
>
