Re:

2014-01-10 Thread Jyoti Yadav
Thanks Lukas. I tried it another way: I passed giraph.SplitMasterWorker=true
while running the job.


On Thu, Jan 9, 2014 at 4:23 PM, Lukas Nalezenec 
lukas.naleze...@firma.seznam.cz wrote:

  I think your cluster is busy; you need to increase the timeout:
 -Dgiraph.maxMasterSuperstepWaitMsecs=...
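For reference, both options mentioned in this thread are ordinary Hadoop -D properties, so they can be passed straight to GiraphRunner on the command line. The sketch below is only illustrative: the jar name, input/output paths, and the timeout value are placeholders, not taken from this thread.

```shell
# Hypothetical invocation; substitute your own jar, computation class, and paths.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  -Dgiraph.SplitMasterWorker=false \
  -Dgiraph.maxMasterSuperstepWaitMsecs=600000 \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/me/input/graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/me/output/shortestpaths \
  -w 1
```

On a single-node or busy cluster, giraph.SplitMasterWorker=false lets the master and a worker share one task, which is another way to avoid the "Did not receive enough processes in time" error.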


 On 9.1.2014 11:20, Jyoti Yadav wrote:

  Hi,
  Is anyone familiar with the error mentioned below?

 ERROR: org.apache.giraph.master.BspServiceMaster: checkWorkers: Did not
 receive enough processes in time (only 0 of 1 required) after waiting
 60msecs).  This occurs if you do not have enough map tasks available
 simultaneously on your Hadoop instance to fulfill the number of requested
 workers.

  Thanks
  Jyoti





Fwd: Writing my own aggregator..

2014-01-10 Thread Jyoti Yadav
-- Forwarded message --
From: Ameya Vilankar ameya.vilan...@gmail.com
Date: Fri, Jan 10, 2014 at 1:43 PM
Subject: Re: Writing my own aggregator..
To: Jyoti Yadav rao.jyoti26ya...@gmail.com


This should solve it, I think. If it doesn't, email me the error.

// MyArrayWritable.java

package org.apache.giraph.examples.utils;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.Writable;

public class MyArrayWritable implements Writable {

  private ArrayList<Long> arraylist;

  public MyArrayWritable() {
    arraylist = new ArrayList<Long>();
  }

  public MyArrayWritable(long toAdd) {
    arraylist = new ArrayList<Long>();
    arraylist.add(toAdd);
  }

  public ArrayList<Long> get_arraylist() {
    return arraylist;
  }

  public void set_arraylist(ArrayList<Long> al) {
    this.arraylist = al;
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    int size = in.readInt();
    arraylist = new ArrayList<Long>(size);
    for (int i = 0; i < size; i++) {
      arraylist.add(in.readLong());
    }
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(arraylist.size());
    for (int i = 0; i < arraylist.size(); i++) {
      out.writeLong(arraylist.get(i));
    }
  }

  @Override
  public String toString() {
    // There is no "item" field in this version, so print the list itself.
    return "output is " + arraylist + "\n";
  }
}


// MyArrayAggregator.java

package org.apache.giraph.examples.utils;

import org.apache.giraph.aggregators.BasicAggregator;

public class MyArrayAggregator extends BasicAggregator<MyArrayWritable> {
  @Override
  public void aggregate(MyArrayWritable value) {
    // addAll takes a Collection, so unwrap the list from the Writable.
    getAggregatedValue().get_arraylist().addAll(value.get_arraylist());
  }

  @Override
  public MyArrayWritable createInitialValue() {
    return new MyArrayWritable();
  }
}
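The readFields/write pair above is just a length-prefixed sequence of longs, so it can be sanity-checked without a cluster. The sketch below mimics the same wire format with plain java.io; the class and method names are mine for illustration, not part of Giraph or Hadoop.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RoundTripDemo {

  // Same wire format as MyArrayWritable.write: an int count, then that many longs.
  static byte[] serialize(List<Long> list) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeInt(list.size());
    for (long v : list) {
      out.writeLong(v);
    }
    return bytes.toByteArray();
  }

  // Same shape as MyArrayWritable.readFields: read the count, then each long.
  static List<Long> deserialize(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    int size = in.readInt();
    List<Long> list = new ArrayList<Long>(size);
    for (int i = 0; i < size; i++) {
      list.add(in.readLong());
    }
    return list;
  }

  public static void main(String[] args) throws IOException {
    List<Long> ids = new ArrayList<Long>();
    ids.add(7L);
    ids.add(42L);
    byte[] wire = serialize(ids);
    System.out.println(deserialize(wire));  // [7, 42]
  }
}
```

The round trip only works because write and readFields agree on field order; writing the count but reading a field before it (or vice versa) silently corrupts every value that follows, which is the usual failure mode for hand-written Writables.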


On Fri, Jan 10, 2014 at 1:27 AM, Jyoti Yadav rao.jyoti26ya...@gmail.com wrote:

 Hi Ameya..

  I am badly stuck while implementing my custom aggregator.
 In my program I want to send each vertex id to the master. For that I took
 an ArrayList, to which each vertex adds its own id. While running the
 program, each vertex calls the aggregate() function. As far as I can see
 it works fine in the vertex compute method, but the ArrayList is not
 reflected back in the master compute function.

 I am attaching the two files below. Please check them once.

 1. MyArrayWritable.java

 package org.apache.giraph.examples.utils;

 import java.io.*;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.LongWritable;
 import org.apache.hadoop.io.WritableComparator;
 import java.util.Arrays;
 import java.util.*;

 public class MyArrayWritable implements Writable {

   private long item;
   private ArrayList<Long> arraylist = new ArrayList<Long>(5);

   public MyArrayWritable()
   {
     item = 0;
     //arraylist = new ArrayList<Long>(5);
     arraylist.add(item);
   }

   public MyArrayWritable(long item1)
   {
     item = item1;
     //arraylist = new ArrayList<Long>(5);
     arraylist.add(item);
   }

   public ArrayList<Long> get_arraylist() { return arraylist; }

   public void set_arraylist(ArrayList<Long> al)
   {
     //this.arraylist = new ArrayList<Long>(al);
     this.arraylist = al;
   }

   public long get_item() { return item; }

   @Override
   public void readFields(DataInput in) throws IOException {
     item = in.readLong();
     int size = arraylist.size();
     size = in.readInt();

     arraylist = new ArrayList<Long>(5);

     for(int i = 0; i < size; i++)
     {
       arraylist.add(in.readLong());
     }
   }

   @Override
   public void write(DataOutput out) throws IOException {
     out.writeLong(item);

     out.writeInt(arraylist.size());

     for(int i = 0; i < arraylist.size(); i++)
     {
       out.writeLong(arraylist.get(i));
     }
   }

   @Override
   public String toString()
   {
     return "output is " + Long.toString(item) + "\n";
   }
 }


 2.MyArrayAggregator.java

 package org.apache.giraph.examples.utils;
 import org.apache.giraph.aggregators.BasicAggregator;
 import java.util.*;


 public class MyArrayAggregator extends BasicAggregator<MyArrayWritable> {
   @Override
   public void aggregate(MyArrayWritable value) {
     ArrayList<Long> al = new ArrayList<Long>();
     (getAggregatedValue().get_arraylist()).add(value.get_item());
     al = getAggregatedValue().get_arraylist();
     getAggregatedValue().set_arraylist(al);
   }

   @Override
   public MyArrayWritable createInitialValue() {
     return new MyArrayWritable();
   }
 }

 Thanks in advance ...
  Jyoti



Re: Writing my own aggregator..

2014-01-10 Thread Jyoti Yadav
Thanks a lot Ameya...It really worked..:).







Re: Writing my own aggregator..

2014-01-10 Thread Ameya Vilankar
You're welcome. If you don't mind me asking, what are you using Giraph
for? A class project, or on the job?


DataStreamer Exception - LeaseExpiredException

2014-01-10 Thread Kristen Hardwick
Hi all, I'm requesting help again! I'm trying to get this
SimpleShortestPathsComputation example working, but I'm stuck again. Now
the job begins to run and seems to work until the final step (it performs 3
supersteps), but the overall job is failing.

In the master, among other things, I see:

...
14/01/10 15:04:17 INFO master.MasterThread: setup: Took 0.87 seconds.
14/01/10 15:04:17 INFO master.MasterThread: input superstep: Took 0.708
seconds.
14/01/10 15:04:17 INFO master.MasterThread: superstep 0: Took 0.158 seconds.
14/01/10 15:04:17 INFO master.MasterThread: superstep 1: Took 0.344 seconds.
14/01/10 15:04:17 INFO master.MasterThread: superstep 2: Took 0.064 seconds.
14/01/10 15:04:17 INFO master.MasterThread: shutdown: Took 0.162 seconds.
14/01/10 15:04:17 INFO master.MasterThread: total: Took 2.31 seconds.
14/01/10 15:04:17 INFO yarn.GiraphYarnTask: Master is ready to commit final
job output data.
14/01/10 15:04:18 INFO yarn.GiraphYarnTask: Master has committed the final
job output data.
...

To me, that looks promising - like the job was successful. However, in the
WORKER_ONLY containers, I see these things:

...
14/01/10 15:04:17 INFO graph.GraphTaskManager: cleanup: Starting for
WORKER_ONLY
14/01/10 15:04:17 WARN bsp.BspService: process: Unknown and unprocessed
event
(path=/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_applicationAttemptsDir/0/_superstepDir/1/_addressesAndPartitions,
type=NodeDeleted, state=SyncConnected)
14/01/10 15:04:17 INFO worker.BspServiceWorker: processEvent :
partitionExchangeChildrenChanged (at least one worker is done sending
partitions)
14/01/10 15:04:17 WARN bsp.BspService: process: Unknown and unprocessed
event
(path=/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_applicationAttemptsDir/0/_superstepDir/1/_superstepFinished,
type=NodeDeleted, state=SyncConnected)
14/01/10 15:04:17 INFO netty.NettyClient: stop: reached wait threshold, 1
connections closed, releasing NettyClient.bootstrap resources now.
14/01/10 15:04:17 INFO worker.BspServiceWorker: processEvent: Job state
changed, checking to see if it needs to restart
14/01/10 15:04:17 INFO bsp.BspService: getJobState: Job state already
exists
(/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_masterJobState)
14/01/10 15:04:17 INFO yarn.GiraphYarnTask: [STATUS: task-1] saveVertices:
Starting to save 2 vertices using 1 threads
14/01/10 15:04:17 INFO worker.BspServiceWorker: saveVertices: Starting to
save 2 vertices using 1 threads
14/01/10 15:04:17 INFO worker.BspServiceWorker: processEvent: Job state
changed, checking to see if it needs to restart
14/01/10 15:04:17 INFO bsp.BspService: getJobState: Job state already
exists
(/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_masterJobState)
14/01/10 15:04:17 INFO bsp.BspService: getJobState: Job state path is
empty! -
/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_masterJobState
14/01/10 15:04:17 ERROR zookeeper.ClientCnxn: Error while calling watcher
java.lang.NullPointerException
at java.io.StringReader.<init>(StringReader.java:50)
at org.json.JSONTokener.<init>(JSONTokener.java:66)
at org.json.JSONObject.<init>(JSONObject.java:402)
at org.apache.giraph.bsp.BspService.getJobState(BspService.java:716)
at
org.apache.giraph.worker.BspServiceWorker.processEvent(BspServiceWorker.java:1563)
at org.apache.giraph.bsp.BspService.process(BspService.java:1095)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
at
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
14/01/10 15:04:17 WARN bsp.BspService: process: Unknown and unprocessed
event
(path=/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_vertexInputSplitsAllReady,
type=NodeDeleted, state=SyncConnected)
14/01/10 15:04:17 WARN bsp.BspService: process: Unknown and unprocessed
event
(path=/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_applicationAttemptsDir/0/_superstepDir/2/_addressesAndPartitions,
type=NodeDeleted, state=SyncConnected)
14/01/10 15:04:17 INFO worker.BspServiceWorker: processEvent :
partitionExchangeChildrenChanged (at least one worker is done sending
partitions)
14/01/10 15:04:17 WARN bsp.BspService: process: Unknown and unprocessed
event
(path=/_hadoopBsp/giraph_yarn_application_1389300168420_0024/_applicationAttemptsDir/0/_superstepDir/2/_superstepFinished,
type=NodeDeleted, state=SyncConnected)
...
14/01/10 15:04:17 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on
/user/spry/Shortest/_temporary/1/_temporary/attempt_1389300168420_0024_m_01_1/part-m-1:
File does not exist. Holder DFSClient_NONMAPREDUCE_-643344145_1 does not
have any open files.
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2755)
at

Re: DataStreamer Exception - LeaseExpiredException

2014-01-10 Thread Avery Ching
This looks more like the Zookeeper/YARN issues mentioned in the past.  
Unfortunately, I do not have a YARN instance to test this with.  Does 
anyone else have any insights here?

