Hi,

You have a bug in class MyArrayListWritable, in the write method: your code writes a LONG but then tries to read an INT.
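
For example, a consistent pair looks roughly like this. This is only a sketch: addInt() and getArrayList() are taken from your own code further down the thread, but the element type and field name are my guesses, so adapt it to your actual class:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.Writable;

public class MyArrayListWritable implements Writable {
  // guessed field: a plain list of int ids
  private ArrayList<Integer> list = new ArrayList<Integer>();

  public ArrayList<Integer> getArrayList() {
    return list;
  }

  public void addInt(int value) {
    list.add(value);
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(list.size());        // writeInt here ...
    for (int id : list) {
      out.writeInt(id);
    }
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    list = new ArrayList<Integer>();
    int size = in.readInt();          // ... so readInt here, no long/int mismatch
    for (int i = 0; i < size; i++) {
      list.add(in.readInt());
    }
  }
}

Whatever primitive write() uses, readFields() must read back in the same order with the same type.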

Regards
Lukas


On 3.4.2014 18:45, ghufran malik wrote:
Sorry, I accidentally sent that email before finishing it.

I tested the compute method with just:

public void compute(Vertex<IntWritable, IntWritable, NullWritable> vertex,
    Iterable<IntWritable> messages) throws IOException
{
  // check if it's the first superstep
  if (getSuperstep() == 0)
  {
    if (isStart(vertex))
    {
      vertex.setValue(new IntWritable((int) getSuperstep()));
      for (Edge<IntWritable, NullWritable> edge : vertex.getEdges())
      {
        idQueue.addInt(edge.getTargetVertexId().get());
        sendMessage(edge.getTargetVertexId(), new IntWritable(1));
      }
      Collections.sort(idQueue.getArrayList());
      //aggregate(ID_AGG, idQueue);
    }
    else
    {
      vertex.setValue(new IntWritable(Integer.MAX_VALUE));
    }
  }
  else
  {
    // later supersteps not handled yet
  }
  vertex.voteToHalt();
}

//inner class
public static class SimpleBFSMasterCompute extends MasterCompute {

public void readFields(DataInput arg0) throws IOException {
// TODO Auto-generated method stub
}

public void write(DataOutput arg0) throws IOException {
// TODO Auto-generated method stub
}

@Override
public void compute() {
// TODO Auto-generated method stub
}

@Override
public void initialize() throws InstantiationException,
IllegalAccessException {
// TODO Auto-generated method stub
registerAggregator(ID_AGG, ArrayListAggregator.class);
}
}
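
(Side note, in case it matters: the master compute class also has to be wired into the job. Below is a rough sketch of how I register it, assuming the Giraph 1.1-style GiraphConfiguration API; BFSComputation is just a placeholder name for my actual computation class. I believe GiraphRunner's -mc option does the same thing.)

import org.apache.giraph.conf.GiraphConfiguration;

GiraphConfiguration conf = new GiraphConfiguration();
// placeholder name for my vertex computation class
conf.setComputationClass(BFSComputation.class);
// the inner master compute class shown above
conf.setMasterComputeClass(SimpleBFSMasterCompute.class);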


That worked fine. I then tested it with the aggregate call uncommented, and it produced the same log statements as before. The values I pass into the aggregate method are:

public static final String ID_AGG = "simplemastercompute.aggregator";

private MyArrayListWritable idQueue = new MyArrayListWritable();

code:

MyArrayListWritable class: http://pastebin.com/n4iDjp3j

ArrayListAggregator class: http://pastebin.com/z7xjpZVU
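
In case the pastebin links go stale, the aggregator is roughly shaped like this (a simplified sketch rather than the exact pastebin code; it assumes Giraph's BasicAggregator base class):

import org.apache.giraph.aggregators.BasicAggregator;

public static class ArrayListAggregator extends BasicAggregator<MyArrayListWritable> {
  @Override
  public void aggregate(MyArrayListWritable value) {
    // merge the incoming list into the running aggregated value
    getAggregatedValue().getArrayList().addAll(value.getArrayList());
  }

  @Override
  public MyArrayListWritable createInitialValue() {
    return new MyArrayListWritable();
  }
}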

Hopefully this made my issue clearer.

Kind regards,

Ghufran


On Thu, Apr 3, 2014 at 5:34 PM, ghufran malik <ghufran1ma...@gmail.com> wrote:

    I just tested the compute method with just:


    public void compute(Vertex<IntWritable, IntWritable, NullWritable> vertex,
        Iterable<IntWritable> messages) throws IOException
    {
      // check if it's the first superstep
      if (getSuperstep() == 0)
      {
        if (isStart(vertex))
        {
          vertex.setValue(new IntWritable((int) getSuperstep()));
          for (Edge<IntWritable, NullWritable> edge : vertex.getEdges())
          {
            idQueue.addInt(edge.getTargetVertexId().get());
            sendMessage(edge.getTargetVertexId(), new IntWritable(1));
          }
          Collections.sort(idQueue.getArrayList());
          aggregate(ID_AGG, idQueue);
        }
        else
        {
          vertex.setValue(new IntWritable(Integer.MAX_VALUE));
        }
      }
      else
      {
        // later supersteps not handled yet
      }
      vertex.voteToHalt();
    }

    On Thu, Apr 3, 2014 at 5:24 PM, ghufran malik <ghufran1ma...@gmail.com> wrote:

        After those INFO lines have been printed for a while, the following is printed out:

        14/04/03 17:01:25 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x145284966610002, likely server has closed socket, closing socket connection and attempting reconnect
        14/04/03 17:01:26 INFO mapred.JobClient:  map 50% reduce 0%
        14/04/03 17:01:27 INFO zookeeper.ClientCnxn: Opening socket connection to server ghufran/127.0.1.1:22181. Will not attempt to authenticate using SASL (unknown error)
        14/04/03 17:01:27 WARN zookeeper.ClientCnxn: Session 0x145284966610002 for server null, unexpected error, closing socket connection and attempting reconnect
        java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
            at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
        14/04/03 17:01:27 WARN zk.ZooKeeperExt: exists: Connection loss on attempt 0, waiting 5000 msecs before retrying.
        org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /_hadoopBsp/job_201404031649_0001/_workerProgresses
            at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
            at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
            at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
            at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1069)
            at org.apache.giraph.zk.ZooKeeperExt.exists(ZooKeeperExt.java:360)
            at org.apache.giraph.job.JobProgressTracker$2.run(JobProgressTracker.java:87)
            at java.lang.Thread.run(Thread.java:744)
        14/04/03 17:01:29 INFO zookeeper.ClientCnxn: Opening socket connection to server ghufran/127.0.1.1:22181. Will not attempt to authenticate using SASL (unknown error)
        14/04/03 17:01:29 WARN zookeeper.ClientCnxn: Session 0x145284966610002 for server null, unexpected error, closing socket connection and attempting reconnect
        java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
            at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
        14/04/03 17:01:30 INFO zookeeper.ClientCnxn: Opening socket connection to server ghufran/127.0.1.1:22181. Will not attempt to authenticate using SASL (unknown error)
        14/04/03 17:01:30 WARN zookeeper.ClientCnxn: Session 0x145284966610002 for server null, unexpected error, closing socket connection and attempting reconnect
        java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
            at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
        14/04/03 17:01:31 INFO zookeeper.ClientCnxn: Opening socket connection to server ghufran/127.0.1.1:22181. Will not attempt to authenticate using SASL (unknown error)
        14/04/03 17:01:31 WARN zookeeper.ClientCnxn: Session 0x145284966610002 for server null, unexpected error, closing socket connection and attempting reconnect
        java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
            at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
            at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
        14/04/03 17:01:31 INFO mapred.JobClient: Job complete: job_201404031649_0001
        14/04/03 17:01:31 INFO mapred.JobClient: Counters: 6
        14/04/03 17:01:31 INFO mapred.JobClient:   Job Counters
        14/04/03 17:01:31 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=1235189
        14/04/03 17:01:31 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
        14/04/03 17:01:31 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
        14/04/03 17:01:31 INFO mapred.JobClient:     Launched map tasks=2
        14/04/03 17:01:31 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
        14/04/03 17:01:31 INFO mapred.JobClient:     Failed map tasks=1



        On Thu, Apr 3, 2014 at 5:05 PM, ghufran malik <ghufran1ma...@gmail.com> wrote:

            My Giraph job gets stuck at this point and will not go any further; that log is what is continually printed out every 5 seconds. The log message comes from the CombinedWorkerProgress class:

            else if (isComputeSuperstep()) {
              sb.append("Compute superstep ").append(currentSuperstep).append(": ");
              sb.append(verticesComputed).append(" out of ")
                  .append(verticesToCompute).append(" vertices computed; ");
              sb.append(partitionsComputed).append(" out of ")
                  .append(partitionsToCompute).append(" partitions computed");

            So the data is loaded fine, but then the Giraph job gets stuck in superstep 0 for some reason?

            public void compute(Vertex<IntWritable, IntWritable, NullWritable> vertex,
                Iterable<IntWritable> messages) throws IOException
            {
              // check if it's the first superstep
              if (getSuperstep() == 0)
              {
                if (isStart(vertex))
                {
                  vertex.setValue(new IntWritable((int) getSuperstep()));
                  for (Edge<IntWritable, NullWritable> edge : vertex.getEdges())
                  {
                    idQueue.addInt(edge.getTargetVertexId().get());
                    sendMessage(edge.getTargetVertexId(), new IntWritable(1));
                  }
                  Collections.sort(idQueue.getArrayList());
                  aggregate(ID_AGG, idQueue);
                }
                else
                {
                  vertex.setValue(new IntWritable(Integer.MAX_VALUE));
                }
              }

            That's the code I wrote for the first superstep. I ran this code before without the aggregate call and it worked, so I think my problem is related to the aggregator/master.

            Kind regards,

            Ghufran


            On Thu, Apr 3, 2014 at 4:40 PM, Rob Vesse <rve...@dotnetrdf.org> wrote:

                How is that an error?

                Those are just informational log statements from Giraph; you'll need to provide the actual error message and describe the issue to get help with your problem.

                Rob

                From: ghufran malik <ghufran1ma...@gmail.com>
                Reply-To: <user@giraph.apache.org>
                Date: Thursday, 3 April 2014 16:09
                To: <user@giraph.apache.org>
                Subject: Master/Agreggators

                    Hi,

                    I received the error:

                    14/04/03 16:01:07 INFO mapred.JobClient:  map 100% reduce 0%
                    14/04/03 16:01:11 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 0: 0 out of 4 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 106.6MB, average 106.6MB
                    14/04/03 16:01:16 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 0: 0 out of 4 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 106.6MB, average 106.6MB
                    14/04/03 16:01:21 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 0: 0 out of 4 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 106.6MB, average 106.6MB
                    14/04/03 16:01:26 INFO job.JobProgressTracker: Data from 1 workers - Compute superstep 0: 0 out of 4 vertices computed; 0 out of 1 partitions computed; min free memory on worker 1 - 106.59MB, average 106.59MB


                    This happened after I tried to run a computation class I made that makes use of an aggregator and a master. I remember getting a similar error when I tried SimplePageRank, which also makes use of a master and an aggregator.

                    Does anyone know why I receive this error and how
                    to fix it?

                    Kind regards,

                    Ghufran





