Hi!

No, you cannot serialize a future object.

Mikael


On 2019-11-20 at 17:59, Prasad Bhalerao wrote:
Thank you for the suggestion. I will try this.

I am thinking of storing the task's future object in a (replicated) cache against a jobId. If the node goes down as described in case (b), I will get the task's future object from this cache using the jobId and will invoke the get method on it.

But I am not sure about this approach, i.e. whether a future object can be serialized, sent over the wire to another node, deserialized, and then have the get API invoked on it.

I will try to implement it tomorrow.
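
Since the future itself cannot be serialized (see the reply above), a rough, untested sketch of one alternative is to keep a small serializable status value in the replicated cache keyed by jobId and have the task update it; the cache name "jobStatus", the status strings, and the jobId below are all hypothetical:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class JobStatusSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Hypothetical replicated cache mapping jobId -> status; every
            // node holds a full copy, so any node can read it after a failure.
            CacheConfiguration<String, String> cfg =
                new CacheConfiguration<String, String>("jobStatus")
                    .setCacheMode(CacheMode.REPLICATED);
            IgniteCache<String, String> status = ignite.getOrCreateCache(cfg);

            String jobId = "job-42"; // hypothetical id taken from the Kafka message

            // The task records its own progress, so after a Kafka partition
            // rebalance a consumer can check the jobId before re-submitting.
            status.put(jobId, "RUNNING");
            // ... task body runs here ...
            status.put(jobId, "DONE");
        }
    }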

Thanks,
Prasad


On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov <igor.belyako...@gmail.com> wrote:

    Hi Prasad,

    I think that you can use compute().broadcast() for collecting
    results of activeTaskFutures() from all the nodes:

        Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
            ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());
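
    A minimal, self-contained sketch of this idea: since
    ComputeTaskFuture is not serializable (see the reply above), the
    closure below returns only the task IDs, and it uses
    Ignition.localIgnite() to resolve the Ignite instance on each
    executing node instead of capturing the outer one:

        import java.util.Collection;
        import java.util.HashSet;
        import java.util.Set;
        import org.apache.ignite.Ignite;
        import org.apache.ignite.Ignition;
        import org.apache.ignite.lang.IgniteUuid;

        public class ActiveTaskIds {
            public static void main(String[] args) {
                Ignite ignite = Ignition.start();

                // Broadcast a closure to every node; each node returns the IDs
                // of its locally started active tasks. IgniteUuid is
                // serializable, while ComputeTaskFuture is not, so only the
                // IDs travel back to the caller.
                Collection<Set<IgniteUuid>> perNodeIds =
                    ignite.compute().broadcast(() -> {
                        // Resolve the Ignite instance on the executing node;
                        // the outer instance cannot be serialized into the job.
                        Ignite local = Ignition.localIgnite();
                        return new HashSet<>(local.compute().activeTaskFutures().keySet());
                    });

                perNodeIds.forEach(ids ->
                    System.out.println("Active task IDs on one node: " + ids));
            }
        }

    Each returned set corresponds to one node's locally started active
    tasks; correlating these IDs with your own jobId would still need a
    mapping that you maintain yourself.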

    Regards,
    Igor Belyakov

    On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao
    <prasadbhalerao1...@gmail.com> wrote:

        Hi,

        I want to get the active tasks running in the cluster (tasks
        running on all nodes in the cluster).

        The IgniteCompute interface has a method "activeTaskFutures" which
        returns the task futures for active tasks started on the local node.

        Is there any way to get the task futures for all active tasks
        of the whole cluster?

        My use case is as follows.

        a) The node submits the affinity task, the task runs on some
        other node in the cluster, and the node which submitted the
        task dies.

        b) The node submits the affinity task, the task runs on the
        same node, and that same node dies.

        The task consumers running on all Ignite grid nodes consume
        tasks from a Kafka topic. If the node which submitted the
        affinity task dies, Kafka reassigns the partitions to another
        consumer (running on a different node) as part of its partition
        rebalance process. In this case my job gets consumed one more
        time.

        But in this scenario that job might already be running on one
        of the nodes, as in case (a), or might have already died, as in
        case (b).

        So I want to check whether the job is still running on one of
        the nodes or has already died. For this I need the list of
        active jobs running on all nodes.

        Can someone please advise?

        Thanks,
        Prasad
