Well, the thread has to set a member in the future when it finishes. If you serialize the future and send it somewhere else, how is the thread going to tell that copy of the future that it has finished?
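One workaround (my sketch, not something confirmed in this thread) is to leave the future on the node that created it and instead broadcast a closure that cancels the task by name on whichever node holds its future. The task name "taskA" below is hypothetical, and note the caveat: activeTaskFutures() only sees tasks started on the local node, so this only reaches the future while the submitting node is still alive:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class CancelByName {
    // Runs on every node; each node checks its own local futures and
    // cancels any whose task name matches. Only the name (a String)
    // crosses the wire, never the future itself.
    static class CancelTask implements IgniteRunnable {
        @IgniteInstanceResource
        private transient Ignite ignite; // injected on the executing node

        private final String taskName;

        CancelTask(String taskName) {
            this.taskName = taskName;
        }

        @Override public void run() {
            ignite.compute().activeTaskFutures().values().stream()
                .filter(f -> taskName.equals(f.getTaskSession().getTaskName()))
                .forEach(f -> f.cancel());
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // "taskA" is a placeholder for the name of the task to cancel.
            ignite.compute().broadcast(new CancelTask("taskA"));
        }
    }
}
```

This avoids serializing the future at all; the only state shipped between nodes is the task name.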

On 2019-11-23 at 14:31, Prasad Bhalerao wrote:
Can someone please explain why Active task futures can't be serialized?

If we lose the future then we have no way to cancel the active task if it's taking too long. I think this is an important feature.



Thanks,
Prasad

On Thu 21 Nov, 2019, 5:16 AM Denis Magda <dma...@apache.org> wrote:

    I think you should broadcast another task that simply asks
    every node whether taskA is already running, every time the
    topology changes. If the response from every node is empty,
    reschedule taskA; otherwise, skip this procedure.

    -
    Denis
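That suggestion could look roughly like the sketch below (the task name "taskA" and the rescheduling hook are my assumptions). The key point is that plain task names are serializable, so they can safely cross the wire where the futures cannot:

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class IsTaskRunning {
    // Callable broadcast to every node; reports the names of the
    // active tasks that were started on that node.
    static class ActiveTaskNames implements IgniteCallable<Set<String>> {
        @IgniteInstanceResource
        private transient Ignite ignite; // injected on the executing node

        @Override public Set<String> call() {
            // activeTaskFutures() only covers tasks started locally.
            Set<String> names = new HashSet<>();
            ignite.compute().activeTaskFutures().values()
                .forEach(f -> names.add(f.getTaskSession().getTaskName()));
            return names; // a Set<String> is serializable, unlike the futures
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            boolean running = ignite.compute().broadcast(new ActiveTaskNames())
                .stream()
                .anyMatch(names -> names.contains("taskA"));

            if (!running) {
                // No node reports taskA: reschedule it here.
            }
        }
    }
}
```

Hooking this check into a topology-change listener (as Denis suggests) would then make the reschedule decision automatic.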


    On Wed, Nov 20, 2019 at 9:28 AM Prasad Bhalerao
    <prasadbhalerao1...@gmail.com> wrote:

        That means I can't do this:

                Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
                    ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());

        Is there any way to get the futures of all active tasks
        running on all nodes of the cluster?

        Thanks,
        Prasad


        On Wed 20 Nov, 2019, 10:51 PM Mikael
        <mikael-arons...@telia.com> wrote:

            Hi!

            No, you cannot serialize any future object.

            Mikael


            On 2019-11-20 at 17:59, Prasad Bhalerao wrote:
            Thank you for the suggestion. I will try this.

            I am thinking of storing the task future object in a
            (replicated) cache against a jobId. If the node goes
            down as described in case (b), I will get the task's
            future object from this cache using the jobId and
            invoke the get method on it.

            But I am not sure about this approach: whether a
            future object can be serialized, sent over the wire
            to another node, deserialized, and then have its get
            API invoked.

            I will try to implement it tomorrow.

            Thanks,
            Prasad


            On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov
            <igor.belyako...@gmail.com> wrote:

                Hi Prasad,

                I think you can use compute().broadcast() to
                collect the results of activeTaskFutures() from
                all the nodes:

                Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
                    ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());

                Regards,
                Igor Belyakov

                On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao
                <prasadbhalerao1...@gmail.com> wrote:

                    Hi,

                    I want to get the active tasks running in the
                    cluster (tasks running on all nodes in the
                    cluster).

                    The IgniteCompute interface has the method
                    "activeTaskFutures", which returns the task
                    futures for active tasks started on the local
                    node.

                    Is there any way to get the task futures for
                    all active tasks across the whole cluster?

                    My use case is as follows.

                    a) A node submits an affinity task, the task
                    runs on some other node in the cluster, and
                    the node which submitted the task dies.

                    b) A node submits an affinity task, the task
                    runs on the same node, and that same node dies.

                    The task consumers running on all Ignite grid
                    nodes consume tasks from a Kafka topic. If the
                    node which submitted the affinity task dies,
                    Kafka re-assigns its partitions to another
                    consumer (running on a different node) as part
                    of its partition rebalance process. In this
                    case my job gets consumed one more time.

                    But in that scenario the job might already be
                    running on one of the nodes (case (a)) or
                    might already have died with the node (case (b)).

                    So I want to check whether the job is still
                    running on one of the nodes or has already
                    died. For this I need the list of active jobs
                    running on all nodes.

                    Can someone please advise?

                    Thanks,
                    Prasad






