Does cancelling an active task also cancel the threads being used internally
for query execution (threads in the query pool) or for cache put/delete operations?
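
(For context, by "cancelling an active task" I mean something along these
lines; simplified snippet, sessionId is assumed to be known already:)

Map<IgniteUuid, ComputeTaskFuture<Object>> active = ignite.compute().activeTaskFutures();
ComputeTaskFuture<Object> fut = active.get(sessionId);

if (fut != null)
    fut.cancel(); // as I understand it, this invokes ComputeJob.cancel() on the nodes executing the job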

Thanks,
Prasad

On Sat, Nov 23, 2019 at 8:26 PM Mikael <mikael-arons...@telia.com> wrote:

> The whole idea with a future is that it is a small lightweight compact
> object, and you still have Igor's suggestion:
>
> Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
> ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());
>
> If you had to implement a cluster-wide listening mechanism in the futures,
> you would add a terrible amount of overhead and cause a lot of problems.
> What if you try to deserialize a future on a computer that is in another
> cluster, or is not even an Ignite application? What if you deserialize a
> future that was created two years ago and its "id" is now being reused for
> another future that has nothing to do with the original one? What if you
> deserialize it in a different cluster where that id means something different
> and not what you submitted on the other cluster? Yes, all of these things can
> be handled, but once again you would turn a small, nice, simple object into a
> complex beast.
> On 2019-11-23 at 15:00, Prasad Bhalerao wrote:
>
> By member, do you mean the output of the thread?
>
> If yes, can we keep the member at a centralised location, such as an internal
> cache?
> (Maybe we can provide a flag which, if turned on, broadcasts the member to
> whoever is listening to it, or to a centralised cache location.)
> I am considering the future as a handle to the task which can be used to
> cancel the task even if the submitter node goes down.
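>
> Something like this is what I would like to be able to do from any surviving
> node (rough, untested sketch; CancelBySessionId is a made-up name, and I am
> assuming activeTaskFutures() is keyed by the task session id):
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.compute.ComputeTaskFuture;
> import org.apache.ignite.lang.IgniteCallable;
> import org.apache.ignite.lang.IgniteUuid;
> import org.apache.ignite.resources.IgniteInstanceResource;
>
> class CancelBySessionId implements IgniteCallable<Boolean> {
>     @IgniteInstanceResource
>     private transient Ignite ignite;
>
>     private final IgniteUuid sesId;
>
>     CancelBySessionId(IgniteUuid sesId) { this.sesId = sesId; }
>
>     @Override public Boolean call() {
>         // activeTaskFutures() only knows about tasks submitted from the local
>         // node, so this lookup has to run on (at least) the submitting node.
>         ComputeTaskFuture<Object> fut = ignite.compute().activeTaskFutures().get(sesId);
>
>         return fut != null && fut.cancel();
>     }
> }
>
> // from any node that knows the session id:
> boolean cancelled = ignite.compute().broadcast(new CancelBySessionId(sesId)).contains(Boolean.TRUE);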
>
>
>
> On Sat 23 Nov, 2019, 7:21 PM Mikael <mikael-arons...@telia.com> wrote:
>
>> Well, the thread has to set a member in the future when it is finished. If
>> you serialize the future and send it somewhere else, how is the thread
>> going to be able to tell the future that it has finished?
>> On 2019-11-23 at 14:31, Prasad Bhalerao wrote:
>>
>> Can someone please explain why active task futures can't be serialized?
>>
>> If we lose the future then we don't have a way to cancel the active
>> task if it's taking too long. I think this is an important feature.
>>
>>
>>
>> Thanks,
>> Prasad
>>
>> On Thu 21 Nov, 2019, 5:16 AM Denis Magda <dma...@apache.org> wrote:
>>
>>> I think that you should broadcast another task that simply asks every
>>> node whether taskA is already running, every time the topology changes.
>>> If the response from all the nodes is empty then you need to reschedule
>>> taskA; otherwise, you can skip this procedure.
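>>>
>>> Roughly like this (untested sketch, using activeTaskFutures() on every node;
>>> "taskA" stands for whatever task name it was submitted with):
>>>
>>> import org.apache.ignite.Ignite;
>>> import org.apache.ignite.compute.ComputeTaskFuture;
>>> import org.apache.ignite.lang.IgniteCallable;
>>> import org.apache.ignite.resources.IgniteInstanceResource;
>>>
>>> class IsTaskRunning implements IgniteCallable<Boolean> {
>>>     @IgniteInstanceResource
>>>     private transient Ignite ignite;
>>>
>>>     private final String taskName;
>>>
>>>     IsTaskRunning(String taskName) { this.taskName = taskName; }
>>>
>>>     @Override public Boolean call() {
>>>         // only sees tasks submitted from this node, hence the broadcast below
>>>         for (ComputeTaskFuture<Object> fut : ignite.compute().activeTaskFutures().values()) {
>>>             if (taskName.equals(fut.getTaskSession().getTaskName()))
>>>                 return true;
>>>         }
>>>
>>>         return false;
>>>     }
>>> }
>>>
>>> boolean running = ignite.compute().broadcast(new IsTaskRunning("taskA")).contains(Boolean.TRUE);
>>>
>>> if (!running) {
>>>     // reschedule taskA
>>> }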
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Wed, Nov 20, 2019 at 9:28 AM Prasad Bhalerao <
>>> prasadbhalerao1...@gmail.com> wrote:
>>>
>>>> That means I can't do this:
>>>>
>>>> Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
>>>> ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());
>>>>
>>>> Is there any way to get the list of futures for all active tasks running on
>>>> all nodes of the cluster?
>>>>
>>>> Thanks,
>>>> Prasad
>>>>
>>>>
>>>> On Wed 20 Nov, 2019, 10:51 PM Mikael <mikael-arons...@telia.com> wrote:
>>>>
>>>>> Hi!
>>>>>
>>>>> No, you cannot serialize any future object.
>>>>>
>>>>> Mikael
>>>>>
>>>>>
>>>>> On 2019-11-20 at 17:59, Prasad Bhalerao wrote:
>>>>>
>>>>> Thank you for the suggestion. I will try this.
>>>>>
>>>>> I am thinking of storing the task future object in a (replicated) cache
>>>>> against a jobId. If the node goes down as described in case (b), I will get
>>>>> the task's future object from this cache using the jobId and will invoke the
>>>>> get method on it.
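>>>>>
>>>>> Roughly (pseudo-code; cache, task and variable names are made up):
>>>>>
>>>>> // on the submitting node:
>>>>> ComputeTaskFuture<Object> fut = ignite.compute().executeAsync(MyTask.class, arg);
>>>>>
>>>>> IgniteCache<Long, ComputeTaskFuture<Object>> handles = ignite.getOrCreateCache("taskFutures");
>>>>> handles.put(jobId, fut);
>>>>>
>>>>> // later, on some other node, after the submitter died:
>>>>> ComputeTaskFuture<Object> f = handles.get(jobId);
>>>>> Object res = f.get();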
>>>>>
>>>>> But I am not sure about this approach, i.e. whether a future object can be
>>>>> serialized, sent over the wire to another node, deserialized there, and then
>>>>> have the get API invoked on it.
>>>>>
>>>>> I will try to implement it tomorrow.
>>>>>
>>>>> Thanks,
>>>>> Prasad
>>>>>
>>>>>
>>>>> On Wed 20 Nov, 2019, 9:06 PM Igor Belyakov <igor.belyako...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Prasad,
>>>>>>
>>>>>> I think that you can use compute().broadcast() for collecting results
>>>>>> of activeTaskFutures() from all the nodes:
>>>>>> Collection<Map<IgniteUuid, ComputeTaskFuture<Object>>> result =
>>>>>> ignite.compute().broadcast(() -> ignite.compute().activeTaskFutures());
>>>>>>
>>>>>> Regards,
>>>>>> Igor Belyakov
>>>>>>
>>>>>> On Wed, Nov 20, 2019 at 5:27 PM Prasad Bhalerao <
>>>>>> prasadbhalerao1...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I want to get the active tasks running in the cluster (tasks running on
>>>>>>> all nodes in the cluster).
>>>>>>>
>>>>>>> The IgniteCompute interface has a method "activeTaskFutures" which
>>>>>>> returns task futures for active tasks started on the local node.
>>>>>>>
>>>>>>> Is there any way to get the task futures for all active tasks across the
>>>>>>> whole cluster?
>>>>>>>
>>>>>>> My use case is as follows.
>>>>>>>
>>>>>>> a) A node submits the affinity task, the task runs on some other node in
>>>>>>> the cluster, and the node which submitted the task dies.
>>>>>>>
>>>>>>> b) A node submits the affinity task, the task runs on the same node, and
>>>>>>> that node dies.
>>>>>>>
>>>>>>> The task consumers running on all Ignite grid nodes consume tasks from a
>>>>>>> Kafka topic. If the node which submitted the affinity task dies, Kafka
>>>>>>> re-assigns the partitions to another consumer (running on a different
>>>>>>> node) as part of its partition rebalance process. In this case my job gets
>>>>>>> consumed one more time.
>>>>>>>
>>>>>>> But in that scenario the job might already be running on one of the nodes,
>>>>>>> as in case (a), or might have already died, as in case (b).
>>>>>>>
>>>>>>> So I want to check whether the job is still running on one of the nodes
>>>>>>> or has already died. For this I need the list of active jobs running on
>>>>>>> all nodes.
>>>>>>>
>>>>>>> Can someone please advise?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Prasad
>>>>>>>
