Continuation of this thread is here:
http://apache-ignite-users.70518.x6.nabble.com/ignite-compute-ClusterGroup-is-broken-td18655.html

Mon, Nov 27, 2017 at 3:30, Chris Berry <chriswbe...@gmail.com>:

> >>>> Affinity is not affected by IgniteCompute. My mistake.
> >>>> I get a hold of an IgniteCompute _before_ the Affinity is applied
> >> I'm not sure what you are trying to say.
>
> What I mean is -- I do this:
>
>     ignite.compute().withTimeout(timeout).execute(computeTask, request);
>
> Up-front, before I execute the Affinity part. Like so:
>
>     public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid,
>             @Nullable TRequest request) throws IgniteException {
>         Map<ComputeJob, ClusterNode> jobMap = new HashMap<>();
>         try {
>             List<UUID> cacheKeys = getIgniteAffinityCacheKeys(request);
>             Map<ClusterNode, Collection<UUID>> nodeToKeysMap =
>                 ignite.<UUID>affinity(getIgniteAffinityCacheName()).mapKeysToNodes(cacheKeys);
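>
> (The snippet above cuts off inside the try block. A rough sketch of how the rest
> of that method could look -- createComputeJob() here is a hypothetical factory,
> not something from the snippet above:)
>
>             for (Map.Entry<ClusterNode, Collection<UUID>> entry : nodeToKeysMap.entrySet()) {
>                 // One ComputeJob per node, covering all the keys that map to that node.
>                 jobMap.put(createComputeJob(request, entry.getValue()), entry.getKey());
>             }
>             return jobMap;
>         } catch (Exception e) {
>             throw new IgniteException("Failed to map jobs to nodes", e);
>         }
>     }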
>
> >> Correct me if I'm wrong, but you are trying to choose a node to run a
> >> compute task on, depending on where a needed key is stored (just like
> >> affinityRun).
> >> And at the same time you want to make sure that the corresponding node
> >> is already initialized.
> >> And if the node that you chose by affinity is not ready yet, then there
> >> will be no suitable nodes.
>
> We need the existing Grid to keep computing as Nodes come and go, which is
> often the case in the Cloud.
>
> As in: NodeA somehow dies, and another node, NodeB, soon replaces it.
> Meanwhile, 3 other copies of the data that was on NodeA remain available
> (1 Primary & 3 Backups).
>
> What I want is for the compute load, which is constant and high, not to be
> sent to NodeB until it is truly ready to accept load.
>
> And that point is not when NodeB starts Ignite, but some time afterwards --
> possibly a few minutes later.
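>
> One way to express "truly ready" (a sketch under assumptions, not something from
> this thread): each node flips a flag in a small shared cache once its own warm-up
> finishes, and the caller builds a ClusterGroup from a predicate over that flag.
> The cache name "nodeReadiness" and the warm-up hook are made up for illustration.
>
>     // On each node, once its application-level warm-up completes:
>     IgniteCache<UUID, Boolean> readyCache = ignite.getOrCreateCache("nodeReadiness");
>     readyCache.put(ignite.cluster().localNode().id(), Boolean.TRUE);
>
>     // On the caller, restrict compute to nodes that have flagged themselves ready:
>     ClusterGroup readyNodes = ignite.cluster().forServers()
>         .forPredicate(node -> Boolean.TRUE.equals(readyCache.get(node.id())));
>
>     ignite.compute(readyNodes).withTimeout(timeout).execute(computeTask, request);
>
> In the map() approach above, the same check could also be applied to the nodes
> returned by mapKeysToNodes(), falling back to a backup node for keys whose
> primary is not ready yet.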
>
> >> Another thing I don't understand is why you need a ComputeTask#map
> >> method.
> >> Why is ClusterGroup-specific compute not enough for you?
>
> As shown above, I execute each ComputeTask against a large batch of Keys.
> These are then mapped to many different Nodes -- the ones holding the data
> for each Key.
> And I send each ComputeJob to the appropriate Node using the map() function.
>
> Thank you for all your help, Denis.
> Cheers,
> -- Chris
>
