Yes, makes perfect sense.

- Bobby

On Friday, February 10, 2017, 4:36:22 PM CST, Anis Nasir <aadi.a...@gmail.com>
wrote:

Dear Bobby,

Thank you very much for your reply.

In real deployments, executors are often heterogeneous and the execution time
per tuple is non-uniform (as discussed in the JIRA). In such cases, the
workload and capacity distributions of the executors are unknown at the
upstream operator, which must therefore infer the capacity of each worker and
the workload assigned to it.
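To make the inference step concrete, one simple approach is to keep an exponentially weighted moving average of each worker's periodically reported queue length. Everything below, class and method names included, is an illustrative sketch of that idea, not a Storm API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: infer per-worker load from periodic reports (hypothetical names). */
public class LoadEstimator {
    private static final double ALPHA = 0.2;  // EWMA smoothing factor
    private final Map<Integer, Double> load = new ConcurrentHashMap<>();

    /** Called whenever a downstream task reports its current queue length. */
    public void onReport(int taskId, int queueLength) {
        // First report seeds the estimate; later reports are smoothed in.
        load.merge(taskId, (double) queueLength,
                   (old, fresh) -> (1 - ALPHA) * old + ALPHA * fresh);
    }

    /** Smoothed load estimate for a task; 0.0 if no report has arrived yet. */
    public double estimate(int taskId) {
        return load.getOrDefault(taskId, 0.0);
    }
}
```

The smoothing factor is a tuning knob: a larger ALPHA reacts faster to load spikes, a smaller one tolerates noisy queue-length samples.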

For such scenarios, I would like to design a grouping scheme that allows
upstream operators to change the assignments based on both the workload and
the capacity of each machine.
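The assignment rule such a grouping might apply can be sketched as picking, among the candidate downstream tasks, the one with the lowest utilization (workload divided by capacity). The shape loosely follows the task-selection step of a Storm custom grouping, but the class below is a self-contained, hypothetical sketch rather than an implementation of Storm's `CustomStreamGrouping` interface:

```java
import java.util.List;
import java.util.Map;

/** Illustrative sketch: route each tuple to the least-utilized downstream task. */
public class LeastLoadedChooser {
    // Hypothetical inputs: reported workload and capacity per downstream task.
    private final Map<Integer, Double> workload;
    private final Map<Integer, Double> capacity;

    public LeastLoadedChooser(Map<Integer, Double> workload,
                              Map<Integer, Double> capacity) {
        this.workload = workload;
        this.capacity = capacity;
    }

    /** Returns the task whose utilization (workload / capacity) is lowest. */
    public int chooseTask(List<Integer> targetTasks) {
        int best = targetTasks.get(0);
        double bestUtil = Double.MAX_VALUE;
        for (int task : targetTasks) {
            double util = workload.getOrDefault(task, 0.0)
                        / Math.max(capacity.getOrDefault(task, 1.0), 1e-9);
            if (util < bestUtil) {
                bestUtil = util;
                best = task;
            }
        }
        return best;
    }
}
```

A real grouping would refresh the two maps from the downstream reports; here they stand in for whatever feedback channel carries those statistics.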

Also, I would prefer that each downstream operator send this message on an
as-needed basis, rather than broadcasting it across the whole set of
operators.

Does that make sense?

Regards,
Anis

On Fri, Feb 10, 2017 at 11:54 PM, Bobby Evans <ev...@yahoo-inc.com.invalid>
wrote:

> Anis,
> We already have the q-length being reported upstream:
> https://issues.apache.org/jira/browse/STORM-162
> It works well, except when a topology gets really big the amount of
> metrics being collected can negatively impact the performance of the
> topology.  By really big I mean several thousand workers.
> There has also been a push to redo the metrics system in storm so it is
> more scalable and so that nimbus can query it.  That is what I personally
> think would be a good long term solution for features like elasticity.  But
> I am not really sure what you mean by load aware scheduling.
>
> - Bobby
>
> On Thursday, February 9, 2017, 10:34:29 PM CST, Anis Nasir <
> aadi.a...@gmail.com> wrote:
>
> Dear All,
>
> I have been trying to implement load aware scheduling for Apache Storm.
>
> For this purpose, I need to send periodic statistics from downstream
> operators to upstream operators.
>
> Is there a standard way of sending such statistics to an upstream operator,
> e.g., a bolt periodically reporting its local queue length to the upstream
> spout?
>
> Thanking you in advance.
>
> Regards,
> Anis
>
