On 23 June 2016 at 18:32, Даниил Лещёв <mele...@gmail.com> wrote:

>
>> I would slightly prefer if we discuss it over plain email (without
>> patches), to see what you think about how complex the network model needs
>> to be, and whether a static "time X" vs. semi-dynamic (based on the
>> instance disk size) approach is best.
>>
>> Maybe there was some more information back at the start of the project?
>> (I only started watching the mailing list again recently).
>>
> The initial plan was to implement a "static" solution based on instance
> disk size, and then make it "dynamic" by using information about network
> speed from data collectors.
>

Ack.

> At the moment, we have a "semi-dynamic" solution, I think. The new tags may
> specify the network speed in the cluster (and between different parts of the
> cluster).
>

I'll look at the patches, but if I read correctly—these are currently
stored as tags. Would it make more sense to have them as proper values in
the objects, so that (in the future) they can be used by other parts of the
code? Just a thought.

> I am assuming that this speed remains constant, since the network is usually
> configured once and locally (for example, within a server rack).
>

That makes sense.


> I think that, with such an assumption, the network speed stays almost
> constant and the time estimates for balancing solutions become predictable.
>
> I suggest using the new options to discard solutions that take a long time
> and only slightly change the state of the cluster.
> In my mind, the time to perform disk replication directly depends on the
> network bandwidth.
>
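The discarding idea above could be sketched roughly as follows. This is a hedged illustration only: the function and parameter names (`replication_time`, `keep_solution`, a fixed `network_mbps` figure) are hypothetical and do not reflect Ganeti's actual balancer code.

```python
# Sketch (hypothetical helpers, not Ganeti code): estimate how long it takes
# to replicate instance disks over the network, and discard balancing moves
# whose total estimated duration exceeds a time budget.

def replication_time(disk_size_mib, network_mbps):
    """Estimated seconds to copy disk_size_mib over a network_mbps link."""
    bytes_total = disk_size_mib * 1024 * 1024
    bytes_per_sec = network_mbps * 1_000_000 / 8  # Mbit/s -> bytes/s
    return bytes_total / bytes_per_sec

def keep_solution(disk_sizes_mib, network_mbps, max_seconds):
    """Keep a move set only if its total replication time fits the budget."""
    total = sum(replication_time(s, network_mbps) for s in disk_sizes_mib)
    return total <= max_seconds

# e.g. a single 100 GiB disk over a 1 Gbit/s link:
# replication_time(102400, 1000) -> ~859 seconds (~14 minutes)
```

Under this model, a per-cluster (or per-rack) speed tag would supply `network_mbps`, keeping the estimate predictable as long as the speed assumption holds.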

Hmm, depends. On a gigabit or 10G network and with mechanical hard drives,
the time will depend more on disk load.
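One simple way to model that caveat (a sketch only; the throughput figures below are illustrative assumptions, not measured values) is to treat the slower of the network link and the disk as the bottleneck:

```python
# Sketch: on fast networks the disk, not the link, can limit replication
# speed. Figures are illustrative assumptions, not measured Ganeti values.

def effective_throughput_mbps(network_mbps, disk_mbps):
    """The slower of the two legs bounds the copy rate."""
    return min(network_mbps, disk_mbps)

# A mechanical disk at ~150 MB/s is roughly 1200 Mbit/s:
# on a 1 Gbit/s network the link is the bottleneck,
# on a 10 Gbit/s network the disk is.
```

A dynamic model would additionally account for concurrent disk load, which is what makes the purely bandwidth-based estimate optimistic on fast networks.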

thanks,
iustin
