> On Nov 20, 2017, at 2:21 PM, Toke Høiland-Jørgensen
> <notificati...@github.com> wrote:
>
> Pete Heist <notificati...@github.com> writes:
>
> >> On Nov 20, 2017, at 1:11 PM, Toke Høiland-Jørgensen
> >> <notificati...@github.com> wrote:
>
> > I wondered if/when this would come up… Why not plot the latency every
> > 20ms, too dense?
>
> For the current plot type (where data points are connected by lines),
> certainly. It would probably be possible to handle denser data sets with
> a point-cloud-style plot, but that would make the plot harder to read
> for dense data series.
Yeah, I thought of the same, or maybe an area fill between the min and max, or
98th-percentile values, or something.
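Just to sketch what I mean by the area-fill idea (this is a made-up helper,
not anything in Flent): reduce a dense series to per-window lower/upper
bounds with numpy, which can then be drawn as a fill between two curves.

```python
import numpy as np

def envelope(samples, window=100, pct=98):
    """Reduce a dense 1-D series to per-window lower/upper bounds.

    Returns (lo, hi) arrays suitable for an area fill, where hi is the
    pct'th percentile of each window and lo the (100 - pct)'th.
    (Hypothetical sketch; window/pct defaults are arbitrary.)"""
    n = len(samples) // window * window  # drop the ragged tail
    chunks = np.asarray(samples[:n], dtype=float).reshape(-1, window)
    lo = np.percentile(chunks, 100 - pct, axis=1)
    hi = np.percentile(chunks, pct, axis=1)
    return lo, hi

# 1000 dense samples collapse to 10 (lo, hi) pairs for plotting.
lo, hi = envelope(np.arange(1000), window=100, pct=98)
```

With matplotlib that pair would feed straight into `fill_between`.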
> Hmm, seeing as we probably want to keep all the data points in the Flent
> data file anyway, I think we might as well do the sub-sampling in Flent.
> Just thinning the plots is a few lines of numpy code; just need to
> figure out a good place to apply it.
Didn’t think of that (keeping all data points anyway), but it really makes more
sense. At first I thought numpy was a new street-talkin’ adjective (as in,
that’s some really numpy code). I see: NumPy. :)
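For the record, the thinning itself really does come down to a couple of
numpy lines. A rough sketch (the function name and signature are invented,
not actual Flent internals):

```python
import numpy as np

def thin(t, y, max_points=500):
    """Sub-sample a (time, value) series down to at most max_points,
    keeping evenly spaced samples (including the first and last) so the
    overall shape of the plot is preserved.

    Hypothetical helper; max_points default is arbitrary."""
    t = np.asarray(t)
    y = np.asarray(y)
    if len(t) <= max_points:
        return t, y
    idx = np.linspace(0, len(t) - 1, max_points).round().astype(int)
    return t[idx], y[idx]
```

The only real question, as noted above, is where in the plotting pipeline
to apply it.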
> Handling loss is another matter, but one that I need to deal with
> anyway. Right now I'm just throwing away lost data points entirely,
> which loses the lost_{up,down} information. Will fix that and also
> figure out the right way to indicate losses.
Really looking forward to it!
--
You are receiving this because you commented.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-345701066
_______________________________________________
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org