Hmm, I'd say try disabling replays for a while and see if that makes any
difference.  Replays can generate a lot of ("unnecessary") traffic and
storage I/O on an ad-hoc basis.  If things improve with them off, that
would suggest you either need faster storage I/O (i.e. find a host that
uses all solid-state storage) or that you're hitting a network bottleneck.
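If it helps, on a TF2/srcds install the replay recorder is normally toggled
with the replay_enable convar (usually set from replay.cfg or server.cfg;
double-check the name against your own config, and it may need a map change
or restart to fully take effect), so the test could be as simple as:

    // turn the replay recorder (and its periodic block writes) off for the test
    replay_enable 0
    // and back on once you've compared
    replay_enable 1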


On Thu, Apr 10, 2014 at 3:53 PM, pilger <pilger...@gmail.com> wrote:

> Any idea how many IOPS would be needed to host a decent srcds with
> replays enabled!? I've had one host offering me 100, but I'm not really sure
> how that converts to latency. I know it isn't a direct relation, but I
> think it might also depend on the hardware specs.
>
> Anyways, I just wonder if 100 IOPS is bad, average or good.
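> (Doing the naive maths on that figure: if 100 IOPS really is a hard cap and
> requests were served strictly one at a time, that works out to roughly
> 1000 ms / 100 = 10 ms per request on average, before any queueing. So the
> number itself isn't obviously terrible for latency; the question is what
> happens when noisy neighbours queue requests ahead of yours.)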
>
>
> _pilger
>
>
> On 10 April 2014 11:34, Yun Huang Yong <gumby_li...@mooh.org> wrote:
>
> > +1
> >
> > SSD might help, but only might.
> >
> > If you think about the underlying machines, they are limited along the
> > dimensions of CPU, RAM, and disk. If the provider uses SSDs that will
> > generally improve disk performance, but if it means they then cram more
> > customers onto the box because they have more IOPS to share, you might
> > then bump into the limits of shared CPU.
> >
> > The virtualisation tech may be correlated with overselling but isn't the
> > root cause. You can get horribly oversold Xen/KVM just as you can get
> > horribly oversold OpenVZ. It comes down to price:performance and the
> > provider's desired profit margin.
> >
> > In my conversations with VPS providers I generally ask what hardware
> > they're running, and whether it's reasonable to expect X level of
> > performance from the service, e.g. I specifically ask if they monitor I/O
> > latency, and how much they oversell the CPUs if the VPS does not come with
> > dedicated cores. Having these pre-sales conversations also gives you a good
> > opportunity to evaluate the provider's competence & attitude to customer
> > service.
> >
> >
> > On 10/04/2014 11:16 PM, Rick Dunn wrote:
> >
> >> Wow that's some pretty ridiculous I/O lag.  It's pretty obvious the VPS
> >> you
> >> have isn't intended for anything I/O intensive (or has too much I/O
> >> intensive stuff on it already).  I'd recommend moving to a VPS provider
> >> that has fully virtualized containers rather than the para-virtualized
> >> ones
> >> you have now.  Most places advertise these as "cloud" VPS servers, and
> >> most
> >> providers of them that I've seen have much better I/O times from higher
> >> performance SANs.
> >>
> >>
> >>
> >> On Thu, Apr 10, 2014 at 8:37 AM, pilger <pilger...@gmail.com> wrote:
> >>
> >>> Did an ioping run instead; I believe it does the trick for measuring this.
> >>>
> >>> Here's what I got:
> >>>
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=1 time=283 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=2 time=540 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=3 time=421 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=4 time=429 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=5 time=7.5 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=6 time=30.7 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=7 time=483 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=8 time=619 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=9 time=47.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=10 time=13.8 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=11 time=12.7 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=12 time=519 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=13 time=456 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=14 time=327 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=15 time=17.8 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=16 time=37.4 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=17 time=178.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=18 time=288 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=19 time=20.9 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=20 time=1.1 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=21 time=41.9 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=22 time=32.7 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=23 time=20.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=24 time=12.9 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=25 time=123.6 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=26 time=36.4 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=27 time=38.3 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=28 time=670 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=29 time=55.7 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=30 time=19.1 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=31 time=220 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=32 time=43.1 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=33 time=33.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=34 time=31.0 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=35 time=58.7 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=36 time=577 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=37 time=26.6 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=38 time=586.4 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=39 time=41.4 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=40 time=17.5 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=41 time=254 us
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=42 time=102.0 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=43 time=212.3 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=44 time=33.6 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=45 time=434.5 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=46 time=360.5 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=47 time=40.4 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=48 time=132.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=49 time=141.2 ms
> >>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=50 time=26.9 ms
> >>>>
> >>>> --- . (ext4 /dev/ploop48624p1) ioping statistics ---
> >>>> 50 requests completed in 52.1 s, 16 iops, 65.0 kb/s
> >>>> min/avg/max/mdev = 220 us / 61.5 ms / 586.4 ms / 113.3 ms
> >>>
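> >>> (The exact invocation isn't shown above, but output in this shape is what
> >>> plain ioping produces with its defaults of 4 KiB read requests at 1-second
> >>> intervals, run from the directory you care about, e.g.:
> >>>
> >>>     ioping .          # one request per second until stopped with Ctrl-C
> >>>     ioping -c 50 .    # or ask for exactly 50 requests up front
> >>>
> >>> so it's easy to re-run later for comparison.)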
> >>>
> >>> A couple of them were way above 15ms. I still haven't been able to link a
> >>> stutter to I/O lag, though. By that I mean I didn't experience a "hiccup"
> >>> while noticing an I/O spike at the same time. I tried disabling replays and
> >>> logs yesterday but it had little effect, since the system was behaving
> >>> better than usual anyway. Looks like it depends on the neighbours' noise,
> >>> indeed.
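> >>>
> >>> (One thing I might still try, just as an idea: leave ioping running with a
> >>> timestamp on each line while playing, then match any hiccup I notice against
> >>> the log afterwards. Something along these lines, assuming stdbuf is available
> >>> to keep the pipe line-buffered:
> >>>
> >>>     stdbuf -oL ioping . | while read line; do
> >>>         echo "$(date +%T) $line"    # prefix each ioping sample with the time
> >>>     done > ioping.log
> >>>
> >>> and then grep ioping.log for multi-hundred-ms requests around the times a
> >>> stutter happened.)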
> >>>
> >>> Is that solid evidence to take to the host and ask to be moved, or to get
> >>> an SSD!? And, repeating myself a bit, would an SSD help!?
> >>>
> >>>
> >>> Thanks guys!
> >>>
> >>> _pilger
_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
https://list.valvesoftware.com/cgi-bin/mailman/listinfo/hlds_linux
