@Weasels
Yep. I've had replays disabled for a while, along with logging. It is a
feature our community misses, though, and we can't afford to keep it
disabled in the long run. But yeah, the stuttering gets considerably better
with replays and logging off. I don't think I got what you said about
storage, though. Shipping replay block and dmx files over the network is
still extremely unstable and causes srcds to crash whenever it encounters a
problem with the transfer.

@yun
What if they guarantee *my VPS* will have the 100 IOPS?
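
Just to get a rough feel for what that figure could mean in latency terms, I
did a back-of-the-envelope calculation (assuming the 100 IOPS is a hard
ceiling, one request in flight at a time, and no caching - all my own
assumptions, not anything the host told me):

    # rough sketch only: if the disk completes at most 100 operations per
    # second and requests are issued one at a time (queue depth 1), each
    # request averages roughly 1/100 of a second
    iops_cap = 100                       # the figure the host is offering
    avg_latency_ms = 1000.0 / iops_cap   # ~10 ms per request
    print("worst-case average at QD1: %.0f ms per request" % avg_latency_ms)

In practice a cap like that is a throttle rather than a latency promise, so
individual requests can still be much faster than 10 ms - or much slower on
a crowded node.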

And I'm already doing what you suggested: trying out as many providers as I
can. I'd stick with by-the-slot servers if those were any good here in
Brazil. We had a good experience with basically just one by-the-slot host,
but it had major problems with its network route and we had to leave. Most
of the people who run those hosts here are incompetent and, what is worse,
unwilling to work with their customers to provide a decent service.

So I'm taking the long way around and using the opportunity to learn things
as I go. It's been very stressful, but I like to have as much knowledge
about what I'm doing as I can. And I'm very grateful for all the help you
guys are giving. :)

On a side note, the host that showed the I/O delays is going to move my VPS
to another, less crowded node to see if it helps. Let's hope it does.
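
When they do, I want to re-run the latency check and compare before/after.
Something along these lines is what I have in mind - a rough Python
approximation of what ioping measures; the scratch file name/size, the 50
requests and the 15 ms threshold are just my own picks:

    #!/usr/bin/env python3
    # Time small random reads against a scratch file to spot latency spikes.
    # Rough approximation of ioping; run it on the VPS before and after the
    # node move and compare the numbers.
    import os, random, time

    FILE = "ioprobe.tmp"
    FILE_SIZE = 256 * 1024 * 1024      # 256 MiB scratch file
    BLOCK = 4096                       # 4 KiB reads, same size ioping uses
    REQUESTS = 50
    THRESHOLD_MS = 15.0                # spikes above this are worth noting

    # create the scratch file once (written, not sparse, so reads hit disk)
    if not os.path.exists(FILE) or os.path.getsize(FILE) < FILE_SIZE:
        with open(FILE, "wb") as f:
            chunk = b"\0" * (1024 * 1024)
            for _ in range(FILE_SIZE // len(chunk)):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())

    fd = os.open(FILE, os.O_RDONLY)
    times_ms = []
    for _ in range(REQUESTS):
        offset = random.randrange(0, FILE_SIZE // BLOCK) * BLOCK
        # drop this range from the page cache so the read goes to the disk
        os.posix_fadvise(fd, offset, BLOCK, os.POSIX_FADV_DONTNEED)
        start = time.monotonic()
        os.pread(fd, BLOCK, offset)
        times_ms.append((time.monotonic() - start) * 1000.0)
        time.sleep(1)                  # one request per second, like ioping
    os.close(fd)

    spikes = [t for t in times_ms if t > THRESHOLD_MS]
    print("min/avg/max = %.2f / %.2f / %.2f ms"
          % (min(times_ms), sum(times_ms) / len(times_ms), max(times_ms)))
    print("%d of %d requests above %.0f ms"
          % (len(spikes), REQUESTS, THRESHOLD_MS))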


_pilger


On 10 April 2014 21:25, Yun Huang Yong <gumby_li...@mooh.org> wrote:

> You can't tell from that number alone. Same as on the CPU dimension, your
> provider can sell 100 IOPs to 30 customers on a disk subsystem that might
> provide, for example's sake, 800 IOPs. Theoretically it limits the damage
> one customer could do, but overall the system is still oversold - what
> happens when 10 of those customers are running I/O heavy applications?
>
> TF2 does very little write activity even with replays enabled so it's
> extremely unlikely that replays/logging is causing any problem. More than
> likely, if you're having IO issues, it would be due to noisy
> neighbours/under-provisioned disk.
>
> You really only have two options:
> 1) talk to the provider before you sign up and tell them your
> requirements: that you intend to run TF2 servers, which are CPU hungry and
> I/O-write light, but need consistent disk performance
> 2) sign up for a month and try out various services
>
> I recommend doing both because even if you try out 10 providers today you
> won't know whether you are just lucky that you've been provisioned onto a
> box that hasn't reached capacity yet (e.g. if they typically sell 30
> services on one box you might be customer #1 and have uncontended access
> today but when 29 other customers are on board it will be a different
> experience).
>
> TBH I wonder if you aren't better off just renting a by-the-slot server.
> The price difference between that and a VPS surely can't be large enough to
> justify the time you're spending on trying to understand this issue... then
> again, if your goal here is to learn more about the underlying systems then
> VPS is a fun way to do it :]
>
>
> On 11/04/2014 8:53 AM, pilger wrote:
>
>> Any idea how many IOPS would be needed for hosting a decent srcds with
>> replays enabled!? I've had one host offering me 100, but I'm not really
>> sure how that converts to latency. I know it isn't a direct relation, but
>> I think it might depend on the hardware specs.
>>
>> Anyway, I just wonder if 100 IOPS is bad, regular or good.
>>
>>
>> _pilger
>>
>>
>> On 10 April 2014 11:34, Yun Huang Yong <gumby_li...@mooh.org> wrote:
>>
>>> +1
>>>
>>> SSD might help, but only might.
>>>
>>> If you think about the underlying machines, they are limited on the
>>> dimensions of CPU, RAM, and disk. If the provider uses SSDs that would
>>> generally improve disk performance, but if that means they then cram
>>> more customers onto the box because they have more IOPs to share, you
>>> might then bump into the limits of shared CPU.
>>>
>>> The virtualisation tech may be correlated with overselling but isn't the
>>> root cause. You can get horribly oversold Xen/KVM same as you can get
>>> horribly oversold OpenVZ. It comes down to price:performance and the
>>> provider's desired profit margin.
>>>
>>> In my conversations with VPS providers I generally ask what hardware
>>> they're running, and ask if it's reasonable to expect X level of
>>> performance from the service. i.e. I specifically ask if they monitor IO
>>> latency, and how much they oversell the CPUs if the VPS does not come
>>> with dedicated cores. Having these pre-sales conversations also gives
>>> you a good opportunity to evaluate the provider's competence & attitude
>>> to customer service.
>>>
>>>
>>> On 10/04/2014 11:16 PM, Rick Dunn wrote:
>>>
>>>> Wow that's some pretty ridiculous I/O lag. It's pretty obvious the VPS
>>>> you have isn't intended for anything I/O intensive (or has too much I/O
>>>> intensive stuff on it already). I'd recommend moving to a VPS provider
>>>> that has fully virtualized containers rather than the para-virtualized
>>>> ones you have now. Most places advertise these as "cloud" VPS servers,
>>>> and most providers of them that I've seen have much better I/O times
>>>> from higher performance SANs.
>>>>
>>>>
>>>>
>>>> On Thu, Apr 10, 2014 at 8:37 AM, pilger <pilger...@gmail.com> wrote:
>>>>
>>>>> Did an ioping instead. I believe it does the trick of measuring.
>>>>>
>>>>> Here's what I got:
>>>>>
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=1 time=283 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=2 time=540 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=3 time=421 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=4 time=429 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=5 time=7.5 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=6 time=30.7 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=7 time=483 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=8 time=619 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=9 time=47.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=10 time=13.8 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=11 time=12.7 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=12 time=519 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=13 time=456 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=14 time=327 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=15 time=17.8 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=16 time=37.4 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=17 time=178.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=18 time=288 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=19 time=20.9 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=20 time=1.1 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=21 time=41.9 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=22 time=32.7 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=23 time=20.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=24 time=12.9 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=25 time=123.6 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=26 time=36.4 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=27 time=38.3 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=28 time=670 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=29 time=55.7 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=30 time=19.1 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=31 time=220 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=32 time=43.1 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=33 time=33.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=34 time=31.0 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=35 time=58.7 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=36 time=577 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=37 time=26.6 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=38 time=586.4 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=39 time=41.4 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=40 time=17.5 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=41 time=254 us
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=42 time=102.0 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=43 time=212.3 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=44 time=33.6 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=45 time=434.5 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=46 time=360.5 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=47 time=40.4 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=48 time=132.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=49 time=141.2 ms
>>>>> 4.0 kb from . (ext4 /dev/ploop48624p1): request=50 time=26.9 ms
>>>>> ^C--- . (ext4 /dev/ploop48624p1) ioping statistics ---
>>>>> 50 requests completed in 52.1 s, 16 iops, 65.0 kb/s
>>>>> min/avg/max/mdev = 220 us / 61.5 ms / 586.4 ms / 113.3 ms
>>>>>
>>>>> A couple of them were way above 15 ms. I still couldn't link a stutter
>>>>> to I/O lag yet. By that I mean I didn't experience a "hiccup" while
>>>>> noticing an I/O increase at the same time. I tried disabling replays
>>>>> and logs yesterday, but it had little effect since the system was
>>>>> behaving better than usual. Looks like it depends on the neighbours'
>>>>> noise, indeed.
>>>>>
>>>>> Is that solid evidence to go to the host and ask to move or get an
>>>>> SSD!? And, repeating myself a bit, would an SSD help!?
>>>>>
>>>>>
>>>>> Thanks guys!
>>>>>
>>>>> _pilger
_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
https://list.valvesoftware.com/cgi-bin/mailman/listinfo/hlds_linux
