It doesn't necessarily get throttled; the added latency alone will definitely impact 
the maximum bandwidth achievable per stream, especially over TCP. 
In this case a bandwidth-delay calculator can help you find the maximum 
theoretical bandwidth for a given latency:
https://www.switch.ch/network/tools/tcp_throughput/?do+new+calculation=do+new+calculation
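The calculator's math can be sketched directly: a single TCP stream's throughput is bounded by the window size divided by the round-trip time. A minimal sketch of that arithmetic (the ~200 ms RTT is an assumed figure for an intercontinental path, not a measured one from this thread):

```python
# Bandwidth-delay product: single-stream TCP throughput is capped at
# window / RTT, regardless of the line rate.

def max_tcp_throughput_mbps(window_bytes: float, rtt_s: float) -> float:
    """Upper bound on single-stream TCP throughput in Mbit/s."""
    return window_bytes * 8 / rtt_s / 1e6

def window_needed_bytes(target_mbps: float, rtt_s: float) -> float:
    """TCP window required to sustain target_mbps at the given RTT."""
    return target_mbps * 1e6 * rtt_s / 8

# A classic 64 KiB window at an assumed 200 ms RTT caps out around 2.6 Mbit/s:
print(max_tcp_throughput_mbps(64 * 1024, 0.200))   # ~2.62
# Sustaining 1 Gbit/s at 200 ms would need a ~25 MB window:
print(window_needed_bytes(1000, 0.200))            # 25000000.0
```

This is why a long-haul path can be far below line rate even with no ISP throttling at all; running iperf with multiple parallel streams sidesteps the per-stream cap.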





From: Granwille Strauss <granwi...@namhost.com.INVALID>
Sent: Monday, July 10, 2023 6:04 PM
To: users@cloudstack.apache.org
Cc: Levin Ng <levindec...@gmail.com>; Jithin Raju <jithin.r...@shapeblue.com>; 
vivek.ku...@indiqus.com
Subject: Re: Write Speeds


Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM host to 
establish a baseline to the remote USA VM:

- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, but this time the remote host was in a DC 
close to ours, in the same country:

- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that somewhere along the path, traffic 
from the USA VM to the local KVM host gets throttled? Based on the results 
above, the throttling doesn't seem to come from ISPs inside our country.

So yeah, somewhere an ISP is throttling along the USA routes.
On 7/10/23 15:56, Levin Ng wrote:

Hi Groete,



I'm not sure what your network setup in ACS is, but a test between two public 
IPs showing ~500 Mbps sounds like you are saturating inbound and outbound 
traffic on a single network path. Can you run a test from outside ACS to your 
VM using an IP on the same public network segment? That avoids routing and 
confusion. Also, which ACS network driver are you using? If VXLAN, it's worth 
checking your network switch's multicast performance.



The remote results clearly show the ISP putting some limit on the line; you 
will have to check with them. Unless your line is end-to-end (Metro Ethernet 
etc.), throughput is not always guaranteed.





On disk performance: please share your fio test command and results 
beforehand. I'm assuming you are doing something like:



fio -filename=./testfile.bin -direct=1 -iodepth 8 -thread -rw=randrw 
-rwmixread=50 -ioengine=psync -bs=4k -size=1000M -numjobs=30 -runtime=600 
-group_reporting -name=mytest





Regards,

Levin

On 10 Jul 2023 at 11:33 +0100, Granwille Strauss 
<granwi...@namhost.com><mailto:granwi...@namhost.com>, wrote:

Hi Guys

Thank you, I have been running more tests now with the feedback you guys gave. 
Firstly, I want to break this up into two sections:

1. Network:

- I have been running iperf tests between my VMs on their public network, and 
the tests give me speeds of ~500 Mbps; keep in mind this is between two local 
VMs on the same KVM host, but on the public network.

- I then ran iperf tests in and out from my local VMs to remote servers, and 
this is where it does funny things. From the remote VM in the USA, an iperf 
test to my local VM shows ~50 Mbps, and a test from my local VM to the remote 
USA VM achieves the same ~50 Mbps. I ran my iperf tests with 1 GB and 2 GB 
flags and the results remain constant.

- During all these tests I kept an eye on my VR's resources (it uses the 
default service offering); it never spiked or reached any thresholds.

Is it safe to assume that, because of the massive distance between the remote 
VM and my local VMs, the drop to ~50 Mbps is normal? Keep in mind the remote 
VM is on a 1 Gbps line too, and it is managed by a big ISP in the USA. To me a 
drop from 1000 Mbps to 50 Mbps is huge and doesn't quite make sense; I would 
have expected at least 150 Mbps.

2. Disk Write Speed:

- It seems the only change left to make is the disk cache option. So far, 
write-back appears to be common practice for most cloud providers, provided 
they have the necessary power redundancy and VM backup images in place.
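For reference, the cache mode shows up in the libvirt domain XML on the disk's driver element, roughly like this (a sketch; the file path, image type, and target device are illustrative and will differ per VM):

```xml
<disk type='file' device='disk'>
  <!-- cache='writeback' uses the host page cache and relies on guest flushes;
       cache='none' ("No Disk Cache" in the offering) bypasses the host cache -->
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/libvirt/images/vm-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Checking `virsh dumpxml` for this attribute is a quick way to confirm which cache mode a running VM actually got.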



But for now, other than the write cache type, is there anything else that can 
be done to improve disk write speeds? I checked Red Hat's guides on optimising 
VMs and I seem to have most of it in place, but write speeds remain at ~50 Mbps.

On 7/10/23 06:25, Jithin Raju wrote:

Hi Groete,



The VM's virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM's compute 
offering. If it is not specified in the compute offering, it takes the value 
from the global setting vm.network.throttling.rate.







-Jithin



From: Levin Ng <levindec...@gmail.com><mailto:levindec...@gmail.com>

Date: Sunday, 9 July 2023 at 5:00 AM

To: users@cloudstack.apache.org<mailto:users@cloudstack.apache.org> 
<users@cloudstack.apache.org><mailto:users@cloudstack.apache.org>, Granwille 
Strauss <granwi...@namhost.com><mailto:granwi...@namhost.com>

Cc: vivek.ku...@indiqus.com<mailto:vivek.ku...@indiqus.com> 
<vivek.ku...@indiqus.com><mailto:vivek.ku...@indiqus.com>, Nux 
<n...@li.nux.ro><mailto:n...@li.nux.ro>

Subject: Re: Write Speeds

Dear Groete,



https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java



https://libvirt.org/formatnetwork.html#quality-of-service



This is in kilobytes/second, you have to divide by 8.



1 Gbps / 8 = 128 MB/s = 128000 KB/s
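In other words, CloudStack's figure here treats 1 Mbps as 1024 kbit/s, i.e. 128 KB/s, so the offering rate and the value in the VM XML convert both ways as follows (a sketch of the arithmetic, not CloudStack's actual code):

```python
# CloudStack's network rate (Mbps) vs the libvirt bandwidth value (KB/s):
# 1 Mbps is treated as 1024 kbit/s, and 1024 / 8 = 128 KB/s per Mbps.

def offering_mbps_to_libvirt_kbps(mbps: int) -> int:
    """Offering network rate in Mbps -> libvirt average/peak in KB/s."""
    return mbps * 128

def libvirt_kbps_to_offering_mbps(kbps: int) -> int:
    """libvirt average/peak in KB/s -> offering network rate in Mbps."""
    return kbps * 8 // 1024

print(offering_mbps_to_libvirt_kbps(1000))    # 128000, as seen in the VM XML
print(libvirt_kbps_to_offering_mbps(128000))  # 1000
```

So an `average='128000'` in the interface XML corresponds to the 1000 Mbps offering, not a 128 Mbps cap.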



You can verify with an iperf test, and yes, you need to ensure both the VR and 
the VM have matching bandwidth settings to get a consistent result. You also 
need to pay attention to VR resources: the default system router offering is 
quite limited, and network speed may be throttled if the VR runs out of CPU.



Regards,

Levin







On 8 Jul 2023 at 09:02 +0100, Granwille Strauss 
<granwi...@namhost.com><mailto:granwi...@namhost.com>, wrote:

Hi Levin

Thank you very much, I appreciate your feedback and time replying to me. I 
believe I have picked up on something. The XML dumps of ALL my VMs show the 
following:

<interface type='bridge'>
  <mac address='xxxxxxx'/>
  <source bridge='cloudbr0'/>
  <bandwidth>
    <inbound average='128000' peak='128000'/>
    <outbound average='128000' peak='128000'/>
  </bandwidth>
  <target dev='vnet219'/>
  <model type='virtio'/>
  <link state='up'/>
  <alias name='net0'/>
  <rom bar='off' file=''/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Specifically:

<bandwidth>
  <inbound average='128000' peak='128000'/>
  <outbound average='128000' peak='128000'/>
</bandwidth>

All my VMs have this in place, a 128 Mbps limit, even the VR, which makes no 
sense. I found this thread posted 10 years ago where Nux says this value is 
driven by the service offerings: 
https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
 But all my service offerings are set to 1000 Mbps; see the attached 
screenshots. The 4.18 documentation also confirms that if the values are null, 
which they are in most default service offerings, the values from the global 
settings network.throttling.rate and vm.network.throttling.rate are used, 
which I also have set to 1000, as you can see in the screenshots.



I then found this: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Network+throttling+in+CloudStack
 but it has no KVM details; the part explaining how KVM throttling is applied 
seems to be missing. So, as DuJun said 10 years ago, I feel confused about how 
CloudStack limits the network rate for guests. And yes, I have stopped my VMs 
and rebooted MANY times; it doesn't update the XML at all.

Please also take into account that the documentation states that in shared 
networking there are supposed to be no limits on incoming (ingress) traffic, 
as far as I understand it.

On 7/7/23 23:35, Levin Ng wrote:

Hi Groete,





IMO, you should bypass ACS provisioning entirely to troubleshoot the 
performance first; that gives you a clearer picture of the hardware + KVM 
performance with minimal interference. Then you can compare the libvirt XML 
between plain KVM and ACS to work out where the difference comes from; you 
will see the QoS bandwidth settings in the VM XML if you do.



We are trying to tell you that when you diagnose a throughput problem, you 
should first identify where the bottleneck comes from. iperf is a tool to test 
line speed end to end into your VM; if the result on a 1 Gbps network is near 
800+ Mbps, you can focus on the VM performance or the copy protocol you are 
using. Try different protocols (ssh/rsync/ftp/nfs) and see if anything differs.



You have already tested write-back caching, which will improve disk I/O 
performance. It is another story to dig into the pros and cons of the write 
cache: in some cases there is a risk of corrupting the VM filesystem, which is 
why you need to learn about each cache mode.



VM guest performance is influenced by many factors; you cannot expect a VM to 
perform nearly as well as bare metal. Such optimization is a long journey; 
take your time and improve gradually. There are a lot of KVM tuning guides you 
can reference and validate on your hardware. Read each tuning carefully: it 
may bring improvement, but it may also introduce risk.





Regards,

Levin









On 7 Jul 2023 at 21:24 +0100, Granwille Strauss 
<granwi...@namhost.com><mailto:granwi...@namhost.com>, wrote:

Sorry that I have to ask, but can you be a bit more specific, please? The only 
QoS settings I see in the service offering are "None", "Hypervisor" and 
"Storage", which don't really seem network related. Or am I missing the point? 
Note that I use the default offerings for the VR and VMs, with slight tweaks 
such as setting local storage etc., and I only increased the network rate from 
200 Mbps to 1000 Mbps.



So can you kindly explain which QoS settings you are referring to, please?

PS: write-back disk caching seems to give the VM a slight increase; I now see 
writes at 190 Mbps, up from ~70 Mbps.

On 7/7/23 21:11, Vivek Kumar wrote:

Hello,



iperf will simply tell you the bandwidth of the open pipe between two VMs, so 
I don't think it depends on disk performance. It's better to check the network 
QoS at every layer, VR and VM.







Vivek Kumar

Sr. Manager - Cloud & DevOps

TechOps | Indiqus Technologies



vivek.ku...@indiqus.com

www.indiqus.com
















 

On 07-Jul-2023, at 9:44 PM, Granwille Strauss 
<granwi...@namhost.com.INVALID><mailto:granwi...@namhost.com.INVALID> wrote:

Hi Levin



Thank you, I am aware of the network offering; the first thing I did was make 
sure it was set to accommodate the KVM host's entire 1 Gbps uplink. Now that I 
think of it, my earlier iperf tests were always stuck at 50 Mbps, but I 
believe that is caused by the disk write speeds, which is what creates the 
network bottleneck, at least as far as I can tell. I will double-check this 
again.



But there is some sort of limit on the VM disk in place. fio tests show write 
speeds in the range of 50-90 MB/s on the VM, while fio on the KVM host 
confirms over 400 MB/s.



On 7/7/23 18:08, Levin Ng wrote:

Hi Groete,



Forgot to mention: since you are talking about file copies between remote 
servers, be aware there are network QoS options in the offering; make sure the 
limits are correct. Prove it with iperf tests too, both directly between 
servers and via the virtual router. Hope you can narrow down the problem soon.



Regards,

Levin



On 7 Jul 2023 at 16:40 +0100, Granwille Strauss 
<granwi...@namhost.com><mailto:granwi...@namhost.com> 
<mailto:granwi...@namhost.com><mailto:granwi...@namhost.com>, wrote:

Hi Levin

Thank you, yes, I leave IOPS empty. The KVM host has SSDs in a hardware RAID 5 
configuration, and I am using a local storage pool, yes. I will run the fio 
tests and also play around with the controller cache settings to see what 
happens, and will provide feedback on this soon.

On 7/7/23 17:23, Levin Ng wrote:

Hi Groete,



You should run a fio test on the VM and on the KVM host to get a baseline 
first. SSDs are tricky devices: when the cache fills up or the disk is nearly 
full, performance drops significantly, especially on consumer-grade SSDs. 
There is an option to limit IOPS in the ACS offering settings; I believe you 
left it empty, so there is no limit. When you say the KVM host uses SSDs, I 
assume you are using a local disk pool, right? If you have a RAID controller 
underneath, try toggling the controller cache; SSD performance can vary a lot 
with different disk controller cache settings.



Controller types scsi and virtio perform similarly, so no need to worry about 
that. Of course, in general, using RAW format with thick provisioning gives 
the best I/O performance, but it consumes space and lacks snapshot capability, 
so most of the time it is not the preferred path.



Please gather more information first.



Regards,

Levin

On 7 Jul 2023 at 15:30 +0100, Granwille Strauss 
<granwi...@namhost.com.invalid><mailto:granwi...@namhost.com.invalid> 
<mailto:granwi...@namhost.com.invalid><mailto:granwi...@namhost.com.invalid>, 
wrote:

Hi Guys

Does CloudStack have a disk write speed limit somewhere in its settings? We 
have been transferring many files from remote servers to VMs on our CloudStack 
instance, and we recently noticed that the VM write speeds are all limited to 
about 5-8 MB/s. But the underlying hardware of the KVM host uses SSDs capable 
of write speeds of 300-600 MB/s. The disk offering on my current VMs is set to 
"No Disk Cache" with thin provisioning; could this be the reason? I understand 
that "Write Back Disk Cache" has better write speeds. Also, my VMs use virtio 
as the disk controller. What could I be missing in this case?

--

Regards / Groete

Granwille Strauss  //  Senior Systems Admin

e: granwi...@namhost.com

m: +264 81 323 1260

w: www.namhost.com

Namhost Internet Services (Pty) Ltd,

24 Black Eagle Rd, Hermanus, 7210, RSA

The content of this message is confidential. If you have received it by 
mistake, please inform us by email reply and then delete the message. It is 
forbidden to copy, forward, or in any way reveal the contents of this message 
to anyone without our explicit consent. The integrity and security of this 
email cannot be guaranteed over the Internet. Therefore, the sender will not be 
held liable for any damage caused by the message. For our full privacy policy 
and disclaimers, please go to https://www.namhost.com/privacy-policy
















