Re: Write Speeds

2023-07-14 Thread Granwille Strauss

Hi Guys

Just want to get back to you all. Thank you very much for your input; it 
has helped me greatly, and I have learned a few new things too. I want to 
confirm that write speeds were never the issue; it was a network 
bottleneck instead. It was not related to ACS either, but rather to the 
default network configuration inside the VMs themselves.



I started applying network optimisations at the kernel level in my VMs, 
such as setting the following sysctl values:



net.ipv4.tcp_congestion_control=bbr
net.ipv4.tcp_notsent_lowat=131072
net.ipv4.tcp_rmem=8192 262144 536870912
net.ipv4.tcp_wmem=4096 16384 536870912
net.ipv4.tcp_window_scaling=1
net.core.rmem_max=536870912
net.core.wmem_max=536870912
net.ipv4.tcp_fastopen=3
net.ipv4.tcp_max_syn_backlog=4096
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_sack=1


Speeds have now increased from 50 Mbps to 150 Mbps; I wish I had thought 
of this earlier. It's not quite there yet, though: I want to optimise 
further while avoiding trade-offs. I have read up a little, but I would 
appreciate it if anyone is willing to share more network optimisations 
for VMs on 1000 Mbps links.
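
For anyone wanting to reproduce this, one way to apply the values persistently is sketched below (the drop-in file name is just an example; the first command simply lists which congestion-control algorithms the kernel offers, so you can confirm bbr is available before making it the default):

# confirm bbr is available
sysctl net.ipv4.tcp_available_congestion_control

# persist the tuning in a dedicated drop-in and reload
cat <<'EOF' > /etc/sysctl.d/99-vm-network-tuning.conf
net.ipv4.tcp_congestion_control = bbr
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
net.ipv4.tcp_rmem = 8192 262144 536870912
net.ipv4.tcp_wmem = 4096 16384 536870912
EOF
sysctl --system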


On 7/10/23 19:53, Alex Mattioli wrote:

It doesn't necessarily get throttled; the added latency will definitely impact 
the maximum bandwidth achievable per stream, especially if you are using TCP. 
In this case a bandwidth-delay calculator can help you find the maximum 
theoretical bandwidth for a given latency:
https://www.switch.ch/network/tools/tcp_throughput/?do+new+calculation=do+new+calculation
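
To make the effect concrete (the ~180 ms round-trip time below is only an assumed figure for an intercontinental path, not a measured one): with the default 64 KB TCP window, a single stream cannot exceed roughly 65536 bytes x 8 / 0.18 s ≈ 2.9 Mbps, while reaching 1 Gbps at that latency needs a window of about 1 Gbps x 0.18 s / 8 ≈ 22.5 MB, which is exactly why window scaling and larger tcp_rmem/tcp_wmem maxima matter on long paths.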





From: Granwille Strauss
Sent: Monday, July 10, 2023 6:04 PM
To: users@cloudstack.apache.org
Cc: Levin Ng; Jithin Raju; vivek.ku...@indiqus.com
Subject: Re: Write Speeds


Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM host to 
establish a baseline to the remote USA VM:

- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, this time against a remote host in a DC 
close to ours, in the same country that we use:

- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that traffic from the USA VM to the 
local KVM host gets throttled somewhere along the way? Based on the results 
above, the throttling doesn't seem to come from ISPs inside our country.

So yeah, somewhere some ISP is throttling along the USA routes.
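
For anyone reproducing this, the tests were along these lines; a sketch using iperf3, where remote-ip is a placeholder:

iperf3 -s                        # on the far end
iperf3 -c remote-ip -t 30        # single TCP stream from the KVM host
iperf3 -c remote-ip -t 30 -R     # reverse direction, far end sending
iperf3 -c remote-ip -t 30 -P 8   # several parallel streams

If several parallel streams together get close to line rate while a single stream stays at ~50 Mbps, that points to a per-stream latency/window limit rather than ISP throttling.
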
On 7/10/23 15:56, Levin Ng wrote:

Hi Groete,



I'm not sure what your network setup in ACS is, but a test between two public 
IPs at ~500 Mbps sounds like you are saturating the inbound/outbound traffic 
on a single network path. Can you do a test from outside ACS to your VM using 
an IP in the same public network segment? That avoids routing and confusion. 
Also, which ACS network driver are you using? If VXLAN, better check multicast 
performance on the network switch.



The remote performance clearly shows the ISP has put some limit on the line; 
you have to check with them. Unless your line is end-to-end (Metro Ethernet 
etc.), throughput is not always guaranteed.





On the disk performance, could you share your fio test command and results 
beforehand? I'm assuming you are doing something like:



fio -filename=./testfile.bin -direct=1 -iodepth 8 -thread -rw=randrw 
-rwmixread=50 -ioengine=psync -bs=4k -size=1000M -numjobs=30 -runtime=600 
-group_reporting -name=mytest





Regards,

Levin

On 10 Jul 2023 at 11:33 +0100, Granwille Strauss <granwi...@namhost.com> wrote:

Hi Guys

Thank you, I have been running more tests now with the feedback you guys gave. 
Firstly, I want to break this up into two sections:

1. Network:

- So I have been running iperf tests between my VMs on their public network, 
and these give me speeds of ~500 Mbps. Keep in mind this is between two local 
VMs on the same KVM host, but over the public network.

- I then ran iperf tests in and out from my local VMs to remote servers, and 
this is where things get odd. From the remote VM in the USA, an iperf test to 
my local VM shows ~50 Mbps, and a test from my local VM to the remote USA VM 
achieves the same ~50 Mbps. I ran my iperf tests with 1 GB and 2 GB flags and 
the results remain constant.

- During all these tests I kept an eye on my VR's resources (it uses the 
default service offering); it never spiked or reached any thresholds.

Is it safe to assume that, because of the massive distance between the remote 
VM and my local VMs, the drop to ~50 Mbps is normal? Keep in mind the remote 
VM has a 1 Gbps line too, and it is managed by a big ISP in the USA. To me 
it's quite a massive drop from 1000 Mbps to 50 Mbps, which doesn't quite make 
sense; I would have expected at least 150 Mbps.

2. Disk Write Speed:

- It seems the only change that can be made is to implement disk cache 
options, and so far write-back seems to be common practice for most cloud 
providers, given that they have the necessary power redundancy and VM backup 
images in place.

But for now, other than write cache types, is there anything else that can be 
done to improve disk write speeds? I checked the Red Hat guides on optimising 
VMs and I seem to have most of it in place, but write speeds remain at ~50 Mbps.

Re: Write Speeds

2023-07-11 Thread Granwille Strauss

I did the following:

1. Tailed the agent log at /var/log/cloudstack/agent/agent.log
2. Set enable.io.uring=true in the agent properties
3. Restarted cloudstack-agent
4. Stopped the VM and set io.policy=io.uring on it
5. Set io.policy=io.uring in the local primary storage settings for KVM
6. Started the VM up again
7. Restarted the cloudstack-agent again.
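
Roughly, the commands behind those steps; a sketch only, and the VM name is the one from the earlier dumpxml test:

tail -f /var/log/cloudstack/agent/agent.log              # step 1, watch the agent log
grep io.uring /etc/cloudstack/agent/agent.properties     # confirm enable.io.uring=true is really set
systemctl restart cloudstack-agent                       # steps 3 and 7
virsh dumpxml i-2-182-VM | grep io_uring                 # after the VM starts: is io='io_uring' present?

Steps 4-6 were done in the UI: stop the VM, set io.policy in the VM settings and on the primary storage, then start it again.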

You can find the log of all these steps and what it triggered here: 
https://we.tl/t-AJdC2bXa4u


And as you requested, this is all I see when I grep "uring" in the log:



grep "uring" /var/log/cloudstack/agent/agent.log
2023-07-11 11:59:58,994 INFO [kvm.resource.LibvirtComputingResource] 
(main:null) (logid:) IO uring driver for Qemu: disabled
2023-07-11 12:04:59,064 INFO [kvm.resource.LibvirtComputingResource] 
(main:null) (logid:) IO uring driver for Qemu: disabled
2023-07-11 13:16:07,433 INFO [kvm.resource.LibvirtComputingResource] 
(main:null) (logid:) IO uring driver for Qemu: disabled
2023-07-11 13:17:48,056 INFO [kvm.resource.LibvirtComputingResource] 
(main:null) (logid:) IO uring driver for Qemu: disabled


That is followed by this warning, which you will also see in the log:

2023-07-11 13:16:07,520 WARN [kvm.storage.KVMStoragePoolManager] 
(main:null) (logid:) Duplicate StorageAdaptor type PowerFlex, not 
loading com.cloud.hypervisor.kvm.storage.ScaleIOStorageAdaptor
2023-07-11 13:16:07,522 INFO [kvm.resource.LibvirtComputingResource] 
(main:null) (logid:) No libvirt.vif.driver specified. Defaults to 
BridgeVifDriver.
2023-07-11 13:16:07,725 INFO  [cloud.serializer.GsonHelper] 
(main:null) (logid:) Default Builder inited.
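
Since the agent keeps reporting the io_uring driver as disabled, one thing worth double-checking is the qemu and libvirt versions the agent actually sees; a sketch, and the qemu binary path depends on the distribution:

virsh version                       # libvirt version and the running hypervisor (QEMU) version
/usr/libexec/qemu-kvm --version     # on EL-based hosts; on Ubuntu: qemu-system-x86_64 --version

CloudStack only enables io_uring when it detects QEMU >= 5.0 and libvirt >= 6.3.0, per the GitHub issue linked later in this thread.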


On 7/11/23 13:03, Slavka Peleva wrote:
Can you share the full agent.log with the VM deployment? Also, can you 
search the agent.log for "IO uring driver for Qemu:  " and share 
the whole log message?


Best regards,
Slavka

On Tue, Jul 11, 2023 at 1:55 PM Granwille Strauss wrote:


Hi

Thank you for the feedback; that's exactly what I did earlier, and it
didn't do anything. I also set it in the global config, and you can
also set it for the local primary storage. Nothing changed in the
XML dumps.

On 7/11/23 12:50, Slavka Peleva wrote:

Can you set it from the VM's settings? The VM has to be stopped.
I will check the global config later.




On Tue, Jul 11, 2023 at 1:38 PM Granwille Strauss wrote:

Hi

I am using the latest CS 4.18.0.0

On 7/11/23 12:28, Slavka Peleva wrote:

Can you share the CS version that you're using? There are some 
differences
in enabling the IO policy depending on the version.

Best regards,
Slavka

On Tue, Jul 11, 2023 at 1:05 PM Granwille Strauss wrote:


Nope, didn't work. I added enable.io.uring=true to
/etc/cloudstack/agent/agent.properties and restarted cloudstack-agent,
shut down one VM, set io.policy=io.uring for it, then started the VM up
again, and the XML dump remains the same:

virsh dumpxml i-2-182-VM | grep drive
   
   
   


On 7/11/23 11:53, Slavka Peleva wrote:

Hi,

Can you check if the `enable.io.uring=true` property in the
agent.properties file is set? You have to restart the cloudstack-agent
service if you make any changes.

Best regards,
Slavka



Re: Write Speeds

2023-07-11 Thread Slavka Peleva
Can you share the full agent.log with the VM deployment? Also, can you
search the agent.log for "IO uring driver for Qemu:  " and share the
whole log message?

Best regards,
Slavka


Re: Write Speeds

2023-07-11 Thread Granwille Strauss

Hi

Thank you for the feedback; that's exactly what I did earlier, and it didn't 
do anything. I also set it in the global config, and you can also set it for 
the local primary storage. Nothing changed in the XML dumps.



Re: Write Speeds

2023-07-11 Thread Slavka Peleva
Can you set it from the VM's settings? The VM has to be stopped. I will
check the global config later.



Re: Write Speeds

2023-07-11 Thread Granwille Strauss

Hi

I am using the latest CS 4.18.0.0



Re: Write Speeds

2023-07-11 Thread Slavka Peleva
Can you share the CS version that you're using? There are some differences
in enabling the IO policy depending on the version.

Best regards,
Slavka

On Tue, Jul 11, 2023 at 1:05 PM Granwille Strauss 
wrote:

> Nope, didn't work. I added enable.io.uring=true to
> /etc/cloudstack/agent/agent.properties, restarted cloudstack-agent.
> Shutdown one VM machine. Set io.policy=io.uring for this VM then start up
> VM again and XML dump remains the same:
>
> virsh dumpxml i-2-182-VM | grep drive
>   
>   
>   
>
>
> On 7/11/23 11:53, Slavka Peleva wrote:
>
> Hi,
>
> Can you check if the `enable.io.uring=true` property in the
> agent.properties file is set? You have to restart the cloudstack-agent
> service if you make any changes.
>
> Best regards,
> Slavka
>
>
> --
> Regards / Groete
>
>  Granwille Strauss  //  Senior Systems Admin
>
> *e:* granwi...@namhost.com
> *m:* +264 81 323 1260 <+264813231260>
> *w:* www.namhost.com
>
>  
> 
> 
> 
>
>
> 
>
> Namhost Internet Services (Pty) Ltd,
>
> 24 Black Eagle Rd, Hermanus, 7210, RSA
>
>
>
> The content of this message is confidential. If you have received it by
> mistake, please inform us by email reply and then delete the message. It is
> forbidden to copy, forward, or in any way reveal the contents of this
> message to anyone without our explicit consent. The integrity and security
> of this email cannot be guaranteed over the Internet. Therefore, the sender
> will not be held liable for any damage caused by the message. For our full
> privacy policy and disclaimers, please go to
> https://www.namhost.com/privacy-policy
>
> [image: Powered by AdSigner]
> 
>


Re: Write Speeds

2023-07-11 Thread Granwille Strauss
Nope, didn't work. I added enable.io.uring=true to 
/etc/cloudstack/agent/agent.properties and restarted cloudstack-agent, 
shut down one VM, set io.policy=io.uring for it, then started the VM up 
again, and the XML dump remains the same:



virsh dumpxml i-2-182-VM | grep drive
  
  
  


On 7/11/23 11:53, Slavka Peleva wrote:

Hi,

Can you check if the `enable.io.uring=true` property in the
agent.properties file is set? You have to restart the cloudstack-agent
service if you make any changes.

Best regards,
Slavka




Re: Write Speeds

2023-07-11 Thread Slavka Peleva
Hi,

Can you check if the `enable.io.uring=true` property in the
agent.properties file is set? You have to restart the cloudstack-agent
service if you make any changes.

Best regards,
Slavka


Re: Write Speeds

2023-07-10 Thread Granwille Strauss
Does anyone have any idea why ACS is not detecting that I have new enough 
versions of qemu and libvirt to enable io_uring?


On 7/10/23 19:57, Jorge Luiz Correa wrote:

Hum, so strange.

I'm not a CloudStack specialist, but it looks like the code is simple 
and just tests the versions of qemu and libvirt:


https://github.com/apache/cloudstack/pull/5012/commits/c7c3dd3dd9b8869f45c5bd9c17af83d230ac7886

Here, at the bottom of the slide, he shows this simple test too.

https://youtu.be/y0NYuUtm5Kk?list=PLnIKk7GjgFlYfut3ZIOrvN--_YuSPIerQ&t=791 



For some reason CloudStack is not detecting your versions. My disk 
offering is simple, Thin provisioning, custom disk size, QoS = none, 
Write-cache type = no disk cache. I'm using Ubuntu Server 22.04 and 
CloudStack 4.17.2.


On Mon, 10 Jul 2023 at 14:22, Granwille Strauss wrote:


Jorge, I thought so too, but the XML dumps do not contain it, so I
figured the io.policy setting needs to be set on the management
server. Here are my KVM details:


Compiled against library: libvirt 8.0.0
Using library: libvirt 8.0.0
Using API: QEMU 8.0.0
Running hypervisor: QEMU 6.2.0


qemu guest agents also exist on VMs. And here's a XML dump:


root@athena03 ~ $ virsh dumpxml i-2-120-VM | grep driver
  
  
root@athena03 ~ $

On 7/10/23 18:51, Jorge Luiz Correa wrote:

Granwille, no special configuration, just the CloudStack default
behavior. As I understand, CloudStack can detect automatically if
host supports this feature based on qemu and libvirt versions.

https://github.com/apache/cloudstack/issues/4883#issuecomment-813955599


What versions of kernel, qemu and libvirt are you using in KVM host?

On Mon, 10 Jul 2023 at 13:26, Granwille Strauss wrote:

Hi Jorge

How do you actually enable io_uring via CloudStack?
My KVM does have the necessary requirements.

I enabled the io.policy setting in the global settings, the local storage
settings and the VM settings via the UI, and my XML dump of the VM doesn't
include io_uring under the driver element for some reason.

RE: Write Speeds

2023-07-10 Thread Alex Mattioli
It doesn't necessarily get throttled; the added latency will definitely impact 
the maximum bandwidth achievable per stream, especially if you are using TCP. 
In this case a bandwidth-delay calculator can help you find the maximum 
theoretical bandwidth for a given latency:
https://www.switch.ch/network/tools/tcp_throughput/?do+new+calculation=do+new+calculation





From: Granwille Strauss 
Sent: Monday, July 10, 2023 6:04 PM
To: users@cloudstack.apache.org
Cc: Levin Ng ; Jithin Raju ; 
vivek.ku...@indiqus.com
Subject: Re: Write Speeds



Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Vivek, yes I have, and it's in line with my VMs:


  
    
    
  
  
  


On 7/10/23 18:50, Vivek Kumar wrote:

Did you check the network QoS on the VR as well, if there is any?



Re: Write Speeds

2023-07-10 Thread Jorge Luiz Correa
Granwille, no special configuration, just the CloudStack default behavior.
As I understand it, CloudStack can automatically detect whether the host
supports this feature based on the qemu and libvirt versions.

https://github.com/apache/cloudstack/issues/4883#issuecomment-813955599

What versions of kernel, qemu and libvirt are you using in KVM host?

On Mon, 10 Jul 2023 at 13:26, Granwille Strauss <granwi...@namhost.com> wrote:

> Hi Jorge
>
> How do you actually enable io_uring via Cloustack?
> My KVM does have the necessary requirements.
>
> I enabled io.policy settings in global settings, local storage
> and in the VM settings via UI. And my xml dump of VM doesn’t include
> io_uring under driver for some reason.
>

Re: Write Speeds

2023-07-10 Thread Vivek Kumar
Did you check the network QoS on the VR as well, if there is any?
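
One quick way to see whether libvirt is applying any QoS on the VR's interfaces is sketched below; the router domain and vNIC names are placeholders:

virsh domiftune r-123-VM vnet0    # prints inbound/outbound average, peak and burst for that interface

Values of 0 generally mean no libvirt-level throttle on that interface.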



Re: Write Speeds

2023-07-10 Thread Granwille Strauss
Hi Jorge

How do you actually enable io_uring via CloudStack? My KVM does have the necessary requirements.

I enabled the io.policy setting in the global settings, the local storage settings and the VM settings via the UI, and my XML dump of the VM doesn't include io_uring under the driver element for some reason.
  

  
  


Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM host 
to establish a baseline to the remote USA VM:

- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, this time against a remote host in a 
DC close to ours, in the same country that we use:

- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that traffic from the USA VM to the 
local KVM host gets throttled somewhere along the way? Based on the results 
above, the throttling doesn't seem to come from ISPs inside our country.


So yeah, somewhere some ISP is throttling during the USA routes.

On 7/10/23 06:25, Jithin Raju wrote:

Hi  Groete,

The VM virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM's compute 
offering. If it is not specified in the compute offering, the value is taken 
from the global setting vm.network.throttling.rate.



-Jithin
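
A quick way to inspect both of those knobs, sketched with CloudMonkey; the offering name and the value here are only examples:

list configurations name=vm.network.throttling.rate
list serviceofferings name=MyOffering                              # check the networkrate field
update configuration name=vm.network.throttling.rate value=1000   # example value, in Mbit/s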

From: Levin Ng
Date: Sunday, 9 July 2023 at 5:00 AM
To: users@cloudstack.apache.org, Granwille Strauss
Cc: vivek.ku...@indiqus.com, Nux
Subject: Re: Write Speeds
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes per second; you have to divide by 8:

1 Gbps / 8 bit = 128 MB = 128000 KBps
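
For illustration, the resulting libvirt QoS element on the VM's interface would look something like the sketch below; values are in kilobytes per second and just reuse the 1 Gbps example above, not numbers taken from a real VR:

<bandwidth>
  <inbound average='128000' peak='128000'/>
  <outbound average='128000' peak='128000'/>
</bandwidth>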

You can verify with an iperf test, and yes, you need to ensure the VR and VM 
bandwidth settings match to get a consistent result. Something else to pay 
attention to is VR resources: the default system router offering is quite 
limited, and network speed may be throttled if the VR runs out of CPU.

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss, wrote:

Hi Levin
Thank you very much, I do appreciate your feedback and time replying back to 
me. I believe I have picked up on something. My VMs XML dump ALL show the 
following:

 


Specifically:
   
All

Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Jorge

Thank you so much for this. I used your FIO config and surprisingly it 
seems fine:


write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W) 
1300MiB-1300MiB, (T) 1300MiB-1300MiB, ioengine=libaio, iodepth=1

fio-3.19

Run status group 0 (all jobs):
   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s 
(1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
  WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s (336MB/s-336MB/s), 
io=1300MiB (1363MB), run=4052-4052msec


This is without enabling io_uring. I see I can enable it per VM using 
the UI by setting the io.policy = io_uring. Will enable this on a few 
VMs and see if it works better.
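
For comparison with the ~50 Mbps write figure discussed earlier, a small-block random-write run would look roughly like this (sizes and depths are only examples):

fio --ioengine=libaio --direct=1 --name=randwrite-test --filename=/tmp/randwrite-test.fio --bs=4k --iodepth=32 --size=1G --readwrite=randwrite

Large sequential blocks such as bs=1300m mostly measure raw throughput, while 4k random writes are closer to what everyday VM workloads feel.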


On 7/10/23 15:41, Jorge Luiz Correa wrote:

Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
the XML of virtual machines looking at disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to do some tests at some points like your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk> ... fb46fd2c59bd4127851b ... </disk>  (disk/driver attributes stripped in the archive)

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)


<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk> ... fb46fd2c59bd4127851b ... </disk>  (disk/driver attributes stripped in the archive)

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

First run (a little bit slow if using "thin" because need to allocate space
in qcow2):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly in local SSD of host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

OBS.: parameters of fio test need to be changed to test in your environment
as it depends on the number of cpus, memory, --bs, --iodepth etc.

Host is running 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is
4.17.2. So, VM in local SSD of host could have very similar disk
performance from the host.

I hope this could help you!

Thanks.




Re: Write Speeds

2023-07-10 Thread Levin Ng
Hi Groete,

I’m not sure what is your network setting in ACS, but test between two public 
IP with ~500Mbps  sound like u are saturated by in/out bound traffics in the 
single network path, can you do a test from outside ACS to your VM using an 
same public network segment IP, it will avoid network routing and confusion., 
what is your ACS network driver using? If vxlan, better check with network 
switch multicast performance.

The remote performances clearly shown the ISP put some limit on the line, you 
have to check with them. Unless your line is end-to-end, Metro-Ethernet etc… 
otherwise it is not always have guarantee throughput.


On the disk performance, should you share your fio test command and result 
beforehand. I’m assuming you are doing something like

fio -filename=./testfile.bin -direct=1 -iodepth 8 -thread -rw=randrw 
-rwmixread=50 -ioengine=psync -bs=4k -size=1000M -numjobs=30 -runtime=600 
-group_reporting -name=mytest


Regards,
Levin
On 10 Jul 2023 at 11:33 +0100, Granwille Strauss , wrote:
> Hi Guys
> Thank you, I have been running more tests now with the feedback you guys 
> gave. Firstly, I want to break this up into two sections:
> 1. Network:
> - So I have been running iperf tests between my VMs on their public network, 
> and my iperf tests gives me speeds of ~500 Mbps, keep in mind this in between 
> two local VMs on the same KVM but on public network.
> - I then run iperf tests in and out from my local VMs to remote servers, this 
> is where it does funny things. From the remote VM in USA, I run an iperf test 
> to my local VM, the speeds show ~50 Mbps. And if I run a test from my local 
> VM to a remote USA VM the same ~50 Mbps speeds are accomplished. I ran my 
> iperf tests with 1 GB and 2GB flags and the results remain constant
> - During all these test I kept an eye on my VR resources, which use default 
> service offerings, it never spiked or reach thresholds.
> Is it safe to assume that because of the MASSIVE distance between the remote 
> VM and my local VMs, the speed dropping to ~50 Mbps is normal? Keep in mind 
> the remote VM has 1 Gbps line too and this VM is managed by a big ISP 
> provider in the USA. To me its quite a massive drop from 1000 Mbps to 50 
> Mbps, this kinda does not make sense to me. I would understand at least 150 
> Mbps.
> 2. Disk Write Speed:
> - It seems the only changes that can be made is to implement disk cache 
> options. And so far I see write-back seems to be common practise for most 
> cloud providers, given that they have the necessary power redundancy and VM 
> backup images in place.
>
> But for now, other than write cache types are there anything else that can be 
> done to improve disk writing speeds. I checked RedHat guides on optimising 
> VMs and I seem to have most in place, but write speeds remain at ~50 Mbps.
> On 7/10/23 06:25, Jithin Raju wrote:
> > Hi  Groete,
> >
> > The VM virtual NIC network throttling is picked up from its compute 
> > offering. You may need to create a new compute offering and change the VM’s 
> > compute offering. If it is not specified in the compute offering, means it 
> > is taking the values from the global settings: vm.network.throttling.rate .
> >
> >
> >
> > -Jithin
> >
> > From: Levin Ng 
> > Date: Sunday, 9 July 2023 at 5:00 AM
> > To: users@cloudstack.apache.org , Granwille 
> > Strauss 
> > Cc: vivek.ku...@indiqus.com , Nux 
> > Subject: Re: Write Speeds
> > Dear Groete,
> >
> > https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java
> >
> > https://libvirt.org/formatnetwork.html#quality-of-service
> >
> > This is in kilobytes/second, u have to divide by 8
> >
> > 1Gbps / 8bit = 128MB = 128000KBps
> >
> > You can verify by iperf test, and yes, u need to ensure both VR and VM 
> > match bandwidth settings to get a consistent result, something u also need 
> > to pay attention on VR resource, default system router resource offering is 
> > quite limited, the network speed may throttled if VR running are out of CPU 
> > resource.
> >
> > Regards,
> > Levin
> >
> >
> >
> > On 8 Jul 2023 at 09:02 +0100, Granwille Strauss , 
> > wrote:
> > > Hi Levin
> > > Thank you very much, I do appreciate your feedback and time replying back 
> > > to me. I believe I have picked up on something. My VMs XML dump ALL show 
> > > the following:
> > > <interface type='bridge'> ... <source bridge='cloudbr0'/> <bandwidth> <inbound average='128000' peak='128000'/> ... </bandwidth> ... </interface>

Re: Write Speeds

2023-07-10 Thread Jorge Luiz Correa
Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
the XML of virtual machines looking at disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.
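If it helps, a quick way to confirm the versions on the host and whether a given 
guest actually got io_uring is something like this (illustrative commands; the 
VM name is a placeholder):

/usr/bin/qemu-system-x86_64 --version
virsh version
# look for io='io_uring' on the disk driver line of the running guest
virsh dumpxml <vm-instance-name> | grep "driver name='qemu'"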

I tried some tests at a few points similar to your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk> ... fb46fd2c59bd4127851b ... </disk>  (disk/driver attributes stripped in the archive)


fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)


<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk> ... fb46fd2c59bd4127851b ... </disk>  (disk/driver attributes stripped in the archive)

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

First run (a little slower when using "thin" because the qcow2 still needs to 
allocate space):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly in local SSD of host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

OBS.: the fio parameters need to be adapted to your environment, as results 
depend on the number of CPUs, memory, --bs, --iodepth etc.

The host is running a 5.15.0-43 kernel, QEMU 6.2 and libvirt 8. CloudStack is 
4.17.2. So a VM on the host's local SSD can get disk performance very similar 
to the host itself.

I hope this could help you!

Thanks.



Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Guys

Thank you, I have been running more tests now with the feedback you guys 
gave. Firstly, I want to break this up into two sections:


1. Network:

- So I have been running iperf tests between my VMs on their public network, 
and my iperf tests give me speeds of ~500 Mbps; keep in mind this is between 
two local VMs on the same KVM host, but over the public network.


- I then ran iperf tests in and out from my local VMs to remote servers, and 
this is where it does funny things. From the remote VM in the USA, I run an 
iperf test to my local VM and the speeds show ~50 Mbps. And if I run a test 
from my local VM to the remote USA VM, the same ~50 Mbps is achieved. I ran 
my iperf tests with 1 GB and 2 GB flags and the results remain constant.


- During all these tests I kept an eye on my VR resources, which use the 
default service offering; they never spiked or reached any thresholds.


Is it safe to assume that, because of the massive distance between the remote 
VM and my local VMs, the drop to ~50 Mbps is normal? Keep in mind the remote 
VM has a 1 Gbps line too, and it is managed by a big ISP in the USA. To me a 
drop from 1000 Mbps to 50 Mbps is huge and does not quite make sense; I would 
have expected at least 150 Mbps.


2. Disk Write Speed:

- It seems the only change that can be made is to set a disk cache option, and 
write-back appears to be common practice for most cloud providers, given that 
they have the necessary power redundancy and VM backup images in place.


But for now, other than the write cache type, is there anything else that can 
be done to improve disk write speeds? I checked the Red Hat guides on 
optimising VMs and I seem to have most of it in place, but write speeds remain 
at ~50 Mbps.
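For what it's worth, one way to tell whether a single TCP stream is simply 
window-limited by the long RTT, rather than throttled somewhere, is to compare 
a single-stream and a parallel-stream iperf3 run (illustrative commands, 
assuming iperf3 -s is already running on the remote side):

# single TCP stream: capped by TCP window size x RTT on intercontinental paths
iperf3 -c <remote-ip> -t 30

# eight parallel streams: if the aggregate gets much closer to line rate,
# the per-stream window/latency is the limit rather than an ISP cap
iperf3 -c <remote-ip> -P 8 -t 30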


On 7/10/23 06:25, Jithin Raju wrote:

Hi  Groete,

The VM virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM’s compute 
offering. If it is not specified in the compute offering, means it is taking 
the values from the global settings: vm.network.throttling.rate .



-Jithin

From: Levin Ng
Date: Sunday, 9 July 2023 at 5:00 AM
To:users@cloudstack.apache.org  , Granwille 
Strauss
Cc:vivek.ku...@indiqus.com  , Nux
Subject: Re: Write Speeds
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes/second, u have to divide by 8

1Gbps / 8bit = 128MB = 128000KBps

You can verify by iperf test, and yes, u need to ensure both VR and VM match 
bandwidth settings to get a consistent result, something u also need to pay 
attention on VR resource, default system router resource offering is quite 
limited, the network speed may throttled if VR running are out of CPU resource.

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss, wrote:

Hi Levin
Thank you very much, I do appreciate your feedback and time replying back to 
me. I believe I have picked up on something. My VMs XML dump ALL show the 
following:

<interface type='bridge'>
  <source bridge='cloudbr0'/>
  <bandwidth>
    <inbound average='128000' peak='128000'/>
  </bandwidth>
  <target dev='vnet219'/>
  <alias name='net0'/>
  <address type='pci' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Specifically:

<inbound average='128000' peak='128000'/>
All my VMs have this in place, a 128 Mbps limit, even the VR, which makes no 
sense. I found this thread posted 10 years ago where Nux says the value is 
affected by the service offerings: 
https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
But all my service offerings are set to 1000 Mbps, see the attached screenshots. 
The 4.18 documentation also confirms that if the values are null, which they are 
for most default service offerings, the values from the global settings 
network.throttling.rate and vm.network.throttling.rate are used, which I have 
also set to 1000, as you can see in the screenshots.

I then found this: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Network+throttling+in+CloudStack
with no KVM details; that part seems to be missing, so it does not tell me how 
KVM throttling is applied. So, as DuJun said 10 years ago, I feel confused about 
how CloudStack limits the network rate for guests. And yes, I have stopped my 
VMs and rebooted MANY times; it does not update the XML at all.
Please also take into account that the documentation states that in shared 
networking there are supposed to be no limits on incoming traffic (ingress), as 
far as I understand it.
On 7/7/23 23:35, Levin Ng wrote:

Hi Groete,


IMO, You should bypass any ACS provisioning to troubleshoot the performance 
case first, which allow you get more idea on the hardware + kvm performance 
with minimal influent, then you can compare the libvirt xml different between 
plain KVM and ACS. That help you sort out the different where it come from, you 
will see QoS bandwidth setting in the VM xml if you do.

We are trying to tell you, when you diagnose the throughput problem, you should 
first identify the bottleneck where it come first.  Iperf is a 

Re: Write Speeds

2023-07-09 Thread Jithin Raju
Hi  Groete,

The VM virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM's compute 
offering. If it is not specified in the compute offering, it takes the value 
from the global setting vm.network.throttling.rate.
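As a sketch (assuming CloudMonkey, cmk, is configured against the management 
server), the relevant values can be checked like this:

# global defaults used when the offering does not specify a network rate (Mbps)
cmk list configurations name=vm.network.throttling.rate
cmk list configurations name=network.throttling.rate

# network rate set on each compute offering (networkrate, also in Mbps)
cmk list serviceofferings filter=name,networkrate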



-Jithin

From: Levin Ng 
Date: Sunday, 9 July 2023 at 5:00 AM
To: users@cloudstack.apache.org , Granwille 
Strauss 
Cc: vivek.ku...@indiqus.com , Nux 
Subject: Re: Write Speeds
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes/second, u have to divide by 8

1Gbps / 8bit = 128MB = 128000KBps

You can verify by iperf test, and yes, u need to ensure both VR and VM match 
bandwidth settings to get a consistent result, something u also need to pay 
attention on VR resource, default system router resource offering is quite 
limited, the network speed may throttled if VR running are out of CPU resource.

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss , wrote:
> Hi Levin
> Thank you very much, I do appreciate your feedback and time replying back to 
> me. I believe I have picked up on something. My VMs XML dump ALL show the 
> following:
> <interface type='bridge'> <source bridge='cloudbr0'/> <bandwidth> <inbound average='128000' peak='128000'/> </bandwidth> <target dev='vnet219'/> <alias name='net0'/> <address type='pci' bus='0x00' slot='0x03' function='0x0'/> </interface>
> Specifically:
> <inbound average='128000' peak='128000'/>
> All my VM has this in place, a 128 Mbps limit even the VR, which makes no 
> sense. I found this thread posted 10 years ago and Nux says that this value 
> is affected by the service offerings: 
> https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
>  But all my service offerings are set to 1000 Mbps. See attached screenshots. 
> The 4.18 documentation also confirms if the values are null, which most 
> default service offerings are, it takes the values set in global settings 
> network.throttling.rate and vm.network.throttling.rate, which I also have set 
> as 1000 as you can see in the screenshots.
>
> I then found this: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Network+throttling+in+CloudStack
>  with no KVM details, seems this part is missing to tell me how KVM 
> throttling is applied. So as DuJun said 10 years ago, I feel confused about 
> how Cloudstack limit the network rate for guest. And yes, I have stopped my 
> VMs and rebooted MANY times it doesn't update the XML at all.
> Please also take into account documentation states that in shared networking 
> there's supposed to be no limits on incoming traffic (ingress), as far as I 
> understand it.
> On 7/7/23 23:35, Levin Ng wrote:
> > Hi Groete,
> >
> >
> > IMO, You should bypass any ACS provisioning to troubleshoot the performance 
> > case first, which allow you get more idea on the hardware + kvm performance 
> > with minimal influent, then you can compare the libvirt xml different 
> > between plain KVM and ACS. That help you sort out the different where it 
> > come from, you will see QoS bandwidth setting in the VM xml if you do.
> >
> > We are trying to tell you, when you diagnose the throughput problem, you 
> > should first identify the bottleneck where it come first.  Iperf is a tools 
> > that you can test the line speed end to end into your VM, if the result in 
> > 1Gbps network are near 800+ Mbps, you can focus on the VM performance or 
> > the copy protocol you are using, try different protocol, ssh/rsync/ftp/nfs, 
> > see any different.
> >
> > You are already test the write-back caching which will improve disk I/O 
> > performance, it is another story you need to deep dive the pro and cons on 
> > the write cache, there are risk to corrupt the VM filesystem in some case, 
> > this is what u need to learn about each cache mode.
> >
> > VM Guest performance are involved by many factor, you cannot expect VM 
> > perform nearly the bare metal does. There are long journey to do such 
> > optimization, take time and improve it gradually. There are lot of kvm 
> > tuning guide you can reference and prove it on your hardware. Read 
> > thoughtfully on each tuning that may bring improvement and also introduce 
> > risk factor.
> >
> >
> > Regards,
> > Levin
> >
> >
> >
> >
> > On 7 Jul 2023 at 21:24 +0100, Granwille Strauss , 
> > wrote:
> > > Sorry that I have to ask, can you perhaps be a bit more specific, plea

Re: Write Speeds

2023-07-08 Thread Levin Ng
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes/second, so you have to divide by 8:

1 Gbps / 8 ≈ 128 MB/s = 128000 KB/s

You can verify with an iperf test, and yes, you need to ensure both the VR and 
the VM have matching bandwidth settings to get a consistent result. Also pay 
attention to VR resources: the default system router offering is quite limited, 
and the network speed may be throttled if the VR runs out of CPU.
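For reference, the limit that actually got applied to a guest NIC can be read 
back from libvirt on the host; the values reported are in KB/s (illustrative 
commands, the domain name is a placeholder and vnet219 is taken from your XML 
dump):

# list the guest's vNICs to find the vnet device
virsh domiflist <vm-instance-name>
# show inbound/outbound average, peak and burst applied to that vNIC (KB/s)
virsh domiftune <vm-instance-name> vnet219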

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss , wrote:
> Hi Levin
> Thank you very much, I do appreciate your feedback and time replying back to 
> me. I believe I have picked up on something. My VMs XML dump ALL show the 
> following:
> <interface type='bridge'> <source bridge='cloudbr0'/> <bandwidth> <inbound average='128000' peak='128000'/> </bandwidth> <target dev='vnet219'/> <alias name='net0'/> <address type='pci' bus='0x00' slot='0x03' function='0x0'/> </interface>
> Specifically:
> <inbound average='128000' peak='128000'/>
> All my VM has this in place, a 128 Mbps limit even the VR, which makes no 
> sense. I found this thread posted 10 years ago and Nux says that this value 
> is affected by the service offerings: 
> https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
>  But all my service offerings are set to 1000 Mbps. See attached screenshots. 
> The 4.18 documentation also confirms if the values are null, which most 
> default service offerings are, it takes the values set in global settings 
> network.throttling.rate and vm.network.throttling.rate, which I also have set 
> as 1000 as you can see in the screenshots.
>
> I then found this: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Network+throttling+in+CloudStack
>  with no KVM details, seems this part is missing to tell me how KVM 
> throttling is applied. So as DuJun said 10 years ago, I feel confused about 
> how Cloudstack limit the network rate for guest. And yes, I have stopped my 
> VMs and rebooted MANY times it doesn't update the XML at all.
> Please also take into account documentation states that in shared networking 
> there's supposed to be no limits on incoming traffic (ingress), as far as I 
> understand it.
> On 7/7/23 23:35, Levin Ng wrote:
> > Hi Groete,
> >
> >
> > IMO, You should bypass any ACS provisioning to troubleshoot the performance 
> > case first, which allow you get more idea on the hardware + kvm performance 
> > with minimal influent, then you can compare the libvirt xml different 
> > between plain KVM and ACS. That help you sort out the different where it 
> > come from, you will see QoS bandwidth setting in the VM xml if you do.
> >
> > We are trying to tell you, when you diagnose the throughput problem, you 
> > should first identify the bottleneck where it come first.  Iperf is a tools 
> > that you can test the line speed end to end into your VM, if the result in 
> > 1Gbps network are near 800+ Mbps, you can focus on the VM performance or 
> > the copy protocol you are using, try different protocol, ssh/rsync/ftp/nfs, 
> > see any different.
> >
> > You are already test the write-back caching which will improve disk I/O 
> > performance, it is another story you need to deep dive the pro and cons on 
> > the write cache, there are risk to corrupt the VM filesystem in some case, 
> > this is what u need to learn about each cache mode.
> >
> > VM Guest performance are involved by many factor, you cannot expect VM 
> > perform nearly the bare metal does. There are long journey to do such 
> > optimization, take time and improve it gradually. There are lot of kvm 
> > tuning guide you can reference and prove it on your hardware. Read 
> > thoughtfully on each tuning that may bring improvement and also introduce 
> > risk factor.
> >
> >
> > Regards,
> > Levin
> >
> >
> >
> >
> > On 7 Jul 2023 at 21:24 +0100, Granwille Strauss , 
> > wrote:
> > > Sorry that I have to ask, can you perhaps be a bit more specific, please. 
> > > The only QOS settings I see in service offering are "None", "Hypervisor" 
> > > and "Storage", which doesn't really seem network related. Or am I missing 
> > > the point? Take note that I use the default offerings for the VR and VMs 
> > > but with slight tweaks such as setting local storage etc and only 
> > > increased the Network rate from 200 Mbps to 1000 Mbps.
> > >
> > > So can you kindly explain by what QOS settings you guys are referring to, 
> > > please?
> > > PS, the Write-back disk caching seems to give the VM a slight increase, I 
> > > now see writes at 190 Mbps from ~70 Mbps.
> > > On 7/7/23 21:11, Vivek Kumar wrote:
> > > > Hello,
> > > >
> > > > IPerf will simply tell you the bandwidth and the open pipe between 2 
> > > > VMs, so I don’t think that it’s depends on disk performance, it’s 
> > > > better to check the network QoS at every layer, VR and VM.
> > > >
> > > >
> > > >
> > > > Vivek 

Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,


IMO, you should bypass ACS provisioning and troubleshoot the performance case 
first; that gives you a better idea of the hardware + KVM performance with 
minimal interference. Then you can compare the libvirt XML differences between 
plain KVM and ACS. That helps you sort out where the difference comes from, and 
you will see the QoS bandwidth settings in the VM XML if you do.

We are trying to tell you that when you diagnose a throughput problem, you 
should first identify where the bottleneck comes from. iperf is a tool that 
tests the line speed end to end into your VM; if the result on a 1 Gbps network 
is near 800+ Mbps, you can focus on the VM performance or on the copy protocol 
you are using. Try different protocols (ssh/rsync/ftp/nfs) and see if there is 
any difference.

You have already tested write-back caching, which will improve disk I/O 
performance. It is another story to dig into the pros and cons of each write 
cache mode: there is a risk of corrupting the VM filesystem in some cases, and 
that is what you need to learn about each cache mode.

VM guest performance is influenced by many factors; you cannot expect a VM to 
perform nearly as well as bare metal. It is a long journey to do such 
optimisation, so take time and improve it gradually. There are a lot of KVM 
tuning guides you can reference and prove on your hardware. Read each tuning 
option thoughtfully: it may bring improvement but can also introduce risk.


Regards,
Levin




On 7 Jul 2023 at 21:24 +0100, Granwille Strauss , wrote:
> Sorry that I have to ask, can you perhaps be a bit more specific, please. The 
> only QOS settings I see in service offering are "None", "Hypervisor" and 
> "Storage", which doesn't really seem network related. Or am I missing the 
> point? Take note that I use the default offerings for the VR and VMs but with 
> slight tweaks such as setting local storage etc and only increased the 
> Network rate from 200 Mbps to 1000 Mbps.
>
> So can you kindly explain by what QOS settings you guys are referring to, 
> please?
> PS, the Write-back disk caching seems to give the VM a slight increase, I now 
> see writes at 190 Mbps from ~70 Mbps.
> On 7/7/23 21:11, Vivek Kumar wrote:
> > Hello,
> >
> > IPerf will simply tell you the bandwidth and the open pipe between 2 VMs, 
> > so I don’t think that it’s depends on disk performance, it’s better to 
> > check the network QoS at every layer, VR and VM.
> >
> >
> >
> > Vivek Kumar
> > Sr. Manager - Cloud & DevOps
> > TechOps | Indiqus Technologies
> >
> > vivek.ku...@indiqus.com 
> >www.indiqus.com 
> >
> >
> >
> >
> > > On 07-Jul-2023, at 9:44 PM, Granwille Strauss 
> > >  wrote:
> > >
> > > Hi Levin
> > >
> > > Thank you, I am aware of network offering, the first thing I did was make 
> > > sure it was set to accommodate the KVM's entire 1 Gbps uplink. But now 
> > > that I think if it iperf test prevousily were always stuck on 50 Mbps, 
> > > but this is because of the write speeds on the disk at least that's what 
> > > I believe causes the network bottle neck. I will double-check this again.
> > >
> > > But there is some sort of limit on the VM disk in place. FIO tests show 
> > > that write speeds are in the range of 50 - 90 MB/s on the VM, while fio 
> > > test confirms on the KVM its over 400 MB/s.
> > >
> > > On 7/7/23 18:08, Levin Ng wrote:
> > > > Hi Groete,
> > > >
> > > > Forgot to mention, when you are talking about file copies between 
> > > > remote server, you need to aware there are network QoS option in the 
> > > > offering, make sure the limits correctness. Do iperf test prove that 
> > > > too, test between server and  via virtual router. Hope you can narrow 
> > > > down the problem soon.
> > > >
> > > > Regards,
> > > > Levin
> > > >
> > > > On 7 Jul 2023 at 16:40 +0100, Granwille Strauss  
> > > > , wrote:
> > > > > Hi Levin
> > > > > Thank you, yes I leave IOPs empty. And the KVM host has SSDs in a 
> > > > > hardware RAID 5 configuration, of which I am using local storage 
> > > > > pool, yes. I will run fio test and also playing around with the 
> > > > > controller cache settings to see what happens and provide feedback on 
> > > > > this soon.
> > > > > On 7/7/23 17:23, Levin Ng wrote:
> > > > > > HI Groete,
> > > > > >
> > > > > > Should you run a fio test on the VM and the KVM host to get a 
> > > > > > baseline first. SSD are tricky device, when it fill up the cache or 
> > > > > > nearly full, the performance will drop significantly, especially 
> > > > > > consumer grade SSD. There are option to limit IOPs in ACS offering 
> > > > > > setting, I believe you leave it empty, so it is no limit. When you 
> > > > > > talking about KVM uses SSDs, I think you are using Local Disk Pool 
> > > > > > right? If you have RAID controller underlying, try toggle the 
> > > > > > controller cache, SSD may perform vary on different disk controller 
> > > > > > cache setting.
> > 

Re: Write Speeds

2023-07-07 Thread Granwille Strauss
Sorry that I have to ask, but can you perhaps be a bit more specific, please? 
The only QoS settings I see in the service offering are "None", "Hypervisor" 
and "Storage", which don't really seem network related. Or am I missing the 
point? Take note that I use the default offerings for the VR and VMs, with 
slight tweaks such as setting local storage etc., and I only increased the 
network rate from 200 Mbps to 1000 Mbps.


So can you kindly explain which QoS settings you guys are referring to, please?


PS: the write-back disk caching seems to give the VM a slight increase; I now 
see writes at 190 Mbps, up from ~70 Mbps.


On 7/7/23 21:11, Vivek Kumar wrote:

Hello,

IPerf will simply tell you the bandwidth and the open pipe between 2 VMs, so I 
don’t think that it’s depends on disk performance, it’s better to check the 
network QoS at every layer, VR and VM.



Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com  
www.indiqus.com  





On 07-Jul-2023, at 9:44 PM, Granwille Strauss  
wrote:

Hi Levin

Thank you, I am aware of network offering, the first thing I did was make sure 
it was set to accommodate the KVM's entire 1 Gbps uplink. But now that I think 
if it iperf test prevousily were always stuck on 50 Mbps, but this is because 
of the write speeds on the disk at least that's what I believe causes the 
network bottle neck. I will double-check this again.

But there is some sort of limit on the VM disk in place. FIO tests show that 
write speeds are in the range of 50 - 90 MB/s on the VM, while fio test 
confirms on the KVM its over 400 MB/s.

On 7/7/23 18:08, Levin Ng wrote:

Hi Groete,

Forgot to mention, when you are talking about file copies between remote 
server, you need to aware there are network QoS option in the offering, make 
sure the limits correctness. Do iperf test prove that too, test between server 
and  via virtual router. Hope you can narrow down the problem soon.

Regards,
Levin

On 7 Jul 2023 at 16:40 +0100, Granwille Strauss  
, wrote:

Hi Levin
Thank you, yes I leave IOPs empty. And the KVM host has SSDs in a hardware RAID 
5 configuration, of which I am using local storage pool, yes. I will run fio 
test and also playing around with the controller cache settings to see what 
happens and provide feedback on this soon.
On 7/7/23 17:23, Levin Ng wrote:

HI Groete,

Should you run a fio test on the VM and the KVM host to get a baseline first. 
SSD are tricky device, when it fill up the cache or nearly full, the 
performance will drop significantly, especially consumer grade SSD. There are 
option to limit IOPs in ACS offering setting, I believe you leave it empty, so 
it is no limit. When you talking about KVM uses SSDs, I think you are using 
Local Disk Pool right? If you have RAID controller underlying, try toggle the 
controller cache, SSD may perform vary on different disk controller cache 
setting.

Controller type scsi, or virtio performance are similar, no need to worry about 
it. Of coz, in general, using RAW format and thick provisioning could get a 
best io performance result, but consume space and lack of snapshot capabliblity 
, so most the time it is not prefer go this path.

Please gather more information first

Regards,
Levin
On 7 Jul 2023 at 15:30 +0100, Granwille Strauss  
, wrote:

Hi Guys
Does Cloudstack have a disk write speed limit somewhere in its setting? We have been transferring 
many files from remote servers to VM machines on our Cloudstack instance and we recently noticed 
that the VM write speeds are all limited to about 5-8 MB/s. But the underlying hardware of the KVM 
uses SSDs capable of write speeds of 300 - 600 MB/s. My disk offering on my current vms are set to 
"No Disk Cache" with thin provisioning, could this be the reason? I understand that 
"Write Back Disk Cach" has better write speeds. Also I have VMs set as virtio for its 
disk controller. What could I be missing in this case?

Re: Write Speeds

2023-07-07 Thread Vivek Kumar
Hello, 

iperf will simply tell you the bandwidth and the open pipe between two VMs, so 
I don't think it depends on disk performance; it's better to check the network 
QoS at every layer, VR and VM.



Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 




> On 07-Jul-2023, at 9:44 PM, Granwille Strauss  
> wrote:
> 
> Hi Levin
> 
> Thank you, I am aware of network offering, the first thing I did was make 
> sure it was set to accommodate the KVM's entire 1 Gbps uplink. But now that I 
> think if it iperf test prevousily were always stuck on 50 Mbps, but this is 
> because of the write speeds on the disk at least that's what I believe causes 
> the network bottle neck. I will double-check this again. 
> 
> But there is some sort of limit on the VM disk in place. FIO tests show that 
> write speeds are in the range of 50 - 90 MB/s on the VM, while fio test 
> confirms on the KVM its over 400 MB/s. 
> 
> On 7/7/23 18:08, Levin Ng wrote:
>> Hi Groete,
>> 
>> Forgot to mention, when you are talking about file copies between remote 
>> server, you need to aware there are network QoS option in the offering, make 
>> sure the limits correctness. Do iperf test prove that too, test between 
>> server and  via virtual router. Hope you can narrow down the problem soon.
>> 
>> Regards,
>> Levin
>> 
>> On 7 Jul 2023 at 16:40 +0100, Granwille Strauss  
>> , wrote:
>>> Hi Levin
>>> Thank you, yes I leave IOPs empty. And the KVM host has SSDs in a hardware 
>>> RAID 5 configuration, of which I am using local storage pool, yes. I will 
>>> run fio test and also playing around with the controller cache settings to 
>>> see what happens and provide feedback on this soon.
>>> On 7/7/23 17:23, Levin Ng wrote:
 HI Groete,
 
 Should you run a fio test on the VM and the KVM host to get a baseline 
 first. SSD are tricky device, when it fill up the cache or nearly full, 
 the performance will drop significantly, especially consumer grade SSD. 
 There are option to limit IOPs in ACS offering setting, I believe you 
 leave it empty, so it is no limit. When you talking about KVM uses SSDs, I 
 think you are using Local Disk Pool right? If you have RAID controller 
 underlying, try toggle the controller cache, SSD may perform vary on 
 different disk controller cache setting.
 
 Controller type scsi, or virtio performance are similar, no need to worry 
 about it. Of coz, in general, using RAW format and thick provisioning 
 could get a best io performance result, but consume space and lack of 
 snapshot capabliblity , so most the time it is not prefer go this path.
 
 Please gather more information first
 
 Regards,
 Levin
 On 7 Jul 2023 at 15:30 +0100, Granwille Strauss 
  , 
 wrote:
> Hi Guys
> Does Cloudstack have a disk write speed limit somewhere in its setting? 
> We have been transferring many files from remote servers to VM machines 
> on our Cloudstack instance and we recently noticed that the VM write 
> speeds are all limited to about 5-8 MB/s. But the underlying hardware of 
> the KVM uses SSDs capable of write speeds of 300 - 600 MB/s. My disk 
> offering on my current vms are set to "No Disk Cache" with thin 
> provisioning, could this be the reason? I understand that "Write Back 
> Disk Cach" has better write speeds. Also I have VMs set as virtio for its 
> disk controller. What could I be missing in this case?

Re: Write Speeds

2023-07-07 Thread Granwille Strauss

Hi Levin

Thank you, I am aware of the network offering; the first thing I did was make 
sure it was set to accommodate the KVM host's entire 1 Gbps uplink. Now that I 
think about it, the iperf tests were previously always stuck at 50 Mbps, but I 
believe that is because of the disk write speeds, which is what causes the 
network bottleneck. I will double-check this again.


But there is some sort of limit on the VM disk in place. FIO tests show that 
write speeds are in the range of 50 - 90 MB/s on the VM, while fio on the KVM 
host confirms over 400 MB/s.
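A simple way to keep that comparison apples-to-apples is to run the same fio 
sequential-write job inside the VM and on the KVM host (a minimal sketch; the 
file path and size are placeholders):

fio --name=seqwrite --filename=/root/fio-test.bin --rw=write --bs=1M --size=2G \
    --ioengine=libaio --direct=1 --iodepth=8 --numjobs=1 --group_reporting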


On 7/7/23 18:08, Levin Ng wrote:

Hi Groete,

Forgot to mention, when you are talking about file copies between remote 
server, you need to aware there are network QoS option in the offering, make 
sure the limits correctness. Do iperf test prove that too, test between server 
and  via virtual router. Hope you can narrow down the problem soon.

Regards,
Levin

On 7 Jul 2023 at 16:40 +0100, Granwille Strauss, wrote:

Hi Levin
Thank you, yes I leave IOPs empty. And the KVM host has SSDs in a hardware RAID 
5 configuration, of which I am using local storage pool, yes. I will run fio 
test and also playing around with the controller cache settings to see what 
happens and provide feedback on this soon.
On 7/7/23 17:23, Levin Ng wrote:

HI Groete,

Should you run a fio test on the VM and the KVM host to get a baseline first. 
SSD are tricky device, when it fill up the cache or nearly full, the 
performance will drop significantly, especially consumer grade SSD. There are 
option to limit IOPs in ACS offering setting, I believe you leave it empty, so 
it is no limit. When you talking about KVM uses SSDs, I think you are using 
Local Disk Pool right? If you have RAID controller underlying, try toggle the 
controller cache, SSD may perform vary on different disk controller cache 
setting.

Controller type scsi, or virtio performance are similar, no need to worry about 
it. Of coz, in general, using RAW format and thick provisioning could get a 
best io performance result, but consume space and lack of snapshot capabliblity 
, so most the time it is not prefer go this path.

Please gather more information first

Regards,
Levin
On 7 Jul 2023 at 15:30 +0100, Granwille Strauss, 
wrote:

Hi Guys
Does Cloudstack have a disk write speed limit somewhere in its setting? We have been transferring 
many files from remote servers to VM machines on our Cloudstack instance and we recently noticed 
that the VM write speeds are all limited to about 5-8 MB/s. But the underlying hardware of the KVM 
uses SSDs capable of write speeds of 300 - 600 MB/s. My disk offering on my current vms are set to 
"No Disk Cache" with thin provisioning, could this be the reason? I understand that 
"Write Back Disk Cach" has better write speeds. Also I have VMs set as virtio for its 
disk controller. What could I be missing in this case?

Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,

Forgot to mention: when you are talking about file copies from a remote server, 
be aware that there are network QoS options in the offering, so make sure the 
limits are correct. Use an iperf test to prove that too: test between servers 
directly and via the virtual router. Hope you can narrow down the problem soon.
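For example (illustrative; the IPs are placeholders), running iperf3 in server 
mode on the target VM and testing both paths makes it easy to see whether the 
VR is the choke point:

# on the target VM
iperf3 -s

# from the source machine: once via the VM's public (VR-forwarded) address,
# and once via a direct/same-segment address if one exists
iperf3 -c <vm-public-ip> -t 30
iperf3 -c <vm-direct-ip> -t 30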

Regards,
Levin

On 7 Jul 2023 at 16:40 +0100, Granwille Strauss , wrote:
> Hi Levin
> Thank you, yes I leave IOPs empty. And the KVM host has SSDs in a hardware 
> RAID 5 configuration, of which I am using local storage pool, yes. I will run 
> fio test and also playing around with the controller cache settings to see 
> what happens and provide feedback on this soon.
> On 7/7/23 17:23, Levin Ng wrote:
> > HI Groete,
> >
> > Should you run a fio test on the VM and the KVM host to get a baseline 
> > first. SSD are tricky device, when it fill up the cache or nearly full, the 
> > performance will drop significantly, especially consumer grade SSD. There 
> > are option to limit IOPs in ACS offering setting, I believe you leave it 
> > empty, so it is no limit. When you talking about KVM uses SSDs, I think you 
> > are using Local Disk Pool right? If you have RAID controller underlying, 
> > try toggle the controller cache, SSD may perform vary on different disk 
> > controller cache setting.
> >
> > Controller type scsi, or virtio performance are similar, no need to worry 
> > about it. Of coz, in general, using RAW format and thick provisioning could 
> > get a best io performance result, but consume space and lack of snapshot 
> > capabliblity , so most the time it is not prefer go this path.
> >
> > Please gather more information first
> >
> > Regards,
> > Levin
> > On 7 Jul 2023 at 15:30 +0100, Granwille Strauss 
> > , wrote:
> > > Hi Guys
> > > Does Cloudstack have a disk write speed limit somewhere in its setting? 
> > > We have been transferring many files from remote servers to VM machines 
> > > on our Cloudstack instance and we recently noticed that the VM write 
> > > speeds are all limited to about 5-8 MB/s. But the underlying hardware of 
> > > the KVM uses SSDs capable of write speeds of 300 - 600 MB/s. My disk 
> > > offering on my current vms are set to "No Disk Cache" with thin 
> > > provisioning, could this be the reason? I understand that "Write Back 
> > > Disk Cach" has better write speeds. Also I have VMs set as virtio for its 
> > > disk controller. What could I be missing in this case?


Re: Write Speeds

2023-07-07 Thread Granwille Strauss

Hi Levin

Thank you, yes, I leave IOPS empty. The KVM host has SSDs in a hardware RAID 5 
configuration, and I am using a local storage pool, yes. I will run fio tests 
and also play around with the controller cache settings to see what happens, 
and will provide feedback on this soon.


On 7/7/23 17:23, Levin Ng wrote:

HI Groete,

Should you run a fio test on the VM and the KVM host to get a baseline first. 
SSD are tricky device, when it fill up the cache or nearly full, the 
performance will drop significantly, especially consumer grade SSD. There are 
option to limit IOPs in ACS offering setting, I believe you leave it empty, so 
it is no limit. When you talking about KVM uses SSDs, I think you are using 
Local Disk Pool right? If you have RAID controller underlying, try toggle the 
controller cache, SSD may perform vary on different disk controller cache 
setting.

Controller type scsi, or virtio performance are similar, no need to worry about 
it. Of coz, in general, using RAW format and thick provisioning could get a 
best io performance result, but consume space and lack of snapshot capabliblity 
, so most the time it is not prefer go this path.

Please gather more information first

Regards,
Levin
On 7 Jul 2023 at 15:30 +0100, Granwille Strauss, 
wrote:

Hi Guys
Does Cloudstack have a disk write speed limit somewhere in its setting? We have been transferring 
many files from remote servers to VM machines on our Cloudstack instance and we recently noticed 
that the VM write speeds are all limited to about 5-8 MB/s. But the underlying hardware of the KVM 
uses SSDs capable of write speeds of 300 - 600 MB/s. My disk offering on my current vms are set to 
"No Disk Cache" with thin provisioning, could this be the reason? I understand that 
"Write Back Disk Cach" has better write speeds. Also I have VMs set as virtio for its 
disk controller. What could I be missing in this case?


Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,

You should run a fio test on the VM and on the KVM host to get a baseline 
first. SSDs are tricky devices: when the cache fills up, or the drive is nearly 
full, performance drops significantly, especially on consumer-grade SSDs. There 
is an option to limit IOPS in the ACS offering settings; I believe you left it 
empty, so there is no limit. When you talk about the KVM host using SSDs, I 
think you are using a local disk pool, right? If you have a RAID controller 
underneath, try toggling the controller cache; SSDs can perform very 
differently depending on the disk controller cache setting.

Controller type scsi or virtio makes little performance difference, no need to 
worry about it. Of course, in general, using RAW format and thick provisioning 
gives the best I/O performance, but it consumes space and lacks snapshot 
capability, so most of the time it is not the preferred path.
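For illustration (paths and sizes are placeholders), this is the difference at 
the qemu-img level:

# fully preallocated raw volume: most predictable I/O, but no qcow2 snapshots
qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/vol-raw.img 20G

# thin qcow2 volume: space-efficient and snapshot-capable, allocates on first write
qemu-img create -f qcow2 /var/lib/libvirt/images/vol-thin.qcow2 20G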

Please gather more information first

Regards,
Levin
On 7 Jul 2023 at 15:30 +0100, Granwille Strauss 
, wrote:
> Hi Guys
> Does Cloudstack have a disk write speed limit somewhere in its setting? We 
> have been transferring many files from remote servers to VM machines on our 
> Cloudstack instance and we recently noticed that the VM write speeds are all 
> limited to about 5-8 MB/s. But the underlying hardware of the KVM uses SSDs 
> capable of write speeds of 300 - 600 MB/s. My disk offering on my current vms 
> are set to "No Disk Cache" with thin provisioning, could this be the reason? 
> I understand that "Write Back Disk Cach" has better write speeds. Also I have 
> VMs set as virtio for its disk controller. What could I be missing in this 
> case?