Re: Write Speeds

2023-07-10 Thread Granwille Strauss
Anyone have any ideas why ACS is not detecting that I have new enough versions of
qemu and libvirt to enable io_uring?


On 7/10/23 19:57, Jorge Luiz Correa wrote:

Hmm, so strange.

I'm not a CloudStack specialist, but it looks like the code is simple 
and just tests the versions of qemu and libvirt:


https://github.com/apache/cloudstack/pull/5012/commits/c7c3dd3dd9b8869f45c5bd9c17af83d230ac7886

Here, at the bottom of the slide, he shows this simple version test too.

https://youtu.be/y0NYuUtm5Kk?list=PLnIKk7GjgFlYfut3ZIOrvN--_YuSPIerQ&t=791 



For some reason CloudStack is not detecting your versions. My disk 
offering is simple: thin provisioning, custom disk size, QoS = none, 
write-cache type = no disk cache. I'm using Ubuntu Server 22.04 and 
CloudStack 4.17.2.


On Mon, 10 Jul 2023 at 14:22, Granwille Strauss 
 wrote:


Jorge, I thought so too, but the XML dumps do not contain it. So I
figured the io.policy setting needs to be set on the management
server. Here are my KVM details:


Compiled against library: libvirt 8.0.0
Using library: libvirt 8.0.0
Using API: QEMU 8.0.0
Running hypervisor: QEMU 6.2.0


qemu guest agents also exist on the VMs. And here's an XML dump:


root@athena03 ~ $ virsh dumpxml i-2-120-VM | grep driver
  [the two <driver .../> lines were stripped by the list archive]
root@athena03 ~ $

On 7/10/23 18:51, Jorge Luiz Correa wrote:

Granwille, no special configuration, just the CloudStack default
behavior. As I understand it, CloudStack can automatically detect whether
the host supports this feature based on the qemu and libvirt versions.

https://github.com/apache/cloudstack/issues/4883#issuecomment-813955599


What versions of kernel, qemu and libvirt are you using in KVM host?

On Mon, 10 Jul 2023 at 13:26, Granwille Strauss
 wrote:

Hi Jorge

How do you actually enable io_uring via CloudStack?
My KVM does have the necessary requirements.

I enabled the io.policy settings in global settings, local storage
and in the VM settings via the UI. And my XML dump of the VM doesn't
include io_uring under the driver element for some reason.

-- 
Regards / Groete

Granwille Strauss // Senior Systems Administrator

On 10 Jul 2023, at 5:27 PM, Granwille Strauss

 wrote:



Hi Jorge

Thank you so much for this. I used your FIO config and
surprisingly it seems fine:


write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W)
1300MiB-1300MiB, (T) 1300MiB-1300MiB, ioengine=libaio,
iodepth=1
fio-3.19

Run status group 0 (all jobs):
   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s
(1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
  WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s
(336MB/s-336MB/s), io=1300MiB (1363MB), run=4052-4052msec


This is without enabling io_uring. I see I can enable it per
VM using the UI by setting the io.policy = io_uring. Will
enable this on a few VMs and see if it works better.

On 7/10/23 15:41, Jorge Luiz Correa wrote:

Hi Granwille! About the READ/WRITE performance, as Levin suggested, 
check
the XML of virtual machines looking at disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to do some tests at some points like your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

RE: Write Speeds

2023-07-10 Thread Alex Mattioli
It doesn't necessarily get throttled, but the added latency will definitely impact 
the maximum bandwidth achievable per stream, especially if you are using TCP. 
In this case a bandwidth delay calculator can help you find the maximum 
theoretical bandwidth for a given latency:
https://www.switch.ch/network/tools/tcp_throughput/?do+new+calculation=do+new+calculation





From: Granwille Strauss 
Sent: Monday, July 10, 2023 6:04 PM
To: users@cloudstack.apache.org
Cc: Levin Ng ; Jithin Raju ; 
vivek.ku...@indiqus.com
Subject: Re: Write Speeds


Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM host to 
establish a baseline to the remote USA VM:

- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, but this time the remote host was in a DC 
that's close to our DC, in the same country that we use:

- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that traffic from the USA VM to 
the local KVM host gets throttled somewhere? Based on the results above, the 
throttling doesn't seem to come from ISPs inside our country.

So yeah, somewhere some ISP is throttling during the USA routes.

RE: Async backup

2023-07-10 Thread Alex Mattioli
> all snapshots make a local copy to the management server disk before sending 
> it to the secondary storage locations, yes.

What exactly do you mean by that?

From: Granwille Strauss 
Sent: Monday, July 10, 2023 1:21 PM
To: users@cloudstack.apache.org
Cc: Nikolaos Tsinganos 
Subject: Re: Async backup


Correct me if I am wrong: all snapshots make a local copy on the management 
server disk before sending it to the secondary storage location, async or 
not. But yes, I believe that is the case. However, when async is enabled you 
can keep working on other tasks with the management server while it does the 
snapshot process in the background, whereas with async disabled you are 
restricted from doing other tasks on the management server until the snapshot 
process completes. That's how I understand it.

Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Vivek, yes I have, and it's in line with my VMs:

[the VR's <bandwidth> XML was stripped by the list archive; per the rest of
the thread it carried average='128000' peak='128000' limits]

On 7/10/23 18:50, Vivek Kumar wrote:

Did you check the network QoS on the VR as well, if there is any?


  Vivek Kumar

Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies





vivek.ku...@indiqus.com

www.indiqus.com 





On 10-Jul-2023, at 9:34 PM, Granwille Strauss  
wrote:


Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM 
host to establish a baseline to the remote USA VM:


- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, but this time the remote host was in 
a DC that's close to our DC, in the same country that we use:


- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that traffic from the 
USA VM to the local KVM host gets throttled somewhere? Based on the results 
above, the throttling doesn't seem to come from ISPs inside our 
country.


So yeah, somewhere some ISP is throttling during the USA routes.


Re: Write Speeds

2023-07-10 Thread Jorge Luiz Correa
Granwille, no special configuration, just the CloudStack default behavior.
As I understand it, CloudStack can automatically detect whether the host
supports this feature based on the qemu and libvirt versions.

https://github.com/apache/cloudstack/issues/4883#issuecomment-813955599

What versions of kernel, qemu and libvirt are you using in KVM host?

On Mon, 10 Jul 2023 at 13:26, Granwille Strauss <
granwi...@namhost.com> wrote:

> Hi Jorge
>
> How do you actually enable io_uring via CloudStack?
> My KVM does have the necessary requirements.
>
> I enabled the io.policy settings in global settings, local storage
> and in the VM settings via the UI. And my XML dump of the VM doesn't include
> io_uring under the driver element for some reason.
>
> On 10 Jul 2023, at 5:27 PM, Granwille Strauss
>  wrote:
>
> 
>
> Hi Jorge
>
> Thank you so much for this. I used your FIO config and surprisingly it
> seems fine:
>
> write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W) 1300MiB-1300MiB,
> (T) 1300MiB-1300MiB, ioengine=libaio, iodepth=1
> fio-3.19
>
> Run status group 0 (all jobs):
>READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s (1009MB/s-1009MB/s),
> io=3900MiB (4089MB), run=4052-4052msec
>   WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s (336MB/s-336MB/s),
> io=1300MiB (1363MB), run=4052-4052msec
>
> This is without enabling io_uring. I see I can enable it per VM using the
> UI by setting the io.policy = io_uring. Will enable this on a few VMs and
> see if it works better.
> On 7/10/23 15:41, Jorge Luiz Correa wrote:
>
> Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
> the XML of virtual machines looking at disk/device section. Look for
> io='io_uring'.
>
> As stated here:
> https://github.com/apache/cloudstack/issues/4883
>
> CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.
>
> I tried to do some tests at some points like your environment.
>
> ##
> VM in NFS Primary Storage (Hybrid NAS)
> Default disk offering, thin (no restriction)
>
> 
> [libvirt <disk> XML stripped by the list archive; what survives: emulator
> /usr/bin/qemu-system-x86_64, source
> file='/mnt/74267a3b-46c5-3f6c-8637-a9f721852954/fb46fd2c-59bd-4127-851b-693a957bd5be'
> index='2', and serial fb46fd2c59bd4127851b]
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
> --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
> --readwrite=randrw
>
> READ: 569MiB/s
> WRITE: 195MiB/s
>
> ##
> VM in Local Primary Storage (local SSD host)
> Default disk offering, thin (no restriction)
>
> 
> [libvirt <disk> XML stripped by the list archive; what survives: emulator
> /usr/bin/qemu-system-x86_64, source
> file='/var/lib/libvirt/images/d100c55d-8ff2-45e5-8452-6fa56c0725e5'
> index='2', and serial fb46fd2c59bd4127851b]
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
> --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
> --readwrite=randrw
>
> First run (a little bit slow if using "thin" because need to allocate space
> in qcow2):
> READ: bw=796MiB/s
> WRITE: bw=265MiB/s
>
> Second run:
> READ: bw=952MiB/s
> WRITE: bw=317MiB/s
>
> ##
> Directly in local SSD of host:
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
> --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
> --readwrite=randrw
>
> READ: bw=931MiB/s
> WRITE: bw=310MiB/s
>
> OBS.: parameters of fio test need to be changed to test in your environment
> as it depends on the number of cpus, memory, --bs, --iodepth etc.
>
> Host is running 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is
> 4.17.2. So, VM in local SSD of host could have very similar disk
> performance from the host.
>
> I hope this could help you!
>
> Thanks.
>
>
> --
> Regards / Groete
> Granwille Strauss // Senior Systems Admin

Re: Write Speeds

2023-07-10 Thread Vivek Kumar
Did you check the network QoS on the VR as well, if there is any?


Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 




> On 10-Jul-2023, at 9:34 PM, Granwille Strauss  wrote:
> 
> Hi Levin
> 
> I skipped all the VM testing and ran iperf straight from the KVM host to 
> establish a baseline to the remote USA VM:
> 
> - KVM to Remote USA VM: 113 Mbits/sec
> - USA VM to KVM Host: 35.9 Mbits/sec
> 
> I then ran the same test again, but this time the remote host was in a DC 
> that's close to our DC, in the same country that we use:
> 
> - KVM to remote host: 409 Mbits/sec
> - Remote host to KVM: 477 Mbits/sec
> 
> So do you think it's safe to conclude that traffic from the USA VM 
> to the local KVM host gets throttled somewhere? Based on the results above, the 
> throttling doesn't seem to come from ISPs inside our country. 
> 
> So yeah, somewhere some ISP is throttling during the USA routes. 
> 

Re: ACS with vmware hypervisors

2023-07-10 Thread Vivek Kumar
Did you choose the right zone/pod/cluster? I am also using 4.15.2 and it's 
giving me the PreSetup option. Do you have an option called 'vmfs'?


Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 




> On 10-Jul-2023, at 3:34 PM, Gary Dixon  
> wrote:
> 
> Hi Jithin
> 
> This is the odd thing - when we try and add the vcenter datastore to ACS as 
> Primary storage - we do not have the 'preSetup' protocol option in the "add 
> primary storage" UI ?
> 
> 
> Gary Dixon​
> Senior Technical Consultant
> T:  +44 161 537 4990
> E:  vms@quadris-support.com
> W: www.quadris.co.uk
> 
> -Original Message-
> From: Jithin Raju 
> Sent: Monday, July 10, 2023 10:54 AM
> To: users@cloudstack.apache.org
> Subject: Re: ACS with vmware hypervisors
> 
> Hi Gary,
> 
> I am unable to tell the cause of the VM deployment failures with the log 
> snippets below.
> Could you try adding the storage as a datastore in vCenter and add it to 
> CloudStack as ‘presetup’ ?
> 
> -Jithin
> 
> From: Gary Dixon 
> Date: Monday, 10 July 2023 at 2:12 PM
> To: users@cloudstack.apache.org 
> Subject: RE: ACS with vmware hypervisors Hi Jithin
> 
> We are using ACS 4.15.2 and vsphere esxi v7.0.3
> 
> This is the log output for job-42701:
> 
> 2023-07-07 14:10:48,968 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-13:ctx-36699a50 job-42701) (logid:717a5506) Add job-42701 
> into job monitoring
> 2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
> (API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
> com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
> deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; 
> id=1
> 2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
> (API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
> Unable to create a deployment for VM[User|i-2-3207-VM]
> 2023-07-07 14:10:49,210 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
> (API-Job-Executor-13:ctx-36699a50 job-42701) (logid:96c5f242) Remove 
> job-42701 from job monitoring
> 
> Do we also need to add the iSCSI datastore in vCenter as Primary storage in 
> the CloudStack UI?
> 
> BR
> 
> Gary
> Gary Dixon​
> Senior Technical Consultant
> T: +44 161 537 4990
> E: vms@quadris-support.com
> W: http://www.quadris.co.uk/
> 
> 
> 
> -Original Message-
> From: Jithin Raju 
> Sent: Monday, July 10, 2023 5:12 AM
> To: users@cloudstack.apache.org
> Subject: Re: ACS with vmware hypervisors
> 
> Hi Gary,
> 
> What are the ACS and Vmware ESXi versions you are using? Could you share the 
> entire logs for this day or job-42701?
> 
> -Jithin
> 
> From: Gary Dixon 
> Date: Friday, 7 July 2023 at 8:49 PM
> To: users@cloudstack.apache.org 
> Subject: ACS with vmware hypervisors
> 
> 
> 
> 
> 
> 
> I was wondering if anyone has any experience with ACS and vmware ESXi as the 
> hypervisor? I'm facing a problem when trying to deploy a new/fresh instance.
> 
> I've deployed a vCenter appliance, created a data centre, cluster(s) and the 
> hosts have all been added to ACS. When I attempt to deploy a fresh instance 
> to the vmware cluster/hosts to build the OS from an ISO, the following errors 
> are displayed/logged:
> 
> UI Error:
> 
> Unable to create a deployment for VM[User|i-2-3207-VM]
> 
> Management Log:
> 
> ..about 1/2 way into the error " at 
> com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)" is logged.
> 
> 023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
> (API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
> Unable to create a deployment for VM[User|i-2-3207-VM]
> com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
> deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; 
> id=1 at 
> org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:225)
> at 
> org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve

Re: Write Speeds

2023-07-10 Thread Granwille Strauss
Hi Jorge

How do you actually enable io_uring via CloudStack? My KVM does have the
necessary requirements.

I enabled the io.policy settings in global settings, local storage and in the
VM settings via the UI. And my XML dump of the VM doesn't include io_uring
under the driver element for some reason.

-- 
Regards / Groete
Granwille Strauss // Senior Systems Administrator

On 10 Jul 2023, at 5:27 PM, Granwille Strauss wrote:
Hi Jorge
Thank you so much for this. I used your FIO config and
  surprisingly it seems fine:


  write-test: (g=0): rw=randrw, bs=(R)
1300MiB-1300MiB, (W) 1300MiB-1300MiB, (T) 1300MiB-1300MiB,
ioengine=libaio, iodepth=1
fio-3.19

Run status group 0 (all jobs):
   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s
(1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
  WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s
(336MB/s-336MB/s), io=1300MiB (1363MB), run=4052-4052msec
  

This is without enabling io_uring. I see I can enable it per VM
  using the UI by setting the io.policy = io_uring. Will enable this
  on a few VMs and see if it works better. 

On 7/10/23 15:41, Jorge Luiz Correa
  wrote:


  Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
the XML of virtual machines looking at disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to do some tests at some points like your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

First run (a little bit slow if using "thin" because need to allocate space
in qcow2):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly in local SSD of host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

OBS.: parameters of fio test need to be changed to test in your environment
as it depends on the number of cpus, memory, --bs, --iodepth etc.

Host is running 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is
4.17.2. So, VM in local SSD of host could have very similar disk
performance from the host.

I hope this could help you!

Thanks.


  

  
  

-- 
Regards / Groete
Granwille Strauss // Senior Systems Admin


Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Levin

I skipped all the VM testing and ran iperf straight from the KVM host 
to establish a baseline to the remote USA VM:


- KVM to Remote USA VM: 113 Mbits/sec
- USA VM to KVM Host: 35.9 Mbits/sec

I then ran the same test again, but this time the remote host was in a 
DC that's close to our DC, in the same country that we use:


- KVM to remote host: 409 Mbits/sec
- Remote host to KVM: 477 Mbits/sec

So do you think it's safe to conclude that traffic from the USA 
VM to the local KVM host gets throttled somewhere? Based on the results above, 
the throttling doesn't seem to come from ISPs inside our country.


So yeah, somewhere some ISP is throttling during the USA routes.

On 7/10/23 15:56, Levin Ng wrote:

Hi Groete,

I'm not sure what your network setting in ACS is, but a test between two public 
IPs at ~500 Mbps sounds like you are saturating a single network path with 
inbound/outbound traffic. Can you do a test from outside ACS to your VM using an 
IP in the same public network segment? It would avoid routing confusion. 
What is your ACS network driver? If VXLAN, better check the network 
switch's multicast performance.

The remote performance clearly shows the ISP puts some limit on the line; you 
have to check with them. Unless your line is end-to-end (Metro Ethernet, etc.), 
guaranteed throughput is not a given.


On the disk performance, could you share your fio test command and result 
first? I'm assuming you are doing something like

fio -filename=./testfile.bin -direct=1 -iodepth 8 -thread -rw=randrw 
-rwmixread=50 -ioengine=psync -bs=4k -size=1000M -numjobs=30 -runtime=600 
-group_reporting -name=mytest


Regards,
Levin
On 10 Jul 2023 at 11:33 +0100, Granwille Strauss, wrote:

Hi Guys
Thank you, I have been running more tests now with the feedback you guys gave. 
Firstly, I want to break this up into two sections:
1. Network:
- So I have been running iperf tests between my VMs on their public network, 
and my iperf tests give me speeds of ~500 Mbps; keep in mind this is between 
two local VMs on the same KVM host but on the public network.
- I then ran iperf tests in and out from my local VMs to remote servers; this 
is where it does funny things. From the remote VM in the USA, I run an iperf test 
to my local VM, and the speeds show ~50 Mbps. And if I run a test from my local VM 
to a remote USA VM, the same ~50 Mbps speeds are achieved. I ran my iperf 
tests with 1 GB and 2 GB flags and the results remain constant.
- During all these tests I kept an eye on my VR resources, which use default 
service offerings; they never spiked or reached thresholds.
Is it safe to assume that because of the MASSIVE distance between the remote VM 
and my local VMs, the speed dropping to ~50 Mbps is normal? Keep in mind the 
remote VM has a 1 Gbps line too and is managed by a big ISP provider in 
the USA. To me it's quite a massive drop from 1000 Mbps to 50 Mbps; this kinda 
does not make sense to me. I would understand at least 150 Mbps.
2. Disk Write Speed:
- It seems the only change that can be made is to implement disk cache 
options. And so far I see write-back seems to be common practice for most cloud 
providers, given that they have the necessary power redundancy and VM backup 
images in place.

But for now, other than write cache types, is there anything else that can be 
done to improve disk write speeds? I checked Red Hat guides on optimising VMs 
and I seem to have most in place, but write speeds remain at ~50 Mbps.
On 7/10/23 06:25, Jithin Raju wrote:

Hi  Groete,

The VM virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM's compute 
offering. If it is not specified in the compute offering, it takes 
the value from the global setting vm.network.throttling.rate.



-Jithin

From: Levin Ng
Date: Sunday, 9 July 2023 at 5:00 AM
To:users@cloudstack.apache.org  , Granwille 
Strauss
Cc:vivek.ku...@indiqus.com  , Nux
Subject: Re: Write Speeds
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes/second; you have to divide by 8:

1Gbps / 8bit = 128MB = 128000KBps

You can verify by iperf test, and yes, you need to ensure both VR and VM match 
bandwidth settings to get a consistent result. You also need to pay 
attention to VR resources; the default system router offering is quite 
limited, and the network speed may be throttled if the VR runs out of CPU resource.

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss, wrote:

Hi Levin
Thank you very much, I do appreciate your feedback and time replying back to 
me. I believe I have picked up on something. My VMs XML dump ALL show the 
following:

[interface/bandwidth XML stripped by the list archive]

Specifically:
   average='128000' peak='128000'

All my VM has this in place, a 128 Mbps limit even the VR, which makes no 
sense. I found this thread posted 10 years ago and Nux says that this 
value is affected by the service offerings: 
https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
But all my service offerings are se

Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Jorge

Thank you so much for this. I used your FIO config and surprisingly it 
seems fine:


write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W) 
1300MiB-1300MiB, (T) 1300MiB-1300MiB, ioengine=libaio, iodepth=1

fio-3.19

Run status group 0 (all jobs):
   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s 
(1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
  WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s (336MB/s-336MB/s), 
io=1300MiB (1363MB), run=4052-4052msec


This is without enabling io_uring. I see I can enable it per VM using 
the UI by setting the io.policy = io_uring. Will enable this on a few 
VMs and see if it works better.


On 7/10/23 15:41, Jorge Luiz Correa wrote:

Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
the XML of virtual machines looking at disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to do some tests at some points like your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

First run (a little bit slow if using "thin" because need to allocate space
in qcow2):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly in local SSD of host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

OBS.: parameters of fio test need to be changed to test in your environment
as it depends on the number of cpus, memory, --bs, --iodepth etc.

Host is running 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is
4.17.2. So, VM in local SSD of host could have very similar disk
performance from the host.

I hope this could help you!

Thanks.


--
Regards / Groete
Granwille Strauss // Senior Systems Admin


Re: Write Speeds

2023-07-10 Thread Levin Ng
Hi Groete,

I'm not sure what your network setting in ACS is, but a test between two public 
IPs at ~500 Mbps sounds like you are saturating a single network path with 
inbound/outbound traffic. Can you do a test from outside ACS to your VM using an 
IP in the same public network segment? It would avoid routing confusion. 
What is your ACS network driver? If VXLAN, better check the network 
switch's multicast performance.

The remote performance clearly shows the ISP puts some limit on the line; you 
have to check with them. Unless your line is end-to-end (Metro Ethernet, etc.), 
guaranteed throughput is not a given.


On the disk performance, could you share your fio test command and result 
first? I'm assuming you are doing something like

fio -filename=./testfile.bin -direct=1 -iodepth 8 -thread -rw=randrw 
-rwmixread=50 -ioengine=psync -bs=4k -size=1000M -numjobs=30 -runtime=600 
-group_reporting -name=mytest


Regards,
Levin
On 10 Jul 2023 at 11:33 +0100, Granwille Strauss , wrote:
> Hi Guys
> Thank you, I have been running more tests now with the feedback you guys 
> gave. Firstly, I want to break this up into two sections:
> 1. Network:
> - So I have been running iperf tests between my VMs on their public network, 
> and my iperf tests give me speeds of ~500 Mbps; keep in mind this is between 
> two local VMs on the same KVM host but on the public network.
> - I then ran iperf tests in and out from my local VMs to remote servers; this 
> is where it does funny things. From the remote VM in the USA, I run an iperf 
> test to my local VM, and the speeds show ~50 Mbps. And if I run a test from my 
> local VM to a remote USA VM, the same ~50 Mbps speeds are achieved. I ran my 
> iperf tests with 1 GB and 2 GB flags and the results remain constant.
> - During all these tests I kept an eye on my VR resources, which use default 
> service offerings; they never spiked or reached thresholds.
> Is it safe to assume that because of the MASSIVE distance between the remote 
> VM and my local VMs, the speed dropping to ~50 Mbps is normal? Keep in mind 
> the remote VM has a 1 Gbps line too and is managed by a big ISP provider in 
> the USA. To me it's quite a massive drop from 1000 Mbps to 50 Mbps; this kinda 
> does not make sense to me. I would understand at least 150 Mbps.
> 2. Disk Write Speed:
> - It seems the only change that can be made is to implement disk cache 
> options. And so far I see write-back seems to be common practice for most 
> cloud providers, given that they have the necessary power redundancy and VM 
> backup images in place.
>
> But for now, other than write cache types, is there anything else that can be 
> done to improve disk write speeds? I checked Red Hat guides on optimising 
> VMs and I seem to have most in place, but write speeds remain at ~50 Mbps.
> On 7/10/23 06:25, Jithin Raju wrote:
> > Hi  Groete,
> >
> > The VM virtual NIC network throttling is picked up from its compute 
> > offering. You may need to create a new compute offering and change the VM’s 
> > compute offering. If it is not specified in the compute offering, it takes 
> > the value from the global setting vm.network.throttling.rate.
> >
> >
> >
> > -Jithin
> >
> > From: Levin Ng 
> > Date: Sunday, 9 July 2023 at 5:00 AM
> > To: users@cloudstack.apache.org , Granwille 
> > Strauss 
> > Cc: vivek.ku...@indiqus.com , Nux 
> > Subject: Re: Write Speeds
> > Dear Groete,
> >
> > https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java
> >
> > https://libvirt.org/formatnetwork.html#quality-of-service
> >
> > This is in kilobytes/second; you have to divide by 8
> >
> > 1Gbps / 8bit = 128MB = 128000KBps
> >
> > You can verify by iperf test, and yes, you need to ensure both VR and VM 
> > match bandwidth settings to get a consistent result. You also need to pay 
> > attention to VR resources; the default system router offering is quite 
> > limited, and the network speed may be throttled if the VR runs out of CPU 
> > resource.
> >
> > Regards,
> > Levin
> >
> >
> >
> > On 8 Jul 2023 at 09:02 +0100, Granwille Strauss , 
> > wrote:
> > > Hi Levin
> > > Thank you very much, I do appreciate your feedback and time replying back 
> > > to me. I believe I have picked up on something. My VMs XML dump ALL show 
> > > the following:
> > > [interface XML stripped by the list archive; surviving fragments:
> > > bridge='cloudbr0', bus='0x00' slot='0x03' function='0x0', and
> > > average='128000' peak='128000']
> > > Specifically:
> > >   average='128000' peak='128000'
> > > All my VM has this in place, a 128 Mbps limit even the VR, which makes no 
> > > sense. I found this thread posted 10 years ago and Nux says that this 
> > > value is affected by the service offerings: 
> > > https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
> > >  But all my service offerings are se

Re: Write Speeds

2023-07-10 Thread Jorge Luiz Correa
Hi Granwille! About the READ/WRITE performance, as Levin suggested, check
the XML of the virtual machines, looking at the disk/device section. Look for
io='io_uring'.

As stated here:

https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to run some tests in setups similar to your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]


fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD host)
Default disk offering, thin (no restriction)


[libvirt <disk> XML stripped by the list archive; what survives: emulator
/usr/bin/qemu-system-x86_64 and serial fb46fd2c59bd4127851b]

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

First run (a little slow when using "thin", because space still needs to be
allocated in the qcow2):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly in local SSD of host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
--filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G
--readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

Note: the fio parameters need to be adapted to your environment,
as results depend on the number of CPUs, memory, --bs, --iodepth, etc.

Host is running 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is
4.17.2. So, VM in local SSD of host could have very similar disk
performance from the host.

I hope this could help you!

Thanks.



RE: Async backup

2023-07-10 Thread Nikolaos Tsinganos
Hi Granwille, 

  

Thanks for clarifying this!

  

  

Regards, 

Nikolaos

  

  

From: Granwille Strauss  
Sent: Monday, July 10, 2023 2:21 PM
To: users@cloudstack.apache.org
Cc: Nikolaos Tsinganos 
Subject: Re: Async backup

  

Correct me if I am wrong: all snapshots make a local copy on the management 
server disk before sending it to the secondary storage location, async or not. 
But yes, I believe that is the case. However, when async is enabled you can 
keep working on other tasks with the management server while it does the 
snapshot process in the background, whereas with async disabled you are 
restricted from doing other tasks on the management server until the snapshot 
process completes. That's how I understand it. 

On 7/10/23 13:04, Nikolaos Tsinganos wrote:



Re: Async backup

2023-07-10 Thread Granwille Strauss
Correct me if I am wrong: all snapshots make a local copy on the 
management server disk before sending it to the secondary storage 
location, async or not. But yes, I believe that is the case. 
However, when async is enabled you can keep working on other tasks 
with the management server while it does the snapshot process in the 
background, whereas with async disabled you are restricted from doing 
other tasks on the management server until the snapshot process 
completes. That's how I understand it.


On 7/10/23 13:04, Nikolaos Tsinganos wrote:

Thank you, Granwille for your prompt answer.

Can you elaborate on the "locally"  and "storage location" terms?
On a host that has multiple primary storages (e.g. NFS and some other storage 
over iSCSI)  but no local disks are used, what is considered as 'locally'?
Also, by "storage location"  do you mean the secondary storage?

Regards,
Nikolaos

From: Granwille Strauss  
Sent: Monday, July 10, 2023 1:48 PM

To:users@cloudstack.apache.org
Cc: Nikolaos Tsinganos
Subject: Re: Async backup

In short and in layman's terms, it makes the volume snapshot, stores it locally 
and then in the background transfers it to the storage location. This helps 
with server resource usage. But see attached screenshot for a detailed answer.
On 7/10/23 12:38, Nikolaos Tsinganos wrote:
Hi All,

Can somebody explain what the "Async backup" option does while taking volume 
snapshot?

Regards,
Nikolaos


--
Regards / Groete
Granwille Strauss // Senior Systems Admin


RE: Async backup

2023-07-10 Thread Nikolaos Tsinganos
Thank you, Granwille for your prompt answer.

Can you elaborate on the "locally"  and "storage location" terms?
On a host that has multiple primary storages (e.g. NFS and some other storage 
over iSCSI)  but no local disks are used, what is considered as 'locally'?
Also, by "storage location"  do you mean the secondary storage?

Regards, 
Nikolaos

From: Granwille Strauss  
Sent: Monday, July 10, 2023 1:48 PM
To: users@cloudstack.apache.org
Cc: Nikolaos Tsinganos 
Subject: Re: Async backup

In short and in layman's terms, it makes the volume snapshot, stores it locally 
and then in the background transfers it to the storage location. This helps 
with server resource usage. But see attached screenshot for a detailed answer. 
On 7/10/23 12:38, Nikolaos Tsinganos wrote:
Hi All, 

Can somebody explain what the "Async backup" option does while taking volume 
snapshot?

Regards, 
Nikolaos

-- 
Regards / Groete
Granwille Strauss // Senior Systems Admin




Re: Async backup

2023-07-10 Thread Granwille Strauss
In short and in layman's terms, it makes the volume snapshot, stores it 
locally and then in the background transfers it to the storage location. 
This helps with server resource usage. But see attached screenshot for a 
detailed answer.


On 7/10/23 12:38, Nikolaos Tsinganos wrote:

Hi All,

Can somebody explain what the "Async backup" option does while taking volume 
snapshot?

Regards,
Nikolaos


--
Regards / Groete
Granwille Strauss // Senior Systems Admin


Async backup

2023-07-10 Thread Nikolaos Tsinganos
Hi All, 

Can somebody explain what the "Async backup" option does while taking a volume 
snapshot?

Regards, 
Nikolaos



Re: Write Speeds

2023-07-10 Thread Granwille Strauss

Hi Guys

Thank you, I have been running more tests now with the feedback you guys 
gave. Firstly, I want to break this up into two sections:


1. Network:

- So I have been running iperf tests between my VMs on their public 
network, and my iperf tests give me speeds of ~500 Mbps; keep in mind 
this is between two local VMs on the same KVM host but on the public network.


- I then ran iperf tests in and out from my local VMs to remote servers, 
and this is where it does funny things. From the remote VM in the USA, I run an 
iperf test to my local VM and the speeds show ~50 Mbps. If I run a test 
from my local VM to a remote USA VM, the same ~50 Mbps speeds are 
reached. I ran my iperf tests with 1 GB and 2 GB transfer sizes and the 
results remain constant.


- During all these tests I kept an eye on my VR resources (it uses the 
default service offering); it never spiked or reached thresholds.


Is it safe to assume that because of the MASSIVE distance between the 
remote VM and my local VMs, the speed dropping to ~50 Mbps is normal? 
Keep in mind the remote VM has a 1 Gbps line too, and this VM is managed by 
a big ISP in the USA. To me it's quite a massive drop from 1000 
Mbps to 50 Mbps; this kind of drop does not make sense to me. I would 
understand at least 150 Mbps.
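One thing worth ruling out first: a single TCP stream over a long path is 
limited by the TCP window divided by the round-trip time, so at ~200 ms RTT 
even a healthy 1 Gbps path can show ~50 Mbps per stream (e.g. a 1 MB effective 
window / 0.2 s ≈ 40 Mbps). A quick test of this theory (remote host name 
illustrative) is to run several parallel streams with a larger window and see 
whether the aggregate climbs:

    # 8 parallel streams, 4 MB socket buffer, 30-second run
    iperf3 -c remote.example.com -P 8 -w 4M -t 30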


2. Disk Write Speed:

- It seems the only change that can be made is to implement disk cache 
options. And so far write-back seems to be common practice for 
most cloud providers, given that they have the necessary power 
redundancy and VM backup images in place.


But for now, other than write-cache types, is there anything else that 
can be done to improve disk write speeds? I checked the Red Hat guides on 
optimising VMs and I seem to have most of it in place, but write speeds remain 
at ~50 Mbps.
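For reference, a minimal sketch of what I would expect the disk driver line in 
the domain XML to look like once a write-back offering is applied (VM name and 
qcow2 format assumed from the earlier dumps):

    # inspect the current disk driver settings
    virsh dumpxml i-2-120-VM | grep '<driver'
    # with write-back cache (and io_uring) enabled, expect something like:
    #   <driver name='qemu' type='qcow2' cache='writeback' io='io_uring'/>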


On 7/10/23 06:25, Jithin Raju wrote:

Hi Groete,

The VM virtual NIC network throttling is picked up from its compute offering. 
You may need to create a new compute offering and change the VM's compute 
offering. If it is not specified in the compute offering, it takes 
the value from the global setting vm.network.throttling.rate.
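For example, a sketch via CloudMonkey (names, sizes and UUIDs illustrative), 
assuming createServiceOffering's networkrate parameter, which is in Mbit/s:

    cmk create serviceoffering name=1g-net displaytext="2x2GHz, 4GB, 1Gbps NIC" \
        cpunumber=2 cpuspeed=2000 memory=4096 networkrate=1000
    # the VM must be stopped before changing its offering
    cmk change serviceforvirtualmachine id=<vm-uuid> serviceofferingid=<offering-uuid>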



-Jithin

From: Levin Ng
Date: Sunday, 9 July 2023 at 5:00 AM
To: users@cloudstack.apache.org, Granwille Strauss
Cc: vivek.ku...@indiqus.com, Nux
Subject: Re: Write Speeds
Dear Groete,

https://github.com/shapeblue/cloudstack/blob/965856057d5147f12b86abe5c9c205cdc5e44615/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/DirectVifDriver.java

https://libvirt.org/formatnetwork.html#quality-of-service

This is in kilobytes/second; you have to divide by 8:

1 Gbps / 8 = 128 MB/s = 128000 KBps

You can verify with an iperf test, and yes, you need to ensure the VR and VM 
bandwidth settings match to get a consistent result. Something else to pay 
attention to is VR resources: the default system router offering is quite 
limited, and network speed may be throttled if the VR runs out of CPU.
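A quick way to check what libvirt actually applied (domain and vNIC names 
illustrative; take the interface name from domiflist):

    # list the VM's interfaces, then read the QoS applied to one of them
    virsh domiflist i-2-120-VM
    virsh domiftune i-2-120-VM vnet0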

Regards,
Levin



On 8 Jul 2023 at 09:02 +0100, Granwille Strauss, wrote:

Hi Levin
Thank you very much, I do appreciate your feedback and the time you took to 
reply. I believe I have picked up on something. My VMs' XML dumps ALL show the 
following:

<bandwidth>
  <inbound average='128000'/>
  <outbound average='128000'/>
</bandwidth>

Specifically:

<inbound average='128000'/>
All my VMs have this in place, a 128 Mbps limit, even on the VR, which makes no 
sense. I found this thread posted 10 years ago where Nux says that this value is 
affected by the service 
offerings: https://users.cloudstack.apache.narkive.com/T6Gx7BoV/cloudstack-network-limitation
But all my service offerings are set to 1000 Mbps. See the attached screenshots. 
The 4.18 documentation also confirms that if the values are null, which they are 
in most default service offerings, it takes the values set in the global settings 
network.throttling.rate and vm.network.throttling.rate, which I also have set 
to 1000 as you can see in the screenshots.
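A quick way to confirm what those global settings actually hold (a CloudMonkey 
sketch):

    cmk list configurations name=network.throttling.rate
    cmk list configurations name=vm.network.throttling.rate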

I then found 
this: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Network+throttling+in+CloudStack
but it has no KVM details; the part that would tell me how KVM throttling 
is applied seems to be missing. So, as DuJun said 10 years ago, I feel confused 
about how CloudStack limits the network rate for guests. And yes, I have stopped 
and restarted my VMs MANY times; it doesn't update the XML at all.
Please also take into account that the documentation states that in shared 
networking there are supposed to be no limits on incoming traffic (ingress), as 
far as I understand it.
On 7/7/23 23:35, Levin Ng wrote:

Hi Groete,


IMO, you should bypass ACS provisioning and troubleshoot the performance 
case on plain KVM first, which gives you a better idea of the hardware + KVM 
performance with minimal interference; then you can compare the libvirt XML 
differences between plain KVM and ACS. That helps you sort out where the 
difference comes from, and you will see the QoS bandwidth setting in the VM XML 
if you do.
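Something like this (domain names illustrative):

    # dump both definitions and compare them side by side
    virsh dumpxml plain-kvm-vm > plain.xml
    virsh dumpxml i-2-120-VM > acs.xml
    diff -u plain.xml acs.xml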

We are trying to tell you: when you diagnose a throughput problem, you should 
first identify where the bottleneck comes from. Iperf is a tool that yo

RE: ACS with vmware hypervisors

2023-07-10 Thread Gary Dixon
Hi Jithin

This is the odd thing: when we try to add the vCenter datastore to ACS as 
primary storage, we do not have the 'preSetup' protocol option in the "add 
primary storage" UI.



Gary Dixon
Senior Technical Consultant
-Original Message-
From: Jithin Raju 
Sent: Monday, July 10, 2023 10:54 AM
To: users@cloudstack.apache.org
Subject: Re: ACS with vmware hypervisors

Hi Gary,

I am unable to tell the cause of the VM deployment failures with the log 
snippets below.
Could you try adding the storage as a datastore in vCenter and adding it to 
CloudStack as ‘presetup’?

-Jithin

From: Gary Dixon 
Date: Monday, 10 July 2023 at 2:12 PM
To: users@cloudstack.apache.org 
Subject: RE: ACS with vmware hypervisors

Hi Jithin

We are using ACS 4.15.2 and vSphere ESXi v7.0.3

This is the log output for job-42701:

2023-07-07 14:10:48,968 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:717a5506) Add job-42701 
into job monitoring
2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; id=1
2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
2023-07-07 14:10:49,210 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:96c5f242) Remove job-42701 
from job monitoring

Do we also need to add the iSCSI datastore in vCenter as primary storage in 
the CloudStack UI?

BR

Gary



-Original Message-
From: Jithin Raju 
Sent: Monday, July 10, 2023 5:12 AM
To: users@cloudstack.apache.org
Subject: Re: ACS with vmware hypervisors

Hi Gary,

What are the ACS and Vmware ESXi versions you are using? Could you share the 
entire logs for this day or job-42701?

-Jithin

From: Gary Dixon 
Date: Friday, 7 July 2023 at 8:49 PM
To: users@cloudstack.apache.org 
Subject: ACS with vmware hypervisors






I was wondering if anyone has any experience with ACS and vmware ESXi as the 
hypervisor? I'm facing a problem when trying to deploy a new/fresh instance.

I've deployed a vCenter appliance, created a data centre, cluster(s) and the 
hosts have all been added to ACS. When I attempt to deploy a fresh instance to 
the vmware cluster/hosts to build the OS from an ISO, the following errors are 
displayed/logged:

UI Error:

Unable to create a deployment for VM[User|i-2-3207-VM]

Management Log:

About halfway into the error, "at 
com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)" is logged.

2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; 
id=1 at 
org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:225)
at 
org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:202)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4937)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2897)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(Aop

Re: ACS with vmware hypervisors

2023-07-10 Thread Jithin Raju
Hi Gary,

I am unable to tell the cause of the VM deployment failures with the log 
snippets below.
Could you try adding the storage as a datastore in vCenter and adding it to 
CloudStack as ‘presetup’?
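If the UI does not offer the protocol, the createStoragePool API may still 
accept it; a rough CloudMonkey sketch (all IDs and the presetup:// URL form 
are assumptions from the docs, with the path being the vCenter datastore name):

    cmk create storagepool zoneid=<zone-uuid> podid=<pod-uuid> \
        clusterid=<cluster-uuid> scope=cluster hypervisor=VMware \
        name=iscsi-ds1 url=presetup://localhost/<datastore-name>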

-Jithin

From: Gary Dixon 
Date: Monday, 10 July 2023 at 2:12 PM
To: users@cloudstack.apache.org 
Subject: RE: ACS with vmware hypervisors
Hi Jithin

We are using ACS 4.15.2 and vSphere ESXi v7.0.3

This is the log output for job-42701:

2023-07-07 14:10:48,968 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:717a5506) Add job-42701 
into job monitoring
2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; id=1
2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
2023-07-07 14:10:49,210 INFO [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:96c5f242) Remove job-42701 
from job monitoring

Do we also need to add the iSCSI datastore in vCenter as primary storage in 
the CloudStack UI?

BR

Gary


-Original Message-
From: Jithin Raju 
Sent: Monday, July 10, 2023 5:12 AM
To: users@cloudstack.apache.org
Subject: Re: ACS with vmware hypervisors

Hi Gary,

What are the ACS and Vmware ESXi versions you are using? Could you share the 
entire logs for this day or job-42701?

-Jithin

From: Gary Dixon 
Date: Friday, 7 July 2023 at 8:49 PM
To: users@cloudstack.apache.org 
Subject: ACS with vmware hypervisors






I was wondering if anyone has any experience with ACS and vmware ESXi as the 
hypervisor? I'm facing a problem when trying to deploy a new/fresh instance.

I've deployed a vCenter appliance, created a data centre, cluster(s) and the 
hosts have all been added to ACS. When I attempt to deploy a fresh instance to 
the vmware cluster/hosts to build the OS from an ISO, the following errors are 
displayed/logged:

UI Error:

Unable to create a deployment for VM[User|i-2-3207-VM]

Management Log:

About halfway into the error, "at 
com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)" is logged.

2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; 
id=1 at 
org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:225)
at 
org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:202)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4937)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2897)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at 
org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186

RE: ACS with vmware hypervisors

2023-07-10 Thread Gary Dixon
Hi Jithin

We are using ACS 4.15.2 and vSphere ESXi v7.0.3

This is the log output for job-42701:

2023-07-07 14:10:48,968 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:717a5506) Add job-42701 
into job monitoring
2023-07-07 14:10:49,189 INFO  [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; id=1
2023-07-07 14:10:49,189 INFO  [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
2023-07-07 14:10:49,210 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(API-Job-Executor-13:ctx-36699a50 job-42701) (logid:96c5f242) Remove job-42701 
from job monitoring

Do we also need to add the iSCSI datastore in vCenter as primary storage in 
the CloudStack UI?

BR

Gary


-Original Message-
From: Jithin Raju 
Sent: Monday, July 10, 2023 5:12 AM
To: users@cloudstack.apache.org
Subject: Re: ACS with vmware hypervisors

Hi Gary,

What are the ACS and Vmware ESXi versions you are using? Could you share the 
entire logs for this day or job-42701?

-Jithin

From: Gary Dixon 
Date: Friday, 7 July 2023 at 8:49 PM
To: users@cloudstack.apache.org 
Subject: ACS with vmware hypervisors






I was wondering if anyone has any experience with ACS and vmware ESXi as the 
hypervisor? I'm facing a problem when trying to deploy a new/fresh instance.

I've deployed a vCenter appliance, created a data centre, cluster(s) and the 
hosts have all been added to ACS. When I attempt to deploy a fresh instance to 
the vmware cluster/hosts to build the OS from an ISO, the following errors are 
displayed/logged:

UI Error:

Unable to create a deployment for VM[User|i-2-3207-VM]

Management Log:

About halfway into the error, "at 
com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)" is logged.

2023-07-07 14:10:49,189 INFO [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; 
id=1 at 
org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:225)
at 
org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:202)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4937)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2897)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at 
org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source) at 
org.apache.cloudstack.api.command.user.vm.StartVMCmd.execute(StartVMCmd.java:169)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:156)
at com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.jav