Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,


IMO, you should bypass ACS provisioning and troubleshoot the performance case 
first. That gives you a better idea of the hardware + KVM performance with 
minimal interference, and then you can compare the libvirt XML differences 
between a plain KVM guest and an ACS-provisioned one. That helps you sort out 
where the difference comes from; you will see the QoS bandwidth settings in 
the VM XML if you do.
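
For example (a minimal sketch; the domain names here are made up), dump both 
guests' XML and diff them:

    # Dump the libvirt XML of an ACS-managed guest and a plain KVM guest
    virsh dumpxml i-2-42-VM > acs-vm.xml        # ACS-style name, illustrative only
    virsh dumpxml plain-test-vm > plain-vm.xml
    diff -u plain-vm.xml acs-vm.xml

    # Throttling elements to look for in the ACS guest (values are examples):
    #
    #   <interface type='bridge'>
    #     <bandwidth>
    #       <inbound average='125000'/>    <!-- KB/s, i.e. ~1 Gbps -->
    #       <outbound average='125000'/>
    #     </bandwidth>
    #   </interface>
    #
    #   <disk type='file' device='disk'>
    #     <iotune>
    #       <write_iops_sec>1000</write_iops_sec>
    #     </iotune>
    #   </disk>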

We are trying to tell you that when you diagnose a throughput problem, you 
should first identify where the bottleneck comes from. iperf is a tool that 
lets you test the line speed end to end into your VM; if the result on a 
1 Gbps network is near 800+ Mbps, you can focus on the VM performance or on 
the copy protocol you are using. Try different protocols, ssh/rsync/ftp/nfs, 
and see if there is any difference.
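
A minimal sketch of the iperf end-to-end test, assuming iperf3 is installed 
on both ends (the address is a placeholder):

    # On the VM (server side):
    iperf3 -s

    # On the remote machine (client side), a 30-second test:
    iperf3 -c 203.0.113.10 -t 30

    # Same test in the reverse direction (VM sends, client receives):
    iperf3 -c 203.0.113.10 -t 30 -R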

You have already tested write-back caching, which improves disk I/O 
performance. It is another story to dig into the pros and cons of the write 
cache: there is a risk of corrupting the VM filesystem in some cases, for 
example if the host crashes or loses power before cached writes are flushed. 
This is what you need to learn about each cache mode.
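
In libvirt terms the cache mode is the cache attribute on the disk driver 
element; a sketch of what it looks like in the guest XML (the file path is 
illustrative):

    <disk type='file' device='disk'>
      <!-- 'writeback' buffers writes in the host page cache (fast, but data
           still in the cache is lost on a host crash or power failure);
           'none' bypasses the host cache; 'writethrough' is safer but slower -->
      <driver name='qemu' type='qcow2' cache='writeback'/>
      <source file='/var/lib/libvirt/images/example-disk.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>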

VM guest performance is affected by many factors; you cannot expect a VM to 
perform nearly as well as bare metal does. Such optimization is a long 
journey, so take your time and improve it gradually. There are plenty of KVM 
tuning guides you can reference and verify on your own hardware. Read each 
tuning option thoughtfully: it may bring improvement, but it can also 
introduce risk.


Regards,
Levin





Re: Write Speeds

2023-07-07 Thread Granwille Strauss
Sorry that I have to ask, but can you perhaps be a bit more specific, please? 
The only QoS settings I see in the service offering are "None", "Hypervisor" 
and "Storage", which don't really seem network related. Or am I missing the 
point? Take note that I use the default offerings for the VR and VMs, but 
with slight tweaks such as setting local storage etc., and I only increased 
the network rate from 200 Mbps to 1000 Mbps.


So can you kindly explain which QoS settings you are referring to, please?


PS: the write-back disk caching seems to give the VM a slight increase; I now 
see writes at 190 Mbps, up from ~70 Mbps.
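
(For reference, a hedged sketch of where that network rate lives on the API 
side, assuming CloudMonkey; the offering name and values are illustrative 
only:)

    # Inspect the network rate on existing compute offerings
    cmk list serviceofferings filter=name,networkrate

    # The rate is set when an offering is created, e.g.
    cmk create serviceoffering name=example-1g displaytext="example 1 Gbps" \
        cpunumber=2 cpuspeed=2000 memory=4096 networkrate=1000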


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin
Re: Write Speeds

2023-07-07 Thread Vivek Kumar
Hello, 

iperf will simply tell you the bandwidth of the open pipe between 2 VMs, so I 
don't think it depends on disk performance; it's better to check the network 
QoS at every layer, VR and VM.



Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 





Re: Write Speeds

2023-07-07 Thread Granwille Strauss

Hi Levin

Thank you, I am aware of the network offering; the first thing I did was 
make sure it was set to accommodate the KVM host's entire 1 Gbps uplink. But 
now that I think of it, previous iperf tests were always stuck at 50 Mbps. I 
believe that is because of the write speeds on the disk, at least that's 
what I think causes the network bottleneck. I will double-check this again.


But there is some sort of limit on the VM disk in place. fio tests show that 
write speeds are in the range of 50 - 90 MB/s on the VM, while the same fio 
test on the KVM host confirms it's over 400 MB/s.
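
(For anyone comparing: a minimal sketch of such a fio write test, run 
identically on the VM and on the host; the file name and sizes are 
arbitrary:)

    # Sequential 1 MiB writes with direct I/O, bypassing the page cache
    fio --name=seqwrite --filename=/tmp/fio-testfile --rw=write --bs=1M \
        --size=1G --direct=1 --numjobs=1 --ioengine=libaio --group_reporting

    # Remove the test file afterwards
    rm /tmp/fio-testfile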


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin

Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,

Forgot to mention: when you are talking about file copies from a remote 
server, you need to be aware that there are network QoS options in the 
offering, so make sure the limits are correct. Do an iperf test to prove that 
too, testing between the remote server and the VM via the virtual router. 
Hope you can narrow down the problem soon.

Regards,
Levin


Re: Write Speeds

2023-07-07 Thread Granwille Strauss

Hi Levin

Thank you, yes I leave IOPS empty. And the KVM host has SSDs in a hardware 
RAID 5 configuration, of which I am using a local storage pool, yes. I will 
run fio tests and also play around with the controller cache settings to see 
what happens, and provide feedback on this soon.


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin

Re: Write Speeds

2023-07-07 Thread Levin Ng
Hi Groete,

You should run a fio test on the VM and on the KVM host to get a baseline 
first. SSDs are tricky devices: when the cache fills up or the drive is 
nearly full, performance drops significantly, especially on consumer-grade 
SSDs. There is an option to limit IOPS in the ACS offering settings; I 
believe you left it empty, so there is no limit. When you talk about the KVM 
host using SSDs, I think you are using a local disk pool, right? If you have 
a RAID controller underneath, try toggling the controller cache; SSDs can 
perform very differently under different disk controller cache settings.

Controller type scsi or virtio makes little difference to performance, so no 
need to worry about it. Of course, in general, using RAW format with thick 
provisioning gives the best I/O performance, but it consumes more space and 
lacks snapshot capability, so most of the time it is not the preferred path.
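
A hedged sketch of that trade-off at the image level (paths and sizes are 
examples; ACS normally creates these for you):

    # Thin-provisioned qcow2 (the usual default): cheap to create, snapshot-capable
    qemu-img create -f qcow2 /var/lib/libvirt/images/thin-disk.qcow2 50G

    # Fully preallocated raw image: best I/O, but no qcow2-style snapshots
    # and the full 50G is consumed up front
    qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/thick-disk.img 50G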

Please gather more information first.

Regards,
Levin


ACS with vmware hypervisors

2023-07-07 Thread Gary Dixon

I was wondering if anyone has any experience with ACS and VMware ESXi as the 
hypervisor? I'm facing a problem when trying to deploy a new/fresh instance.

I've deployed a vCenter appliance, created a data centre and cluster(s), and 
the hosts have all been added to ACS. When I attempt to deploy a fresh 
instance to the VMware cluster/hosts to build the OS from an ISO, the 
following errors are displayed/logged:

UI Error:

Unable to create a deployment for VM[User|i-2-3207-VM]

Management Log:

About halfway into the stack trace, "at 
com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)" is logged.

2023-07-07 14:10:49,189 INFO  [o.a.c.a.c.u.v.StartVMCmd] 
(API-Job-Executor-13:ctx-36699a50 job-42701 ctx-a057c849) (logid:96c5f242) 
Unable to create a deployment for VM[User|i-2-3207-VM]
com.cloud.exception.InsufficientServerCapacityException: Unable to create a 
deployment for VM[User|i-2-3207-VM]Scope=interface com.cloud.dc.DataCenter; id=1
at 
org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:225)
at 
org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:202)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:4937)
at 
com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2897)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at 
org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at 
com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy181.startVirtualMachine(Unknown Source)
at 
org.apache.cloudstack.api.command.user.vm.StartVMCmd.execute(StartVMCmd.java:169)
at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:156)
at 
com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:108)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:620)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:568)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)



Gary Dixon
Senior Technical Consultant
T:  +44 161 537 4990
E:  v...@quadris-support.com
W: www.quadris.co.uk

Write Speeds

2023-07-07 Thread Granwille Strauss

Hi Guys

Does CloudStack have a disk write speed limit somewhere in its settings? We 
have been transferring many files from remote servers to VMs on our 
CloudStack instance, and we recently noticed that the VM write speeds are all 
limited to about 5-8 MB/s. But the underlying hardware of the KVM host uses 
SSDs capable of write speeds of 300 - 600 MB/s. The disk offering on my 
current VMs is set to "No Disk Cache" with thin provisioning; could this be 
the reason? I understand that "Write Back Disk Cache" has better write 
speeds. Also, I have the VMs set to virtio for the disk controller. What 
could I be missing in this case?
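
(For anyone reproducing this: a minimal direct-I/O write check inside the 
guest, assuming GNU dd; the output path is arbitrary, and oflag=direct 
bypasses the guest page cache so the cache mode doesn't skew the number:)

    # Write 1 GiB with direct I/O and report throughput
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct status=progress
    rm /tmp/ddtest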


--
Regards / Groete

Granwille Strauss  //  Senior Systems Admin


Re: Unable to ping System VM in Advanced Zone (without SG)

2023-07-07 Thread Wei ZHOU
Hi,

Have you configured the switch ports to trunk mode ?

-Wei
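
A hedged sketch of how to check that from the host side (interface names are 
examples):

    # Watch for 802.1Q-tagged frames arriving on the physical NIC
    tcpdump -i eth0 -e -nn vlan

    # Show VLAN details on host interfaces (Linux)
    ip -d link show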

On Fri, 7 Jul 2023 at 07:58, Pratik Chandrakar 
wrote:

> Hi Rohit,
> I am facing another strange issue: the VM works properly only if the VM and
> the VR are placed on the same host. If the VM is placed on another host, it
> doesn't get an IP. I tried to manually add the IP on the VM, but it didn't
> connect to the network.
> I flushed the iptables rules and moved from docker to podman (for the Ceph
> OSDs) on the host, but I am still stuck on this issue.
>
>
> On Thu, Jul 6, 2023 at 11:48 AM Rohit Yadav 
> wrote:
>
> > Glad it helped Pratik.
> >
> > Regards.
> >
> > Regards.
> > 
> > From: Pratik Chandrakar 
> > Sent: Tuesday, July 4, 2023 12:42:14 PM
> > To: users@cloudstack.apache.org 
> > Subject: Re: Unable to ping System VM in Advanced Zone (without SG)
> >
> > Hi all,
> > Finally, I am able to resolve the issue and especially thank Mr Rohit
> Yadav
> > for his blog https://rohityadav.cloud/blog/cloudstack-kvm/
> > I was running Ceph OSD and Cloudstack on the same host and docker was
> > causing the issue.
> >
> > On Tue, Jun 20, 2023 at 10:35 AM Pratik Chandrakar <
> > chandrakarpra...@gmail.com> wrote:
> >
> > > Yes
> > >
> > > On Fri, Jun 16, 2023 at 5:19 PM Wei ZHOU 
> wrote:
> > >
> > >> Have you configured the vlan tag of public ip range ?
> > >>
> > >> -Wei
> > >>
> > >> On Friday, 16 June 2023, Pratik Chandrakar <
> chandrakarpra...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi,
> > >> > I am pinging the Public IP. I tried to ping the gateway from CPVM
> also
> > >> but
> > >> > was unable to connect to the gateway, whereas in the SG-enabled
> setup,
> > >> the
> > >> > same IP and configuration work. I have checked the switch as well
> and
> > I
> > >> am
> > >> > getting the MAC of the System VMs.
> > >> > Unable to identify this strange issue.
> > >> >
> > >> >
> > >> > On Fri, Jun 16, 2023 at 11:48 AM Wei ZHOU 
> > >> wrote:
> > >> >
> > >> > > Hi,
> > >> > >
> > >> > > System vms have multiple IPs, which IP do you ping to?
> > >> > >
> > >> > > -Wei
> > >> > >
> > >> > > On Thursday, 15 June 2023, Pratik Chandrakar <
> > >> chandrakarpra...@gmail.com
> > >> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Hi Wei,
> > >> > > > Switch ports are in trunk mode.
> > >> > > > VM on the same host is also not working even though its agent
> > state
> > >> is
> > >> > > > showing up.
> > >> > > > I think the Configuration is also correct because the same
> > >> > configuration
> > >> > > is
> > >> > > > working for SG-enabled Zone.
> > >> > > >
> > >> > > > On Thu, Jun 15, 2023 at 3:30 PM Wei ZHOU  >
> > >> > wrote:
> > >> > > >
> > >> > > > > Hi,
> > >> > > > >
> > >> > > > > If VMs on the same host work, but not on different hosts -
> this
> > >> is a
> > >> > > > > typical issue of network misconfiguration.
> > >> > > > > Please check if switch ports are trunk mode, hosts and
> > cloudstack
> > >> > > network
> > >> > > > > configurations are correct.
> > >> > > > >
> > >> > > > > -Wei
> > >> > > > >
> > >> > > > > On Thu, 15 Jun 2023 at 11:46, Pratik Chandrakar <
> > >> > > > > chandrakarpra...@gmail.com>
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > Hi all,
> > >> > > > > > In the Advanced Zone-SG Enabled deployment there is no issue
> > >> but on
> > >> > > the
> > >> > > > > > same setup when I deploy Advanced Zone-Without SG, I am not
> > >> able to
> > >> > > > ping
> > >> > > > > > System VMs from other client machines/CS hosts. Both Console
> > >> Proxy
> > >> > > and
> > >> > > > > > Secondary storage are UP and agent states are also UP in
> case
> > >> > > > Management
> > >> > > > > > Server and System VMs are deployed on the same host whereas
> > when
> > >> > > System
> > >> > > > > VMs
> > >> > > > > > are deployed on other hosts then it does not show any state
> in
> > >> > > > Management
> > >> > > > > > Server. I checked the logs but couldn't find any error on
> both
> > >> > > > management
> > >> > > > > > and agent logs.
> > >> > > > > > Environment Details are
> > >> > > > > >   ACS - 4.18
> > >> > > > > >   Host - Ubuntu 22.04.2
> > >> > > > > >
> > >> > > > > > Please guide me as I am unable to find any solution to this
> > >> strange
> > >> > > > > issue.
> > >> > > > > >
> > >> > > > > > --
> > >> > > > > > *Regards,*
> > >> > > > > > *Pratik Chandrakar*
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > > >
> > >> > > > --
> > >> > > > *Regards,*
> > >> > > > *Pratik Chandrakar*
> > >> > > >
> > >> > >
> > >> >
> > >> >
> > >> > --
> > >> > *Regards,*
> > >> > *Pratik Chandrakar*
> > >> >
> > >>
> > >
> > >
> > > --
> > > *Regards,*
> > > *Pratik Chandrakar*
> > >
> >
> >
> > --
> > *Regards,*
> > *Pratik Chandrakar*
> >
> >
> >
> >
>
> --
> *Regards,*
> *Pratik Chandrakar*
>