Re: Need help in adding primary storage

2023-05-15 Thread Vivek Kumar
Hello Hitesh,

In KVM you can use a shared mount point if you have the LUN mounted from the
backend on each server. To create a shared mount point across multiple KVM
hosts, you will need a clustering solution such as a PCS cluster (Pacemaker,
Corosync). If you can use NFS instead, that is the better choice for KVM: a
PCS cluster is hard to manage in the long term and requires dedicated, skilled
manpower.
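[Editor's note: as a hedged illustration of the two options above, primary storage can be added through the API once the pool is reachable from every host. All names, UUIDs, and addresses below are placeholders, and the exact URL form may vary by CloudStack version.]

```
# CloudMonkey sketch -- placeholder IDs and addresses, adjust to your zone.
cmk create storagepool zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
    name=primary-nfs url=nfs://10.0.0.5/export/primary \
    scope=cluster hypervisor=KVM

# For a LUN mounted on every host under a cluster filesystem, the
# SharedMountPoint protocol takes a local path instead, e.g.
#   url=SharedMountPoint:///mnt/primary
# (every host must already have the filesystem mounted at that path).
```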



Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 




> On 16-May-2023, at 6:47 AM, hitesh ingole  wrote:
> 
> Hello Team,
> 
> I am trying to set up a CloudStack environment with the KVM hypervisor on HP
> blade servers. These blade servers have LUN storage mounted using multipath.
> 
> I need to configure this device as primary storage.
> Which protocol and provider should I use?
> 
> Regards
> Hitesh Ingole


-- 


Need help in adding primary storage

2023-05-15 Thread hitesh ingole
Hello Team,

I am trying to set up a CloudStack environment with the KVM hypervisor on HP
blade servers. These blade servers have LUN storage mounted using multipath.

I need to configure this device as primary storage.
Which protocol and provider should I use?

Regards
Hitesh Ingole


Re: Enabling AVX support for guests CPUs

2023-05-15 Thread Wei ZHOU
Hi,

Can you check the agent.log on the host? It might be that some SandyBridge
features are not supported by the host CPU. If so, you need to disable each
unsupported CPU feature by adding it to guest.cpu.features with "-" as the
first character.

-Wei
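[Editor's note: a hedged agent.properties fragment illustrating the suggestion above. Which features to subtract depends on what the host CPU actually lacks; "-pcid" below is only a placeholder example.]

```properties
# /etc/cloudstack/agent/agent.properties (KVM host) -- illustrative values only
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
# Prefix a feature with "-" to remove it from the guest CPU definition,
# e.g. if the host CPU lacks pcid (placeholder feature name):
guest.cpu.features=-pcid
```

After editing, the cloudstack-agent service on the host needs a restart for the change to take effect.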


On Monday, 15 May 2023, Fariborz Navidan  wrote:

> Hi Wei,
>
> Thank you for your reply. I just checked the available CPU models in libvirt
> at /usr/share/libvirt/cpumap.xml with their features and found SandyBridge
> to have the avx feature. Then I edited the agent.properties file and changed
> guest.cpu.mode to custom and guest.cpu.model to SandyBridge in order to
> expose the avx feature to guests. Then I stopped one of the guest VMs and
> tried to start it. Unfortunately the guest VM now does not start, with the
> general error "Unable to create a deployment plan due to insufficient
> capacity." So I reverted the changes and the VM started again.
>
> What can be the reason for this problem?
>
> Best Regards.
>
> On Mon, May 15, 2023 at 2:05 AM Wei ZHOU  wrote:
>
> > Hi,
> >
> > Please refer to
> >
> > https://docs.cloudstack.apache.org/en/4.18.0.0/installguide/hypervisor/kvm.html#configure-cpu-model-for-kvm-guest-optional
> >
> > Kind regards,
> > Wei
> >
> > On Sunday, 14 May 2023, Fariborz Navidan  wrote:
> >
> > > Hello Community,
> > >
> > > We are running ACS 4.14 in one of our locations. We need to enable AVX
> > > support on guest CPUs. The CPU model used for guests is the QEMU
> > > virtual CPU. I'm wondering if it is possible to enable the AVX CPU flag
> > > for the virtual CPU on which the guests are running. Also, please let me
> > > know if it is possible to enable this flag virtually without the
> > > physical CPU supporting it; by this, I mean a software-defined
> > > implementation of the AVX feature.
> > >
> > > Thanks in advance.
> > > Regards.
> > >
> >
>
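[Editor's note: a likely cause of the "insufficient capacity" error above is that the host CPU lacks one of the features the SandyBridge model implies. A minimal, self-contained check; the sample flags are inlined so the snippet runs anywhere, and on a real host you would use the output of `grep -m1 '^flags' /proc/cpuinfo` instead.]

```shell
# Sample 'flags' value; replace with the real flags line from /proc/cpuinfo.
flags="fpu vme sse sse2 avx aes xsave"

# POSIX-sh membership test for a single flag:
case " $flags " in
  *" avx "*) echo "avx present" ;;
  *)         echo "avx missing" ;;
esac
```

`virsh cpu-models x86_64` can additionally list the named CPU models libvirt knows about; comparing that with the host flags shows which features would have to be subtracted via guest.cpu.features.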


Re: Enabling AVX support for guests CPUs

2023-05-15 Thread Fariborz Navidan
Hi Wei,

Thank you for your reply. I just checked the available CPU models in libvirt
at /usr/share/libvirt/cpumap.xml with their features and found SandyBridge
to have the avx feature. Then I edited the agent.properties file and changed
guest.cpu.mode to custom and guest.cpu.model to SandyBridge in order to
expose the avx feature to guests. Then I stopped one of the guest VMs and
tried to start it. Unfortunately the guest VM now does not start, with the
general error "Unable to create a deployment plan due to insufficient
capacity." So I reverted the changes and the VM started again.

What can be the reason for this problem?

Best Regards.

On Mon, May 15, 2023 at 2:05 AM Wei ZHOU  wrote:

> Hi,
>
> Please refer to
>
> https://docs.cloudstack.apache.org/en/4.18.0.0/installguide/hypervisor/kvm.html#configure-cpu-model-for-kvm-guest-optional
>
> Kind regards,
> Wei
>
> On Sunday, 14 May 2023, Fariborz Navidan  wrote:
>
> > Hello Community,
> >
> > We are running ACS 4.14 in one of our locations. We need to enable AVX
> > support on guest CPUs. The CPU model used for guests is the QEMU
> > virtual CPU. I'm wondering if it is possible to enable the AVX CPU flag
> > for the virtual CPU on which the guests are running. Also, please let me
> > know if it is possible to enable this flag virtually without the physical
> > CPU supporting it; by this, I mean a software-defined implementation of
> > the AVX feature.
> >
> > Thanks in advance.
> > Regards.
> >
>


Re: SSVM routing issue

2023-05-15 Thread Antoine Boucher
Hello,

Would anyone have clues on my ongoing SSVM issue below?

However, I can work around the issue by deleting my Storage Network traffic
definition and recreating the SSVM.

What would be the impact of deleting the Storage Network traffic definition on
other parts of the system? My Primary Storage configuration seems to be done
entirely as part of my hosts' static configuration.

Regards,
Antoine


> On May 11, 2023, at 10:27 AM, Antoine Boucher  wrote:
> 
> Good morning/afternoon/evening,
> 
> I am following up with my SSVM routing issue when a Storage Network is 
> defined.
> 
> I have a zone with Xen and KVM servers that have a Storage Network defined as 
> Cloudbr53 with a storage network-specific subnet (Cloudbr0 is also defined 
> for Management and Cloudbr1 for Guests)
> 
> The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts within the 
> specific storage ip subnet range. The Storage traffic type for the Zone is 
> defined with Cloudbr53 and VLAN as blank. 
> 
> You will see that the storage network route on the SSVM points to the wrong
> interface (eth1) when it should be eth3:
> 
> 10.101.6.0cloudrouter01.n 255.255.254.0   UG   00 0 eth1
> 
> root@s-394-VM:~# route
> Kernel IP routing table
> DestinationGateway  Genmask  Flags Metric Ref   Use Iface
> default  148.59.36.49   0.0.0.0  UG   00 0 eth2
> 10.0.0.0 cloudrouter01.n 255.0.0.0 UG   00 0 eth1
> 10.91.0.0 cloudrouter01.n 255.255.254.0   UG   00 0 eth1
> 10.91.6.0 cloudrouter01.n 255.255.255.0   UG   00 0 eth1
> 10.101.0.00.0.0.0  255.255.252.0   U00 0 eth1
> nimbus.haltondc 10.101.6.1255.255.255.255 UGH   00 0 eth3
> 10.101.6.0cloudrouter01.n 255.255.254.0   UG   00 0 eth1
> 148.59.36.48   0.0.0.0  255.255.255.240 U00 0 eth2
> link-local0.0.0.0  255.255.0.0U00 0 eth0
> 172.16.0.0cloudrouter01.n 255.240.0.0UG   00 0 eth1
> 192.168.0.0cloudrouter01.n 255.255.0.0UG   00 0 eth1
> 
> 
> I also tried defining the storage traffic type with VLAN 53; the VLAN/VNI
> column still shows blank, but it does change the routing to eth3. However,
> I experienced the same overall communication issue: traffic to the
> management network originates from the source IP on the storage network and
> dies on the return path, since I have no routing between the two networks.
> 
> However, as a workaround, if I remove the storage traffic definition on the 
> Zone, all traffic will be routed through the management network. All is well 
> if I allow my secondary storage (NFS) on the management network.
> 
> 
> 
> I’m using the host-configured “storage network” for primary storage on all my 
> Zones without issues.
> 
> What would be the potential issues of deleting the Storage Network traffic
> type definition in my zones, assuming I keep all my secondary storage on,
> or accessible from, the management network, and then recreate the SSVMs?
> 
> Is the storage definition only or mainly used for the SSVM?  
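[Editor's note: a hedged CloudMonkey sketch of the workaround described above, using the standard listTrafficTypes / deleteTrafficType / destroySystemVm API calls; all UUIDs are placeholders.]

```
# Inspect the traffic types on the physical network before deleting anything:
cmk list traffictypes physicalnetworkid=<physical-network-uuid>

# Remove the Storage traffic type:
cmk delete traffictype id=<storage-traffic-type-uuid>

# Destroying the SSVM makes CloudStack redeploy it with the new NIC layout:
cmk destroy systemvm id=<ssvm-uuid>
```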
> 
> Regards,
> Antoine 
> 
> 
> 
> 
>> On Feb 28, 2023, at 11:39 AM, Antoine Boucher  wrote:
>> 
>> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>> 
>> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM 
>> zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=
>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource 
>> instance=SecStorage sslcopy=true role=templateProcessor mtu=1500 
>> eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49 
>> public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0 
>> eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22 
>> localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212 
>> eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0 
>> storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8 
>> nfsVersion=null keystore_password=*
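[Editor's note: the cmdline above can be cross-checked against the SSVM routing table: the storage route should leave via the NIC whose ethNip equals storageip. A self-contained sketch, with a shortened copy of the cmdline from the excerpt above inlined so it runs anywhere.]

```shell
# Shortened excerpt of /var/cache/cloud/cmdline from the message above.
cmdline="eth1ip=10.101.3.231 eth3ip=10.101.7.212 storageip=10.101.7.212 storagegateway=10.101.6.1"

# Print the NIC and storage settings; the interface whose address matches
# storageip (eth3 here) is the one the storage-subnet route should use.
for kv in $cmdline; do
  case $kv in
    storageip=*|storagegateway=*|eth*ip=*) echo "$kv" ;;
  esac
done
```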
>> 
>> 
>> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz | 
>> zgrep SecStorageSetupCommand
>> 
>> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request] 
>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq 
>> 47-6546545008336437249: Sending  { Cmd , MgmtId: 130593671224, via: 
>> 47(s-292-VM), Ver: v1, Flags: 100111, 
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.ag