Migrating volume failed (large volume)
Hi! I'm trying to migrate a volume from one storage pool to another. It worked for instance volumes of up to 300 GB. Now I have to migrate an instance volume of 3.8 TB and I'm getting the following.

In the GUI popup:

Migrating volume failed
Resource [StoragePool:24] is unreachable: Volume [{"name":"ROOT-3373","uuid":"512f9beb-1ab1-442b-a058-3cb329f43319"}] migration failed due to [com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/fb1a9368-6fb7-3866-bdf6-00a3ebc26fa3/512f9beb-1ab1-442b-a058-3cb329f43319 to 715a08b9-074b-4261-bea5-2670a86f95e7.qcow2].

In management.log:

2024-04-08 17:00:21,255 ERROR [c.c.a.ApiAsyncJobDispatcher] (API-Job-Executor-37:ctx-4bd39cf3 job-16314) (logid:6bdc565e) Unexpected exception while executing org.apache.cloudstack.api.command.admin.volume.MigrateVolumeCmdByAdmin
com.cloud.utils.exception.CloudRuntimeException: Resource [StoragePool:24] is unreachable: Volume [{"name":"ROOT-3373","uuid":"512f9beb-1ab1-442b-a058-3cb329f43319"}] migration failed due to [com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/fb1a9368-6fb7-3866-bdf6-00a3ebc26fa3/512f9beb-1ab1-442b-a058-3cb329f43319 to f4f94229-1bc1-4dc8-b1e5-5beb96c4e55a.qcow2].
        at com.cloud.storage.VolumeApiServiceImpl.orchestrateMigrateVolume(VolumeApiServiceImpl.java:3406)
        at com.cloud.storage.VolumeApiServiceImpl.orchestrateMigrateVolume(VolumeApiServiceImpl.java:4823)
        ...
        at java.base/java.lang.Thread.run(Thread.java:829)
2024-04-08 17:00:21,264 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-37:ctx-4bd39cf3 job-16314) (logid:6bdc565e) Complete async job-16314, jobStatus: FAILED, resultCode: 530, result: org.apache.cloudstack.api.response.ExceptionResponse/null/{"uuidList":[],"errorcode":"530","errortext":"Resource [StoragePool:24] is unreachable: Volume [{"name":"ROOT-3373","uuid":"512f9beb-1ab1-442b-a058-3cb329f43319"}] migration failed due to [com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/fb1a9368-6fb7-3866-bdf6-00a3ebc26fa3/512f9beb-1ab1-442b-a058-3cb329f43319 to f4f94229-1bc1-4dc8-b1e5-5beb96c4e55a.qcow2]."}

CloudStack version is 4.19.0.0 and the instance is stopped. The source storage pool is cluster scoped and the destination storage pool is zone scoped. I'm doing this via the GUI, selecting the destination storage pool (a new one, with all space free), checking the option "Replace disk offering" and selecting the new offering. I've already changed the global setting "Kvm storage offline migration wait" to 86400.

It looks like a timeout problem. Is there some other option I have to customize? As said, it worked for smaller volumes. Thank you so much!

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
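For context, a CloudMonkey sketch of how the wait settings mentioned in this thread could be checked and raised before retrying; the setting names are the ones cited in these messages, the values are only illustrative, and whether they fully cover this case is an assumption:

# inspect the current values of the migration-related waits
list configurations name=kvm.storage.offline.migration.wait
list configurations name=kvm.storage.online.migration.wait
list configurations name=migratewait
list configurations name=storage.pool.max.waitseconds
list configurations name=job.cancel.threshold.minutes

# raise them for a multi-TB copy (illustrative values, not recommendations)
update configuration name=kvm.storage.offline.migration.wait value=86400
update configuration name=kvm.storage.online.migration.wait value=86400
update configuration name=migratewait value=86400

# raise the CloudMonkey client timeout as well, then retry via the API
set timeout 86400
migrate volume volumeid=<volume-uuid> storageid=<destination-pool-uuid>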
Re: How to create one network per project using as few public addresses as possible?
Returning to this topic with the 4.19 release: I can create a domain VPC and, in each project, tiers connected to this domain VPC. Each tier has its own ACL rules, which is fine for filtering egress traffic, for example. But I couldn't find a way to configure port forwarding (ingress) in the VPC. Is it available in the GUI?

For comparison, in Networks > Public IP addresses, if I choose the public IP of an isolated network I see options like "Details, Firewall, Port forwarding, Load balancing, VPN, Events, Comments". When a tier is created, its public IP is also listed in Networks > Public IP addresses, but when I click on the public IP address of the VPC the only options are "Details, VPN". How can I configure ingress options such as port forwarding? For example, I need to forward ports 80 and 443 to a specific VM in some tier. Thank you!

On Wed, Nov 29, 2023 at 14:50, Jorge Luiz Correa <jorge.l.cor...@embrapa.br> wrote:

> Hi Gabriel! This is exactly what I was looking for. I couldn't find this request on GitHub when looking for something. Thank you for sharing.
>
> No problem creating it through the API, so I'll wait for the test results. If you could share them with us, I would appreciate it. Thank you so much for these tests!
>
> :)
>
> On Wed, Nov 29, 2023 at 10:01, Gabriel Ortiga Fernandes <gabriel.ort...@hotmail.com> wrote:
>
>> Hello Jorge,
>>
>> As soon as release 4.19 is launched, the Domain VPC feature (https://github.com/apache/cloudstack/pull/7153) will be available, which will allow users and operators to create tiers on VPCs for any account (or, in your case, project) to which the VPC owner has access, regardless of domain, thus allowing all the projects to share a single VR.
>>
>> For now, this feature is not available in the GUI; however, you can create a tier through the API 'createNetwork', informing both the projectId and vpcId.
>>
>> This feature has been tested using accounts, but not projects, so I will run some tests in the next few days and give you an answer regarding its viability.
>>
>> Kind regards,
>>
>> GaOrtiga
>>
>> PS: This email will probably be a duplicate since I tried sending it through a different provider, but it took too long, so I am sending this again to save time.
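For reference, a CloudMonkey sketch of one way this could be attempted through the API while the GUI option is missing, assuming the standard createPortForwardingRule call also applies to a VPC tier's public IP (all UUIDs are placeholders; untested here):

# find the public IPs associated with the VPC
list publicipaddresses vpcid=<vpc-uuid>

# attempt forwarding rules for ports 80 and 443 to a VM in a tier
# (networkid is the tier the VM sits in; all IDs are placeholders)
create portforwardingrule ipaddressid=<public-ip-uuid> protocol=tcp publicport=80 privateport=80 virtualmachineid=<vm-uuid> networkid=<tier-uuid>
create portforwardingrule ipaddressid=<public-ip-uuid> protocol=tcp publicport=443 privateport=443 virtualmachineid=<vm-uuid> networkid=<tier-uuid>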
Re: Experience on GPU Support?
Hi Bryan! We use GPUs here, but in a different way, customized for our environment and using CloudStack's features as far as possible. The documentation lists support for some GPU models that are a little old by now, so we use PCI passthrough instead.

All hosts with GPUs are configured to boot with IOMMU enabled and vfio-pci bound to the devices, without loading the vendor kernel modules for each GPU. Then we create a service offering that describes the VMs that will have a GPU. In this service offering we use the serviceofferingdetails[1].value field to insert a block of configuration related to the GPU, something like a device block with "address type=pci" entries describing the PCI bus of each GPU (a sketch of such a block follows this message). Then we use host tags to force this compute offering to run only on hosts with GPUs.

We created a CloudStack cluster with a lot of hosts equipped with GPUs. When a user needs a VM with a GPU, he/she should use that compute offering. The VM is instantiated on some host of the cluster and the GPUs are passed through to the VM. There is no control executed by CloudStack: for example, it can try to instantiate a VM on a host whose GPU is already in use (which will fail). Our approach is that the ROOT admin always controls that creation. We launch VMs using all GPUs of the infrastructure and then use a queue manager to run jobs on those GPU VMs. When a user needs a dedicated VM to develop something, we can shut down a VM that is already running (as a processing node of the queue manager) and then create the dedicated VM, which uses the GPUs in isolation.

There are more possibilities when using GPUs. For example, some models support virtualization, where a GPU can be divided. In that case CloudStack would need to support it, managing the driver, creating the virtual GPUs based on user input such as memory size, and then telling the hypervisor to pass the virtual GPU through to the VM. Another possibility that would help us in our scenario is some control over PCI buses in hosts: if CloudStack could check whether a PCI device is in use on a host and use this information during VM scheduling, that would be great. CloudStack could launch VMs on a host that has a free PCI address. This would be useful not only for GPUs but for any PCI device.

I hope this helps in some way, to think of new scenarios etc. Thank you!

On Thu, Feb 22, 2024 at 07:56, Bryan Tiang wrote:

> Hi Guys,
>
> Anyone running CloudStack with GPU Support in Production? Say NVIDIA H100 or AMD MI300X?
>
> Just want to know if there is any support for this still ongoing, or anyone who is running a cloud business with GPUs.
>
> Regards,
> Bryan
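A minimal sketch of the kind of libvirt device block referred to above, assuming plain PCI passthrough with the usual <hostdev> syntax; the domain/bus/slot/function values are placeholders that must match the GPU's address on the host (e.g. from lspci):

<!-- one block like this per GPU handed to the VM; address values are placeholders -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
  </source>
</hostdev>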
Re: Problem with SSL and Java keystore after upgrade to 4.19.
Thank you so much Wei! It worked! In my case the LDAP server accepts anonymous bind. So, instead of updating the value for ldap.bind.password (there is no line with this value) I had to update lines with ldap.truststore.password. Now users can authenticate. Thank you! Em qua., 14 de fev. de 2024 às 16:34, Wei ZHOU escreveu: > Can you try the workaround described in > https://github.com/apache/cloudstack/issues/8637? > > -Wei > > 在 2024年2月14日星期三,Jorge Luiz Correa 写道: > > > Hello! > > > > I've upgraded from 4.17.2 to 4.19.0. I'm using Ubuntu Server 22.04.3 LTS, > > Java 11.0.21 (no changes with upgrade). I'm using a LDAP server to > > authenticate users, with SSL. > > > > After the upgrade users can't authenticate anymore. The errors at the end > > of this message could be found in management.log. I've read it could be a > > problem accessing the keystore file. > > > > I've already tried to > > - regenerate the keystore (with default parameters) > > - check the password with keytool, everything is ok (no changes from > > 4.17.2, it was working) > > - change permissions from cloud.jks > > - put https.keystore.password between '...' in server.properties > > > > I appreciate any help where I can try something to restore the ldap > > authentication with SSL. > > > > Thank you! > > > > errors in management.log > > > > > > *2024-02-14 15:43:58,248 DEBUG [o.a.c.l.LdapManagerImpl] > > (qtp1753127384-22:ctx-cfc59ea9) (logid:c7732509) ldap > > Exception:javax.naming.CommunicationException: ldapserver.mydomain:636 > > [Root exception is java.net.SocketException: > > java.security.NoSuchAlgorithmException: Error constructing implementation > > (algorithm: Default, provider: SunJSSE, class: > > sun.security.ssl.SSLContextImpl$DefaultSSLContext)]* > > at > > java.naming/com.sun.jndi.ldap.Connection.(Connection.java:252) > > at > > java.naming/com.sun.jndi.ldap.LdapClient.(LdapClient.java:137) > > at > > > java.naming/com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1616) > > at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java: > > 2847) > > at java.naming/com.sun.jndi.ldap.LdapCtx.(LdapCtx.java:348) > > at > > java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl( > > LdapCtxFactory.java:266) > > at > > java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL( > > LdapCtxFactory.java:226) > > at > > java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs( > > LdapCtxFactory.java:284) > > at > > java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance( > > LdapCtxFactory.java:185) > > at > > java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext( > > LdapCtxFactory.java:115) > > at > > java.naming/javax.naming.spi.NamingManager.getInitialContext( > > NamingManager.java:730) > > at > > java.naming/javax.naming.InitialContext.getDefaultInitCtx( > > InitialContext.java:305) > > at > > java.naming/javax.naming.InitialContext.init(InitialContext.java:236) > > at > > java.naming/javax.naming.ldap.InitialLdapContext.( > > InitialLdapContext.java:154) > > at > > org.apache.cloudstack.ldap.LdapContextFactory.createInitialDirContext( > > LdapContextFactory.java:62) > > at > > org.apache.cloudstack.ldap.LdapContextFactory.createBindContext( > > LdapContextFactory.java:51) > > at > > org.apache.cloudstack.ldap.LdapContextFactory.createBindContext( > > LdapContextFactory.java:45) > > at > > org.apache.cloudstack.ldap.LdapManagerImpl.getUser( > > LdapManagerImpl.java:314) > > at > > org.apache.cloudstack.ldap.LdapAuthenticator.authenticate( > > LdapAuthenticator.java:229) > > 
at > > org.apache.cloudstack.ldap.LdapAuthenticator.authenticate( > > LdapAuthenticator.java:84) > > at > > com.cloud.user.AccountManagerImpl.getUserAccount( > > AccountManagerImpl.java:2656) > > at > > com.cloud.user.AccountManagerImpl.authenticateUser( > > AccountManagerImpl.java:2494) > > at > > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > > Method) > > at > > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke( > > NativeMethodAccessorImpl.java:62) > > at > > java.base/jdk.internal.reflect.DelegatingMethodAccessor
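For the archives: a hedged CloudMonkey sketch of how the affected LDAP setting could be located and re-saved so it is stored again by the upgraded management server. This is an assumption based on the workaround discussed in https://github.com/apache/cloudstack/issues/8637, not a verified procedure; the linked issue remains the authoritative reference.

# locate the LDAP password-related setting (ldap.bind.password may be absent when anonymous bind is used)
list configurations name=ldap.truststore.password

# re-save the value so it is stored again by the upgraded code (value is a placeholder)
update configuration name=ldap.truststore.password value=<truststore-password>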
Problem with SSL and Java keystore after upgrade to 4.19.
Hello! I've upgraded from 4.17.2 to 4.19.0. I'm using Ubuntu Server 22.04.3 LTS and Java 11.0.21 (no changes with the upgrade). I'm using an LDAP server to authenticate users, with SSL.

After the upgrade, users can't authenticate anymore. The errors at the end of this message can be found in management.log. I've read it could be a problem accessing the keystore file.

I've already tried to:
- regenerate the keystore (with default parameters)
- check the password with keytool; everything is ok (no changes from 4.17.2, it was working)
- change the permissions of cloud.jks
- put https.keystore.password between '...' in server.properties

I appreciate any help, anything I can try to restore LDAP authentication with SSL. Thank you!

Errors in management.log:

2024-02-14 15:43:58,248 DEBUG [o.a.c.l.LdapManagerImpl] (qtp1753127384-22:ctx-cfc59ea9) (logid:c7732509) ldap Exception:javax.naming.CommunicationException: ldapserver.mydomain:636 [Root exception is java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)]
        at java.naming/com.sun.jndi.ldap.Connection.<init>(Connection.java:252)
        at java.naming/com.sun.jndi.ldap.LdapClient.<init>(LdapClient.java:137)
        at java.naming/com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1616)
        at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2847)
        at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:348)
        at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:266)
        at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:226)
        at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:284)
        at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:185)
        at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:115)
        at java.naming/javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:730)
        at java.naming/javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:305)
        at java.naming/javax.naming.InitialContext.init(InitialContext.java:236)
        at java.naming/javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
        at org.apache.cloudstack.ldap.LdapContextFactory.createInitialDirContext(LdapContextFactory.java:62)
        at org.apache.cloudstack.ldap.LdapContextFactory.createBindContext(LdapContextFactory.java:51)
        at org.apache.cloudstack.ldap.LdapContextFactory.createBindContext(LdapContextFactory.java:45)
        at org.apache.cloudstack.ldap.LdapManagerImpl.getUser(LdapManagerImpl.java:314)
        at org.apache.cloudstack.ldap.LdapAuthenticator.authenticate(LdapAuthenticator.java:229)
        at org.apache.cloudstack.ldap.LdapAuthenticator.authenticate(LdapAuthenticator.java:84)
        at com.cloud.user.AccountManagerImpl.getUserAccount(AccountManagerImpl.java:2656)
        at com.cloud.user.AccountManagerImpl.authenticateUser(AccountManagerImpl.java:2494)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
        at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
        at com.sun.proxy.$Proxy128.authenticateUser(Unknown Source)
        at com.cloud.api.ApiServer.loginUser(ApiServer.java:)
        at com.cloud.api.auth.DefaultLoginAPIAuthenticatorCmd.authenticate(DefaultLoginAPIAuthenticatorCmd.java:156)
        at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:257)
        at com.cloud.api.ApiServlet$1.run(ApiServlet.java:154)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
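Two quick sanity checks for the keystore itself, outside of CloudStack; the /etc/cloudstack/management paths below are the usual defaults and are an assumption here:

# list the keystore contents; a wrong password or a corrupt file fails here too
keytool -list -v -keystore /etc/cloudstack/management/cloud.jks

# confirm the password the management server is configured to use
grep https.keystore /etc/cloudstack/management/server.properties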
Re: How to create one network per project using as few public addresses as possible?
Hi Gabriel! This is exactly what I was looking for. I couldn't find this request on GitHub when looking for something. Thank you for sharing.

No problem creating it through the API, so I'll wait for the test results. If you could share them with us, I would appreciate it. Thank you so much for these tests!

:)

On Wed, Nov 29, 2023 at 10:01, Gabriel Ortiga Fernandes <gabriel.ort...@hotmail.com> wrote:

> Hello Jorge,
>
> As soon as release 4.19 is launched, the Domain VPC feature (https://github.com/apache/cloudstack/pull/7153) will be available, which will allow users and operators to create tiers on VPCs for any account (or, in your case, project) to which the VPC owner has access, regardless of domain, thus allowing all the projects to share a single VR.
>
> For now, this feature is not available in the GUI; however, you can create a tier through the API 'createNetwork', informing both the projectId and vpcId.
>
> This feature has been tested using accounts, but not projects, so I will run some tests in the next few days and give you an answer regarding its viability.
>
> Kind regards,
>
> GaOrtiga
>
> PS: This email will probably be a duplicate since I tried sending it through a different provider, but it took too long, so I am sending this again to save time.
How to create one network per project using as few public addresses as possible?
We have a lot of research centers here. Each one is a domain in CloudStack with its own administrators. I would like each domain to use as few public IPs as possible and also to use Projects to make management easier. For example, it would be fine if each domain had one virtual router with one public IP for NAT.

a) If each Project has its own network, each will use one public IP (VR). Many projects, many public IPs, which is what I'm trying to avoid.

b) If I use one VPC, all VMs must be in the same Project. I can't share a VPC or its tiers with different Projects, as is possible with Isolated Networks. So Projects lose their purpose. Here I can separate VMs into different networks (tiers), but I can't use the Projects features.

c) If I use one Isolated Network (for example, created by a domain admin), I can share it with all projects inside the domain. However, all the VMs would be connected to this network, without project isolation. In effect this would be a flat network shared by all VMs inside the domain, and Projects would just be separating VMs into groups with their administrators.

Can anyone suggest a way to use Domains with Projects and one or two public IPs per domain? Will it be possible to share different tiers with different projects at some point? I appreciate any suggestions! Thank you!

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
Re: Cloud init settings for Config Drive on L2 networks
Just sharing some scripts used here. I hope they can help you.

Create a file cloud.cfg_jammy and change the following lines:

cloud_init_modules:
  ...
  - [ssh, always]

cloud_config_modules:
  ...
  - [set-passwords, always]

Download the cloud-set-guest-password-configdrive.sh script.

Create custom-networking_v2.cfg:

network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true

Then customize the image:

apt install libguestfs-tools
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
virt-customize --run-command 'rm /etc/cloud/cloud.cfg' -a jammy-server-cloudimg-amd64.img
virt-customize --upload cloud.cfg_jammy:/etc/cloud/cloud.cfg -a jammy-server-cloudimg-amd64.img
virt-customize --mkdir /var/lib/cloud/scripts/per-boot -a jammy-server-cloudimg-amd64.img
virt-customize --mkdir /var/lib/cloud/scripts/per-instance -a jammy-server-cloudimg-amd64.img
virt-customize --upload cloud-set-guest-password-configdrive.sh:/var/lib/cloud/scripts/per-boot/cloud-set-guest-password-configdrive.sh -a jammy-server-cloudimg-amd64.img
virt-customize --upload cloud-set-guest-password-configdrive.sh:/var/lib/cloud/scripts/per-instance/cloud-set-guest-password-configdrive.sh -a jammy-server-cloudimg-amd64.img
virt-customize --upload cnptia-per-instance-script.sh:/var/lib/cloud/scripts/per-instance/cnptia-per-instance-script.sh -a jammy-server-cloudimg-amd64.img
virt-customize --upload custom-networking_v2.cfg:/etc/cloud/cloud.cfg.d/custom-networking_v2.cfg -a jammy-server-cloudimg-amd64.img

One important thing we noted here, if you intend to use a DHCP server on this L2 network without statically configured hosts: all VMs are launched from the same template, so /etc/machine-id is identical in all of them. The DHCP client derives its client ID from this value, so the DHCP server thinks all the VMs are the same host and offers them the same IP. Chaos!

I've read some documents and posts saying the image distributor (maybe Canonical, distributing the qcow2 image) is the one expected to fix this, by adding some configuration to reset the machine ID. Indeed, if you truncate /etc/machine-id and /var/lib/dbus/machine-id (you cannot simply remove the files), a new ID is generated on first boot. Here, as the template was already uploaded and distributed to the zone, I wrote an Ansible playbook that fixes this. But I think you could run virt-customize and truncate them at image-preparation time. Maybe:

virt-customize --run-command 'truncate -s0 /etc/machine-id /var/lib/dbus/machine-id' -a jammy-server-cloudimg-amd64.img

On Thu, Oct 5, 2023 at 05:57, Joan g wrote:

> Thanks wei...
>
> On Thu, 5 Oct, 2023, 13:20 Wei ZHOU, wrote:
>
>> You need to add a script in the template to get the password from configdrive and reset the user password. For example
>>
>> https://github.com/apache/cloudstack/blob/main/setup/bindir/cloud-set-guest-sshkey-password-userdata-configdrive.in
>>
>> -Wei
>>
>> On Thu, 5 Oct 2023 at 09:38, Joan g wrote:
>>
>>> Hello Community,
>>>
>>> Can someone guide me on the configuration that should be added to cloud-init settings for creating password-enabled templates using configdrive in Ubuntu 20, 22?
>>>
>>> We need to deploy password and sshkey enabled templates on Ubuntu that will be using L2 networks.
>>>
>>> Thanks joan
Restricting instance deletion to the creator.
Is there any way (global configuration, role changes, etc.) to restrict deletion of an instance to its creator only? For example, I have a Project with a few users. If user A creates an instance, only user A can delete it. The goal is that one user can't delete another user's instances by mistake. Thanks! :)

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
Re: Write Speeds
Granwille, no special configuration, just the CloudStack default behavior. As I understand it, CloudStack detects automatically whether the host supports this feature, based on the qemu and libvirt versions.

https://github.com/apache/cloudstack/issues/4883#issuecomment-813955599

What versions of kernel, qemu and libvirt are you running on the KVM host?

On Mon, Jul 10, 2023 at 13:26, Granwille Strauss <granwi...@namhost.com> wrote:

> Hi Jorge
>
> How do you actually enable io_uring via CloudStack? My KVM does have the necessary requirements.
>
> I enabled the io.policy settings in global settings, local storage and in the VM settings via the UI. And my XML dump of the VM doesn't include io_uring under driver for some reason.
>
> --
> Regards / Groete
> Granwille Strauss // Senior Systems Administrator
> e: granwi...@namhost.com
> m: +264 81 323 1260
> w: www.namhost.com
>
> On 10 Jul 2023, at 5:27 PM, Granwille Strauss wrote:
>
> Hi Jorge
>
> Thank you so much for this. I used your FIO config and surprisingly it seems fine:
>
> write-test: (g=0): rw=randrw, bs=(R) 1300MiB-1300MiB, (W) 1300MiB-1300MiB, (T) 1300MiB-1300MiB, ioengine=libaio, iodepth=1
> fio-3.19
>
> Run status group 0 (all jobs):
>   READ: bw=962MiB/s (1009MB/s), 962MiB/s-962MiB/s (1009MB/s-1009MB/s), io=3900MiB (4089MB), run=4052-4052msec
>   WRITE: bw=321MiB/s (336MB/s), 321MiB/s-321MiB/s (336MB/s-336MB/s), io=1300MiB (1363MB), run=4052-4052msec
>
> This is without enabling io_uring. I see I can enable it per VM using the UI by setting io.policy = io_uring. Will enable this on a few VMs and see if it works better.
>
> On 7/10/23 15:41, Jorge Luiz Correa wrote:
>
> Hi Granwille! About the READ/WRITE performance, as Levin suggested, check the XML of the virtual machines, looking at the disk/device section. Look for io='io_uring'.
>
> As stated here: https://github.com/apache/cloudstack/issues/4883
>
> CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.
>
> I tried to run some tests at a few points similar to your environment.
> ##
> VM in NFS Primary Storage (Hybrid NAS)
> Default disk offering, thin (no restriction)
>
> Emulator: /usr/bin/qemu-system-x86_64
> Disk source: file='/mnt/74267a3b-46c5-3f6c-8637-a9f721852954/fb46fd2c-59bd-4127-851b-693a957bd5be'
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw
>
> READ: 569MiB/s
> WRITE: 195MiB/s
>
> ##
> VM in Local Primary Storage (local SSD on the host)
> Default disk offering, thin (no restriction)
>
> Emulator: /usr/bin/qemu-system-x86_64
> Disk source: file='/var/lib/libvirt/images/d100c55d-8ff2-45e5-8452-6fa56c0725e5'
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw
>
> First run (a little slow when using "thin", because space still has to be allocated in the qcow2):
> READ: bw=796MiB/s
> WRITE: bw=265MiB/s
>
> Second run:
> READ: bw=952MiB/s
> WRITE: bw=317MiB/s
>
> ##
> Directly on the local SSD of the host:
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw
>
> READ: bw=931MiB/s
> WRITE: bw=310MiB/s
>
> OBS.: the fio parameters need to be adjusted for your environment, as results depend on the number of CPUs, memory, --bs, --iodepth etc.
>
> Host is running 5.
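To answer the version question above, a quick way to collect the relevant numbers on the KVM host (assuming a standard Ubuntu-style install where these binaries are in the PATH):

uname -r                               # kernel
/usr/bin/qemu-system-x86_64 --version  # QEMU; io_uring needs >= 5.0
libvirtd --version                     # libvirt; io_uring needs >= 6.3.0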
Re: Write Speeds
Hi Granwille! About the READ/WRITE performance, as Levin suggested, check the XML of the virtual machines, looking at the disk/device section. Look for io='io_uring'.

As stated here: https://github.com/apache/cloudstack/issues/4883

CloudStack can use io_uring with Qemu >= 5.0 and Libvirt >= 6.3.0.

I tried to run some tests at a few points similar to your environment.

##
VM in NFS Primary Storage (Hybrid NAS)
Default disk offering, thin (no restriction)

Emulator: /usr/bin/qemu-system-x86_64
Disk source: file='/mnt/74267a3b-46c5-3f6c-8637-a9f721852954/fb46fd2c-59bd-4127-851b-693a957bd5be'

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw

READ: 569MiB/s
WRITE: 195MiB/s

##
VM in Local Primary Storage (local SSD on the host)
Default disk offering, thin (no restriction)

Emulator: /usr/bin/qemu-system-x86_64
Disk source: file='/var/lib/libvirt/images/d100c55d-8ff2-45e5-8452-6fa56c0725e5'

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw

First run (a little slow when using "thin", because space still has to be allocated in the qcow2):
READ: bw=796MiB/s
WRITE: bw=265MiB/s

Second run:
READ: bw=952MiB/s
WRITE: bw=317MiB/s

##
Directly on the local SSD of the host:

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw

READ: bw=931MiB/s
WRITE: bw=310MiB/s

OBS.: the fio parameters need to be adjusted for your environment, as results depend on the number of CPUs, memory, --bs, --iodepth etc.

The host is running the 5.15.0-43 kernel, qemu 6.2 and libvirt 8. CloudStack is 4.17.2. So a VM on the local SSD of the host can get disk performance very close to the host itself. I hope this helps! Thanks.
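A quick way to check whether a given instance actually got the io_uring driver option, assuming virsh access on the KVM host; the instance name below is a placeholder:

# the disk <driver> element should carry io='io_uring' when the feature is active
# (replace i-2-345-VM with the instance's internal name from 'virsh list')
virsh dumpxml i-2-345-VM | grep io_uring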
Re: L2 Network on CS 4.18
Hi Christian, take a look at this thread:

https://lists.apache.org/thread/s0xkt1k93hfoz33qw1mhmt9333rsy1s9

Recently I've had some issues trying to use L2 networks, still on 4.17. The trick was to define a domain admin account as the owner of the networks, as Stephan said, and that can only be done through the API. I hope this helps in some way. Tks

On Mon, Jun 19, 2023 at 14:40, Christian Reichert <christian.reich...@scsynergy.com> wrote:

> Hello,
>
> we upgraded a couple of weeks ago from 4.16.1 to 4.18. Now if I create an L2 network as root admin and assign it to another domain and account, after the dialog is closed the network belongs to the ROOT domain and admin. I found no way to change it.
>
> In the logs I found
>
> 2023-06-18 18:15:50,033 DEBUG [c.c.u.AccountManagerImpl] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Access granted to Account [{"accountName":"admin","id":2,"uuid":"67f8c0fd-cb90-11ec-9995-286ed489a2a6"}] to [Network Offering [17-Guest-DefaultL2NetworkOfferingConfigDriveVlan] by AffinityGroupAccessChecker
> 2023-06-18 18:15:50,053 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network, the physical isolation type is not BCF_SEGMENT
> 2023-06-18 18:15:50,055 DEBUG [o.a.c.n.c.m.ContrailGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,057 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,059 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,061 DEBUG [c.c.n.g.OvsGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,063 INFO [c.c.n.g.DirectNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,082 DEBUG [c.c.n.g.DirectNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) VLAN: VLAN
> 2023-06-18 18:15:50,082 INFO [c.c.n.g.DirectNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,084 DEBUG [o.a.c.n.g.SspGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) SSP not configured to be active
> 2023-06-18 18:15:50,086 DEBUG [o.a.c.n.t.s.TungstenGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,087 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Refusing to design this network
> 2023-06-18 18:15:50,088 DEBUG [o.a.c.e.o.NetworkOrchestrator] (qtp1444635922-321:ctx-c7d2926b ctx-0f6ad820) (logid:6df0b4a6) Releasing lock for Account [{"accountName":"admin","id":2,"uuid":"67f8c0fd-cb90-11ec-9995-286ed489a2a6"}]
>
> Is it a 4.18 issue or did I miss something in my configuration?
>
> Thanks and Regards,
>
> Christian
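A minimal CloudMonkey sketch of the API call described above, creating the L2 network with an explicit owner; the offering, zone, VLAN and account values are placeholders, and the account must be a domain-admin account of the target domain:

create network name=l2-example displaytext=l2-example networkofferingid=<l2-offering-uuid> zoneid=<zone-uuid> vlan=<vlan-id> domainid=<domain-uuid> account=<domain-admin-account>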
Re: How to create network based on DefaultL2NetworkOfferingConfigDriveVlan?
Hi Stephan, thank you so much! Using the API and defining an account with the 'domain admin' role in the domain, the L2 network was created and is working fine! Thanks! :)

On Fri, May 26, 2023 at 08:50, Stephan Bienek wrote:

> Hello Jorge,
>
> indeed you have to be root admin to deploy a network with SpecifyVlan = true.
> I experienced the same challenge as you did - the network will be owned by the root domain, even though providing a domain (without an account).
>
> In my case it perfectly works when I define the domain AND an admin account of this domain - the network is owned by the domain and account correctly.
>
> Are you sure you defined the domain AND the right account (not user) of the domain?
> Did you try via API / CloudMonkey?
>
> Best regards,
> Stephan
>
>> Jorge Luiz Correa wrote on 26.05.2023 at 13:24 CEST:
>>
>> Hi all! How can I use the network offering DefaultL2NetworkOfferingConfigDriveVlan? I would like to create a new L2 network to be used by domain TEST, with VLAN ID 2123. DHCP will be external. I need ConfigDrive to be able to set the VM hostname and password, and VLAN to define the VLAN ID for the network.
>>
>> Using a domain admin account in the TEST domain I can't create that network because I can't choose the VLAN ID. If I use the ROOT admin I can inform the VLAN ID but, even choosing the domain TEST, after creation the new network has Domain=ROOT and Account=admin. I've already tried to inform the domain and one account from the TEST domain (as the help says, 'account that will own the network'), but no success. Inside the TEST domain I can't see the new network.
>>
>> As a workaround I've created a new network offering using shared, with VLAN, and enabled just UserData: ConfigDrive as provider. That way I could create a new network as ROOT admin and configure the VLAN ID and the domain. But I guess this is not the right way; I had to configure gateway, start and end IP addresses, and none of these make sense, they are not used.
>>
>> Appreciate any help!
>> :)
>>
>> --
>> Jorge Luiz Corrêa
>> Embrapa Agricultura Digital
How to create network based on DefaultL2NetworkOfferingConfigDriveVlan?
Hi all! How can I use the network offering DefaultL2NetworkOfferingConfigDriveVlan? I would like to create a new L2 network to be used by domain TEST, with VLAN ID 2123. DHCP will be external. I need ConfigDrive to be able to set the VM hostname and password, and VLAN to define the VLAN ID for the network.

Using a domain admin account in the TEST domain I can't create that network because I can't choose the VLAN ID. If I use the ROOT admin I can inform the VLAN ID but, even choosing the domain TEST, after creation the new network has Domain=ROOT and Account=admin. I've already tried to inform the domain and one account from the TEST domain (as the help says, 'account that will own the network'), but no success. Inside the TEST domain I can't see the new network.

As a workaround I've created a new network offering using shared, with VLAN, and enabled just UserData: ConfigDrive as provider. That way I could create a new network as ROOT admin and configure the VLAN ID and the domain. But I guess this is not the right way; I had to configure gateway, start and end IP addresses, and none of these make sense, they are not used.

Appreciate any help!
:)

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
Re: Using NFS as primary storage, performance issues
Just joining the subject: I'm curious about what disk performance we can reach when using NFS primary storage. We have two infrastructures with different purposes, one to instantiate normal VMs and another for VMs involved in scientific research, running simulations, AI training, big database loads etc. I ran two simple tests here using the "fio" tool. The results are:

SAS 7200 RPM 2.7 TB x 12 -> PERC H710P Mini (embedded) -> RAID6 -> NFS -> Primary Storage Pool (normal server)
Test inside one VM in this pool:
READ: bw=168MiB/s (177MB/s)
WRITE: bw=186MiB/s (195MB/s)

4 TB SSD x 6 + SAS 10,000 RPM 1.8 TB x 39 (hybrid) -> RAID6 -> NFS -> Primary Storage Pool (Dell Unity XT 380 NAS)
Test inside one VM in this pool:
READ: bw=417MiB/s
WRITE: bw=461MiB/s

Network connections between hosts and primary storage are 10 Gbps with jumbo frames.

If anyone could share numbers like these, I would appreciate it. Can we reach better R/W performance? OK, it looks obvious that better hardware gives better performance. But in some tests made with servers using only NVMe units, I saw a limitation around 480 MB/s, while the same server reaches around 4800 MB/s locally with the same fio test. So, can we say the bottleneck is NFS? Thank you!

On Tue, May 2, 2023 at 09:33, Pierre Le Fevre wrote:

> Big thanks for all the suggestions :)
>
> We've ordered some NVMe SSDs for write caching, this is something we missed when setting up the machine originally. Sounds like the setup should work other than that.
>
> Best,
> Pierre
> kthcloud
>
> On Mon, 1 May 2023 at 17:55, wrote:
>
>> We use a flat network and jumbo frames too.
>> On Apr 29, 2023 15:47 -0300, S.Fuller wrote:
>>
>>> Anything else different about the setup? Interface speeds? Routed vs flat network? MTU size being used by the network interfaces perhaps?
>>>
>>> - Steve
>>>
>>> On Fri, Apr 28, 2023 at 3:47 AM Pierre Le Fevre wrote:
>>>
>>>> Hi all,
>>>> We're working on upgrading our storage solution to a proper network attached storage. Before this, we had an NFS share on some mounted disks on the management server. Our new setup is a NAS running TrueNAS (zfs) with 64 GB RAM and 8x8TB, 7200 rpm hard disks mounted to CloudStack over NFS.
>>>>
>>>> It seems the performance is however much lower than before, resulting in somewhat unusable virtual machines. In VMs, IOPS can be as low as <10 IOPS, vs 100 with the management server setup.
>>>>
>>>> Is there a recommended setup for primary storage to yield better performance?
>>>>
>>>> All the best
>>>> Pierre,
>>>> kthcloud
>>>
>>> --
>>> Steve Fuller
>>> steveful...@gmail.com
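For anyone who wants to run a comparable test, this is the fio invocation shared in the Write Speeds thread on this list (whether it exactly matches the runs above is an assumption; --bs, --iodepth and --size should be tuned per environment):

fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/random_read_write.fio --bs=1300m --iodepth=8 --size=6G --readwrite=randrw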
Re: Problem migrating big volume between primary storage pools.
Thank you so much Bryan! It worked! I would like to comment on two things.

The migration I was trying, using the web GUI options and the secondary storage as an intermediate, was probably failing because of two timeout parameters:

kvm.storage.offline.migration.wait: 28800
kvm.storage.online.migration.wait: 28800

The default value was 10800, and I noticed that two attempts stopped at exactly 3 hours. I didn't know these parameters, so I think that if I tried now the volume could be copied. So, beyond job.cancel.threshold.minutes, migratewait, storage.pool.max.waitseconds and wait, we need to configure kvm.storage.offline.migration.wait and kvm.storage.online.migration.wait too.

To use

(admin@uds) 🐱 > migrate virtualmachinewithvolume hostid=UUID virtualmachineid=UUID migrateto[0].volume=UUID migrateto[0].pool=UUID

I had to configure the CloudMonkey timeout too:

(admin@uds) 🐱 > set timeout 28800

With that, everything worked :) After 3h40min, the 1.6 TB volume was live migrated. Thank you :)

On Wed, Apr 26, 2023 at 16:59, Bryan Lima wrote:

> Hey Jorge,
>
> Nice to see another fellow around!
>
>> Both methods didn't work with a volume of 1.1 TB. Do they do the same thing?
>
> Both methods have different validations; however, essentially they do the same thing: while the VM is stopped, the volume is copied to the secondary storage and then to the primary storage. On the other hand, when the VM is running, ACS copies the volume directly to the destination pool. Could you try migrating these volumes while the VM is still running (using API *migrateVirtualMachineWithVolume*)? In this scenario, the migration would not copy the volumes to the secondary storage; thus, it would be faster and reduce the stress/load on your network and storage systems. Let me know if this option worked for you or if you have any doubts about how to use live migration with KVM.
>
> Besides that, we have seen some problems when this migration process is not finished properly, which leaves leftovers in the storage pool, consuming valuable storage resources, and database inconsistencies. It is worth taking a look at the storage pool for these files and also validating the database, to see if inconsistencies were created there.
>
> Best regards,
> Bryan
>
> On 26/04/2023 16:24, Jorge Luiz Correa wrote:
>> Has anyone had problems when migrating "big" volumes between different pools? I have 3 storage pools. The overprovisioning factor was configured with 2.0 (default) and pool2 got full. So I've configured the factor as 1.0 and then had to move some volumes from pool2 to pool3.
>>
>> CS 4.17.2.0, Ubuntu 22.04 LTS. I'm using KVM with NFS. Same zone, same pod, same cluster. All hosts (hypervisors) have all 3 pools mounted. I've tried two ways:
>>
>> 1) from the instance details page, with the instance stopped, using the option "Migrate instance to another primary storage" (when the instance is running this option is named "Migrate instance to another host"). Then I've marked "Migrate all volume(s) of the instance to a single primary storage" and chosen the destination primary storage pool3.
>>
>> 2) from the volume details page, with the instance stopped, using the option "Migrate volume" and then selecting the destination primary storage pool3.
>>
>> Both methods didn't work with a volume of 1.1 TB. Do they do the same thing?
>> Looking at the host that executes the action, I can see that it mounts the secondary storage and starts a "qemu-img convert" process to generate a new volume. After some time (3 hours) and copying 1.1 TB, the process fails with:
>>
>> com.cloud.utils.exception.CloudRuntimeException: Resource [StoragePool:8] is unreachable: Migrate volume failed: com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/4be0a812-1d87-376f-9e72-db79206a796c/565fa2dd-ff14-4b28-a5d0-dbe88b860ee9 to d3d5a858-285c-452b-b33f-c152c294711b.qcow2
>>
>> I checked in the database that StoragePool:8 is pool3, the destination.
>>
>> After failing, the async job is finished. But the new qcow2 file remains at the secondary storage, lost.
>>
>> So the host is saying it can't access pool3. BUT this pool is mounted! There are other VMs running that use pool3. And I've successfully migrated many other VMs using 1) or 2), but those VMs had up to 100 GB.
>>
>> I'm using
>>
>> job.cancel.threshold.minutes: 480
>> migratewait: 28800
>> storage.
Problem migrating big volume between primary storage pools.
Anyone had problems when migrating "big" volumes between different pools? I have 3 storage pools. The overprovisioning factor was configured with 2.0 (the default) and pool2 got full. So, I've configured the factor as 1.0 and then had to move some volumes from pool2 to pool3.

CS 4.17.2.0, Ubuntu 22.04 LTS. I'm using KVM with NFS. Same zone, same pod, same cluster. All hosts (hypervisors) had all 3 pools mounted. I've tried two ways:

1) From the instance details page, with the instance stopped, using the option "Migrate instance to another primary storage" (when the instance is running this option is named "Migrate instance to another host"). Then, I've marked "Migrate all volume(s) of the instance to a single primary storage" and chosen the destination primary storage pool3.

2) From the volume details page, with the instance stopped, using the option "Migrate volume" and then selecting the destination primary storage pool3.

Both methods didn't work with a volume of 1.1 TB. Do they do the same thing?

Looking at the host that executes the action, I can see that it mounts the Secondary Storage and starts a "qemu-img convert" process to generate a new volume. After some time (3 hours) and copying 1.1 TB, the process fails with:

com.cloud.utils.exception.CloudRuntimeException: Resource [StoragePool:8] is unreachable: Migrate volume failed: com.cloud.utils.exception.CloudRuntimeException: Failed to copy /mnt/4be0a812-1d87-376f-9e72-db79206a796c/565fa2dd-ff14-4b28-a5d0-dbe88b860ee9 to d3d5a858-285c-452b-b33f-c152c294711b.qcow2

I checked in the database that StoragePool:8 is pool3, the destination.

After failing, the async job is finished. But the new qcow2 file remains at the secondary storage, lost.

So, the host is saying it can't access pool3. BUT, this pool is mounted! There are other VMs running that use pool3. And I've successfully migrated many other VMs using 1) or 2), but those VMs had up to 100 GB.

I'm using:

job.cancel.threshold.minutes: 480
migratewait: 28800
storage.pool.max.waitseconds: 28800
wait: 28800

so, no log messages about timeouts. Any help? Thank you :)

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
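When a migration like this fails, the intermediate qcow2 written to secondary storage is not always cleaned up. A rough way to spot such leftovers is something like the following; the mount point, age and size thresholds are only illustrative, and any candidate file should be cross-checked against the volumes table before removal.

# look for large, recently written qcow2 files under the secondary storage mount (path is an example)
find /mnt/secstorage -name '*.qcow2' -mtime -2 -size +100G -exec ls -lh {} \;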
Re: How to dedicate host or cluster to projects?
Hi Daniel, thank you so much for the answer and tests! So I leave the question here for the developers: can this option be used in that way, or is it not recommended?

dedicateHost
dedicateCluster
dedicateZone
dedicatePod

All these APIs receive "account - the name of the account which needs dedication. Must be used with domainId" as a parameter. What are the implications of using a PrjAcct-* account there?

Thanks!

On Mon, Oct 10, 2022 at 2:08 PM, Daniel Augusto Veronezi Salvador <dvsalvador...@gmail.com> wrote:

> Hello, Jorge
>
> Via UI it is not possible; however, the API for dedicating resources
> receives the parameters account and domainid. ACS creates an account for
> every project created (as you found out). Therefore, you can call the APIs
> passing the name of the account created by ACS (the one with the prefix
> 'PrjAcct-') and the domain related to the project.
>
> I did a few tests in our QA and it worked fine (dedicated a host to a
> project and created a VM for it - outside the project it was not possible
> to instantiate a VM on that host); however, deeper tests should be done
> before using this approach in production.
>
> Best regards,
> Daniel Salvador (gutoveronezi)
>
> On 04/10/2022 09:39, Jorge Luiz Correa wrote:
> > Looking at the documentation I can't find a way to dedicate a host or
> > cluster to a project.
> >
> > Is there some way to do that today?
> >
> > Does CloudStack intend to make it possible some day?
> >
> > Each project creates an account and we can dedicate hosts and clusters
> > to accounts. So, it looks like it would be the same thing, right?
> >
> > Thank you!
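If the developers confirm this is acceptable, the call Daniel describes would look roughly like this with CloudMonkey. This is only a sketch: the host and domain UUIDs and the project account name (the one with the 'PrjAcct-' prefix) are placeholders.

cmk dedicate host hostid=HOST-UUID domainid=DOMAIN-UUID account=PrjAcct-myproject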
How to dedicate host or cluster to projects?
Looking at the documentation I can't find a way to dedicate a host or cluster to a project.

Is there some way to do that today?

Does CloudStack intend to make it possible some day?

Each project creates an account, and we can dedicate hosts and clusters to accounts. So, it looks like it would be the same thing, right?

Thank you!

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
createServiceOffering: problems in web UI and cgroup version.
Hi! I'm having some problems related to the creation of a new service offering. I need to create one using the "Custom constrained" option to limit CPUs and memory. But when "Custom constrained" is selected, "CPU (in MHz)" is required.

(1) The first problem is that "CPU (in MHz)" has nothing to do with the CPU speed, right? The value configured as "CPU (in MHz)" is mapped to the "shares" element in the cputune section of the libvirt configuration of the domain (KVM). Apparently this mapping is done using (N. CPUs) * CPU (in MHz). So, if I have a 1000 MHz value in the service offering and try to launch a VM with 80 CPUs, I will get 80000 configured as the 'shares' value. 'Shares' is a relative value that defines how much CPU time a VM will get when compared to others. Describing this as MHz is very confusing. Could this be changed?

(2) The second problem is: why is this value required? Why couldn't we just inform the number of CPUs and the memory to limit the ranges that the service offering permits? In the UI the value is required. Looking at the createServiceOffering API definition, the only two required parameters are displaytext and name. But trying to create the offering I get:

create serviceoffering name="Custom VM" displaytext="Custom VM" storagetype=shared provisioningtype=fat mincpunumber=2 maxcpunumber=80 minmemory=1000 maxmemory=100 offerha=true dynamicscalingenabled=true

Error: (HTTP 431, error code 4350) For creating a custom compute offering min/max cpu and min/max memory/cpu speed should all be null or all specified

(3) The third problem is about the value of "CPU (in MHz)". According to libvirt (libvirt.org/formatdomain.html) the value should be in the range [2, 262144]. But for operating systems using cgroup v2 the maximum value is 1. I know that the Ubuntu 22.04 I'm using here is not supported yet, but this will become an issue as other OSs adopt cgroup v2 too, so I think this parameter deserves attention. If the value of (N. CPUs) * CPU (in MHz) is greater than 1, I get "Value specified in CPUWeight is out of range" on the hypervisor. As a workaround I configured the service offering with 1 MHz. This implies that VMs with more CPUs have a much greater chance of getting the CPU than VMs with fewer CPUs, because besides having more CPUs, they have a higher chance of getting the host's CPUs:

VM1: 10 CPUs * 1 MHz -> shares = 10
VM2: 80 CPUs * 1 MHz -> shares = 80

If we look at 1 CPU of VM2, it will have 8 times more chance to get the host's CPU than 1 CPU of VM1, right?

Thank you! :)

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
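For what it's worth, the validation error in (2) seems to be about all five values being provided together; below is a sketch of a call that should at least satisfy that check. Parameter names are from the createServiceOffering API, but the values are arbitrary examples, and the cpuspeed value still ends up as the cputune "shares" weight discussed in (1), so the cgroup v2 concern in (3) remains.

create serviceoffering name="Custom VM" displaytext="Custom VM" storagetype=shared provisioningtype=fat customized=true mincpunumber=2 maxcpunumber=80 minmemory=1024 maxmemory=262144 cpuspeed=500 offerha=true dynamicscalingenabled=true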
Re: Usage computing removed items
Same here, exactly the same problem with volumes. I've looked in the GitHub source for the SQL query that generates the aggregated records, to try to understand, but I couldn't find it. :/ I'd appreciate any help too.

On Tue, Jul 26, 2022 at 5:31 PM, Matheus Fontes wrote:

> Hi,
> Is anyone having problems with usage computing removed items?
> We have a user that reported the problem.
> The volume has been deleted since 2022-05-06.
>
> mysql> select id,account_id,created,removed,state from volumes where id=3246;
> +------+------------+---------------------+---------------------+----------+
> | id   | account_id | created             | removed             | state    |
> +------+------------+---------------------+---------------------+----------+
> | 3246 |        545 | 2021-04-08 17:03:16 | 2022-05-06 15:06:52 | Expunged |
> +------+------------+---------------------+---------------------+----------+
>
> In the usage process we can see it in the volume parsing call:
>
> 2022-07-26 00:18:17,471 DEBUG [usage.parser.VolumeUsageParser] (Usage-Job-1:null) (logid:) Parsing all Volume usage events for account: 545
> 2022-07-26 00:18:17,472 DEBUG [usage.parser.VolumeUsageParser] (Usage-Job-1:null) (logid:) Total running time 8640ms
> 2022-07-26 00:18:17,472 DEBUG [usage.parser.VolumeUsageParser] (Usage-Job-1:null) (logid:) Creating Volume usage record for vol: 3246, usage: 24, startDate: Mon Jul 25 00:00:00 BRT 2022, endDate: Mon Jul 25 23:59:59 BRT 2022, for account: 545
> 2022-07-26 00:18:17,484 DEBUG [usage.parser.VolumeUsageParser] (Usage-Job-1:null) (logid:) Total running time 8640ms
> 2022-07-26 00:18:17,484 DEBUG [usage.parser.VolumeUsageParser] (Usage-Job-1:null) (logid:) Creating Volume usage record for vol: 3246, usage: 24, startDate: Mon Jul 25 00:00:00 BRT 2022, endDate: Mon Jul 25 23:59:59 BRT 2022, for account: 545
>
> And it is still being charged to the account:
>
> (ascenty) # > list usagerecords domainid=XXX accountid=d14c8cb9-fd92-43c8-9ddb-f2ed4e9af5a8 type=6 startdate=2022-07-25 enddate=2022-07-25 filter=account,rawusage,size,startdate,usage,usagetype,
> {
>   "count": 2,
>   "usagerecord": [
>     {
>       "accountid": "d14c8cb9-fd92-43c8-9ddb-f2ed4e9af5a8",
>       "rawusage": "24",
>       "size": 53687091200,
>       "startdate": "2022-07-25'T'00:00:00-03:00",
>       "usage": "24 Hrs",
>       "usagetype": 6
>     },
>     {
>       "accountid": "d14c8cb9-fd92-43c8-9ddb-f2ed4e9af5a8",
>       "rawusage": "24",
>       "size": 53687091200,
>       "startdate": "2022-07-25'T'00:00:00-03:00",
>       "usage": "24 Hrs",
>       "usagetype": 6
>     }
>   ]
> }
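One thing that may be worth checking is whether the volume's removal ever made it into the usage database itself. The sketch below assumes the standard cloud_usage schema and a usage_volume table with a deleted column; table and column names may differ between versions, so treat it as a starting point only.

# check whether the usage DB knows the volume was deleted (schema assumed; adjust names to your version)
mysql -u cloud -p cloud_usage -e "SELECT id, account_id, created, deleted FROM usage_volume WHERE id = 3246;"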
How to configure link domaintoldap to define admin and user roles?
Hi all! In the documentation I can see:

cloudmonkey link domaintoldap domainid=12345678-90ab-cdef-fedc-ba0987654321 \
  accounttype=2 \
  ldapdomain="ou=people,dc=cloudstack,dc=apache,dc=org" \
  type=OU

So, for each member of ou=people,dc=cloudstack,dc=apache,dc=org I'll have one account with the Domain Admin role (accounttype=2). How do I do the same configuration for both the user and admin roles? For example:

To define admins:

cloudmonkey link domaintoldap domainid=12345678-90ab-cdef-fedc-ba0987654321 \
  accounttype=2 \
  ldapdomain="ou=admins,dc=cloudstack,dc=apache,dc=org" \
  type=OU

To define users:

cloudmonkey link domaintoldap domainid=12345678-90ab-cdef-fedc-ba0987654321 \
  accounttype=0 \
  ldapdomain="ou=users,dc=cloudstack,dc=apache,dc=org" \
  type=OU

When I tried to do that, the second command failed with:

Error: (HTTP 530, error code ) Entity already exists

As I couldn't configure it that way, I tried just one command with accounttype=0, passing the admin= parameter:

cloudmonkey link domaintoldap domainid=12345678-90ab-cdef-fedc-ba0987654321 \
  accounttype=0 \
  ldapdomain="ou=users,dc=cloudstack,dc=apache,dc=org" \
  type=OU \
  admin=adminuser

So, all members of the LDAP group can be normal users and adminuser will be the domain admin. But if I need to have more than one domain admin, how can I configure that? I've tried passing two admin= parameters, but only the first is used.

Thank you!
Re: Ubuntu 22.04 Release
Hi! I've set up some servers with Ubuntu 22.04 LTS: 2 management servers, 2 databases, 2 secondary storage servers and 4 processing nodes. In my tests I've used the repository from focal:

http://download.cloudstack.org/ubuntu focal 4.16

As the CloudStack packages do not require a lot of dependencies, the installation finished well. After all the configuration, the cloud is up. But I have one main problem, already reported and fixed:

https://github.com/apache/cloudstack/pull/6244

Until now, no system VMs can be deployed. Although the issue was fixed, I'm waiting until the repository package gets updated, or until a new one specific to jammy is created. I've seen that the last updates to the cloudstack packages were done on April 4 (on my servers); the issue was fixed a little bit later in April.

Many thanks!

On Wed, Jun 1, 2022 at 4:14 AM, Loth wrote:

> Hello Users,
>
> Has any work been done regarding testing Cloudstack with Ubuntu 22.04,
> and if so, which versions?
>
> Thanks for any news.
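For anyone else trying the same thing in the meantime, the repository line above goes into an apt source file more or less like this. The path and release/version are the ones from my tests and may need adjusting, and the repository signing key still has to be added as described in the install docs.

# using the focal packages on jammy, as described above
echo "deb http://download.cloudstack.org/ubuntu focal 4.16" | sudo tee /etc/apt/sources.list.d/cloudstack.list
sudo apt-get update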
Re: Problem with libvirt 8 and domain VNC passwords.
Just to confirm the incompatibility. When the zone was enabled, the CS manager started trying to launch some system VMs, like the s-*-VM and v-*-VM. At the hypervisors, all attempts were failing because libvirtd didn't accept a vnc_password bigger than 8 chars:

libvirtd[44140]: unsupported configuration: VNC password is 22 characters long, only 8 permitted

Then, I changed the vnc_passwords directly in the database. On the manager, generate the string for the password 12345678:

java -cp /usr/share/cloudstack-common/lib/jasypt-1.9.3.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="12345678" password="DATABASE_KEY"

OUTPUT: ohM+JhNfT0xFJC3HtveMGTI5CJCjkcN5

In the database, update to the new value:

update vm_instance set vnc_password = "ohM+JhNfT0xFJC3HtveMGTI5CJCjkcN5=" where name like "s-%" or name like "v-%";

After that, using an 8-char password, all system VMs started fine!

In https://qemu-project.gitlab.io/qemu/system/vnc-security.html we can see:

*The VNC protocol has limited support for password based authentication. Since the protocol limits passwords to 8 characters it should not be considered to provide high security.*

Before my tests with libvirt 8 I was using libvirt 6 with Ubuntu 20.04. It looks like libvirt 6 just drops whatever comes after 8 chars in passwords. So, sending a bigger password does not increase security, because the protocol has that limitation, right? In libvirt 8 some modification is generating a Warning/Error. This shows something about that modification:

https://www.mail-archive.com/libvir-list@redhat.com/msg224586.html

That warning/error is causing system VMs to not start! So, to use libvirt 8 with CloudStack I think the vnc_password length needs to be 8 in some way, because libvirt 8 no longer drops what goes beyond that.

Thanks! :)
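To double check what actually reaches libvirt after the database change, the domain XML can be dumped with the security info included; the domain name below is a placeholder, and the passwd attribute on the VNC graphics element is only shown when --security-info is passed.

# inspect the VNC password libvirt received for a system VM (domain name is an example)
virsh dumpxml s-123-VM --security-info | grep -i vnc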
Problem with libvirt 8 and domain VNC passwords.
Hi! I'm testing CS with Ubuntu 22.04 LTS, which uses libvirt 8.0.0-1ubuntu6 (using the CS repository from focal 20.04). After the whole installation process, when the manager tries to start some system VMs (like the SSVM), the hypervisor hosts can't do it. In the agent logs I can see:

2022-04-08 16:17:30,142 WARN [resource.wrapper.LibvirtStartCommandWrapper] (agentRequest-Handler-5:null) (logid:f1b7f404) LibvirtException
org.libvirt.LibvirtException: unsupported configuration: VNC password is 22 characters long, only 8 permitted
at org.libvirt.ErrorHandler.processError(Unknown Source)
at org.libvirt.ErrorHandler.processError(Unknown Source)
at org.libvirt.Connect.domainCreateXML(Unknown Source)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.startVM(LibvirtComputingResource.java:1736)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:86)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtStartCommandWrapper.execute(LibvirtStartCommandWrapper.java:46)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1768)
at com.cloud.agent.Agent.processRequest(Agent.java:661)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
at com.cloud.utils.nio.Task.call(Task.java:83)
at com.cloud.utils.nio.Task.call(Task.java:29)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)

In the libvirt logs I can see:

● libvirtd.service - Virtualization daemon
     Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-04-08 16:05:56 -03; 10min ago
       Docs: man:libvirtd(8)
             https://libvirt.org
   Main PID: 44140 (libvirtd)
      Tasks: 19 (limit: 32768)
     Memory: 17.5M
        CPU: 3.342s
     CGroup: /system.slice/libvirtd.service
             └─44140 /usr/sbin/libvirtd --listen

Apr 08 16:14:00 hpc-p01c01h01 libvirtd[44140]: unsupported configuration: VNC password is 22 characters long, only 8 permitted
Apr 08 16:14:01 hpc-p01c01h01 libvirtd[44140]: unsupported configuration: VNC password is 22 characters long, only 8 permitted
Apr 08 16:14:29 hpc-p01c01h01 libvirtd[44140]: unsupported configuration: VNC password is 22 characters long, only 8 permitted
Apr 08 16:14:30 hpc-p01c01h01 libvirtd[44140]: unsupported configuration: VNC password is 22 characters long, only 8 permitted

Looking for something about this, I realized that older versions of libvirt would just ignore VNC passwords bigger than 8 chars; now it looks like an error is triggered. I tried to find where CS stores the .xml file for the SSVM domain, to see if the password really is 22 characters long, but I didn't find it. I think the problem is when generating the .xml file for the new domains: probably CS generates a long password. Is there any way to configure the size of the VNC password that CS generates?

Thank you!
--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
Re: How to control resource limits when account is linked to LDAP?
Thank you, Daan! I was looking in the wrong place. If I go to Domains, click the account name and look at Resources, everything is being correctly updated. Tks!!

On Mon, Dec 13, 2021 at 6:17 AM, Daan Hoogland wrote:

> Jorge,
> It seems like a bug as you describe it, but maybe you are looking at the
> wrong figures. When you try to create more resources than the total account
> limit, do they still get created? If so, please log a bug at
> https://github.com/apache/cloudstack/issues/new/choose
>
> On Mon, Nov 22, 2021 at 8:22 PM Jorge Luiz Correa wrote:
> > [...]
>
> --
> Daan
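For the record, the per-account usage that the UI shows under Domains > account > Resources can also be pulled through the API, which makes it easy to compare the two linked accounts. This is only a rough sketch: the domain UUID and account names are placeholders, and the filter list is just the fields I would look at first.

cmk list resourcelimits account=Users domainid=DOMAIN-UUID
cmk list accounts name=Users domainid=DOMAIN-UUID filter=name,vmtotal,cputotal,memorytotal,primarystoragetotal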
RE: How to control resource limits when account is linked to LDAP?
I've found the problem; it wasn't a bug, it was a configuration issue. I'm using 2 management servers with 2 database servers (HA) and HAProxy to load balance access. My second management server was configured with an error, so it couldn't connect correctly to the database.

When I created some virtual machines, I was probably using the second management server because HAProxy selected it. So, when accessing the web interface as the ROOT admin, I couldn't see the resource usage values changing for the account, because some data in the database wasn't being updated.

After correcting the configuration, I can see usage information for all the accounts, including those linked to LDAP groups.

Tks!

On 2021/11/22 19:21:32 Jorge Luiz Correa wrote:
> [...]
How to control resource limits when account is linked to LDAP?
When we have an account UserA with a user UserA inside it, we can see and control usage limits by configuring the UserA "account".

I'm testing the link accounttoldap feature:

cmk -p ad...@www.hpc link accounttoldap account='DomainAdmins' accounttype=2 ldapdomain='cn=cs_hpc_domain_admins,ou=grupos,...' type=GROUP domainid=$DOMAINUD
cmk -p ad...@www.hpc link accounttoldap account='Users' accounttype=0 ldapdomain='cn=cs_hpc_users,ou=grupos,...' type=GROUP domainid=$DOMAINUD

So, I got two accounts: DomainAdmins and Users. Each user in the cs_hpc_domain_admins LDAP group is created as a user inside the DomainAdmins account, and each user in cs_hpc_users is created as a user inside the Users account.

Both the DomainAdmins and Users accounts have resource limits configured (like UserA). But when users create virtual machines, these limits don't change! I can't define limits for users inside accounts, only for accounts. So, I couldn't find a way to limit usage when accounts are linked to LDAP groups.

I was hoping that all the resources created by all the users inside the account would be counted against the limits of the account. But the account's total usage never changes.

Am I doing something wrong, or is this a bug?

CloudStack 4.15.2.0

Tks!

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
RE: ldaps config settings
Same difficulty here. The way it worked for me was defining the truststore globally; only after that did I define the LDAP configuration inside a domain. Using the API:

cmk -p user@myprofile update configuration name='ldap.truststore' value='/etc/cloudstack/management/cloud.jks'
cmk -p user@myprofile update configuration name='ldap.truststore.password' value=PASSWORD
cmk -p user@myprofile add ldapconfiguration hostname=ldapserver.mydomain port=636 domainid="domain uuid here"
cmk -p user@myprofile update configuration name='ldap.basedn' value='...' domainid="domain uuid here"
. . .

Note that the API accepts configuring ldap.truststore for a single domain, but this has no effect:

cmk -p user@myprofile update configuration name='ldap.truststore' value='/etc/cloudstack/management/cloud.jks' domainid="domain uuid here"

When I configured ldap.truststore on one domain, the connection didn't use SSL.

Tks!

On 2021/06/07 20:56:18 Yordan Kostov wrote:
> Dear community,
>
> Currently trying to reconfigure a working ACS LDAP authentication to LDAPS, but I believe something of importance may be missing in the guide (https://docs.cloudstack.apache.org/en/latest/adminguide/accounts.html#ldap-ssl).
> It says that if ldap.truststore and ldap.truststore.password are configured it will switch to LDAPS, but that is not the case.
> The logs confirm the LDAP protocol is used when adding a host after updating the config - "(logid:aafbef8a) initializing ldap with provider url: ldap://X.X.X.X:636"
>
> Here are a few questions to round out the issue:
>
> * The API docs (LDAPCONFIG - https://cloudstack.apache.org/api/apidocs-4.15/apis/ldapConfig.html) mention the ability to enable SSL and bind a certificate for an LDAP host, but there is no option to define the domain for the specific LDAP configuration.
> * What if multiple domains are present and their configs use the same LDAP server? Can the SSL of one domain's LDAP config be changed one at a time, or is this based on the LDAP host level?
> * ldap.truststore - is a syntax like /opt/CAROOT.crt going to work, or does it originate from a default directory?
> * ldap.truststore.password - what if the certificate is without a password, is it going to work?
>
> Any example commands on how this can be done through cloudmonkey will be much appreciated!
>
> Best regards,
> Jordan
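In case it helps, a truststore like the one referenced above can be built by importing the LDAP server's CA certificate with keytool. This is only a sketch: the alias, certificate path, keystore path and password are placeholders, and the password must match whatever is set in ldap.truststore.password.

# import the LDAP CA into a JKS truststore for the management server (paths and password are examples)
keytool -importcert -alias ldap-ca -file /root/ldap-ca.crt -keystore /etc/cloudstack/management/cloud.jks -storepass PASSWORD -noprompt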
Re: Is possible to use CloudStack and LDAP with posixGroup and memberUid?
Hi! In my tests I couldn't use posixGroups, even after changing the ldap.group.object configuration. The query is always in the format:

(&(objectClass=inetOrgPerson)(uid=userone)(|(memberOf=cn=groupaccount1,ou=groups,dc=domain)))

Looking for the memberOf attribute in the user entity is the problem: I'm using inetOrgPerson and no memberOf attribute exists.

The only way I found to make this configuration work was to enable the RFC2307bis schema (replacing the NIS schema), so my groups could be of type posixGroup AND groupOfNames; this RFC permits groups to be of these two types. Then, I had to enable the LDAP "overlay module" for the member: attribute to keep referential integrity between groups and users. Groups now have the member: attribute synchronized with the users' memberOf: attribute. With these changes, my LDAP server can answer queries with memberOf= filters.

For CloudStack to work with posixGroups, I think the code should issue different queries when the administrator configures ldap.group.object: posixGroup, not using memberOf.

Thank you! :)
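A quick way to verify that the overlay setup above is doing its job is to ask for the memberOf attribute explicitly (it is operational, so it is not returned unless requested by name). The bind DN, base and user below are placeholders for whatever your directory uses.

# memberOf should now be populated on the user entry (names are examples)
ldapsearch -x -H ldaps://ldapserver.mydomain -D "cn=admin,dc=domain" -W -b "ou=people,dc=domain" "(uid=userone)" memberOf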
Is possible to use CloudStack and LDAP with posixGroup and memberUid?
Hi! This is just my first post here and I'm looking for some help to understand more about LDAP use. I'm using CloudStack 4.15.2.0 and an OpenLDAP server. I need to configure autosync to map an account to an LDAP group. My LDAP uses the posixGroup type as the group entity. Can CloudStack use groups of that type? If yes, how can I configure it that way?

My tests only work if I create a group of type groupOfNames (objectClass=groupOfNames with entries like member=userone, member=usertwo). But I already have an OpenLDAP server with a lot of groups using objectClass=posixGroup (with entries like memberUid=userone, memberUid=usertwo), and I would like to use them.

Looking at the slapd log I see a query with the following filter:

(&(objectClass=inetOrgPerson)(uid=userone)(|(memberOf=cn=groupaccount1,ou=groups,dc=domain)))

Reading about LDAP groups (in general), to use posixGroup it looks like the client has to implement this itself, i.e. a way to check for users inside posixGroups. The log above appears to check users in groups using the memberOf scheme. I haven't yet understood whether CloudStack can operate like this.

Also, is there a way to delete a "link accounttoldap" configuration? I always have to delete the account to run new tests; I didn't find a way to delete this mapping.

Thank you! :)

--
Jorge Luiz Corrêa
Embrapa Agricultura Digital
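For comparison, this is roughly the kind of lookup a client has to do when groups are posixGroup-based, since membership lives in the group entry (memberUid) rather than in the user entry; the base DN and names below are placeholders.

# resolve a user's groups from the group side, as posixGroup does not give the user a memberOf attribute
ldapsearch -x -b "ou=groups,dc=domain" "(&(objectClass=posixGroup)(memberUid=userone))" cn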