Hi Sudhansu,

Thanks! The fix doesn’t appear to be in the master branch or the 4.7.x branch. I had created the following bug yesterday: https://issues.apache.org/jira/browse/CLOUDSTACK-9363, but I just noticed you created https://issues.apache.org/jira/browse/CLOUDSTACK-9367.

-- Simon
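For reference, here is a minimal sketch of the code path implied by the stack trace and xapi log quoted below. It is not the actual CitrixResourceBase.createVbd implementation; the helper name and most of the record fields are assumptions for illustration, but the xen-api calls (VM.getAllowedVBDDevices, VBD.create) are the ones visible in the trace:

import java.util.HashMap;
import java.util.Set;

import org.apache.xmlrpc.XmlRpcException;

import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Types;
import com.xensource.xenapi.VBD;
import com.xensource.xenapi.VDI;
import com.xensource.xenapi.VM;

public class VbdDeviceSketch {

    // Illustrative only: ask xapi which device slots the VM currently allows,
    // use the requested slot if it is in that list, and otherwise fall back to
    // "autodetect" (the fallback visible in the xapi log below).
    public static VBD createDataVbd(final Connection conn, final VM vm, final VDI vdi,
            final long requestedDeviceNr) throws Types.XenAPIException, XmlRpcException {
        final Set<String> allowed = vm.getAllowedVBDDevices(conn);

        final VBD.Record vbdr = new VBD.Record();
        vbdr.VM = vm;
        vbdr.VDI = vdi;
        vbdr.mode = Types.VbdMode.RW;
        vbdr.type = Types.VbdType.DISK;
        vbdr.bootable = false;
        vbdr.unpluggable = true;
        vbdr.empty = false;
        vbdr.otherConfig = new HashMap<String, String>();
        vbdr.qosAlgorithmType = "";
        vbdr.qosAlgorithmParams = new HashMap<String, String>();

        final String wanted = String.valueOf(requestedDeviceNr);
        if (allowed.contains(wanted)) {
            // Normal case: the slot picked by the orchestrator is acceptable to xapi.
            vbdr.userdevice = wanted;
        } else {
            // On a starting HVM guest, get_allowed_VBD_devices only returned [1, 2],
            // so the third data volume lands here and xapi rejects the VBD with
            // INVALID_DEVICE: [ autodetect ].
            vbdr.userdevice = "autodetect";
        }
        return VBD.create(conn, vbdr);
    }
}

With the allowed list truncated to [1, 2] while the HVM guest is still starting, every additional volume drops into the 'autodetect' branch, which lines up with the final "INVALID_DEVICE: [ autodetect ]" entry in the xapi log below.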
> On Apr 22, 2016, at 05:56, Sudhansu Sahu <sudhansu.s...@accelerite.com> wrote:
>
> Hi Simon,
>
> We have faced a similar issue in the past, and I think there is a fix available for it.
> Let me check whether it is fixed in ACS; if not, I will create a pull request for it.
>
> Thanks
> Sudhansu
>
> On 21/04/16 6:06 pm, "Koushik Das" <koushik....@accelerite.com> wrote:
>
>> Once the VM is in the running state, is "allowed devices" showing the proper list? If auto-detect is working properly after that, then maybe the issue is somewhere else.
>>
>> -Koushik
>>
>> ________________________________________
>> From: Simon Godard <sgod...@cloudops.com>
>> Sent: Thursday, April 21, 2016 12:41 AM
>> To: CloudStack Users Mailing list
>> Subject: Re: Unable to start a HVM VM with more than 2 volumes attached using XenServer 6.5 and ACS 4.7.1
>>
>> After more investigation, I can confirm that the problem only affects HVM. I tried with a PV VM and everything is fine.
>>
>> For some reason, calling VM.get_allowed_VBD_devices on an HVM guest while the VM is still in the Starting state returns only a subset of device IDs: [1, 2] instead of the [1, 2, 3, …, 15] returned for PV. The logic then falls back to 'autodetect', which also seems to be invalid for an HVM VM.
>>
>> --
>> Simon
>>
>>> On Apr 20, 2016, at 08:20, Simon Godard <sgod...@cloudops.com> wrote:
>>>
>>> Hi,
>>>
>>> We are getting a weird error when trying to start a VM (based on an HVM template) with more than 2 volumes attached. We are using XenServer 6.5 and CloudStack 4.7.1. Here are the logs:
>>>
>>> ACS
>>> Unable to start i-152-612-VM due to
>>> The device name is invalid
>>> at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
>>> at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
>>> at com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
>>> at com.xensource.xenapi.VBD.create(VBD.java:322)
>>> at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1148)
>>> at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:119)
>>> at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
>>> at com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
>>> at com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1678)
>>>
>>> XenServer
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3c596e35d366|audit] VBD.create: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'; VDI = 'invalid'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3c596e35d366|xapi] Checking whether there's a migrate in progress...
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3c596e35d366|xapi] VBD.create (device = 3; uuid = d32bd2fb-68d9-a20b-3632-43c5cffd5ca1; ref = OpaqueRef:d5d36ea6-2b79-2efd-c2ad-27403ffbdbf8)
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18677027 UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:SR.get_other_config D:764bb880e1f7 created by task D:9e603ca8e24b
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18677028 UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:SR.get_sm_config D:b30be613f286 created by task D:9e603ca8e24b
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VM.get_allowed_VBD_devices D:65d1aeb0e377|audit] VM.get_allowed_VBD_devices: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:f5513cd51b7d|audit] VBD.create: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'; VDI = '7c882c4f-eaed-49c9-952d-53f2db998ecd'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:f5513cd51b7d|xapi] Checking whether there's a migrate in progress...
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:f5513cd51b7d|xapi] VBD.create (device = 2; uuid = b83f771a-c796-8f5b-26de-fb648327e305; ref = OpaqueRef:f1477777-8d60-52b6-8041-72cf95fa7f44)
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VM.get_allowed_VBD_devices D:c180118513de|audit] VM.get_allowed_VBD_devices: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3a50f805fbb1|audit] VBD.create: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'; VDI = '78ab2cc5-a7ec-4709-9556-a1d6bfbc2b65'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3a50f805fbb1|xapi] Checking whether there's a migrate in progress...
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:3a50f805fbb1|xapi] VBD.create (device = 1; uuid = 6556fa97-356f-65ec-fcda-d41205804b46; ref = OpaqueRef:c71b9afd-7069-5caf-f67b-befc83de722b)
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VM.get_allowed_VBD_devices D:0cdd9a03fdea|audit] VM.get_allowed_VBD_devices: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:b52da7a69374|audit] VBD.create: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'; VDI = '7634db62-6f48-4136-aac1-4db7a5ad77d6'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:b52da7a69374|xapi] Checking whether there's a migrate in progress...
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:b52da7a69374|xapi] VBD.create (device = 0; uuid = bbbd2eac-24ce-7c5d-315e-72eaaa9ec161; ref = OpaqueRef:9dd9c448-2718-9849-908c-78b2c394f6ac)
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VM.get_allowed_VBD_devices D:30446069f7d1|audit] VM.get_allowed_VBD_devices: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:a285caa8ad4a|audit] VBD.create: VM = '23da57b8-cc94-3edd-66af-97397c1e9f89 (i-152-614-VM)'; VDI = '8ed37e28-10e7-45f9-85d1-f1443b673701'
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:a285caa8ad4a|xapi] Checking whether there's a migrate in progress...
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:a285caa8ad4a|backtrace] Raised at xapi_vbd.ml:135.9-75 -> threadext.ml:20.20-24 -> threadext.ml:20.62-65 -> message_forwarding.ml:3480.3-150 -> server.ml:24430.82-282 -> rbac.ml:229.16-23
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:a285caa8ad4a|backtrace] Raised at rbac.ml:238.10-15 -> server_helpers.ml:79.11-41
>>> Apr 19 14:00:33 cca-t2-xen01 xapi: [debug|cca-t2-xen01|18676540 INET :::80|VBD.create R:a285caa8ad4a|dispatcher] Server_helpers.exec exception_handler: Got exception INVALID_DEVICE: [ autodetect ]
>>>
>>> Could it be a side-effect of https://github.com/apache/cloudstack/pull/792 ?
>>>
>>> Thanks,
>>>
>>> Simon GODARD
>>> Développeur Principal | Lead Developer
>>> t 514.880.3777
>>>
>>> CloudOps Votre partenaire infonuagique | Cloud Solutions Experts
>>> 420 rue Guy | Montreal | Quebec | H3J 1S6
>>> w cloudops.com | tw @CloudOps_