Hello there,

I'm working on integrating CloudStack 4.0.1 with XenServer 6.0.2. 

I have managed to set up a basic network configuration, launch a zone,
launch guest instances, and create volume snapshots, but after that I am
running into the following issue:

 

The 'create template from snapshot' operation always fails.

The management server log shows that the following exception is thrown
during the operation:


com.cloud.utils.exception.CloudRuntimeException: create_privatetemplate_from_snapshot failed due to failed to coalesce /var/run/cloud_mount/d865f1ad-a164-443f-8c05-d4c56e690f25/b1a56afd-a943-4a43-87b8-c2e2ab10126f.vhd to /var/run/cloud_mount/58d3242f-d4fb-41e9-a20e-0686ffda9eaa/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
        at com.cloud.hypervisor.xen.resource.CitrixResourceBase.createTemplateFromSnapshot(CitrixResourceBase.java:2618)
        at com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:6398)
        at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:475)
        at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
        at com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:191)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)



After some debugging of the management server code, I figured out that the
operation eventually invokes the 'create_privatetemplate_from_snapshot.sh'
script on the XenServer host. So I logged into the XenServer host and ran the
script manually with the same parameters CloudStack passes, and it ended up as
follows:


[root@xenserver-modaxvnu bin]# sh -x /opt/xensource/bin/create_privatetemplate_from_snapshot.sh 20.10.97.182:/export/secondary/snapshots/2/10/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd 20.10.97.182:/export/secondary/template/tmpl/2/218 7a51626c-dd10-43ca-b65a-162e88cf1188
+ options=tcp,soft,timeo=133,retrans=1
+ '[' -z 20.10.97.182:/export/secondary/snapshots/2/10/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd ']'
+ snapshoturl=20.10.97.182:/export/secondary/snapshots/2/10
+ vhdfilename=85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ '[' -z 20.10.97.182:/export/secondary/template/tmpl/2/218 ']'
+ templateurl=20.10.97.182:/export/secondary/template/tmpl/2/218
+ '[' -z 7a51626c-dd10-43ca-b65a-162e88cf1188 ']'
+ tmpltLocalDir=7a51626c-dd10-43ca-b65a-162e88cf1188
++ uuidgen -r
+ snapshotdir=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b
+ mkdir -p /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b
+ '[' 0 -ne 0 ']'
+ mount -o tcp,soft,timeo=133,retrans=1 20.10.97.182:/export/secondary/snapshots/2/10 /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b
+ '[' 0 -ne 0 ']'
+ templatedir=/var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188
+ mkdir -p /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188
+ '[' 0 -ne 0 ']'
+ mount -o tcp,soft,timeo=133,retrans=1 20.10.97.182:/export/secondary/template/tmpl/2/218 /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188
+ '[' 0 -ne 0 ']'
+ VHDUTIL=/opt/xensource/bin/vhd-util
++ uuidgen -r
+ templateuuid=3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca
+ desvhd=/var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd
+ srcvhd=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ copyvhd /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ local desvhd=/var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd
+ local srcvhd=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ local parent=
++ /opt/xensource/bin/vhd-util query -p -n /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ parent=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd
+ '[' 0 -ne 0 ']'
+ [[ /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd =~  no parent ]]
+ copyvhd /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd
+ local desvhd=/var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd
+ local srcvhd=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd
+ local parent=
++ /opt/xensource/bin/vhd-util query -p -n /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd
+ parent='/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd has no parent'
+ '[' 0 -ne 0 ']'
+ [[ /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd has no parent =~  no parent ]]
+ dd if=/var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/8c410e93-98a5-4ac3-89d7-8c4ed1af18f9.vhd of=/var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd bs=2M
470+1 records in
470+1 records out
987628032 bytes (988 MB) copied, 375.973 seconds, 2.6 MB/s
+ '[' 0 -ne 0 ']'
+ /opt/xensource/bin/vhd-util coalesce -p /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd -n /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
options: <-n name> [-a ancestor] [-o output] [-s sparse] [-p progress] [-h help]
+ '[' 22 -ne 0 ']'
+ echo '32#failed to coalesce /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd to /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd'
32#failed to coalesce /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188/3d1cf0b6-3eb5-4a8d-867b-ae16d14ce3ca.vhd to /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b/85bd1f75-1d10-48c4-ae79-d2e5ece8b6f1.vhd
+ cleanup
+ '[' '!' -z /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b ']'
+ umount /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b
+ '[' 0 -eq 0 ']'
+ rmdir /var/run/cloud_mount/0be144dd-782a-45a0-9e88-ba4779b5c86b
+ '[' '!' -z /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188 ']'
+ umount /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188
+ '[' 0 -eq 0 ']'
+ rmdir /var/run/cloud_mount/7a51626c-dd10-43ca-b65a-162e88cf1188
+ exit 0
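
To restate my reading of the trace: the script's copyvhd function appears to recurse up the VHD parent chain, dd the base image first, and then coalesce each child back onto the copy. Below is a simplified sketch of that control flow as I understand it (my reconstruction, not the actual script source; the query_parent stub, the `.parent` marker files, and the echoed commands are all made up for illustration, so nothing touches real VHDs):

```shell
# Stub standing in for: vhd-util query -p -n <vhd>
# Reads a ".parent" marker file, or reports "has no parent" like the real tool.
query_parent() {
  cat "$1.parent" 2>/dev/null || echo "$1 has no parent"
}

# Recurse to the base of the chain, raw-copy it, then fold children onto it.
copyvhd() {
  local desvhd=$1
  local srcvhd=$2
  local parent
  parent=$(query_parent "$srcvhd")
  if echo "$parent" | grep -q "no parent"; then
    echo "dd if=$srcvhd of=$desvhd bs=2M"            # base image: raw copy
  else
    copyvhd "$desvhd" "$parent"                      # copy ancestors first
    echo "vhd-util coalesce -p $desvhd -n $srcvhd"   # fold child into the copy
  fi
}

# Tiny two-link chain: child.vhd -> base.vhd (which has no parent)
echo base.vhd > child.vhd.parent
copyvhd template.vhd child.vhd
rm -f child.vhd.parent
```

On the two-link chain this prints the dd of base.vhd followed by the coalesce of child.vhd, which matches the order of operations in the trace above.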


The 'vhd-util coalesce -p <vhd> -n <vhd>' command failed with exit code 22 and
printed its usage text instead of coalescing. In that usage text, '-p' is
listed as a plain 'progress' flag rather than an option taking a parent VHD
path, so it seems the installed vhd-util does not understand the parameters
the script passes.
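
If I'm reading it right, the mismatch is visible in the usage line itself. Here is a trivial check against the text the failing command printed (the usage string below is copied verbatim from the trace; the grep is just my illustration):

```shell
# Usage line printed by the failing 'vhd-util coalesce' invocation:
usage='options: <-n name> [-a ancestor] [-o output] [-s sparse] [-p progress] [-h help]'

# In this build, -p appears to be documented as a bare "progress" flag, so the
# parent VHD path the script passes after -p would not be consumed as an
# argument to -p:
if echo "$usage" | grep -q '\[-p progress\]'; then
  echo 'this vhd-util documents -p as a progress flag, not a parent-VHD option'
fi
```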

Can anybody help me figure out this issue? 

Any insight would be appreciated, and thanks for your patience in reading
this!

 

Yours sincerely,

Yan Ke

 
