[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2014-11-15 Thread Ram Ganesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ram Ganesh updated CLOUDSTACK-5482:
---
Assignee: edison su  (was: Sateesh Chodapuneedi)

> Vmware - When nfs was down for about 1 hour, when snapshots were in progress,
> snapshot job failed when nfs was brought up leaving behind snapshots in
> "CreatedOnPrimary" state.
> -
>
> Key: CLOUDSTACK-5482
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5482
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: edison su
> Fix For: 4.4.0, 4.5.0
>
> Attachments: nfs12down.rar, vmware.rar, vmware.rar
>
>
> Set up:
> Advanced Zone with two ESXi 5.1 hosts.
> Steps to reproduce the problem:
> 1. Deploy 5 VMs on each of the hosts, so we start with 11 VMs.
> 2. Start concurrent snapshots for the ROOT volumes of all the VMs.
> 3. Shut down the Secondary storage server while the snapshots are in progress.
> 4. Bring the Secondary storage server up after 1 hour.
> When the secondary storage went down, 2 of the snapshots had already
> completed, 5 were in progress, and the other 4 had not yet started.
> Once the secondary store was brought back up, the snapshots that were in
> progress continued to download to secondary storage and succeeded, but the
> other 4 snapshots errored out.
> mysql> select volume_id,status,created from snapshots;
> +-----------+------------------+---------------------+
> | volume_id | status           | created             |
> +-----------+------------------+---------------------+
> |        22 | BackedUp         | 2013-12-12 23:24:13 |
> |        21 | Destroyed        | 2013-12-12 23:24:13 |
> |        20 | BackedUp         | 2013-12-12 23:24:14 |
> |        19 | Destroyed        | 2013-12-12 23:24:14 |
> |        18 | BackedUp         | 2013-12-12 23:24:14 |
> |        17 | BackedUp         | 2013-12-12 23:24:14 |
> |        16 | BackedUp         | 2013-12-12 23:24:14 |
> |        14 | BackedUp         | 2013-12-12 23:24:15 |
> |        25 | BackedUp         | 2013-12-12 23:24:15 |
> |        24 | BackedUp         | 2013-12-12 23:24:15 |
> |        23 | BackedUp         | 2013-12-12 23:24:15 |
> |        22 | CreatedOnPrimary | 2013-12-12 23:53:38 |
> |        21 | BackedUp         | 2013-12-12 23:53:38 |
> |        20 | BackedUp         | 2013-12-12 23:53:38 |
> |        19 | BackedUp         | 2013-12-12 23:53:39 |
> |        18 | CreatedOnPrimary | 2013-12-12 23:53:39 |
> |        17 | CreatedOnPrimary | 2013-12-12 23:53:40 |
> |        16 | CreatedOnPrimary | 2013-12-12 23:53:40 |
> |        14 | BackedUp         | 2013-12-12 23:53:40 |
> |        25 | BackedUp         | 2013-12-12 23:53:41 |
> |        24 | BackedUp         | 2013-12-12 23:53:41 |
> |        23 | BackedUp         | 2013-12-12 23:53:42 |
> |        21 | BackedUp         | 2013-12-13 00:53:37 |
> |        19 | BackedUp         | 2013-12-13 00:53:38 |
> +-----------+------------------+---------------------+
> 24 rows in set (0.00 sec)
> This leaves behind incomplete snapshots. The directory does not have an OVF
> file and has an incomplete VMDK file.
> [root@Rack3Host8 18]# ls -ltR
> .:
> total 12
> drwxr-xr-x. 2 root root 4096 Dec 12 22:56 36d7964c-e545-41d7-b303-96359a88dcef
> drwxr-xr-x. 2 root root 4096 Dec 12 22:30 68802f5f-84b1-42ad-8dca-4de7e83324e2
> ./36d7964c-e545-41d7-b303-96359a88dcef:
> total 403256
> -rw-r--r--. 1 root root 412524288 Dec 13 00:20 
> 36d7964c-e545-41d7-b303-96359a88dcef-disk0.vmdk
> ./68802f5f-84b1-42ad-8dca-4de7e83324e2:
> total 448860
> -rw-r--r--. 1 root root 459168256 Dec 12 22:30 
> 68802f5f-84b1-42ad-8dca-4de7e83324e2-disk0.vmdk
> -rw-r--r--. 1 root root  6454 Dec 12 22:30 
> 68802f5f-84b1-42ad-8dca-4de7e83324e2.ovf
> [root@Rack3Host8 18]#
> The following exception is seen in the management server logs:
> 2013-12-12 20:23:13,021 DEBUG [c.c.a.t.Request] (AgentManager-Handler-2:null) 
> Seq 5-813367309: Processing:  { Ans: , MgmtId: 95307354844397, via: 5, Ver: 
> v1, Flags: 10, 
> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"backup
>  snapshot exception: Exception: java.lang.Exception\nMessage: Unable to 
> finish the whole process to package as a OVA file\n","wait":0}}] }
> 2013-12-12 20:23:13,022 DEBUG [c.c.a.t.Request] (Job-Executor-1:ctx-83fb69a5 
> ctx-51e56052) Seq 5-813367309: Received:  { Ans: , MgmtId: 95307354844397, 
> via: 5, Ver: v1, Flags: 10, { CopyCmdAnswer } }
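An incomplete snapshot directory like the ones in the listing above can be detected mechanically: a fully backed-up VMware snapshot directory on secondary storage holds both the `-disk0.vmdk` and its `.ovf` descriptor, while a stuck one has only the VMDK. A minimal shell sketch of that check (the directory layout is assumed from the `ls -ltR` output above; this is not a CloudStack tool):

```shell
# find_incomplete BASE: print snapshot subdirectories under BASE that
# contain a .vmdk but no .ovf descriptor, i.e. OVA packaging never finished.
find_incomplete() {
    base="$1"
    for dir in "$base"/*/; do
        [ -d "$dir" ] || continue
        # Directories with no vmdk at all are not snapshot payloads; skip them.
        ls "$dir"*.vmdk >/dev/null 2>&1 || continue
        # A vmdk without an accompanying ovf marks an unfinished backup.
        ls "$dir"*.ovf >/dev/null 2>&1 || printf 'incomplete: %s\n' "$dir"
    done
}
```

Run against the volume directory from the listing (e.g. `find_incomplete /export/secondary/snapshots/2/18`, a hypothetical mount point); only directories missing their `.ovf` are printed.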

[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2014-11-17 Thread edison su (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

edison su updated CLOUDSTACK-5482:
--
Assignee: Sateesh Chodapuneedi  (was: edison su)


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2013-12-12 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5482:


Description: 
The following exception is seen in the management server logs:
2013-12-12 20:23:13,041 DEBUG [c.c.s.s.SnapshotManagerImpl] 
(Job-Executor-1:ctx-83fb69a5 ctx-51e56052) Failed to create snapshot
com.cloud.utils.exception.CloudRuntimeException: backup snapshot exception: 
Exception: java.lang.Exception
Message: Unable to finish the whole process to package as a OVA file

        at org.apache.cloudstack.storage.snapshot.SnapshotServiceImpl.backupSnapshot(SnapshotServiceImpl.java:275)
        at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.backupSnapshot(XenserverSnapshotStrategy.java:135)
        at org.apache.cloudstack.storage.snapshot.XenserverSnapshotStrategy.takeSnapshot(XenserverSnapshotStrategy.java:294)
        at com.cloud.storage.snapshot.SnapshotManagerImpl.takeSnapshot(SnapshotManagerImpl.java:951)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
        at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
        at $Prox
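The rows left behind in the "CreatedOnPrimary" state can be listed straight from the database shown earlier in the thread. A hedged sketch that builds the diagnostic query as a string (the `snapshots` table and its columns come from the mysql output above; the database name and client invocation are placeholders, not confirmed CloudStack defaults):

```shell
# Emit the diagnostic SQL; feed it to the mysql client against the
# CloudStack database, e.g.: mysql -u <user> -p cloud -e "$(stuck_snapshots_sql)"
# ("cloud" as the database name is an assumption from common deployments.)
stuck_snapshots_sql() {
    printf '%s' "SELECT volume_id, status, created FROM snapshots WHERE status = 'CreatedOnPrimary';"
}
```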


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2013-12-12 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5482:


Attachment: vmware.rar


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2013-12-12 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5482:


Attachment: vmware.rar


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2013-12-13 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5482:


Attachment: nfs12down.rar


[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour, when snapshots were in progress, snapshot job failed when nfs was brought up leaving behind snapshots in "CreatedOnPrimary" state.

2014-10-08 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5482:
---
Assignee: Sateesh Chodapuneedi

> (quoted issue description elided; identical to the report above)

[jira] [Updated] (CLOUDSTACK-5482) Vmware - When nfs was down for about 1 hour , when snapshots were in progress , snapshot job failed when nfs was brought up leaving behind snaphots in "CreatedOnPrimary" state.

2014-11-02 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5482:
---
Fix Version/s: 4.5.0

> (quoted issue description elided; identical to the report above)