[jira] [Created] (CLOUDSTACK-9103) Missing OS Mappings for VMware 6.0

2015-12-03 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-9103:


 Summary: Missing OS Mappings for VMware 6.0
 Key: CLOUDSTACK-9103
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9103
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Maneesha








[jira] [Assigned] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-24 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha reassigned CLOUDSTACK-8829:


Assignee: Maneesha

> Consecutive cold migration fails
> 
>
> Key: CLOUDSTACK-8829
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> The following scenario is broken:
> 1. Deploy VM and stop it
> 2. Migrate stopped VM to a different primary storage pool
> 3. Again migrate the same VM to another/same storage pool. Fails with NPE.
> java.lang.NullPointerException
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Updated] (CLOUDSTACK-8830) VM snapshot fails for 12 min after instance creation

2015-09-24 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8830:
-
Status: Reviewable  (was: In Progress)

> VM snapshot fails for 12 min after instance creation
> 
>
> Key: CLOUDSTACK-8830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> ISSUE
> 
> VM snapshot fails for 12 min after instance creation
> Environment
> ==
> Product Name: CloudStack
> Hypervisor: VMware vSphere 6
> VM DETAILS
> ==
> i-84987-16119-VM
> STORAGE CONFIGURATION
> ==
> NA
> TROUBLESHOOTING
> ==
> I see that the following failure and immediate success result for the 
> CreateVMSnapshot call
> {noformat}
> 2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
> Executing request
> 2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
> (DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, 
> job-64835/job-64836, cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to 
> create snapshot for vm:i-84987-16119-VM due to null
> 2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
> Response Received: 
> 2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Processing:  { Ans: , MgmtId: 
> 345051581208, via: 80, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.CreateVMSnapshotAnswer":{"result":false,"

[jira] [Updated] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-24 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8829:
-
Status: Reviewable  (was: In Progress)

> Consecutive cold migration fails
> 
>
> Key: CLOUDSTACK-8829
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> The following scenario is broken:
> 1. Deploy VM and stop it
> 2. Migrate stopped VM to a different primary storage pool
> 3. Again migrate the same VM to another/same storage pool. Fails with NPE.
> java.lang.NullPointerException
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Updated] (CLOUDSTACK-8866) restart.retry.interval is being used instead of migrate.retry.interval during host maintenance

2015-09-24 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8866:
-
Status: Reviewable  (was: In Progress)

> restart.retry.interval is being used instead of migrate.retry.interval during 
> host maintenance
> --
>
> Key: CLOUDSTACK-8866
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8866
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> The frequency at which CloudStack tries to migrate the VMs is currently 
> controlled by the global parameter "restart.retry.interval", which has a 
> default value of 600 seconds (10 minutes). This has to be changed to use 
> "migrate.retry.interval", which by default is 120 seconds (2 minutes). 
> CloudStack uses restart.retry.interval for all 
> operations: migrate, restart, stop, destroy.





[jira] [Assigned] (CLOUDSTACK-8830) VM snapshot fails for 12 min after instance creation

2015-09-24 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha reassigned CLOUDSTACK-8830:


Assignee: Maneesha

> VM snapshot fails for 12 min after instance creation
> 
>
> Key: CLOUDSTACK-8830
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> ISSUE
> 
> VM snapshot fails for 12 min after instance creation
> Environment
> ==
> Product Name: CloudStack
> Hypervisor: VMware vSphere 6
> VM DETAILS
> ==
> i-84987-16119-VM
> STORAGE CONFIGURATION
> ==
> NA
> TROUBLESHOOTING
> ==
> I see that the following failure and immediate success result for the 
> CreateVMSnapshot call
> {noformat}
> 2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
> (Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
> 345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
> [{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
>  i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
> 49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
>  }
> 2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
> Executing request
> 2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
> (DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, 
> job-64835/job-64836, cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to 
> create snapshot for vm:i-84987-16119-VM due to null
> 2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
> Response Received: 
> 2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
> (logid:8b87ab8a) Seq 80-6161487240196259878: Processing:  { Ans: , MgmtId: 
> 345051581208, via: 80, Ver: v1, Flags: 10, 
> [{"com.cloud.agent.api.CreateVMSnapshotAnswer":{"result":false,"wait":0}}] }

[jira] [Created] (CLOUDSTACK-8892) If VR enters out-of-band context, routers lose their private/link-local IP.

2015-09-21 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8892:


 Summary: If VR enters out-of-band context, routers lose their 
private/link-local IP.
 Key: CLOUDSTACK-8892
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8892
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Maneesha


If a VR enters an out-of-band context (here caused by a mismatch in the power 
state report from the hypervisor), routers lose their link-local IP (private IP 
in VMware), and no IP information is shown in the NICs tab.


{noformat}

2015-02-11 06:48:22,090 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(DirectAgent-320:ctx-b767cd3d) Schedule a router reboot task as router 1838 is 
powered-on out-of-band. we need to reboot to refresh network rules

2015-02-11 06:48:22,091 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(DirectAgent-320:ctx-b767cd3d) Schedule a router reboot task as router 1838 is 
powered-on out-of-band. we need to reboot to refresh network rules

2015-02-11 07:20:11,616 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterMonitor-1:ctx-c16552e1) Reboot router 1838 to refresh network rules
2015-02-11 07:20:11,624 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterMonitor-1:ctx-c16552e1) Stopping and starting router 
VM[DomainRouter|r-1838-VM] as a part of router reboot
2015-02-11 07:20:11,624 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterMonitor-1:ctx-c16552e1) Stopping router VM[DomainRouter|r-1838-VM]

2015-02-11 07:20:14,692 WARN  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterMonitor-1:ctx-c16552e1) Error while rebooting the router
java.lang.RuntimeException: Job failed due to exception Unable to stop 
VM[DomainRouter|r-1838-VM]
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:543)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:50)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:47)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:500)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)


{noformat}
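
To summarize the control flow in the log above: the out-of-band power-on report 
triggers a scheduled reboot task, and the reboot (implemented as stop + start) 
then fails because the router is no longer reachable. Below is a minimal 
illustrative Java sketch of that flow with entirely hypothetical names; this is 
not the actual VirtualNetworkApplianceManagerImpl code.

{noformat}
// Illustrative sketch only -- all names are hypothetical, not CloudStack code.
import java.util.ArrayDeque;
import java.util.Queue;

class OutOfBandRebootFlow {
    static class Router {
        long id;
        String privateIp;            // lost after the out-of-band event
        boolean poweredOnOutOfBand;
    }

    final Queue<Router> rebootQueue = new ArrayDeque<>();

    /** Power-state report handler: mirrors the "Schedule a router reboot
     *  task ... powered-on out-of-band" INFO lines above. */
    void onPowerStateReport(Router r) {
        if (r.poweredOnOutOfBand) {
            rebootQueue.add(r);      // reboot later to refresh network rules
        }
    }

    /** Reboot = stop + start. Per this bug the router has lost its
     *  private/link-local IP by now, so the stop command cannot reach the
     *  VR and the job fails ("Unable to stop VM[DomainRouter|...]"). */
    void rebootTask(Router r) {
        if (r.privateIp == null) {
            throw new RuntimeException(
                "Job failed due to exception Unable to stop VM[DomainRouter|r-"
                + r.id + "-VM]");
        }
        // ... stop the router, start it, reprogram network rules ...
    }
}
{noformat}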





[jira] [Assigned] (CLOUDSTACK-8892) If VR enters out-of-band context, routers lose their private/link-local IP.

2015-09-21 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha reassigned CLOUDSTACK-8892:


Assignee: Maneesha

> If VR enters out-of-band context, routers lose their private/link-local IP.
> ---
>
> Key: CLOUDSTACK-8892
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8892
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> If a VR enters an out-of-band context (here caused by a mismatch in the power 
> state report from the hypervisor), routers lose their link-local IP (private 
> IP in VMware), and no IP information is shown in the NICs tab.
> {noformat}
> 2015-02-11 06:48:22,090 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (DirectAgent-320:ctx-b767cd3d) Schedule a router reboot task as router 1838 
> is powered-on out-of-band. we need to reboot to refresh network rules
> 2015-02-11 06:48:22,091 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (DirectAgent-320:ctx-b767cd3d) Schedule a router reboot task as router 1838 
> is powered-on out-of-band. we need to reboot to refresh network rules
> 2015-02-11 07:20:11,616 INFO  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterMonitor-1:ctx-c16552e1) Reboot router 1838 to refresh network rules
> 2015-02-11 07:20:11,624 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterMonitor-1:ctx-c16552e1) Stopping and starting router 
> VM[DomainRouter|r-1838-VM] as a part of router reboot
> 2015-02-11 07:20:11,624 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterMonitor-1:ctx-c16552e1) Stopping router VM[DomainRouter|r-1838-VM]
> 2015-02-11 07:20:14,692 WARN  [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterMonitor-1:ctx-c16552e1) Error while rebooting the router
> java.lang.RuntimeException: Job failed due to exception Unable to stop 
> VM[DomainRouter|r-1838-VM]
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:543)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:50)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:47)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:500)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> {noformat}





[jira] [Assigned] (CLOUDSTACK-8866) restart.retry.interval is being used instead of migrate.retry.interval during host maintenance

2015-09-21 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha reassigned CLOUDSTACK-8866:


Assignee: Maneesha

> restart.retry.interval is being used instead of migrate.retry.interval during 
> host maintenance
> --
>
> Key: CLOUDSTACK-8866
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8866
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>Assignee: Maneesha
>
> The frequency at which CloudStack tries to migrate the VMs is currently 
> controlled by the global parameter "restart.retry.interval", which has a 
> default value of 600 seconds (10 minutes). This has to be changed to use 
> "migrate.retry.interval", which by default is 120 seconds (2 minutes). 
> CloudStack uses restart.retry.interval for all 
> operations: migrate, restart, stop, destroy.





[jira] [Created] (CLOUDSTACK-8866) restart.retry.interval is being used instead of migrate.retry.interval during host maintenance

2015-09-15 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8866:


 Summary: restart.retry.interval is being used instead of 
migrate.retry.interval during host maintenance
 Key: CLOUDSTACK-8866
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8866
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Maneesha


The frequency at which CloudStack tries to migrate the VMs is currently 
controlled by the global parameter "restart.retry.interval", which has a default 
value of 600 seconds (10 minutes). This has to be changed to use 
"migrate.retry.interval", which by default is 120 seconds (2 minutes). 
CloudStack uses restart.retry.interval for all 
operations: migrate, restart, stop, destroy.
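
The requested behavior can be shown with a short, self-contained Java sketch. 
The names below are hypothetical (this is not the actual CloudStack HA/work-queue 
code); only the two global-parameter names and their documented defaults come 
from the report.

{noformat}
// Illustrative sketch only -- class and method names are hypothetical.
enum WorkType { MIGRATION, RESTART, STOP, DESTROY }

class RetryPolicy {
    // Documented defaults from the report.
    private final long restartRetryIntervalSec = 600; // restart.retry.interval
    private final long migrateRetryIntervalSec = 120; // migrate.retry.interval

    /**
     * Reported bug: every work type waited restart.retry.interval (600s).
     * Requested behavior: migrations scheduled during host maintenance
     * should retry every migrate.retry.interval (120s) instead.
     */
    long nextRetryDelaySec(WorkType type) {
        return type == WorkType.MIGRATION
                ? migrateRetryIntervalSec
                : restartRetryIntervalSec;
    }
}
{noformat}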





[jira] [Comment Edited] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-14 Thread Maneesha (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738492#comment-14738492
 ] 

Maneesha edited comment on CLOUDSTACK-8829 at 9/14/15 11:45 AM:


Author - Likitha Shetty 
Issue
Consecutive VM cold migration fails.
Root Cause Analysis
In the case of VMware, if a VM is being cold migrated between clusters 
belonging to two different VMware DCs, CloudStack unregisters the VM from the 
source host and cleans up the associated VM files. The check for whether a VM 
is being cold migrated across DCs is made using the source host ID. For 
consecutive cold migrations the source host ID of the VM is NULL and the VM is 
no longer registered anywhere, so CloudStack should skip the check.
Proposed Solution
Attempt to unregister a VM in another DC only if there is a host associated 
with the VM.


was (Author: maneeshap):
Author - Likitha Shetty 
Issue
Consecutive VM cold migration fails.
Root Cause Analysis
In the case of VMware, if a VM is being cold migrated between clusters 
belonging to two different VMware DCs, CCP unregisters the VM from the source 
host and cleans up the associated VM files. The check for whether a VM is being 
cold migrated across DCs is made using the source host ID. For consecutive 
cold migrations the source host ID of the VM is NULL and the VM is no longer 
registered anywhere, so CCP should skip the check.
Proposed Solution
Attempt to unregister a VM in another DC only if there is a host associated 
with the VM.
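
A minimal sketch of the proposed guard, in self-contained Java with 
hypothetical types and names (the real change would live in the VMware 
cold-migration path shown in the stack trace below):

{noformat}
// Illustrative sketch only -- types and names are hypothetical.
class ColdMigrationGuard {
    static class Vm   { Long lastHostId; String name; }
    static class Host { long id; String vmwareDc; }
    interface HostLookup { Host findById(long id); }

    /** Unregister the VM from its source host only when a source host exists. */
    void cleanupSourceIfCrossDc(Vm vm, String targetDc, HostLookup hosts) {
        if (vm.lastHostId == null) {
            // Consecutive cold migration: the VM was already unregistered and
            // has no source host, so skip the cross-DC check instead of
            // dereferencing null (the NPE in orchestrateStorageMigration).
            return;
        }
        Host src = hosts.findById(vm.lastHostId);
        if (src != null && !src.vmwareDc.equals(targetDc)) {
            // Cross-DC cold migration: unregister the VM from the source
            // host and clean up the associated VM files.
            System.out.println("Unregistering " + vm.name + " from host " + src.id);
        }
    }
}
{noformat}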

> Consecutive cold migration fails
> 
>
> Key: CLOUDSTACK-8829
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>
> The following scenario is broken:
> 1. Deploy VM and stop it
> 2. Migrate stopped VM to a different primary storage pool
> 3. Again migrate the same VM to another/same storage pool. Fails with NPE.
> java.lang.NullPointerException
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Updated] (CLOUDSTACK-8830) VM snapshot fails for 12 min after instance creation

2015-09-10 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8830:
-
Description: 
ISSUE

VM snapshot fails for 12 min after instance creation
Environment
==
Product Name: CloudPlatform
Version: 4.5.1
Hypervisor: VMware vSphere 6
VM DETAILS
==
i-84987-16119-VM
STORAGE CONFIGURATION
==
NA
TROUBLESHOOTING
==
I see that the following failure and immediate success result for the 
CreateVMSnapshot call
{noformat}
2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
[{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
 i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
 }
2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
[{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
 i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
 }
2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
Executing request
2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
(DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, job-64835/job-64836, 
cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to create snapshot for 
vm:i-84987-16119-VM due to null
2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
Response Received: 
2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Processing:  { Ans: , MgmtId: 
345051581208, via: 80, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.CreateVMSnapshotAnswer":{"result":false,"wait":0}}] }
2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Received:  { Ans: , MgmtId: 
345051581208, via: 80, Ver: v1, Flags: 10, { CreateVMSnapshotAnswer } }
2015-07-24 08:20:55,525 ERROR [o.a.c.s.v.DefaultVMSnapshotStrategy] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Creating VM snapshot: i-84987-16119-VM_VS_20150724152053 failed
2015-07-24 08:20:55,531 DEBUG [c.c.v.s.VMSnapsh

[jira] [Created] (CLOUDSTACK-8830) VM snapshot fails for 12 min after instance creation

2015-09-10 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8830:


 Summary: VM snapshot fails for 12 min after instance creation
 Key: CLOUDSTACK-8830
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8830
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Maneesha


ISSUE

VM snapshot fails for 12 min after instance creation
Environment
==
Product Name: CloudPlatform
Version: 4.5.1
Hypervisor: VMware vSphere 6
VM DETAILS
==
i-84987-16119-VM
STORAGE CONFIGURATION
==
NA
TROUBLESHOOTING
==
I see that the following failure and immediate success result for the 
CreateVMSnapshot call
2015-07-24 08:20:55,363 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Sending  { Cmd , MgmtId: 
345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
[{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
 i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
 }
2015-07-24 08:20:55,373 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Executing:  { Cmd , MgmtId: 
345051581208, via: 80(ussfoldcsesx112.adslab.local), Ver: v1, Flags: 100011, 
[{"com.cloud.agent.api.CreateVMSnapshotCommand":{"volumeTOs":[{"uuid":"a89b4ad5-f23f-4df6-84a8-89c4f40b2edb","volumeType":"ROOT","volumeState":"Ready","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"346b381a-8543-3f7b-9eff-fa909ad243c7","id":205,"poolType":"NetworkFilesystem","host":"10.144.35.110","path":"/tintri/ECS-SR-CLD200","port":2049,"url":"NetworkFilesystem://10.144.35.110/tintri/ECS-SR-CLD200/?ROLE=Primary&STOREUUID=346b381a-8543-3f7b-9eff-fa909ad243c7"}},"name":"ROOT-16119","size":1073741824,"path":"ROOT-16119","volumeId":19311,"vmName":"i-84987-16119-VM","vmState":"Running","accountId":84987,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[346b381a85433f7b9efffa909ad243c7]
 i-84987-16119-VM/ROOT-16119.vmdk\",\"[346b381a85433f7b9efffa909ad243c7] 
49f59e1a4ce23fec8890c8b9e5891d56/49f59e1a4ce23fec8890c8b9e5891d56.vmdk\"]}","format":"OVA","provisioningType":"THIN","id":19311,"deviceId":0,"cacheMode":"NONE","hypervisorType":"VMware"}],"target":{"id":962,"snapshotName":"i-84987-16119-VM_VS_20150724152053","type":"Disk","current":false,"description":"unit-test-instance-snapshot","quiescevm":false},"vmName":"i-84987-16119-VM","guestOSType":"None","wait":1800}}]
 }
2015-07-24 08:20:55,374 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-66:ctx-5fbdccd8) (logid:710814a5) Seq 80-6161487240196259878: 
Executing request
2015-07-24 08:20:55,523 ERROR [c.c.h.v.m.VmwareStorageManagerImpl] 
(DirectAgent-66:ctx-5fbdccd8 ussfoldcsesx112.adslab.local, job-64835/job-64836, 
cmd: CreateVMSnapshotCommand) (logid:8b87ab8a) failed to create snapshot for 
vm:i-84987-16119-VM due to null
2015-07-24 08:20:55,524 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgent-66:ctx-5fbdccd8) (logid:8b87ab8a) Seq 80-6161487240196259878: 
Response Received: 
2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] (DirectAgent-66:ctx-5fbdccd8) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Processing:  { Ans: , MgmtId: 
345051581208, via: 80, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.CreateVMSnapshotAnswer":{"result":false,"wait":0}}] }
2015-07-24 08:20:55,525 DEBUG [c.c.a.t.Request] 
(Work-Job-Executor-61:ctx-03fad7f2 job-64835/job-64836 ctx-746f3965) 
(logid:8b87ab8a) Seq 80-6161487240196259878: Received:  { Ans: , MgmtId: 
345051581208, via: 80, Ver: v1, Flags: 10, { CreateVMSnapshotAnswer } }
2015-07-24 08:20:55,525 ERROR [o.a.c.s.v.DefaultVMSnapshotStrat

[jira] [Commented] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-10 Thread Maneesha (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738492#comment-14738492
 ] 

Maneesha commented on CLOUDSTACK-8829:
--

Author - Likitha Shetty 
Issue
Consecutive VM cold migration fails.
Root Cause Analysis
In the case of VMware, if a VM is being cold migrated between clusters 
belonging to two different VMware DCs, CCP unregisters the VM from the source 
host and cleans up the associated VM files. The check for whether a VM is being 
cold migrated across DCs is made using the source host ID. For consecutive 
cold migrations the source host ID of the VM is NULL and the VM is no longer 
registered anywhere, so CCP should skip the check.
Proposed Solution
Attempt to unregister a VM in another DC only if there is a host associated 
with the VM.

> Consecutive cold migration fails
> 
>
> Key: CLOUDSTACK-8829
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>
> The following scenario is broken:
> 1. Deploy VM and stop it
> 2. Migrate stopped VM to a different primary storage pool
> 3. Again migrate the same VM to another/same storage pool. Fails with NPE.
> java.lang.NullPointerException
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Updated] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-10 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8829:
-
Description: 
The following scenario is broken:
1.  Deploy VM and stop it
2.  Migrate stopped VM to a different primary storage pool
3.  Again migrate the same VM to another/same storage pool. Fails with NPE.


java.lang.NullPointerException
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)

  was:
The following scenario is broken:
1.  Deploy VM and stop it
2.  Migrate stopped VM to a different primary storage pool
3.  Again migrate the same VM to another/same storage pool. Fails with NPE.
java.lang.NullPointerException
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)


> Consecutive cold migration fails
> 
>
> Key: CLOUDSTACK-8829
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Reporter: Maneesha
>
> The following scenario is broken:
> 1. Deploy VM and stop it
> 2. Migrate stopped VM to a different primary storage pool
> 3. Again migrate the same VM to another/same storage pool. Fails with NPE.
> java.lang.NullPointerException
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Created] (CLOUDSTACK-8829) Consecutive cold migration fails

2015-09-10 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8829:


 Summary: Consecutive cold migration fails
 Key: CLOUDSTACK-8829
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8829
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Maneesha


The following scenario is broken:
1.  Deploy VM and stop it
2.  Migrate stopped VM to a different primary storage pool
3.  Again migrate the same VM to another/same storage pool. Fails with NPE.
java.lang.NullPointerException
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:1745)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStorageMigration(VirtualMachineManagerImpl.java:4716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4723)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)





[jira] [Updated] (CLOUDSTACK-8800) Improve the listVirtualMachines API call to include memory utilization information for a VM

2015-09-03 Thread Maneesha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maneesha updated CLOUDSTACK-8800:
-
Assignee: Maneesha

> Improve the listVirtualMachines API call to include memory utilization 
> information for a VM
> ---
>
> Key: CLOUDSTACK-8800
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8800
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public (Anyone can view this level - this is the
> default.)
>Affects Versions: 4.5.2
>Reporter: Maneesha
>Assignee: Maneesha
> Fix For: 4.6.0
>
>
> Currently, memory utilization information is not available via the API call 
> (listVirtualMachines).
> https://cloudstack.apache.org/api/apidocs-4.5/root_admin/listVirtualMachines.html
>  
> The listVirtualMachines call gets its values from the "user_vm_view" table in 
> the database. Currently it shows the CPU utilization of the VMs.
> The only way to find out the memory utilization of VMs running on XenServer 
> is to run the "xentop" command on the pool master of the cluster.





[jira] [Created] (CLOUDSTACK-8800) Improve the listVirtualMachines API call to include memory utilization information for a VM

2015-09-03 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8800:


 Summary: Improve the listVirtualMachines API call to include 
memory utilization information for a VM
 Key: CLOUDSTACK-8800
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8800
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.2
Reporter: Maneesha
 Fix For: 4.6.0


Currently, memory utilization information is not available via the API call 
(listVirtualMachines).
https://cloudstack.apache.org/api/apidocs-4.5/root_admin/listVirtualMachines.html
 
The listVirtualMachines call gets its values from the "user_vm_view" table in 
the database. Currently it shows the CPU utilization of the VMs.
The only way to find out the memory utilization of VMs running on XenServer 
is to run the "xentop" command on the pool master of the cluster.





[jira] [Created] (CLOUDSTACK-8714) Restore VM (Re-install VM) with enable.storage.migration set to false fails, later fails to start up VM too

2015-08-06 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8714:


 Summary: Restore VM (Re-install VM) with enable.storage.migration 
set to false fails, later fails to start up VM too
 Key: CLOUDSTACK-8714
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8714
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.1
Reporter: Maneesha
 Fix For: 4.6.0


Environment: 
===
Advanced zone, 
Hypervisor: XS, 
Shared storage - multiple pools
API: restoreVirtualMachine
When we fire a Re-install VM, the allocator logic kicks in for the data disks 
as well, possibly causing them to be migrated to different storage. If the 
global config enable.storage.migration is set to false, the migration fails and 
the Reset VM operation fails too, even though there is enough space left in the 
existing storage pools to run the data disks.
Later, when I try to start up the VM (which has now gone into the Stopped 
state), that also fails.
The question is: why should we move the data disks around when we do a 
Re-install VM? Only the ROOT disk should be re-installed and re-deployed. The 
data disks should remain as they are. But the allocator logic kicks in for all 
the disks attached to the VM, and in effect they may get migrated to different 
pools in the cluster. We also add new entries in the DB for the data disks that 
got migrated.
If many data disks are attached to the VM, we spend a lot of time unnecessarily 
moving disks around to different pools, while we actually need only the ROOT 
disk to be re-installed.
Finally, the VM also becomes unrecoverable since start VM fails.
Steps:
=
Have multiple pools in the cluster.
Set enable.storage.migration = false
Deploy a VM with a data disk
Re-install the VM
Watch for the result. The data disks might get migrated to different storage if 
the allocator decides to deploy them in a different pool. It may take a couple 
of attempts to reproduce the issue; you may not see it the first time.
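
The behavior argued for above can be sketched in a few lines of self-contained 
Java; all names are hypothetical (this is not the actual restoreVirtualMachine 
code path): on re-install, only the ROOT volume goes back through placement, 
and data disks keep their current pool.

{noformat}
// Illustrative sketch only -- types and names are hypothetical.
import java.util.List;

class ReinstallPlanner {
    enum VolumeType { ROOT, DATADISK }
    static class Volume { VolumeType type; long poolId; }

    /** Re-place only the ROOT disk; leave data disks where they are. */
    void planVolumes(List<Volume> volumes, long newRootPoolId) {
        for (Volume v : volumes) {
            if (v.type == VolumeType.ROOT) {
                v.poolId = newRootPoolId;  // re-created from the template
            }
            // DATADISK volumes are intentionally untouched, so no storage
            // migration is attempted and enable.storage.migration=false
            // cannot fail the re-install.
        }
    }
}
{noformat}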





[jira] [Created] (CLOUDSTACK-8711) public_ip type resource count for an account is not decremented upon IP range deletion

2015-08-06 Thread Maneesha (JIRA)
Maneesha created CLOUDSTACK-8711:


 Summary: public_ip type resource count for an account is not 
decremented upon IP range deletion
 Key: CLOUDSTACK-8711
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8711
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.5.1
Reporter: Maneesha
 Fix For: 4.6.0


When deleting an IP range that is associated with an account, the resource 
count for public_ip is not decremented accordingly, which makes it impossible 
to add any new ranges to that account once the max limit is reached.
Repro Steps
-
1. Add an IP range and associate it with a particular account. This increments 
the account's public_ip resource count by the range's IP count.
2. Now delete this range and check the account's public_ip resource count; it 
will not be decreased.
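
The missing bookkeeping amounts to a symmetric decrement. A minimal sketch in 
Java with hypothetical names (the real logic lives in CloudStack's 
resource-limit accounting):

{noformat}
// Illustrative sketch only -- names are hypothetical.
import java.util.HashMap;
import java.util.Map;

class PublicIpAccounting {
    private final Map<Long, Long> publicIpCountByAccount = new HashMap<>();

    /** Dedicating a range to an account raises its public_ip count. */
    void onRangeDedicated(long accountId, long ipCount) {
        publicIpCountByAccount.merge(accountId, ipCount, Long::sum);
    }

    /** The step this bug reports as missing: decrement on range deletion,
     *  so the account can accept new ranges under its max limit again. */
    void onRangeDeleted(long accountId, long ipCount) {
        publicIpCountByAccount.merge(accountId, -ipCount, Long::sum);
    }
}
{noformat}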


