[jira] [Updated] (CLOUDSTACK-6024) template copy to primary storage uses a random source secstorage from any zone

2014-03-26 Thread Daan Hoogland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daan Hoogland updated CLOUDSTACK-6024:
--

Fix Version/s: 4.4.0

> template copy to primary storage uses a random source secstorage from any zone
> --
>
> Key: CLOUDSTACK-6024
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6024
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.1, 4.1.2, 4.4.0
> Environment: Multiple zones where the secstorage of a zone is not 
> accessible to hosts from the other zone.
>Reporter: Joris van Lieshout
>Assignee: Daan Hoogland
>Priority: Blocker
> Fix For: 4.3.0, 4.4.0
>
>
> 2014-02-04 15:19:07,674 DEBUG [cloud.storage.VolumeManagerImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) 
> Checking if we need to prepare 1 volumes for VM[User|xx-app01]
> 2014-02-04 15:19:07,693 DEBUG [storage.image.TemplateDataFactoryImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) 
> template 467 is already in store:117, type:Image
> // store 117 is not accessible from the zone where this hypervisor lives
> 2014-02-04 15:19:07,705 DEBUG [storage.datastore.PrimaryDataStoreImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) Not 
> found (templateId:467poolId:208) in template_spool_ref, persisting it
> 2014-02-04 15:19:07,718 DEBUG [storage.image.TemplateDataFactoryImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) 
> template 467 is already in store:208, type:Primary
> 2014-02-04 15:19:07,722 DEBUG [storage.volume.VolumeServiceImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) Found 
> template 467-2-6c05b599-95ed-34c3-b8f0-fd9c30bac938 in storage pool 208 with 
> VMTemplateStoragePool id: 36433
> 2014-02-04 15:19:07,732 DEBUG [storage.volume.VolumeServiceImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) 
> Acquire lock on VMTemplateStoragePool 36433 with timeout 3600 seconds
> 2014-02-04 15:19:07,737 INFO  [storage.volume.VolumeServiceImpl] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) lock 
> is acquired for VMTemplateStoragePool 36433
> 2014-02-04 15:19:07,748 DEBUG [storage.motion.AncientDataMotionStrategy] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) 
> copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type TEMPLATE
> 2014-02-04 15:19:07,775 DEBUG [agent.manager.ClusteredAgentAttache] 
> (Job-Executor-92:job-221857 = [ 6f2d5dbb-575e-49b9-89dd-d7567869849e ]) Seq 
> 93-1862347354: Forwarding Seq 93-1862347354:  { Cmd , MgmtId: 345052370018, 
> via: 93, Ver: v1, Flags: 100111, 
> [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"template/tmpl/2/467/c263eb76-3d72-3732-8cc6-42b0dad55c4d.vhd","origUrl":"http://x.x.com/image/centos64x64-daily-v1b104.vhd","uuid":"ca5e3f26-e9b6-41c8-a85b-df900be5673c","id":467,"format":"VHD","accountId":2,"checksum":"604a8327bd83850ed621ace2ea84402a","hvm":true,"displayText":"centos
>  template created by hans.pl from machine name 
> centos-daily-b104","imageDataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://.storage..xx.xxx/volumes/pool0/--1-1","_role":"Image"}},"name":"467-2-6c05b599-95ed-34c3-b8f0-fd9c30bac938","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"origUrl":"http://xx.xx.com/image/centos64x64-daily-v1b104.vhd","uuid":"ca5e3f26-e9b6-41c8-a85b-df900be5673c","id":467,"format":"VHD","accountId":2,"checksum":"604a8327bd83850ed621ace2ea84402a","hvm":true,"displayText":"centos
>  template created by hans.pl from machine name 
> centos-daily-b104","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"b290385b-466d-3243-a939-3d242164e034","id":208,"poolType":"NetworkFilesystem","host":"..x.net","path":"/volumes/pool0/xx-XEN-1","port":2049}},"name":"467-2-6c05b599-95ed-34c3-b8f0-fd9c30bac938","hypervisorType":"XenServer"}},"executeInSequence":true,"wait":10800}}]
>  } to 345052370017
> ===FILE: server/src/com/cloud/storage/VolumeManagerImpl.java
> public void prepare(VirtualMachineProfile vm, DeployDestination dest)
>         throws StorageUnavailableException,
>         InsufficientStorageCapacityException, ConcurrentOperationException {
>     if (dest == null) {
>         if (s_logger.isDebugEnabled()) {
>             s_logger.debug("DeployDestination cannot be null, cannot prepare Volumes for the vm: "
>
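The log above shows the source image store (store 117) being chosen without regard to the destination zone. A rough illustration of the zone-aware selection the fix needs — the interface and helper names below are hypothetical stand-ins, not CloudStack's actual classes:

```java
import java.util.List;

// Sketch only: prefer an image store in the destination zone when
// choosing the source for a template copy to primary storage.
class ZoneAwareStoreSelector {
    static DataStore pickSourceStore(List<DataStore> storesWithTemplate,
                                     long destZoneId) {
        // First pass: a store that lives in the same zone as the
        // destination primary storage pool.
        for (DataStore store : storesWithTemplate) {
            if (store.getZoneId() == destZoneId) {
                return store;
            }
        }
        // No same-zone copy exists; fall back to any store rather than
        // failing outright (a cross-zone copy may still work if the
        // network allows it).
        return storesWithTemplate.isEmpty() ? null : storesWithTemplate.get(0);
    }
}

// Simplified stand-in for CloudStack's data-store abstraction.
interface DataStore {
    long getZoneId();
}
```

The key point is the first pass: a host should never be handed an NFS URL for a secondary storage it cannot reach, which is exactly what happens in the log when store 117 from another zone is selected.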

[jira] [Resolved] (CLOUDSTACK-6024) template copy to primary storage uses a random source secstorage from any zone

2014-03-26 Thread Daan Hoogland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daan Hoogland resolved CLOUDSTACK-6024.
---

Resolution: Fixed


[jira] [Closed] (CLOUDSTACK-4506) In a mixed hypervisor setup, destroying a VM whose host has been removed, throws a NPE and the ROOT volume of that VM also is not deleted from the primary.

2014-03-26 Thread Abhinav Roy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinav Roy closed CLOUDSTACK-4506.
---


Closing the issue as it is no longer seen.

> In a mixed hypervisor setup, destroying a VM whose host has been removed, 
> throws a NPE and the ROOT volume of that VM also is not deleted from the 
> primary.
> ---
>
> Key: CLOUDSTACK-4506
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4506
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
> Environment: Advanced zone setup having clusters of different 
> hypervisor types. Ex KVM and VMWARE
>Reporter: Abhinav Roy
>Assignee: edison su
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: CS-4506.zip
>
>
> Steps :
> =
> 1. Deploy a CS 4.2 setup with KVM and VMWARE clusters having one host each.
> 2. Create some VMs on KVM (ex- kvm1 and kvm2)
> 3. Put the KVM host in maintenance mode and then remove the Host. Now kvm1 
> and kvm2 are in stopped state.
> 4. Destroy kvm1 
> Observations :
> 
> 1. when kvm1 is destroyed it fails with the following exception 
> 2013-08-26 13:10:09,205 DEBUG [cloud.api.ApiServlet] (catalina-exec-19:null) 
> ===START===  10.144.6.17 -- GET  
> command=destroyVirtualMachine&id=d613f6e5-c53a-4f6b-be31-32f101eb6c99&response=json&sessionkey=bU49tWdfVUJUrq65PbMmzGbe0PE%3D&_=1377502661180
> 2013-08-26 13:10:09,242 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (catalina-exec-19:null) submit async job-78 = [ 
> 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ], details: AsyncJobVO {id:78, userId: 
> 2, accountId: 2, sessionKey: null, instanceType: VirtualMachine, instanceId: 
> 3, cmd: org.apache.cloudstack.api.command.user.vm.DestroyVMCmd, 
> cmdOriginator: null, cmdInfo: 
> {"response":"json","id":"d613f6e5-c53a-4f6b-be31-32f101eb6c99","sessionkey":"bU49tWdfVUJUrq65PbMmzGbe0PE\u003d","cmdEventType":"VM.DESTROY","ctxUserId":"2","httpmethod":"GET","_":"1377502661180","ctxAccountId":"2","ctxStartEventId":"245"},
>  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
> processStatus: 0, resultCode: 0, result: null, initMsid: 226870599129537, 
> completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> 2013-08-26 13:10:09,245 DEBUG [cloud.api.ApiServlet] (catalina-exec-19:null) 
> ===END===  10.144.6.17 -- GET  
> command=destroyVirtualMachine&id=d613f6e5-c53a-4f6b-be31-32f101eb6c99&response=json&sessionkey=bU49tWdfVUJUrq65PbMmzGbe0PE%3D&_=1377502661180
> 2013-08-26 13:10:09,249 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-50:job-78 = [ 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]) Executing 
> org.apache.cloudstack.api.command.user.vm.DestroyVMCmd for job-78 = [ 
> 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]
> 2013-08-26 13:10:09,282 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
> (Job-Executor-50:job-78 = [ 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]) 
> Destroying vm VM[User|v1]
> 2013-08-26 13:10:09,282 DEBUG [cloud.vm.VirtualMachineManagerImpl] 
> (Job-Executor-50:job-78 = [ 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]) VM is 
> already stopped: VM[User|v1]
> 2013-08-26 13:10:09,308 DEBUG [cloud.capacity.CapacityManagerImpl] 
> (Job-Executor-50:job-78 = [ 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]) VM state 
> transitted from :Stopped to Destroyed with event: DestroyRequestedvm's 
> original host id: 1 new host id: null host id before state transition: null
> 2013-08-26 13:10:09,324 ERROR [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-50:job-78 = [ 8018e79c-a47e-4a54-b3a4-68963f3c11a9 ]) 
> Unexpected exception while executing 
> org.apache.cloudstack.api.command.user.vm.DestroyVMCmd
> java.lang.NullPointerException
> at 
> com.cloud.capacity.CapacityManagerImpl.releaseVmCapacity(CapacityManagerImpl.java:187)
> at 
> com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> at 
> com.cloud.capacity.CapacityManagerImpl.postStateTransitionEvent(CapacityManagerImpl.java:718)
> at 
> com.cloud.capacity.CapacityManagerImpl.postStateTransitionEvent(CapacityManagerImpl.java:101)
> at com.cloud.utils.fsm.StateMachine2.transitTo(StateMachine2.java:117)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.stateTransitTo(VirtualMachineManagerImpl.java:1324)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.destroy(VirtualMachineManagerImpl.java:1355)
> at 
> org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.destroyVirtualMac
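The top frame, CapacityManagerImpl.releaseVmCapacity, dereferences host data for a host id that no longer resolves to a row once the host has been removed. A minimal sketch of the kind of null guard that avoids such an NPE — all names here are illustrative, not the real CloudStack code:

```java
// Sketch only: when a VM's host has been removed, the host lookup
// returns null and capacity release should be skipped instead of
// throwing a NullPointerException.
class CapacityRelease {
    static boolean releaseVmCapacity(HostDao hostDao, Long hostId) {
        if (hostId == null) {
            return false; // stopped VM with no host: nothing reserved
        }
        Host host = hostDao.findById(hostId);
        if (host == null) {
            return false; // host removed: nothing to release
        }
        // ... release CPU/RAM reserved on the host here ...
        return true;
    }
}

// Simplified stand-ins for the DAO and entity.
interface HostDao { Host findById(long id); }
interface Host {}
```

With such a guard the destroy path can proceed to delete the ROOT volume even when the original host row is gone.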

[jira] [Commented] (CLOUDSTACK-6024) template copy to primary storage uses a random source secstorage from any zone

2014-03-26 Thread Daan Hoogland (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947687#comment-13947687
 ] 

Daan Hoogland commented on CLOUDSTACK-6024:
---

It has been fixed in 4.3 and cherry-picked to 4.4/master.


[jira] [Comment Edited] (CLOUDSTACK-6228) Some action confirm dialogs show incorrect icon

2014-03-26 Thread Mihaela Stoica (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933413#comment-13933413
 ] 

Mihaela Stoica edited comment on CLOUDSTACK-6228 at 3/26/14 10:14 AM:
--

Most confirmation dialogs (rendered via dialog.confirm) have a 'confirm' (green 
tick (/)) icon. However, for delete operations we should have a 'warning' 
(exclamation mark) icon. 


was (Author: mihaelas):
I think that confirmation dialogs (rendered via dialog.confirm) have a 
'confirm' (green tick (/)) icon, not a 'warning' (exclamation mark) icon. 
Please confirm that it is the green tick icon that we want to use for 
dialog.createForm rendered in confirmation-style mode.

> Some action confirm dialogs show incorrect icon
> ---
>
> Key: CLOUDSTACK-6228
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6228
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.3.0
>Reporter: Brian Federle
>Assignee: Mihaela Stoica
> Fix For: 4.4.0
>
> Attachments: incorrect-icon.png
>
>
> On some confirmation dialogs shown before performing an action, the incorrect 
> icon is shown in the header. The icon should be 'warning' (exclamation mark 
> icon) instead of 'add'.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CLOUDSTACK-6286) [Automation] VM deployment is failing in simulator

2014-03-26 Thread Srikanteswararao Talluri (JIRA)
Srikanteswararao Talluri created CLOUDSTACK-6286:


 Summary: [Automation] VM deployment is failing in simulator
 Key: CLOUDSTACK-6286
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6286
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Simulator
Affects Versions: 4.5.0
 Environment: simulator
Reporter: Srikanteswararao Talluri
Priority: Blocker
 Fix For: 4.5.0


VM deployment is failing on the simulator for various reasons:


WARN  [c.c.a.d.ParamGenericValidationWorker] 
(1776782853@qtp-639315233-4:ctx-eba7435f ctx-70cf4009) Received unknown 
parameters for command listSystemVms. Unknown parameters : listall
ERROR [c.c.a.m.SimulatorManagerImpl] (DirectAgent-5:ctx-64065a9d) Simulator 
does not implement command of type 
com.cloud.agent.api.routing.AggregationControlCommand
ERROR [c.c.a.m.SimulatorManagerImpl] (DirectAgent-17:ctx-f03c00cf) Simulator 
does not implement command of type 
com.cloud.agent.api.routing.AggregationControlCommand
ERROR [c.c.a.m.SimulatorManagerImpl] (DirectAgent-9:ctx-5cb2aaed) Simulator 
does not implement command of type 
com.cloud.agent.api.routing.AggregationControlCommand
WARN  [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-2:Job-97/Job-98 
ctx-b42ce022) Failed to re-program the network as a part of network 
Ntwk[210|Guest|8] implement due to aggregated commands execution failure!
ERROR [c.c.a.m.SimulatorManagerImpl] (DirectAgent-7:ctx-b44ea1f7) Simulator 
does not implement command of type 
com.cloud.agent.api.routing.AggregationControlCommand
INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-2:Job-97/Job-98 
ctx-b42ce022) Unable to contact resource.
com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is 
unreachable: Unable to apply network rules as a part of network 
Ntwk[210|Guest|8] implement
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetworkElementsAndResources(NetworkOrchestrator.java:1110)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.implementNetwork(NetworkOrchestrator.java:992)
at 
org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepare(NetworkOrchestrator.java:1272)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:982)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5294)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
=
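The repeated "Simulator does not implement command" errors suggest the simulator's command dispatch has no fallback for unregistered command types, which then fails the whole network-implement job. A hedged sketch of a dispatcher that reports an unsupported type instead — hypothetical names, not the actual SimulatorManagerImpl API:

```java
import java.util.Map;
import java.util.function.Function;

// Sketch only: a simulator-style agent that answers unknown command
// types with an explicit "unsupported" result instead of erroring out.
class SimCommandDispatcher {
    final Map<Class<?>, Function<Object, String>> handlers;

    SimCommandDispatcher(Map<Class<?>, Function<Object, String>> handlers) {
        this.handlers = handlers;
    }

    String dispatch(Object cmd) {
        Function<Object, String> h = handlers.get(cmd.getClass());
        if (h == null) {
            // Mirrors the log above: report the unsupported type rather
            // than letting network orchestration fail opaquely.
            return "UNSUPPORTED:" + cmd.getClass().getSimpleName();
        }
        return h.apply(cmd);
    }
}
```

Whether the simulator should answer such commands with a success stub or an explicit unsupported result is a design choice; the sketch shows only the dispatch shape.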









[jira] [Commented] (CLOUDSTACK-5432) [Automation] Libvirtd getting crashed and agent going to alert start

2014-03-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947923#comment-13947923
 ] 

ASF subversion and git services commented on CLOUDSTACK-5432:
-

Commit 7db6ba0c5fff8a771017ec8bce124fba698efb4e in cloudstack's branch 
refs/heads/4.4 from [~zhou2324]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=7db6ba0 ]

CLOUDSTACK-5432: potential bug when the management server is stopped while
a template is downloading: template_store_ref is left in a not-ready
state, and when a VM is created from that template the code checks
neither the zone id nor the template_store_ref state.

Conflicts:

engine/orchestration/src/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
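Per the commit message, the fix adds checks that were missing when creating a VM from a template: the store entry must be in the Ready state and must belong to the VM's zone. A simplified sketch of such a guard — TemplateStoreRef and its fields are illustrative, not the real VolumeOrchestrator types:

```java
import java.util.List;
import java.util.Optional;

// Simplified stand-in for a template_store_ref row.
class TemplateStoreRef {
    enum State { ALLOCATED, DOWNLOADING, READY }
    final long zoneId;
    final State state;
    TemplateStoreRef(long zoneId, State state) {
        this.zoneId = zoneId;
        this.state = state;
    }
}

// Sketch only: accept a ref as a copy source only if it is READY and
// its image store belongs to the VM's zone, so leftover entries from
// an interrupted download are never used.
class TemplateSourceCheck {
    static Optional<TemplateStoreRef> findUsableRef(
            List<TemplateStoreRef> refs, long vmZoneId) {
        return refs.stream()
                .filter(r -> r.state == TemplateStoreRef.State.READY)
                .filter(r -> r.zoneId == vmZoneId)
                .findFirst();
    }
}
```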


> [Automation] Libvirtd getting crashed and agent going to alert start 
> ---
>
> Key: CLOUDSTACK-5432
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5432
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.3.0
> Environment: KVM (RHEL 6.3)
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Marcus Sorensen
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5432_Jan_06.rar, KVM_Automation_Dec_11.rar, 
> agent1.rar, agent2.rar, management-server.rar
>
>
> This issue was observed in the 4.3 automation environment: libvirtd crashed 
> and the CloudStack agent went into the Alert state.
> Please see the agent log; the connection between the agent and the management 
> server was lost with the error "Connection closed with -1 on reading size." @ 2013-12-09 19:47:06,969
> 2013-12-09 19:43:41,495 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-2:null) Processing command: 
> com.cloud.agent.api.GetStorageStatsCommand
> 2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) 
> Location 1: Socket Socket[addr=/10.223.49.195,port=8250,localport=40801] 
> closed on read.  Probably -1 returned: Connection closed with -1 on reading 
> size.
> 2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection] (Agent-Selector:null) 
> Closing socket Socket[addr=/10.223.49.195,port=8250,localport=40801]
> 2013-12-09 19:47:06,969 DEBUG [cloud.agent.Agent] (Agent-Handler-3:null) 
> Clearing watch list: 2
> 2013-12-09 19:47:11,969 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Lost 
> connection to the server. Dealing with the remaining commands...
> 2013-12-09 19:47:11,970 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) 
> Cannot connect because we still have 5 commands in progress.
> 2013-12-09 19:47:16,970 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Lost 
> connection to the server. Dealing with the remaining commands...
> 2013-12-09 19:47:16,990 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) 
> Cannot connect because we still have 5 commands in progress.
> 2013-12-09 19:47:21,990 INFO  [cloud.agent.Agent] (Agent-Handler-3:null) Lost 
> connection to the server. Dealing with the remaining commands.. 
> Please see the libvirtd log for the same time (see the attached complete 
> log; there is a 5-hour difference between the agent log and the libvirt log): 
> 2013-12-10 02:45:45.563+: 5938: error : qemuMonitorIO:574 : internal 
> error End of file from monitor
> 2013-12-10 02:45:47.663+: 5942: error : virCommandWait:2308 : internal 
> error Child process (/bin/umount /mnt/41b632b5-40b3-3024-a38b-ea259c72579f) 
> status unexpected: exit status 16
> 2013-12-10 02:45:53.925+: 5943: error : virCommandWait:2308 : internal 
> error Child process (/sbin/tc qdisc del dev vnet14 root) status unexpected: 
> exit status 2
> 2013-12-10 02:45:53.929+: 5943: error : virCommandWait:2308 : internal 
> error Child process (/sbin/tc qdisc del dev vnet14 ingress) status 
> unexpected: exit status 2
> 2013-12-10 02:45:54.011+: 5943: warning : qemuDomainObjTaint:1297 : 
> Domain id=71 name='i-45-97-QA' uuid=7717ba08-be84-4b63-a674-1534f9dc7bef is 
> tainted: high-privileges
> 2013-12-10 02:46:33.070+: 5940: error : virCommandWait:2308 : internal 
> error Child process (/sbin/tc qdisc del dev vnet12 root) status unexpected: 
> exit status 2
> 2013-12-10 02:46:33.081+: 5940: error : virCommandWait:2308 : internal 
> error Child process (/sbin/tc qdisc del dev vnet12 ingress) status 
> unexpected: exit status 2
> 2013-12-10 02:46:33.197+: 5940: warning : qemuDomainObjTaint:1297 : 
> Domain id=72 name='i-47-111-QA' uuid=7fcce58a-96dc-4207-9998-b8fb72b446ac is 
> tainted: high-privileges
> 2013-12-10 02:46:36.394+: 5938: error : qemuMonitorIO:574 : internal 
> error End of file from monitor
> 2013-12-10 02:46:37.685+: 5940: error : virCommandWait:2308 : internal 
> error Child process (/bin/um

[jira] [Commented] (CLOUDSTACK-2908) An automatic start setup is not carried out after cloudstack-usage installation.

2014-03-26 Thread Alex Hitchins (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948003#comment-13948003
 ] 

Alex Hitchins commented on CLOUDSTACK-2908:
---

Tested by Geoff Higginbottom - restarts on reboot - as expected.

This issue will be closed.

> An automatic start setup is not carried out after cloudstack-usage 
> installation. 
> -
>
> Key: CLOUDSTACK-2908
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2908
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.1.0
> Environment: cloudstack-usage-4.1.0-0.el6.x86_64
>Reporter: satoru nakaya
>Assignee: Alex Hitchins
>Priority: Trivial
>
> An automatic start setup is not carried out after cloudstack-usage 
> installation. 
> # yum install cloudstack-usage -y --nogpgcheck
> # chkconfig --list | grep cloud
> cloudstack-management   0:off   1:off   2:off   3:on4:on5:on6:off
> #
> You have to set up the automatic start manually: 
> # chkconfig --add cloudstack-usage
> In version 4.0.x, it was set up automatically.
> # chkconfig --list | grep cloud
> cloud-management0:off   1:off   2:on3:on4:on5:on6:off
> cloud-usage 0:off   1:off   2:off   3:on4:on5:on6:off
> #





[jira] [Resolved] (CLOUDSTACK-2908) An automatic start setup is not carried out after cloudstack-usage installation.

2014-03-26 Thread Alex Hitchins (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Hitchins resolved CLOUDSTACK-2908.
---

   Resolution: Fixed
Fix Version/s: 4.3.0

Tested by Geoff Higginbottom - works as expected.

> An automatic start setup is not carried out after cloudstack-usage 
> installation. 
> -
>
> Key: CLOUDSTACK-2908
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2908
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.1.0
> Environment: cloudstack-usage-4.1.0-0.el6.x86_64
>Reporter: satoru nakaya
>Assignee: Alex Hitchins
>Priority: Trivial
> Fix For: 4.3.0
>
>
> An automatic start setup is not carried out after cloudstack-usage 
> installation. 
> # yum install cloudstack-usage -y --nogpgcheck
> # chkconfig --list | grep cloud
> cloudstack-management   0:off   1:off   2:off   3:on4:on5:on6:off
> #
> You have to carry out an automatic start setup manually. 
> # chkconfig --add cloudstack-usage
> In version 4.0.x, it was set up automatically.
> # chkconfig --list | grep cloud
> cloud-management        0:off   1:off   2:on    3:on    4:on    5:on    6:off
> cloud-usage             0:off   1:off   2:off   3:on    4:on    5:on    6:off
> #



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CLOUDSTACK-5674) [Automation]: Enhancements to accommodate multiple regression runs on a single CS server

2014-03-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948191#comment-13948191
 ] 

ASF subversion and git services commented on CLOUDSTACK-5674:
-

Commit e4053bc32b6a522f94e69d12406fca46f51e03cf in cloudstack's branch 
refs/heads/marvin from [~santhoshe]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e4053bc ]

CLOUDSTACK-5674: Added few fixes for CLOUDSTACK-5674


> [Automation]: Enhancements to accommodate multiple regression runs on a 
> single CS server
> 
>
> Key: CLOUDSTACK-5674
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5674
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation, marvin
>Reporter: Santhosh Kumar Edukulla
>Assignee: Santhosh Kumar Edukulla
>
> 1. Currently we cannot run multiple regression runs against a given CS 
> server, either sequentially or in parallel, because of hard-coded values in 
> various places. 
> 2. So the idea is to run the complete regression/automation test suites in 
> one stretch on a given setup after deployDC. We deploy multiple zones with a 
> different hypervisor type in each zone.
> 3. Tests should cut down run time by running across multiple zones, each 
> with a different hypervisor type, post deployment.
> 4. The fixes and new changes should make it possible to run the various test 
> suites (VMware, Xen, KVM, etc.) in parallel or sequentially on a single CS 
> machine, without disturbing, redeploying, or reinstalling the CS instance. 
> Phase 1: framework/Marvin changes.
> Phase 2: test module changes built on them.
> Adding this ticket to track these changes.
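The sequential-versus-parallel goal described above can be sketched in shell; the suite names and the run_suite stub below are assumptions for illustration, not the actual Marvin driver or nosetests invocation:

```shell
# Hypothetical driver: run per-hypervisor suites against one deployed CS
# server, sequentially or in parallel. run_suite is a stand-in for the real
# Marvin/nosetests command line.
run_suite() { echo "running $1 suite"; }

# Sequential run:
for hv in vmware xen kvm; do run_suite "$hv"; done

# Parallel run against the same setup, without redeploying:
for hv in vmware xen kvm; do run_suite "$hv" & done
wait
```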



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[jira] [Closed] (CLOUDSTACK-669) Better VM Sync

2014-03-26 Thread Sudha Ponnaganti (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudha Ponnaganti closed CLOUDSTACK-669.
---


Already done in 4.3

> Better VM Sync
> --
>
> Key: CLOUDSTACK-669
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-669
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Reporter: Hari Kannan
>Assignee: Kelven Yang
> Fix For: 4.4.0
>
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VMWare+Enhancements+-+Support+for+DRS+and+VM+HA



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-5922) Incorrect handling RHEL guests on Vmware

2014-03-26 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen updated CLOUDSTACK-5922:
-

Summary: Incorrect handling RHEL guests on Vmware  (was: Incorrect handling 
RHEL guests )

> Incorrect handling RHEL guests on Vmware
> 
>
> Key: CLOUDSTACK-5922
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5922
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.1
>Reporter: Min Chen
>Assignee: Min Chen
> Fix For: 4.3.0
>
>
> The issue is easily reproducible by deploying a VM with an RHEL 6.0 
> template. Once the VM is deployed you can see that the OS type is set to 
> "Other" in vCenter. When selecting 'Ubuntu 12.04 64-bit' the VM gets created 
> with an incorrect OS type as well, just as with RHEL.
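The symptom is consistent with a guest-OS-name lookup that falls back to a generic type when a name is missing from the mapping. The case table below is purely illustrative (the VMware identifiers and the fallback value are assumptions, not CloudStack's actual guest-OS tables):

```shell
# Illustrative lookup: an OS name absent from the mapping falls back to
# "otherGuest", which vCenter then displays as "Other".
guest_os_to_vmware() {
    case "$1" in
        "Red Hat Enterprise Linux 6.0 (64-bit)") echo "rhel6_64Guest" ;;
        "Ubuntu 12.04 (64-bit)")                 echo "ubuntu64Guest" ;;
        *)                                       echo "otherGuest" ;;
    esac
}
guest_os_to_vmware "Red Hat Enterprise Linux 6.0 (64-bit)"   # rhel6_64Guest
guest_os_to_vmware "Some Unmapped OS"                        # otherGuest
```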



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CLOUDSTACK-5922) Incorrect handling RHEL guests on Vmware

2014-03-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948654#comment-13948654
 ] 

ASF subversion and git services commented on CLOUDSTACK-5922:
-

Commit ee72450dbf41b63d7442ead4f1db69f3ce10408a in cloudstack's branch 
refs/heads/4.4 from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ee72450 ]

CLOUDSTACK-5922:Incorrect handling RHEL guests for 5.5 to 5.9


> Incorrect handling RHEL guests on Vmware
> 
>
> Key: CLOUDSTACK-5922
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5922
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.1
>Reporter: Min Chen
>Assignee: Min Chen
> Fix For: 4.3.0
>
>
> The issue is easily reproducible by deploying a VM with an RHEL 6.0 
> template. Once the VM is deployed you can see that the OS type is set to 
> "Other" in vCenter. When selecting 'Ubuntu 12.04 64-bit' the VM gets created 
> with an incorrect OS type as well, just as with RHEL.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CLOUDSTACK-5922) Incorrect handling RHEL guests on Vmware

2014-03-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948669#comment-13948669
 ] 

ASF subversion and git services commented on CLOUDSTACK-5922:
-

Commit 05c6b455ae89480d84ba5dfa4dc79d3336ebd458 in cloudstack's branch 
refs/heads/master from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=05c6b45 ]

CLOUDSTACK-5922:Incorrect handling RHEL guests for 5.5 to 5.9


> Incorrect handling RHEL guests on Vmware
> 
>
> Key: CLOUDSTACK-5922
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5922
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Hypervisor Controller
>Affects Versions: 4.2.1
>Reporter: Min Chen
>Assignee: Min Chen
> Fix For: 4.3.0
>
>
> The issue is easily reproducible by deploying a VM with an RHEL 6.0 
> template. Once the VM is deployed you can see that the OS type is set to 
> "Other" in vCenter. When selecting 'Ubuntu 12.04 64-bit' the VM gets created 
> with an incorrect OS type as well, just as with RHEL.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6278) Baremetal Advanced Networking support

2014-03-26 Thread frank zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank zhang updated CLOUDSTACK-6278:


Description: functional spec link: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Baremetal+Advanced+Networking+Support

> Baremetal Advanced Networking support
> -
>
> Key: CLOUDSTACK-6278
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6278
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.4.0
>Reporter: frank zhang
>Assignee: frank zhang
> Fix For: 4.4.0
>
>
> functional spec link: 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Baremetal+Advanced+Networking+Support



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CLOUDSTACK-6287) While adding Secondary storage as SMB/CIFS in CS 4.3, the domain controller password appears in plain text in the key/value pairs.

2014-03-26 Thread Tejas (JIRA)
Tejas created CLOUDSTACK-6287:
-

 Summary: While adding Secondary storage as SMB/CIFS in CS 4.3, the 
domain controller password appears in plain text in the key/value pairs.
 Key: CLOUDSTACK-6287
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6287
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Hypervisor Controller, Storage Controller
Affects Versions: 4.3.0
 Environment: CentOS 6.3 x64-64, Hyperv hypervisor
Reporter: Tejas
Priority: Critical


While adding Secondary storage as SMB/CIFS in CS 4.3, the domain controller 
password appears in plain text in the key/value pairs.

Logs are as below,

2014-03-27 09:49:47,611 INFO  [o.a.c.s.d.l.CloudStackImageStoreLifeCycleImpl] 
(catalina-exec-12:ctx-bd85f47b ctx-df8f3444) Trying to add a new data store at 
cifs://10.129.151.61/Secondary to data center 1
2014-03-27 09:49:47,977 DEBUG [c.c.a.ApiServlet] (catalina-exec-12:ctx-bd85f47b 
ctx-df8f3444) ===END===  10.129.150.62 -- GET  
command=addImageStore&response=json&sessionkey=pjC%2B%2FjnddbFmQI7MtdDgo%2Bf5JmQ%3D&name=Secondary&provider=SMB&zoneid=5e5a7fee-9e4e-47df-86fa-c19da8240e84&url=cifs%3A%2F%2F10.129.151.61%2FSecondary&details%5B0%5D.key=user&details%5B0%5D.value=administrator&details%5B1%5D.key=password&details%5B1%5D.value=C1sco123&details%5B2%5D.key=domain&details%5B2%5D.value=nw.com&_=1395893875835
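One mitigation is to mask the password detail before the request line reaches the log. The sed expression below is a sketch against the URL-encoded parameter names visible in the request above; it is not CloudStack's actual logging code:

```shell
# Mask the value that follows a details[i].key=password pair in a
# URL-encoded query string (pattern is illustrative).
q='details%5B1%5D.key=password&details%5B1%5D.value=C1sco123&details%5B2%5D.key=domain'
echo "$q" | sed -E 's/(key=password&details%5B[0-9]+%5D\.value=)[^&]*/\1*****/'
```

With this applied before logging, the request line would carry `value=*****` in place of the credential.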




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CLOUDSTACK-6288) [Hyper-v] Change default ImageFormat to vhdx for hyper-v and allow registration of vhdx format templates

2014-03-26 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-6288:
--

 Summary: [Hyper-v] Change default ImageFormat to vhdx for hyper-v 
and allow registration of vhdx format templates 
 Key: CLOUDSTACK-6288
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6288
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar


Currently Hyper-V supports only the vhd image format for templates and 
volumes. Change the default ImageFormat to vhdx for Hyper-V and allow 
registration of vhdx-format templates. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CLOUDSTACK-6289) [Hyper-V] Storage migration failing in case of hyper-v if there are multiple disks attached to VM

2014-03-26 Thread Anshul Gangwar (JIRA)
Anshul Gangwar created CLOUDSTACK-6289:
--

 Summary: [Hyper-V] Storage migration failing in case of hyper-v if 
there are multiple disks attached to VM
 Key: CLOUDSTACK-6289
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6289
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Anshul Gangwar
Assignee: Anshul Gangwar


[Hyper-V] Storage migration failing in case of hyper-v if there are multiple 
disks attached to VM



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6288) [Hyper-v] Change default ImageFormat to vhdx for hyper-v and allow registration of vhdx format templates

2014-03-26 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-6288:
---

Affects Version/s: 4.3.0

> [Hyper-v] Change default ImageFormat to vhdx for hyper-v and allow 
> registration of vhdx format templates 
> -
>
> Key: CLOUDSTACK-6288
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6288
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
>
> Currently Hyper-V supports only the vhd image format for templates and 
> volumes. Change the default ImageFormat to vhdx for Hyper-V and allow 
> registration of vhdx-format templates. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6289) [Hyper-V] Storage migration failing in case of hyper-v if there are multiple disks attached to VM

2014-03-26 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-6289:
---

Fix Version/s: 4.3.0

> [Hyper-V] Storage migration failing in case of hyper-v if there are multiple 
> disks attached to VM
> -
>
> Key: CLOUDSTACK-6289
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6289
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
> Fix For: 4.3.0
>
>
> [Hyper-V] Storage migration failing in case of hyper-v if there are multiple 
> disks attached to VM



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6289) [Hyper-V] Storage migration failing in case of hyper-v if there are multiple disks attached to VM

2014-03-26 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-6289:
---

Affects Version/s: 4.3.0

> [Hyper-V] Storage migration failing in case of hyper-v if there are multiple 
> disks attached to VM
> -
>
> Key: CLOUDSTACK-6289
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6289
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
> Fix For: 4.3.0
>
>
> [Hyper-V] Storage migration failing in case of hyper-v if there are multiple 
> disks attached to VM



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6288) [Hyper-v] Change default ImageFormat to vhdx for hyper-v and allow registration of vhdx format templates

2014-03-26 Thread Anshul Gangwar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshul Gangwar updated CLOUDSTACK-6288:
---

Fix Version/s: 4.3.0

> [Hyper-v] Change default ImageFormat to vhdx for hyper-v and allow 
> registration of vhdx format templates 
> -
>
> Key: CLOUDSTACK-6288
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6288
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Anshul Gangwar
>Assignee: Anshul Gangwar
> Fix For: 4.3.0
>
>
> Currently Hyper-V supports only the vhd image format for templates and 
> volumes. Change the default ImageFormat to vhdx for Hyper-V and allow 
> registration of vhdx-format templates. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CLOUDSTACK-6290) [Windows] Generate SSL Keys at the time of installation

2014-03-26 Thread Damodar Reddy T (JIRA)
Damodar Reddy T created CLOUDSTACK-6290:
---

 Summary: [Windows] Generate SSL Keys at the time of installation
 Key: CLOUDSTACK-6290
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6290
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Install and Setup
Affects Versions: 4.4.0
Reporter: Damodar Reddy T
Assignee: Damodar Reddy T
 Fix For: 4.4.0


Currently the SSL keys are generated during server start-up, and on Windows 
this does not happen because of the sudo command. 
If we generate the SSL keys at installation time, before the server is 
started, the server can skip this step. Later we can make similar changes for 
Linux-based OSes as well.
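An install-time step of this kind might look like the sketch below. The paths, certificate subject, and the use of openssl here are assumptions for illustration, not the CloudStack installer (which may use keytool and its own keystore layout):

```shell
# Hedged sketch: generate a self-signed key pair before first server start,
# so start-up can skip key generation. Paths and subject are placeholders.
keydir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=cloudstack-management" \
    -keyout "$keydir/cloud.key" -out "$keydir/cloud.crt" 2>/dev/null
ls -1 "$keydir"
```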



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CLOUDSTACK-5859) [HA] Shared storage failure results in reboot loop; VMs with Local storage brought offline

2014-03-26 Thread Bjoern Teipel (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948922#comment-13948922
 ] 

Bjoern Teipel commented on CLOUDSTACK-5859:
---

I personally don't see any reason to reboot a hypervisor if NFS is 
unavailable or timing out due to IO/network issues, especially if you have 
VMs on local or CLVM storage.
I'll patch our installation to not reboot the hypervisor, since I had a pool 
of 10 servers happily rebooting after a VLAN configuration error; the pool 
also ran CLVM with fencing on top. That was not fun to fix. And this behavior 
doesn't exist on XenServer, to my knowledge.

> [HA] Shared storage failure results in reboot loop; VMs with Local storage 
> brought offline
> --
>
> Key: CLOUDSTACK-5859
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5859
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.2.0
> Environment: RHEL/CentOS 6.4 with KVM
>Reporter: Dave Garbus
>Priority: Critical
>
> We have a group of 13 KVM servers added to a single cluster within 
> CloudStack. All VMs use local hypervisor storage, with the exception of one 
> that was configured to use NFS-based primary storage with a HA service 
> offering.
> An issue occurred with the SAN responsible for serving the NFS mount (primary 
> storage for HA VM) and the mount was put into a read-only state. Shortly 
> after, each host in the cluster rebooted and continued to stay in a reboot 
> loop until I put the primary storage into maintenance. These messages were in 
> the agent.log on each of the KVM hosts:
> 2014-01-12 02:40:20,953 WARN  [kvm.resource.KVMHAMonitor] 
> (Thread-137180:null) write heartbeat failed: timeout, retry: 4
> 2014-01-12 02:40:20,953 WARN  [kvm.resource.KVMHAMonitor] 
> (Thread-137180:null) write heartbeat failed: timeout; reboot the host
> In essence, a single HA-enabled VM was able to bring down an entire KVM 
> cluster that was hosting a number of VMs with local storage. It would seem 
> that the fencing script needs to be improved to account for cases where both 
> local and shared storage is used.
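The improvement the report asks for, not fencing a host that also runs local-storage VMs, can be sketched as a decision step in the heartbeat path. The variables below are stand-ins for illustration, not the actual KVMHAMonitor or fencing-script logic:

```shell
# Decision sketch: on a shared-storage heartbeat timeout, reboot only when
# no local-storage VMs would be taken down with the host.
heartbeat_ok=false
local_storage_vms=12   # stand-in for a libvirt query counting local-disk VMs

if [ "$heartbeat_ok" = false ]; then
    if [ "$local_storage_vms" -gt 0 ]; then
        echo "heartbeat timeout: NOT rebooting, $local_storage_vms local-storage VMs running"
    else
        echo "heartbeat timeout: rebooting host"
    fi
fi
```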



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[jira] [Comment Edited] (CLOUDSTACK-5859) [HA] Shared storage failure results in reboot loop; VMs with Local storage brought offline

2014-03-26 Thread Bjoern Teipel (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948922#comment-13948922
 ] 

Bjoern Teipel edited comment on CLOUDSTACK-5859 at 3/27/14 5:49 AM:


I personally don't see any reason to reboot a hypervisor if NFS is 
unavailable or timing out due to IO/network issues, especially if you have 
VMs on local or CLVM storage.
I'll patch our installation to not reboot the hypervisor, since I had a pool 
of 10 servers happily rebooting after a VLAN configuration error; the pool 
also ran CLVM with fencing on top. That was not fun to fix. And this behavior 
doesn't exist on XenServer, to my knowledge.


was (Author: bjoernt):
I personally don't see any reason for rebooting a hypervisor if NFS is 
unavailable or timing out due to IO/Net issues, especially if you have VMs on 
local or CLVM storage.
I'll patch our installation to not reboot the hypervisor, since I had a pool of 
10 servers happily rebooting after a VLAN configuration error which ran also 
CLVM with fencing on top. Was not fun to fix. And those behavior does't exist 
on Xenserver to my knowledge

> [HA] Shared storage failure results in reboot loop; VMs with Local storage 
> brought offline
> --
>
> Key: CLOUDSTACK-5859
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5859
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM
>Affects Versions: 4.2.0
> Environment: RHEL/CentOS 6.4 with KVM
>Reporter: Dave Garbus
>Priority: Critical
>
> We have a group of 13 KVM servers added to a single cluster within 
> CloudStack. All VMs use local hypervisor storage, with the exception of one 
> that was configured to use NFS-based primary storage with a HA service 
> offering.
> An issue occurred with the SAN responsible for serving the NFS mount (primary 
> storage for HA VM) and the mount was put into a read-only state. Shortly 
> after, each host in the cluster rebooted and continued to stay in a reboot 
> loop until I put the primary storage into maintenance. These messages were in 
> the agent.log on each of the KVM hosts:
> 2014-01-12 02:40:20,953 WARN  [kvm.resource.KVMHAMonitor] 
> (Thread-137180:null) write heartbeat failed: timeout, retry: 4
> 2014-01-12 02:40:20,953 WARN  [kvm.resource.KVMHAMonitor] 
> (Thread-137180:null) write heartbeat failed: timeout; reboot the host
> In essence, a single HA-enabled VM was able to bring down an entire KVM 
> cluster that was hosting a number of VMs with local storage. It would seem 
> that the fencing script needs to be improved to account for cases where both 
> local and shared storage is used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CLOUDSTACK-6239) [Automation] jasypt decryption error is thrown after restarting console proxy VM

2014-03-26 Thread Ram Ganesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ram Ganesh updated CLOUDSTACK-6239:
---

Assignee: Rajesh Battala

> [Automation] jasypt decryption error is thrown after restarting console proxy 
> VM
> 
>
> Key: CLOUDSTACK-6239
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6239
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.4.0
>Reporter: Srikanteswararao Talluri
>Assignee: Rajesh Battala
>Priority: Blocker
> Fix For: 4.4.0
>
>
> STEPS TO REPRODUCE:
> 
> 1. create a zone and let SSVM and CPVM come up.
> 2. restart CPVM.
> I am hitting the following error while the CPVM is being restarted:
> 2014-03-14 05:48:55,917 DEBUG [cloud.resource.ResourceState] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Resource state update: [id = 8; 
> name = v-6-VM; old state = Creating; event = InternalCreated; new state = 
> Enabled]
> 2014-03-14 05:48:55,917 DEBUG [cloud.host.Status] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Transition:[Resource state = 
> Enabled, Agent event = AgentConnected, Host id = 8, name = v-6-VM]
> 2014-03-14 05:48:55,922 DEBUG [cloud.host.Status] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Agent status update: [id = 8; name 
> = v-6-VM; old status = Creating; event = AgentConnected; new status = 
> Connecting; old update count = 0; new update count = 1]
> 2014-03-14 05:48:55,922 DEBUG [agent.manager.ClusteredAgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) create ClusteredAgentAttache for 8
> 2014-03-14 05:48:55,923 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> XcpServerDiscoverer
> 2014-03-14 05:48:55,923 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> HypervServerDiscoverer
> 2014-03-14 05:48:55,923 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> DeploymentPlanningManagerImpl
> 2014-03-14 05:48:55,923 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> ClusteredVirtualMachineManagerImpl
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> NetworkOrchestrator
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> StoragePoolMonitor
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> SecurityGroupListener
> 2014-03-14 05:48:55,924 INFO  [network.security.SecurityGroupListener] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Received a host startup notification
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> SecondaryStorageListener
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> UploadListener
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> BehindOnPingListener
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> DirectNetworkStatsListener
> 2014-03-14 05:48:55,924 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Sending Connect to listener: 
> ConsoleProxyListener
> 2014-03-14 05:48:55,928 DEBUG [utils.crypt.DBEncryptionUtil] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Error while decrypting: 
> A1i0Flrc5LPgCsx3V7cOVQ
> 2014-03-14 05:48:55,929 ERROR [agent.manager.AgentManagerImpl] 
> (AgentConnectTaskPool-11423:ctx-abee3e50) Monitor ConsoleProxyListener says 
> there is an error in the connect process for 8 due to null
> org.jasypt.exceptions.EncryptionOperationNotPossibleException
>   at 
> org.jasypt.encryption.pbe.StandardPBEByteEncryptor.decrypt(StandardPBEByteEncryptor.java:981)
>   at 
> org.jasypt.encryption.pbe.StandardPBEStringEncryptor.decrypt(StandardPBEStringEncryptor.java:725)
>   at 
> com.cloud.utils.crypt.DBEncryptionUtil.decrypt(DBEncryptionUtil.java:63)
>   at 
> org.apache.cloudstack.framework.config.impl.ConfigurationVO.getValue(ConfigurationVO.java:125)
>   at 
> org.apache.cloudstack.framework.config.ConfigKey.value(ConfigKey.java:136)
>   at 
> org

[jira] [Updated] (CLOUDSTACK-5219) Cannot create a template from an existing Snapshot (Simulator)

2014-03-26 Thread Meghna Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Meghna Kale updated CLOUDSTACK-5219:


Assignee: Meghna Kale

> Cannot create a template from an existing Snapshot (Simulator)
> --
>
> Key: CLOUDSTACK-5219
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5219
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: David Grizzanti
>Assignee: Meghna Kale
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CLOUDSTACK-5150) Creating template from a VM in Simulator results in incorrect size

2014-03-26 Thread Girish Chaudhari (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Girish Chaudhari reassigned CLOUDSTACK-5150:


Assignee: Girish Chaudhari

> Creating template from a VM in Simulator results in incorrect size
> --
>
> Key: CLOUDSTACK-5150
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5150
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Simulator
>Affects Versions: 4.2.0
>Reporter: David Grizzanti
>Assignee: Girish Chaudhari
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)