[jira] [Resolved] (CLOUDSTACK-5216) "delete volume failed due to Exception: java.lang.Exception" While destroying the SSVM.

2013-12-05 Thread Likitha Shetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Likitha Shetty resolved CLOUDSTACK-5216.


Resolution: Fixed

> "delete volume failed due to Exception: java.lang.Exception" While destroying 
> the SSVM. 
> 
>
> Key: CLOUDSTACK-5216
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5216
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup
>Affects Versions: 4.3.0
>Reporter: manasaveloori
>Assignee: Likitha Shetty
> Fix For: 4.3.0
>
> Attachments: management-server.zip
>
>
> 1. Deploy CS with the 4.3 build using an ESXi 5.1 hypervisor.
> 2. After CS is up with the system VMs, destroy the SSVM.
> The following ERROR message is observed in the log:
> 2013-11-20 21:09:07,721 INFO  [c.c.s.r.VmwareStorageProcessor] 
> (DirectAgent-45:ctx-810a6ab1 10.147.40.31) Executing resource DestroyCommand: 
> {"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"98ad13cd-5b71-4f69-9b28-680430ca4c84","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"f672eae0-b400-3767-808f-b787a5c04d5f","id":1,"poolType":"NetworkFilesystem","host":"10.147.28.7","path":"/export/home/manasa/primaryVMw","port":2049,"url":"NetworkFilesystem://10.147.28.7//export/home/manasa/primaryVMw/?ROLE=Primary&STOREUUID=f672eae0-b400-3767-808f-b787a5c04d5f"}},"name":"ROOT-3","size":2097152000,"path":"ROOT-3","volumeId":3,"vmName":"s-3-VM","accountId":1,"chainInfo":"{\"diskDeviceBusName\":\"ide0:1\",\"diskChain\":[\"[f672eae0b4003767808fb787a5c04d5f]
>  
> s-3-VM/ROOT-3.vmdk\"]}","format":"OVA","id":3,"deviceId":0,"hypervisorType":"VMware"}},"wait":0}
> 2013-11-20 21:09:07,788 INFO  [c.c.s.r.VmwareStorageProcessor] 
> (DirectAgent-45:ctx-810a6ab1 10.147.40.31) Destroy root volume and VM itself. 
> vmName s-3-VM
> 2013-11-20 21:09:07,811 DEBUG [c.c.h.v.m.VirtualMachineMO] 
> (DirectAgent-45:ctx-810a6ab1 10.147.40.31) Retrieved 3 networks with key : 2
> 2013-11-20 21:09:10,023 ERROR [c.c.s.r.VmwareStorageProcessor] 
> (DirectAgent-45:ctx-810a6ab1 10.147.40.31) delete volume failed due to 
> Exception: java.lang.Exception
> Message: An iSCSI HBA must be configured before a host can use iSCSI storage.
> java.lang.Exception: An iSCSI HBA must be configured before a host can use 
> iSCSI storage.
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.addRemoveInternetScsiTargetsToAllHosts(VmwareResource.java:4699)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.removeManagedTargetsFromCluster(VmwareResource.java:4647)
> at 
> com.cloud.storage.resource.VmwareStorageProcessor.deleteVolume(VmwareStorageProcessor.java:1520)
> at 
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:120)
> at 
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:54)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:538)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-11-20 21:09:10,026 DEBUG [c.c.a.m.DirectAgentAttache] 
> (DirectAgent-45:ctx-810a6ab1) Seq 1-1640300621: Response Received:
> 2013-11-20
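The stack trace above shows deleteVolume descending into removeManagedTargetsFromCluster for an NFS-backed ROOT volume, which fails on hosts that have no iSCSI HBA configured. A minimal sketch of the missing guard, with wholly illustrative names (this is not the actual CloudStack code), is to skip iSCSI target cleanup for non-managed storage:

```python
# Illustrative sketch only -- names and structure are assumptions, not the
# actual VmwareStorageProcessor code. The idea: only managed (iSCSI-backed)
# volumes need target cleanup on the cluster's hosts; NFS-backed volumes
# should never touch iSCSI state, so hosts without an iSCSI HBA never fail.

def delete_volume(volume, actions):
    # Destroy the disk chain regardless of the pool type.
    actions.append("destroy-vmdk:" + volume["path"])
    # Guard: iSCSI target cleanup applies to managed storage only.
    if volume["pool_type"] == "iscsi":
        actions.append("remove-iscsi-targets:" + volume["path"])
    return actions

# An NFS-backed ROOT volume, as in the log above, skips the iSCSI step.
delete_volume({"path": "ROOT-3", "pool_type": "NetworkFilesystem"}, [])
```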

[jira] [Commented] (CLOUDSTACK-5216) "delete volume failed due to Exception: java.lang.Exception" While destroying the SSVM.

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841033#comment-13841033
 ] 

ASF subversion and git services commented on CLOUDSTACK-5216:
-

Commit e6127a7c00cb663ae10a1f19b9c2c0b0310768d2 in branch refs/heads/master 
from [~likithas]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e6127a7 ]

CLOUDSTACK-5216. delete volume failed due to Exception: java.lang.Exception" 
while destroying Vms



[jira] [Commented] (CLOUDSTACK-5216) "delete volume failed due to Exception: java.lang.Exception" While destroying the SSVM.

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841029#comment-13841029
 ] 

ASF subversion and git services commented on CLOUDSTACK-5216:
-

Commit bc86103c2b1afc4b0bb437d75487bdda26a5958f in branch refs/heads/4.3 from 
[~likithas]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=bc86103 ]

CLOUDSTACK-5216. delete volume failed due to Exception: java.lang.Exception" 
while destroying Vms



[jira] [Updated] (CLOUDSTACK-5370) XenServer - Snapshots - vhd entries get accumulated on the primary store when snapshot creation fails because of not being able to reach the secondary store.

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5370:
---

Assignee: Sanjay Tripathi

> XenServer - Snapshots - vhd entries get accumulated on the primary store when 
> snapshot creation fails because of not being able to reach the secondary 
> store.
> -
>
> Key: CLOUDSTACK-5370
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5370
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Sanjay Tripathi
>Priority: Critical
> Fix For: 4.3.0
>
>
> Set up:
> Advanced zone with 2 XenServer 6.2 hosts:
> 1. Deploy 5 VMs on each of the hosts with a 10 GB ROOT volume size, so we 
> start with 10 VMs.
> 2. Start concurrent snapshots for the ROOT volumes of all the VMs.
> 3. Shut down the secondary storage server while the snapshots are in 
> progress. (In my case I stopped the NFS server.)
> 4. Bring the secondary storage server up after 12 hours. (In my case I 
> started the NFS server.)
> While the secondary storage server (NFS server) was down for about 12 hours, 
> I see that hourly snapshots are attempted every hour and fail in the 
> "CreatedOnPrimary" state. I see many entries being created on the primary 
> store (I see 120 entries, but I have only 14 VMs).
> We accumulate 2 vhd files on the primary store for every snapshot that is 
> attempted.
> When the secondary store is brought up and another snapshot is attempted 
> and succeeds, we see the vhd files all being cleared out.
> It is a problem that we accumulate so many vhd files on the primary store 
> (in the case of VMware and KVM, where there are no delta snapshots, this 
> size would be significantly higher).
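A defensive pattern for the accumulation described above is to roll back the transient primary-store entry whenever the copy to secondary storage fails. This is only a hedged sketch of that idea with invented names, not the actual snapshot code:

```python
# Sketch (assumed names): if backing up a just-created snapshot to secondary
# storage fails, remove its vhd entry from the primary store immediately
# instead of leaving it behind for a later successful snapshot to clean up.

def take_snapshot(primary, backup_to_secondary):
    snap = "snap-%d" % (len(primary) + 1)
    primary.append(snap)              # transient created-on-primary entry
    try:
        return backup_to_secondary(snap)
    except IOError:
        primary.remove(snap)          # roll back so nothing accumulates
        raise

def failing_backup(snap):
    raise IOError("secondary store unreachable")

primary = []
try:
    take_snapshot(primary, failing_backup)
except IOError:
    pass
# primary is empty again: no leftover vhd entries after the failed attempt
```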



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5371) Maintenance mode for secondary store.

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5371:
---

Assignee: edison su

> Maintenance mode for secondary store.
> 
>
> Key: CLOUDSTACK-5371
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5371
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: edison su
>Priority: Critical
> Fix For: 4.3.0
>
>
> Set up:
> Advanced zone with 2 XenServer 6.2 hosts:
> 1. Deploy 5 VMs on each of the hosts with a 10 GB ROOT volume size, so we 
> start with 10 VMs.
> 2. Start concurrent snapshots for the ROOT volumes of all the VMs by creating 
> hourly snapshots.
> 3. Shut down the secondary storage server while the snapshots are in 
> progress. (In my case I stopped the NFS server.)
> 4. Bring the secondary storage server up after 12 hours. (In my case I 
> started the NFS server.)
> While the secondary storage server (NFS server) was down for about 12 hours, 
> I see that hourly snapshots are attempted every hour and fail in the 
> "CreatedOnPrimary" state. I see many entries (2 per failed snapshot 
> attempt) being created on the primary store.
> In such cases, if the admin is aware of connectivity issues or of running 
> out of disk space on the secondary store, he should be able to set 
> "Maintenance Mode" for the secondary store, so that we have the ability to 
> not even attempt snapshots, instead of attempting, failing, and leaving 
> behind snapshots in a failed state.
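The requested behavior boils down to a gate in the scheduler. A hedged sketch, with hypothetical names (the real implementation would live in the snapshot scheduling path):

```python
# Sketch of the proposed Maintenance mode gate (names are assumptions):
# the scheduler consults the secondary store's state and skips the backup
# attempt outright when the admin has placed the store in Maintenance,
# so no failed snapshots or leftover entries are produced.

def schedule_backup(store, attempt):
    if store.get("state") == "Maintenance":
        return "Skipped"          # never dispatched, nothing left behind
    return attempt(store)

schedule_backup({"name": "ss1", "state": "Maintenance"}, lambda s: "BackedUp")
```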





[jira] [Updated] (CLOUDSTACK-5372) Xenserver - SR not being recreated when the Primary storage is brought down and brought back up again resulting in not being able to start the Vms that have their vo

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5372:
---

Assignee: Sanjay Tripathi

> XenServer - SR not being recreated when the primary storage is brought down 
> and brought back up again, resulting in not being able to start the VMs that 
> have their volumes in this primary store.  
> ---
>
> Key: CLOUDSTACK-5372
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5372
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Sanjay Tripathi
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: primarydown.rar
>
>
> XenServer - SR not being recreated when the primary storage is brought down 
> and brought back up again, resulting in not being able to start the VMs that 
> have their volumes in this primary store.
> Set up:
> 1 cluster with 2 hosts and 2 Primary storages (PS1 and PS2).
> Start snapshots for a couple of VMs which have their primary store in PS1.
> Reboot PS2. (After this the NFS server was still down.)
> I see that host1 and host2 reboot, but HA is not triggered, since the host 
> status remains "UP".
> After about 10 minutes, VM sync kicks in and stops all the VMs.
> After about 12 minutes (from the time the snapshots were created), I see the 
> snapshot job failing (read timeouts).
> I start the NFS server now.
> Issues:
> I was not able to take another snapshot for the ROOT volume of the VMs that 
> reside in PS2.
> When I try to start the VM, the VM also fails to start.
> I see the SR for PS2 is still not in the Connected state on the XenServer side.
> The following exception is seen when attempting to take a snapshot:
> 2013-12-04 15:48:19,502 WARN  [c.c.h.x.r.XenServerStorageProcessor] 
> (DirectAgent-311:ctx-59768803) create snapshot operation Failed for 
> snapshotId: 251, reason: The SR has no attached PBDs
> The SR has no attached PBDs
> at com.xensource.xenapi.Types.checkResponse(Types.java:510)
> at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
> at com.xensource.xenapi.VDI.miamiSnapshot(VDI.java:1217)
> at com.xensource.xenapi.VDI.snapshot(VDI.java:1192)
> at 
> com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.createSnapshot(XenServerStorageProcessor.java:426)
> at 
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:107)
> at 
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:52)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:613)
> at 
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:59)
> at 
> com.cloud.hypervisor.xen.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:106)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>
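The "SR has no attached PBDs" failure above means the SR's PBDs stayed unplugged after the storage came back. One recovery, sketched here with toy data structures (the real code would go through the XenServer API, e.g. SR.get_PBDs and PBD.plug; everything below is an assumption for illustration), is to re-plug any unplugged PBDs before using the SR:

```python
# Illustrative sketch: walk the SR's PBDs and re-plug any that are
# unattached, so the SR is usable again once the storage server is back.
# The dict fields stand in for XenServer API objects and calls.

def ensure_sr_attached(sr):
    for pbd in sr["pbds"]:
        if not pbd["attached"]:
            pbd["attached"] = True    # stands in for PBD.plug
    return all(p["attached"] for p in sr["pbds"])

ensure_sr_attached({"pbds": [{"attached": False}, {"attached": True}]})
```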

[jira] [Updated] (CLOUDSTACK-5385) Management server is not able to start when there are ~15 snapshot policies.

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5385:
---

Assignee: Koushik Das

> Management server is not able to start when there are ~15 snapshot policies.
> --
>
> Key: CLOUDSTACK-5385
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5385
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Koushik Das
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar, test.rar
>
>
> Management server is not able to start when there are ~15 snapshot policies.
> The management server was up and running fine.
> I had snapshot policies configured for 15 ROOT volumes.
> I stopped and started the management server.
> The management server does not start up successfully.
> Following is what I see in the management server logs:
> It is stuck after this:
> 2013-12-04 20:35:24,132 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server 112516401760401 is being started
> 2013-12-04 20:35:24,138 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server (host id : 1) is being started at 10.223.49.5:9090
> 2013-12-04 20:35:24,152 INFO  [c.c.c.ClusterManagerImpl] (main:null) Cluster 
> manager was started successfully
> 2013-12-04 20:35:24,153 INFO  [c.c.s.s.SecondaryStorageManagerImpl] 
> (main:null) Start secondary storage vm manager
> 2013-12-04 20:35:24,159 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-0:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-1:null) Starting work
> 2013-12-04 20:35:24,165 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-4:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-3:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-2:null) Starting work
> 2013-12-04 20:35:24,236 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 1 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,297 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 2 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,314 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 3 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,334 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 4 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,354 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 5 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,379 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 6 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,434 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 7 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,454 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 8 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,472 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 9 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,493 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 10 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,510 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 11 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,526 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 13 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,543 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 14 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,565 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 15 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:37,364 INFO  [c.c.u.c.ComponentContext] (main:null) 
> Configuring 
> com.cloud.bridge.persist.dao.Offering

[jira] [Updated] (CLOUDSTACK-5392) Multiple Secondary Store - There is no retry happening on snapshot failures when one of the secondary stores is not reachable.

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi updated CLOUDSTACK-5392:
---

Assignee: Min Chen

> Multiple Secondary Store - There is no retry happening on snapshot failures 
> when one of the secondary stores is not reachable.
> --
>
> Key: CLOUDSTACK-5392
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5392
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Min Chen
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar
>
>
> Multiple Secondary Store - There is no retry happening on snapshot failures 
> when one of the secondary stores is not reachable.
> Steps to reproduce the problem:
> Set up:
> Advanced zone set up with 2 Xenserver hosts.
> 2 secondary NFS stores - ss1 and ss2.
> Bring down ss1.
> Deployed 3 VMs.
> Create snapshots for ROOT volume of these 3 VMs.
> Out of the 3 snapshot requests, 2 were sent to ss1 and 1 to ss2.
> The 2 createSnapshot commands that were sent to ss1 failed during 
> "org.apache.cloudstack.storage.command.CopyCommand". 
> But no retry was done on ss2.
> Expected behavior:
> On failure to back up on one secondary store, we should attempt the other 
> secondary stores.
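The expected behavior is a straightforward failover loop over the available stores. A hedged sketch with invented names (not the actual CloudStack backup code):

```python
# Sketch of the requested retry: try each secondary store in turn and only
# fail the snapshot backup once every store has been attempted. Store names
# and the copy callback are illustrative.

def backup_with_retry(stores, copy):
    last_err = None
    for store in stores:
        try:
            return copy(store)
        except IOError as err:
            last_err = err            # remember the failure, try the next store
    raise last_err                    # all stores failed

def copy_to(store):
    if store == "ss1":
        raise IOError("ss1 unreachable")
    return "backed-up-on-" + store

# ss1 is down, so the backup falls through to ss2 instead of failing outright.
backup_with_retry(["ss1", "ss2"], copy_to)
```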





[jira] [Created] (CLOUDSTACK-5396) Enhance marvin to be able to run specific tests

2013-12-05 Thread Girish Shilamkar (JIRA)
Girish Shilamkar created CLOUDSTACK-5396:


 Summary: Enhance marvin to be able to run specific tests
 Key: CLOUDSTACK-5396
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5396
 Project: CloudStack
  Issue Type: Test
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: marvin
Affects Versions: 4.3.0, 4.4.0
Reporter: Girish Shilamkar
Assignee: Santhosh Kumar Edukulla


As of now marvin runs all the tests in a test module. There is no clean way to 
run specific tests from a test module; the only option is to add a skip 
decorator.

We could add two options: --skip-test, to explicitly avoid running certain 
tests from a test module, and --run-only, to run only some of the tests.
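The two proposed options amount to filtering the module's test names before handing the selection to the runner. A hypothetical sketch of that filtering with glob patterns (the option names follow the proposal above; the implementation and pattern semantics are assumptions):

```python
import fnmatch

# Sketch: select tests by applying --run-only patterns first (keep only
# matches), then --skip-test patterns (drop matches). Pattern matching
# uses shell-style globs via fnmatch; all names here are illustrative.

def select_tests(all_tests, run_only=None, skip_test=None):
    selected = [t for t in all_tests
                if run_only is None
                or any(fnmatch.fnmatch(t, p) for p in run_only)]
    if skip_test:
        selected = [t for t in selected
                    if not any(fnmatch.fnmatch(t, p) for p in skip_test)]
    return selected

select_tests(["test_deploy", "test_snapshot", "test_delete"],
             run_only=["test_*"], skip_test=["test_delete"])
```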





[jira] [Commented] (CLOUDSTACK-5268) [Automation] [UI]There is no option to create snapshot from volume of running vm

2013-12-05 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841006#comment-13841006
 ] 

Min Chen commented on CLOUDSTACK-5268:
--

This is a UI issue, so assigning it to Jessica to take a look.

> [Automation] [UI]There is no option to create snapshot from volume of running 
> vm 
> -
>
> Key: CLOUDSTACK-5268
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5268
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.3.0
> Environment: KVM
> Branch 4.3
>Reporter: Rayees Namathponnan
>Assignee: Jessica Wang
>Priority: Blocker
> Fix For: 4.3.0
>
>
> Steps to reproduce:
> Step 1: Deploy a VM.
> Step 2: Once the VM is up, select the root volume.
> Step 3: Create a snapshot.
> Actual result:
> There is no option to create a snapshot from the volume; you need to stop 
> the VM first to create a snapshot.





[jira] [Updated] (CLOUDSTACK-5268) [Automation] [UI]There is no option to create snapshot from volume of running vm

2013-12-05 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen updated CLOUDSTACK-5268:
-

Component/s: (was: Snapshot)
 UI
   Assignee: Jessica Wang  (was: Min Chen)
Summary: [Automation] [UI]There is no option to create snapshot from 
volume of running vm   (was: [Automation] There is no option to create snapshot 
from volume of running vm )

> [Automation] [UI]There is no option to create snapshot from volume of running 
> vm 
> -
>
> Key: CLOUDSTACK-5268
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5268
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.3.0
> Environment: KVM
> Branch 4.3
>Reporter: Rayees Namathponnan
>Assignee: Jessica Wang
>Priority: Blocker
> Fix For: 4.3.0
>
>
> Steps to reproduce 
> Step 1 : Deploy a VM
> Step 2 : Once the VM is up, select the root volume 
> Step 3 : Create a snapshot
> Actual Result 
> There is no option to create a snapshot from the volume; you need to stop the 
> VM first to create a snapshot 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CLOUDSTACK-4263) Unable to get git number in maven-jgit-buildnumber-plugin while building CloudStack from outside the repository "cloudstack.git"

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi resolved CLOUDSTACK-4263.


Resolution: Fixed

> Unable to get git number in maven-jgit-buildnumber-plugin while building 
> CloudStack from outside the repository "cloudstack.git"
> --
>
> Key: CLOUDSTACK-4263
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4263
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Packaging
>Affects Versions: 4.2.0
>Reporter: Rayees Namathponnan
>Assignee: Rayees Namathponnan
> Fix For: 4.3.0
>
>
> We have a plugin to get the build number; it was added as part of the commit below: 
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=96d29c7f3de5b6ad52aaecd633f9b0ef80aae7a0
> I am creating an RPM build by calling the commands below through another script, 
> which lives outside "cloudstack.git":
> cd $CWD/cloudstack/packaging/centos63
> echo "Performing Campo Packaging"
> ./package.sh --pack nonoss
> [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
> cloud-client-ui ---
> [INFO] No sources to compile
> [INFO] 
> [INFO] --- maven-surefire-plugin:2.12:test (default-test) @ cloud-client-ui 
> ---
> [INFO] Skipping execution of surefire because it has already been run for 
> this configuration
> [INFO] 
> [INFO] --- maven-jgit-buildnumber-plugin:1.2.6:extract-buildnumber 
> (git-buildnumber) @ cloud-client-ui ---
> [INFO] Cannot extract Git info, maybe custom build with 'pl' argument is 
> running
> [INFO] 
> [INFO] --- maven-war-plugin:2.3:war (default-war) @ cloud-client-ui ---
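Since the jgit plugin can only extract commit info when packaging runs inside a git work tree, a wrapper script could pre-check that condition before invoking package.sh and fail fast with a clear message. A hedged sketch of just the check (hypothetical helper, not CloudStack code):

```python
import subprocess

def inside_git_worktree(path):
    """Return True if `path` is inside a git work tree -- the condition
    maven-jgit-buildnumber-plugin needs in order to extract a build number."""
    try:
        out = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            cwd=path, capture_output=True, text=True, check=False)
    except FileNotFoundError:  # git itself is not installed
        return False
    return out.stdout.strip() == "true"
```

Running the check against the cloudstack checkout directory (rather than the wrapper script's own directory) would surface the "Cannot extract Git info" condition before the maven build starts.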



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CLOUDSTACK-5141) [Automation] Router deployment failed due to failure in SavePasswordCommand; observed "Unable to save password to DomR" error in the KVM agent log

2013-12-05 Thread Animesh Chaturvedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Animesh Chaturvedi resolved CLOUDSTACK-5141.


Resolution: Cannot Reproduce

Please reopen if this is still an issue.

> [Automation] Router deployment failed due to failure in SavePasswordCommand; 
> observed "Unable to save password to DomR" error in the KVM agent log 
> -
>
> Key: CLOUDSTACK-5141
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5141
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Virtual Router
>Affects Versions: 4.2.1
> Environment: KVM 
> Branch : 4.2.1
>Reporter: Rayees Namathponnan
>Assignee: Sheng Yang
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5141.rar
>
>
> Router deployment is failing inconsistently during automation runs; the command 
> com.cloud.agent.api.routing.SavePasswordCommand failed in the KVM agent 
> KVM agent log
> 2013-11-11 11:43:36,783 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-1:null) Request:Seq 2-674103501:  { Cmd , MgmtId: 
> 29066118877352, via: 2, Ver: v
> 1, Flags: 100111, 
> [{"com.cloud.agent.api.routing.SavePasswordCommand":{"password":"fnirq_cnffjbeq","vmIpAddress":"10.1.1.17","vmName":"494fbcec-82ab-489
> 3-95dc-36cb41f8c0ff","executeInSequence":true,"accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.name":"r-18-QA","rout
> er.ip":"169.254.1.19"},"wait":0}},{"com.cloud.agent.api.routing.VmDataCommand":{"vmIpAddress":"10.1.1.17","vmName":"494fbcec-82ab-4893-95dc-36cb41f8c0ff
> ","executeInSequence":true,"accessDetails":{"router.guest.ip":"10.1.1.1","zone.network.type":"Advanced","router.name":"r-18-QA","router.ip":"169.254.1.1
> 9"},"wait":0}}] }
> 2013-11-11 11:43:57,799 DEBUG [cloud.agent.Agent] 
> (agentRequest-Handler-1:null) Seq 2-674103501:  { Ans: , MgmtId: 
> 29066118877352, via: 2, Ver: v1, Flags: 110, 
> [{"com.cloud.agent.api.Answer":{"result":false,"details":"Unable to save 
> password to 
> DomR.","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped
>  by previous failure","wait":0}}] }
> Management server log
> 2013-11-11 11:43:56,991 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-1:null) SeqA 6-270: Processing Seq 6-270:  { Cmd , 
> MgmtId: -1, via: 6, Ver: v1, Flags: 11, 
> [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":3,"_loadInfo":"{\n
>   \"connections\": []\n}","wait":0}}] }
> 2013-11-11 11:43:56,994 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-1:null) SeqA 6-270: Sending Seq 6-270:  { Ans: , 
> MgmtId: 29066118877352, via: 6, Ver: v1, Flags: 100010, 
> [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> 2013-11-11 11:43:57,864 DEBUG [agent.transport.Request] 
> (AgentManager-Handler-13:null) Seq 2-674103501: Processing:  { Ans: , MgmtId: 
> 29066118877352, via: 2, Ver: v1, Flags: 110, 
> [{"com.cloud.agent.api.Answer":{"result":false,"details":"Unable to save 
> password to 
> DomR.","wait":0}},{"com.cloud.agent.api.Answer":{"result":false,"details":"Stopped
>  by previous failure","wait":0}}] }
> 2013-11-11 11:43:57,864 DEBUG [agent.manager.AgentAttache] 
> (AgentManager-Handler-13:null) Seq 2-674103504: Sending now.  is current 
> sequence.
> 2013-11-11 11:43:57,865 DEBUG [agent.transport.Request] 
> (Job-Executor-98:job-90 = [ e7f1a0ac-f28a-4d3c-b5bf-8da87e6db622 ]) Seq 
> 2-674103501: Received:  { Ans: , MgmtId: 29066118877352, via: 2, Ver: v1, 
> Flags: 110, { Answer, Answer } }
> 2013-11-11 11:43:57,865 INFO  [cloud.vm.VirtualMachineManagerImpl] 
> (Job-Executor-98:job-90 = [ e7f1a0ac-f28a-4d3c-b5bf-8da87e6db622 ]) Unable to 
> contact resource.
> com.cloud.exception.ResourceUnavailableException: Resource [DataCenter:1] is 
> unreachable: Unable to apply userdata and password entry on router
> at 
> com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyRules(VirtualNetworkApplianceManagerImpl.java:3827)
> at 
> com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyUserData(VirtualNetworkApplianceManagerImpl.java:3017)
> at 
> com.cloud.network.element.VirtualRouterElement.addPasswordAndUserdata(VirtualRouterElement.java:930)
> at 
> com.cloud.network.NetworkManagerImpl.prepareElement(NetworkManagerImpl.java:2085)
> at 
> com.cloud.network.NetworkManagerImpl.prepareNic(NetworkManagerImpl.java:2200)
> at 
> com.cloud.network.NetworkManagerImpl.prepare(NetworkManagerImpl.java:2136)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerIm

[jira] [Commented] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840975#comment-13840975
 ] 

ASF subversion and git services commented on CLOUDSTACK-5336:
-

Commit 6df26fe5045073d611231c43f75d088bc6cfd91d in branch refs/heads/4.3 from 
[~likithas]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6df26fe ]

Since CLOUDSTACK-5336 is resolved, changing the log level to TRACE
Revert "CLOUDSTACK-5336. During regression automation management server hang 
with "out of memory error"."

This reverts commit c055417589aba55b488bcfb003af7959ff2a63f5.


> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create a build from the 4.3 branch and install from RPM
> Run BVT and regression automation 
> Result 
> BVT completes without any issues; after regression starts, the MS stops 
> responding, with the "out of memory error" below observed in the MS log 
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.(Service.java:77)
> at com.vmware.vim25.VimService.(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 
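The stack trace shows the heap filling up while a new VimService parses the vSphere WSDL inside getServiceContext, i.e. an expensive client setup repeated per command. A generic memoization sketch of the mitigation idea, with hypothetical names (not CloudStack's actual VmwareContextFactory code):

```python
_context_cache = {}

def get_service_context(host, create_context):
    """Memoize one expensive client context per host, so the costly
    setup step (e.g. WSDL parsing) runs once instead of on every command."""
    ctx = _context_cache.get(host)
    if ctx is None:
        ctx = create_context(host)
        _context_cache[host] = ctx
    return ctx
```

A real implementation would also need invalidation when a cached context goes stale (for example, after a vCenter session timeout).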

[jira] [Commented] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840974#comment-13840974
 ] 

ASF subversion and git services commented on CLOUDSTACK-5336:
-

Commit 6df26fe5045073d611231c43f75d088bc6cfd91d in branch refs/heads/4.3 from 
[~likithas]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6df26fe ]

Since CLOUDSTACK-5336 is resolved, changing the log level to TRACE
Revert "CLOUDSTACK-5336. During regression automation management server hang 
with "out of memory error"."

This reverts commit c055417589aba55b488bcfb003af7959ff2a63f5.


> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create a build from the 4.3 branch and install from RPM
> Run BVT and regression automation 
> Result 
> BVT completes without any issues; after regression starts, the MS stops 
> responding, with the "out of memory error" below observed in the MS log 
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.(Service.java:77)
> at com.vmware.vim25.VimService.(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 

[jira] [Commented] (CLOUDSTACK-5278) Egress Firewall rules clarifications

2013-12-05 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840959#comment-13840959
 ] 

Will Stevens commented on CLOUDSTACK-5278:
--

Great, thanks.  I will test the patch in my environment in the morning...

> Egress Firewall rules clarifications
> 
>
> Key: CLOUDSTACK-5278
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5278
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Will Stevens
>Assignee: Jayapal Reddy
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: diff.txt
>
>
> These issues may also exist in the 4.2 branch, but I am currently 
> testing/working on the 4.3 branch.
> I believe these bugs were introduced with the change to the Network Service 
> Offering to add the 'Default egress policy' dropdown.
> https://issues.apache.org/jira/browse/CLOUDSTACK-1578
> I am trying to resolve the bugs this change introduced in the Palo Alto 
> plugin.
> There are two types of Egress rules (from what I can tell).
> - FirewallRule.FirewallRuleType.System : this appears to be set up by the 
> system on network creation to correspond to the global network default 
> allow/deny egress rule.
> - FirewallRule.FirewallRuleType.User : any rule that a user creates through 
> the UI will get this type.
> There are bugs associated with both of the options in the dropdown (allow and 
> deny).
> Case: 'deny'
> - When the network is set up, it does not try to create the global deny rule 
> for the network, but it appears to register that it exists.  Instead, when 
> the first egress rule is created by a user, the system sees both the 'system' 
> and 'user' rules, so it will create both rules then.
> Case: both 'allow' and 'deny'
> - The clean-up of the network-global 'system' egress rules is never done.  
> So when a network is deleted, it will leave an orphaned egress rule 
> associated with the previous network's CIDR.  This is bound to cause many 
> issues.
> - Even worse, it appears that the ID for the network global 'system' egress 
> rule is hardcoded to '0'.  Every time I try to spin up a new network it will 
> attempt to create a rule with a '0' ID, but since one already exists with 
> that ID, there is a config collision.  In my case (Palo Alto), the second 
> rule with the same ID gets ignored because it checks to see if the rule 
> exists and it gets a 'yes' back because the previous network has an egress 
> rule with that ID already.
> Let me know if you have additional questions...
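The two problems described, a hardcoded '0' rule ID and no cleanup on network delete, can be modeled in a toy sketch (hypothetical names and structure, not the CloudStack or Palo Alto plugin API):

```python
class ToyFirewall:
    """Toy model: per-network 'system' egress rules keyed by a unique,
    network-derived id (never a shared '0'), and removed together with
    the network so no orphaned rules survive deletion."""
    def __init__(self):
        self.rules = {}  # rule_id -> (network_id, rule_type, policy)

    def create_network(self, network_id, default_egress_policy):
        rule_id = "system-egress-%s" % network_id  # unique per network
        self.rules[rule_id] = (network_id, "System", default_egress_policy)

    def delete_network(self, network_id):
        # clean up every rule tied to the network, system rules included
        self.rules = {rid: r for rid, r in self.rules.items()
                      if r[0] != network_id}
```

With per-network IDs, a second network's system rule can never collide with (and be silently ignored in favour of) the first network's rule.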



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Closed] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR Fail to add VPN connections

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen closed CLOUDSTACK-5394.
-


> site-to-site VPN VR-to-VR Fail to add VPN connections
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> hosts: XS 6.2, 10.223.51.3, 10.223.51.4
>Reporter: angeline shen
>Assignee: Sheng Yang
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> This is a regression blocker from the previous build, 
> CloudPlatform-4.3-94-rhel6.4.tar.gz.
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> hosts: XS 6.2, 10.223.51.3, 10.223.51.4
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables the VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on each VPC.  
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5395) When backup snapshot fails because the backup.snapshot.wait time is exceeded, the VHD entries from the primary store are not getting cleared.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5395:


Attachment: 1205logs.rar

> When backup snapshot fails because the backup.snapshot.wait time is exceeded, 
> the VHD entries from the primary store are not getting cleared.
> --
>
> Key: CLOUDSTACK-5395
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5395
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: edison su
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar
>
>
> Steps to reproduce the problem:
> Deploy 5 VMs with 10 GB on each of the hosts, so we start with 10 VMs.
> We will be constantly writing to the ROOT volume.
> Change backup.snapshot.wait to 10 minutes and restart the management server.
> Start concurrent snapshots for the ROOT volumes of all the VMs.
> After 10 minutes, the snapshots fail. They are present in the database in 
> "CreatedOnPrimary" state.
> The VHD entries from the primary store fail to be cleaned up once the 
> backup snapshot job fails.
> Expected Behavior:
> We should be able to clean up the VHD entries from the primary store when 
> the backup snapshot job fails.
> select * from snapshot_store_ref;
> | 702 | 1 | 355 | 2013-12-06 01:25:43 | NULL | NULL | Primary | 0 | 0 | 0 | 2eedb23e-6c3f-4cae-832b-8ddb67c1fc60 | Ready | 2 | 0 | 2013-12-06 01:25:44 | 81 |
> | 703 | 1 | 356 | 2013-12-06 01:25:43 | NULL | NULL | Primary | 0 | 0 | 0 | 9d88bc01-9406-41ad-a134-e74dc1457954 | Ready | 2 | 0 | 2013-12-06 01:26:12 | 80 |
> | 704 | 1 | 357 | 2013-12-06 01:25:43 | NULL | NULL | Primary | 0 | 0 | 0 | 2667f2bc-6086-4ec3-a88d-20811eabde91 | Ready | 2 | 0 | 2013-12-06 01:26:08 | 79 |
> | 705 | 1 | 358 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 522b2296-6960-46f2-af7d-10ddfbede1da | Ready | 2 | 0 | 2013-12-06 01:26:45 | 78 |
> | 706 | 1 | 359 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 3b94fa9d-a5a5-4441-8f9f-275dcef90368 | Ready | 2 | 0 | 2013-12-06 01:26:04 | 77 |
> | 707 | 1 | 360 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 1ec1d5ef-177f-4da4-8464-f0c6d71a4e84 | Ready | 2 | 0 | 2013-12-06 01:25:59 | 76 |
> | 708 | 1 | 361 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 324e7552-b42a-4660-90d6-62015a7a478e | Ready | 2 | 0 | 2013-12-06 01:26:21 | 75 |
> | 709 | 1 | 362 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 65bd522c-c2c8-471a-be37-095558d058f2 | Ready | 2 | 0 | 2013-12-06 01:26:16 | 74 |
> | 710 | 1 | 363 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | d45ca6c7-7284-4150-907c-9499e9737c47 | Ready | 2 | 0 | 2013-12-06 01:25:46 | 73 |
> | 711 | 1 | 364 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 4422f362-0be5-4a10-b172-45678d56f807 | Ready | 2 | 0 | 2013-12-06 01:25:55 | 72 |
> | 712 | 1 | 365 | 2013-12-06 01:25:44 | NULL | NULL | Primary | 0 | 0 | 0 | 89ffd430-3c03-45d2-9c48-9384636b9cd8 | Ready | 2 | 0 | 2013-12-06 01:26:01 | 71 |
> | 714 | 1 | 366 | 2013-12-06 01:25:45 | NULL | NULL | Primary | 0 | 0 | 0 | fca5545c-9b83-4bc1-abd2-dd1bc82b23bd | Ready | 2 | 0 | 2013-12-06 01:25:
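The expected behaviour, clearing the primary-store entry when the backup job fails, amounts to wrapping the copy step so cleanup runs on failure. A hedged sketch with hypothetical callables (CloudStack's real snapshot flow is considerably more involved):

```python
def backup_with_cleanup(volume, snapshot_on_primary, copy_to_secondary,
                        delete_on_primary):
    """Take a snapshot on the primary store, try to back it up, and
    guarantee the transient primary entry is removed if the backup fails."""
    ref = snapshot_on_primary(volume)
    try:
        return copy_to_secondary(ref)
    except Exception:
        # Backup failed (e.g. backup.snapshot.wait exceeded): don't leave
        # the entry behind in "CreatedOnPrimary" state.
        delete_on_primary(ref)
        raise
```

On success the primary reference is returned to the caller, which can then decide (per hypervisor policy) whether to keep or coalesce it.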

[jira] [Updated] (CLOUDSTACK-5392) Multiple Secondary Store - There is no retry happening on snapshot failures when one of the secondary stores is not reachable.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5392:


Attachment: 1205logs.rar

> Multiple Secondary Store - There is no retry happening on snapshot failures 
> when one of the secondary stores is not reachable.
> --
>
> Key: CLOUDSTACK-5392
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5392
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar
>
>
> Multiple Secondary Store - There is no retry happening on snapshot failures 
> when one of the secondary stores is not reachable.
> Steps to reproduce the problem:
> Set up:
> Advanced zone set up with 2 Xenserver hosts.
> 2 secondary NFS stores - ss1 and ss2.
> Bring down ss1.
> Deployed 3 VMs.
> Create snapshots for the ROOT volumes of these 3 VMs.
> Out of the 3 snapshot requests, 2 were sent to ss1 and 1 to ss2.
> The 2 createSnapshot commands that were sent to ss1 failed during 
> "org.apache.cloudstack.storage.command.CopyCommand". 
> But there was no retry done on ss2.
> Expected Behavior:
> On failure to back up on one secondary store, we should attempt the other 
> secondary stores.
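The expected retry behaviour can be sketched generically as "try each store until one succeeds" (hypothetical helper, not CloudStack's actual store allocator):

```python
def backup_snapshot(snapshot, stores, copy_to):
    """Try each secondary store in turn; only fail after every store has
    been attempted, instead of giving up on the first unreachable one."""
    errors = []
    for store in stores:
        try:
            return copy_to(snapshot, store)
        except Exception as exc:  # e.g. a CopyCommand failure on this store
            errors.append((store, exc))
    raise RuntimeError("all secondary stores failed: %r" % errors)
```

Collecting the per-store errors keeps the final failure message diagnosable when every store really is down.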



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5385) Management server not able to start when there are ~15 snapshot policies.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5385:


Attachment: 1205logs.rar

> Management server not able to start when there are ~15 snapshot policies.
> --
>
> Key: CLOUDSTACK-5385
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5385
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar, test.rar
>
>
> Management server is not able to start when there are ~15 snapshot policies.
> The management server was up and running fine.
> I had snapshot policies configured for 15 ROOT volumes.
> I stopped and started the management server.
> The management server does not start up successfully.
> Following is what I see in the management server logs:
> It is stuck after this:
> 2013-12-04 20:35:24,132 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server 112516401760401 is being started
> 2013-12-04 20:35:24,138 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server (host id : 1) is being started at 10.223.49.5:9090
> 2013-12-04 20:35:24,152 INFO  [c.c.c.ClusterManagerImpl] (main:null) Cluster 
> manager was started successfully
> 2013-12-04 20:35:24,153 INFO  [c.c.s.s.SecondaryStorageManagerImpl] 
> (main:null) Start secondary storage vm manager
> 2013-12-04 20:35:24,159 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-0:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-1:null) Starting work
> 2013-12-04 20:35:24,165 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-4:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-3:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-2:null) Starting work
> 2013-12-04 20:35:24,236 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 1 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,297 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 2 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,314 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 3 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,334 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 4 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,354 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 5 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,379 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 6 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,434 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 7 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,454 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 8 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,472 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 9 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,493 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 10 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,510 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 11 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,526 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 13 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,543 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 14 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,565 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 15 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:37,364 INFO  [c.c.u.c.ComponentContext] (main:null) 
> Configuring 
> com.cloud.bridge.persist.dao.OfferingDaoImpl_EnhancerByCloudStack_e
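The scheduler logs above round the current time up to the next slot on a fixed interval (01:35:03 maps to a NextScheduledTime of 01:40:00 for every policy). A small sketch of that alignment, assuming a 5-minute interval (the actual SnapshotSchedulerImpl computes this per policy schedule):

```python
from datetime import datetime, timedelta

def next_scheduled_time(now, interval_minutes=5):
    """Round `now` up to the next interval boundary, matching the
    NextScheduledTime values in the log (01:35:03 -> 01:40:00)."""
    slots = now.minute // interval_minutes + 1
    base = now.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=slots * interval_minutes)
```

Note the hang reported here is not in this computation itself; the per-policy scheduling messages all complete, and startup stalls afterwards while configuring components.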

[jira] [Updated] (CLOUDSTACK-5385) Management server not able to start when there are ~15 snapshot policies.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5385:


Priority: Critical  (was: Blocker)

> Management server not able to start when there are ~15 snapshot policies.
> --
>
> Key: CLOUDSTACK-5385
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5385
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: 1205logs.rar, test.rar
>
>
> The management server is not able to start when there are ~15 snapshot policies.
> The management server was up and running fine.
> I had snapshot policies configured for 15 ROOT volumes.
> I stopped and started the management server.
> The management server does not start up successfully.
> Following is what I see in the management server logs:
> It is stuck after this:
> 2013-12-04 20:35:24,132 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server 112516401760401 is being started
> 2013-12-04 20:35:24,138 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server (host id : 1) is being started at 10.223.49.5:9090
> 2013-12-04 20:35:24,152 INFO  [c.c.c.ClusterManagerImpl] (main:null) Cluster 
> manager was started successfully
> 2013-12-04 20:35:24,153 INFO  [c.c.s.s.SecondaryStorageManagerImpl] 
> (main:null) Start secondary storage vm manager
> 2013-12-04 20:35:24,159 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-0:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-1:null) Starting work
> 2013-12-04 20:35:24,165 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-4:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-3:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-2:null) Starting work
> 2013-12-04 20:35:24,236 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 1 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,297 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 2 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,314 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 3 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,334 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 4 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,354 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 5 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,379 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 6 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,434 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 7 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,454 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 8 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,472 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 9 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,493 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 10 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,510 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 11 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,526 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 13 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,543 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 14 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,565 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 15 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:37,364 INFO  [c.c.u.c.ComponentContext] (main:null) 
> Configuring 
> com.cloud.bridge.persist.dao.OfferingDaoImpl_EnhancerByCl

[jira] [Updated] (CLOUDSTACK-5381) test_custom_hostname.TestInstanceNameFlagFalse test cases failed with "Vm display name should match the given name"

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-5381:


Issue Type: Test  (was: Bug)

> test_custom_hostname.TestInstanceNameFlagFalse test cases failed with "Vm 
> display name should match the given name"
> ---
>
> Key: CLOUDSTACK-5381
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5381
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.3.0
>Reporter: Ashutosk Kelkar
>Assignee: Ashutosk Kelkar
>  Labels: automation
> Fix For: 4.3.0
>
>
> Following test cases failed with the error:
> test_01_custom_hostname_instancename_false
> test_02_custom_hostname_instancename_false



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840880#comment-13840880
 ] 

ASF subversion and git services commented on CLOUDSTACK-5336:
-

Commit 425723e1646695ec08c6dba1a228b0b747901e9b in branch refs/heads/master 
from [~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=425723e ]

CLOUDSTACK-5336:[Automation] During regression automation management
server hang with "out of memory error".

> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create build from 4.3 branch and install from RPM
> Run BVT and Regression automation 
> Result 
> BVT completed without any issues; after starting regression the MS stopped 
> responding, and the "out of memory error" below was observed in the MS log:
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.<init>(Service.java:77)
> at com.vmware.vim25.VimService.<init>(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloud

[jira] [Commented] (CLOUDSTACK-4950) Latest VMware SDK client has a problem to support sessions towards multiple vCenter instance within the same JVM

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840861#comment-13840861
 ] 

ASF subversion and git services commented on CLOUDSTACK-4950:
-

Commit e33f8f2f44f7cbe81195a8e31f1e977c72ea4f7e in branch refs/heads/4.3 from 
[~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e33f8f2 ]

Revert "CLOUDSTACK-4950: fix the problem to support sessions to multiple 
vCenter instance"

This reverts commit ed0fbcc81c1928062054190ffcfab8bb59969cc2.


> Latest VMware SDK client has a problem to support sessions towards multiple 
> vCenter instance within the same JVM
> 
>
> Key: CLOUDSTACK-4950
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4950
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kelven Yang
>Assignee: Kelven Yang
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread Min Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840862#comment-13840862
 ] 

Min Chen commented on CLOUDSTACK-5336:
--

The problem is that each VmwareContext takes too much memory, caused by commit 
ed0fbcc81c1928062054190ffcfab8bb59969cc2 changing VmwareClient.vimService 
from a static variable to a local variable.
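The heap blow-up described above can be illustrated with a minimal sketch. This is hypothetical code, not CloudStack's actual classes: `ExpensiveService` stands in for `VimService`, whose construction parses the vSphere WSDL and costs significant heap per instance.

```java
public class VimServiceSketch {
    // Stand-in for an object that is expensive to build (in the real bug,
    // VimService parses the vSphere WSDL model on construction).
    static class ExpensiveService {
        final byte[] parsedModel = new byte[1024]; // placeholder for the WSDL model
    }

    // Original pattern: one shared instance for the whole JVM, built once.
    private static final ExpensiveService SHARED = new ExpensiveService();

    // Regressed pattern: every client (each VmwareContext) builds its own copy,
    // so N contexts cost N copies of the parsed model.
    private final ExpensiveService perInstance = new ExpensiveService();

    ExpensiveService shared() { return SHARED; }

    ExpensiveService own() { return perInstance; }

    public static void main(String[] args) {
        VimServiceSketch a = new VimServiceSketch();
        VimServiceSketch b = new VimServiceSketch();
        System.out.println(a.shared() == b.shared()); // true: single shared copy
        System.out.println(a.own() == b.own());       // false: one copy per context
    }
}
```

This is why the revert restores the static field: the service model is safely shareable, and duplicating it per context is what exhausted the heap under load.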

> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create build from 4.3 branch and install from RPM
> Run BVT and Regression automation 
> Result 
> BVT completed without any issues; after starting regression the MS stopped 
> responding, and the "out of memory error" below was observed in the MS log:
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.<init>(Service.java:77)
> at com.vmware.vim25.VimService.<init>(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> 2013-12-01 20:06:27,409 DEBUG [c.c.n.r.V

[jira] [Resolved] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen resolved CLOUDSTACK-5336.
--

Resolution: Fixed

> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create build from 4.3 branch and install from RPM
> Run BVT and Regression automation 
> Result 
> BVT completed without any issues; after starting regression the MS stopped 
> responding, and the "out of memory error" below was observed in the MS log:
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.<init>(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.<init>(Service.java:77)
> at com.vmware.vim25.VimService.<init>(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> 2013-12-01 20:06:27,409 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-32e68e7c) Found 1 routers to update status.
> 2013-12-01 20:06:27,467 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-32e68e7c) Foun

[jira] [Commented] (CLOUDSTACK-4950) Latest VMware SDK client has a problem to support sessions towards multiple vCenter instance within the same JVM

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840851#comment-13840851
 ] 

ASF subversion and git services commented on CLOUDSTACK-4950:
-

Commit b7da94f764472eda3f9326288663c292bc17c505 in branch refs/heads/4.2 from 
[~minchen07]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=b7da94f ]

Revert "CLOUDSTACK-4950: fix the problem to support sessions to multiple 
vCenter instance"

This reverts commit 7c3a7fe312bb649b6e81078e4c26a93f534ce778.


> Latest VMware SDK client has a problem to support sessions towards multiple 
> vCenter instance within the same JVM
> 
>
> Key: CLOUDSTACK-4950
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4950
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kelven Yang
>Assignee: Kelven Yang
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5395) When backup snapshot fails because the backup.snapshot.wait time is exceeded, the VHD entries from the primary store are not getting cleared.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5395:


  Component/s: Management Server
 Priority: Critical  (was: Major)
  Environment: Build from 4.3
Affects Version/s: 4.3.0
Fix Version/s: 4.3.0
 Assignee: edison su
  Summary: When backup snapshot fails because the 
backup.snapshot.wait time is exceeded, the VHD entries from the primary store 
are not getting cleared.  (was: When )

> When backup snapshot fails because the backup.snapshot.wait time is exceeded, 
> the VHD entries from the primary store are not getting cleared.
> --
>
> Key: CLOUDSTACK-5395
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5395
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: edison su
>Priority: Critical
> Fix For: 4.3.0
>
>
> Steps to reproduce the problem:
> Deploy 5 VMs on each of the hosts with 10 GB each, so we start with 10 VMs.
> We will be constantly writing to the ROOT volume.
> Change backup.snapshot.wait to 10 minutes and restart the management server.
> Start concurrent snapshots for the ROOT volumes of all the VMs.
> After 10 minutes, the snapshots fail. They are present in the database in 
> "CreatedOnPrimary" state.
> VHD entries on the primary store fail to be cleaned up once the 
> backup snapshot job fails.
> Expected Behavior:
> We should be able to clean up the VHD entries from the primary store when 
> the backup snapshot job fails.
> select * from snapshot_store_ref;
>  702 |1 | 355 | 2013-12-06 01:25:43 | NULL | NULL   | 
> Primary|0 | 0 |  0 | 
> 2eedb23e-6c3f-4cae-832b-8ddb67c1fc60| Ready |
> 2 |   0 | 2013-12-06 01:25:44 |81 |
> | 703 |1 | 356 | 2013-12-06 01:25:43 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 9d88bc01-9406-41ad-a134-e74dc1457954| Ready |
> 2 |   0 | 2013-12-06 01:26:12 |80 |
> | 704 |1 | 357 | 2013-12-06 01:25:43 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 2667f2bc-6086-4ec3-a88d-20811eabde91| Ready |
> 2 |   0 | 2013-12-06 01:26:08 |79 |
> | 705 |1 | 358 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 522b2296-6960-46f2-af7d-10ddfbede1da| Ready |
> 2 |   0 | 2013-12-06 01:26:45 |78 |
> | 706 |1 | 359 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 3b94fa9d-a5a5-4441-8f9f-275dcef90368| Ready |
> 2 |   0 | 2013-12-06 01:26:04 |77 |
> | 707 |1 | 360 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 1ec1d5ef-177f-4da4-8464-f0c6d71a4e84| Ready |
> 2 |   0 | 2013-12-06 01:25:59 |76 |
> | 708 |1 | 361 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 324e7552-b42a-4660-90d6-62015a7a478e| Ready |
> 2 |   0 | 2013-12-06 01:26:21 |75 |
> | 709 |1 | 362 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 65bd522c-c2c8-471a-be37-095558d058f2| Ready |
> 2 |   0 | 2013-12-06 01:26:16 |74 |
> | 710 |1 | 363 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> d45ca6c7-7284-4150-907c-9499e9737c47| Ready |
> 2 |   0 | 2013-12-06 01:25:46 |73 |
> | 711 |1 | 364 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 4422f362-0be5-4a10-b172-45678d56f807| Ready |
> 2 |   0 | 2013-12-06 01:25:55 |72 |
> | 712 |1 | 365 | 2013-12-06 01:25:44 | NULL | NULL   
> | Primary|0 | 0 |  0 | 
> 89ffd430-3c03-45d2-9c48-9384636b9cd8| Re

[jira] [Created] (CLOUDSTACK-5395) When

2013-12-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-5395:
---

 Summary: When 
 Key: CLOUDSTACK-5395
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5395
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Sangeetha Hariharan


Steps to reproduce the problem:

Deploy 5 VMs on each of the hosts with 10 GB each, so we start with 10 VMs.
We will be constantly writing to the ROOT volume.

Change backup.snapshot.wait to 10 minutes and restart the management server.

Start concurrent snapshots for the ROOT volumes of all the VMs.

After 10 minutes, the snapshots fail. They are present in the database in 
"CreatedOnPrimary" state.

VHD entries on the primary store fail to be cleaned up once the 
backup snapshot job fails.

Expected Behavior:
We should be able to clean up the VHD entries from the primary store when the 
backup snapshot job fails.

select * from snapshot_store_ref;

 702 |1 | 355 | 2013-12-06 01:25:43 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
2eedb23e-6c3f-4cae-832b-8ddb67c1fc60| Ready |2 
|   0 | 2013-12-06 01:25:44 |81 |
| 703 |1 | 356 | 2013-12-06 01:25:43 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
9d88bc01-9406-41ad-a134-e74dc1457954| Ready |2 
|   0 | 2013-12-06 01:26:12 |80 |
| 704 |1 | 357 | 2013-12-06 01:25:43 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
2667f2bc-6086-4ec3-a88d-20811eabde91| Ready |2 
|   0 | 2013-12-06 01:26:08 |79 |
| 705 |1 | 358 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
522b2296-6960-46f2-af7d-10ddfbede1da| Ready |2 
|   0 | 2013-12-06 01:26:45 |78 |
| 706 |1 | 359 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
3b94fa9d-a5a5-4441-8f9f-275dcef90368| Ready |2 
|   0 | 2013-12-06 01:26:04 |77 |
| 707 |1 | 360 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
1ec1d5ef-177f-4da4-8464-f0c6d71a4e84| Ready |2 
|   0 | 2013-12-06 01:25:59 |76 |
| 708 |1 | 361 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
324e7552-b42a-4660-90d6-62015a7a478e| Ready |2 
|   0 | 2013-12-06 01:26:21 |75 |
| 709 |1 | 362 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
65bd522c-c2c8-471a-be37-095558d058f2| Ready |2 
|   0 | 2013-12-06 01:26:16 |74 |
| 710 |1 | 363 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
d45ca6c7-7284-4150-907c-9499e9737c47| Ready |2 
|   0 | 2013-12-06 01:25:46 |73 |
| 711 |1 | 364 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
4422f362-0be5-4a10-b172-45678d56f807| Ready |2 
|   0 | 2013-12-06 01:25:55 |72 |
| 712 |1 | 365 | 2013-12-06 01:25:44 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
89ffd430-3c03-45d2-9c48-9384636b9cd8| Ready |2 
|   0 | 2013-12-06 01:26:01 |71 |
| 714 |1 | 366 | 2013-12-06 01:25:45 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
fca5545c-9b83-4bc1-abd2-dd1bc82b23bd| Ready |2 
|   0 | 2013-12-06 01:25:53 |70 |
| 715 |1 | 367 | 2013-12-06 01:25:45 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
033d8f55-8895-40b8-a120-11b28fa1f96e| Ready |2 
|   0 | 2013-12-06 01:25:50 |69 |
| 716 |1 | 368 | 2013-12-06 01:25:45 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
e4d02558-28c2-474e-a379-970b22f33f55| Ready |2 
|   0 | 2013-12-06 01:26:23 |68 |
| 717 |1 | 369 | 2013-12-06 01:25:45 | NULL | NULL   | 
Primary|0 | 0 |  0 | 
6f7c1ca0-9877-44af-9f77-4db7b8efc934| Ready |2 
|   0 | 2013-12-0
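The expected cleanup behavior can be sketched as follows. This is a hypothetical simplification, not CloudStack's actual code: `primaryStoreRefs` stands in for the `snapshot_store_ref` rows with role "Primary", and the boolean flag stands in for whether the backup finished before the backup.snapshot.wait timeout.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotBackupSketch {
    // Stand-in for snapshot_store_ref rows with role "Primary".
    static final Map<Long, String> primaryStoreRefs = new HashMap<>();

    static boolean backupSnapshot(long snapshotId, boolean finishedWithinWait) {
        // An entry is created on the primary store when the backup starts.
        primaryStoreRefs.put(snapshotId, "Ready");
        if (!finishedWithinWait) {
            // Expected behavior per the report: on a backup.snapshot.wait
            // timeout, remove the primary-store entry instead of leaving it
            // behind in Ready state while the snapshot sits in
            // "CreatedOnPrimary".
            primaryStoreRefs.remove(snapshotId);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        backupSnapshot(355L, true);   // succeeds: primary-store ref is kept
        backupSnapshot(356L, false);  // times out: ref is cleaned up
        System.out.println(primaryStoreRefs.size()); // prints 1
    }
}
```

The bug is that the timeout path above is effectively missing, so every timed-out backup leaks one primary-store entry, as the `select * from snapshot_store_ref` output shows.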

[jira] [Updated] (CLOUDSTACK-5278) Egress Firewall rules clarifications

2013-12-05 Thread Jayapal Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayapal Reddy updated CLOUDSTACK-5278:
--

Attachment: diff.txt

Hi Stevens,

Please find attached a diff that includes the changes suggested in the earlier 
comments. It is attached for your reference; you can use it as a guide when 
making the corresponding changes in the Palo Alto resource layer.

Testing of this patch is in progress.

> Egress Firewall rules clarifications
> 
>
> Key: CLOUDSTACK-5278
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5278
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Will Stevens
>Assignee: Jayapal Reddy
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: diff.txt
>
>
> These issues may also exist in the 4.2 branch, but I am currently 
> testing/working on the 4.3 branch.
> I believe these bugs were introduced with the change to the Network Service 
> Offering to add the 'Default egress policy' dropdown.
> https://issues.apache.org/jira/browse/CLOUDSTACK-1578
> I am trying to resolve the bugs this change introduced in the Palo Alto 
> plugin.
> There are two types of Egress rules (from what I can tell).
> - FirewallRule.FirewallRuleType.System : this appears to be set up by the 
> system on network creation to correspond to the global network default 
> allow/deny egress rule.
> - FirewallRule.FirewallRuleType.User : any rule that a user creates through 
> the UI will get this type.
> There are bugs associated with both of the options in the dropdown (allow and 
> deny).
> Case: 'deny'
> - When the network is set up, it does not try to create the global deny rule 
> for the network, but it appears to register that it exists.  Instead, when 
> the first egress rule is created by a user, the system sees both the 'system' 
> and 'user' rules, so it will create both rules then.
> Case: both 'allow' and 'deny'
> - The cleanup of the network-global 'system' egress rules is never done.  
> So when a network is deleted, it leaves an orphaned egress rule 
> associated with the previous network's CIDR.  This is bound to cause many 
> issues.
> - Even worse, it appears that the ID for the network global 'system' egress 
> rule is hardcoded to '0'.  Every time I try to spin up a new network it will 
> attempt to create a rule with a '0' ID, but since one already exists with 
> that ID, there is a config collision.  In my case (Palo Alto), the second 
> rule with the same ID gets ignored because it checks to see if the rule 
> exists and it gets a 'yes' back because the previous network has an egress 
> rule with that ID already.
> Let me know if you have additional questions...
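The hardcoded-'0' collision described above can be avoided by namespacing the device-side rule identifier by network. A minimal sketch, assuming hypothetical names (this is not CloudStack's or the Palo Alto plugin's actual code):

```python
def egress_rule_key(network_id, rule_id):
    """Build a device-side identifier that is unique per network.

    A rule ID hardcoded to 0 collides across networks; prefixing the
    key with the network ID keeps each network's 'system' rule distinct.
    """
    return "egress-net{}-rule{}".format(network_id, rule_id)

# Two networks whose global 'system' rule both carry ID 0 no longer collide:
print(egress_rule_key(204, 0))  # egress-net204-rule0
print(egress_rule_key(205, 0))  # egress-net205-rule0
```

With such a key, the existence check on the device distinguishes the previous network's rule from the new one, so the second rule is no longer silently ignored.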



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CLOUDSTACK-5336) [Automation] During regression automation management server hang with "out of memory error"

2013-12-05 Thread Min Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Chen reassigned CLOUDSTACK-5336:


Assignee: Min Chen  (was: Likitha Shetty)

> [Automation] During regression automation management server hang with "out of 
> memory error" 
> 
>
> Key: CLOUDSTACK-5336
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5336
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Vmware
> Branch : 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: CLOUDSTACK-5336.rar, dump.rar
>
>
> Steps to reproduce 
> Create a build from the 4.3 branch and install it from RPM.
> Run BVT and regression automation.
> Result 
> BVT completes without any issues; after regression starts, the MS stops 
> responding, and the "out of memory" error below is observed in the MS log:
> 2013-12-01 20:06:27,300 ERROR [c.c.h.v.r.VmwareResource] 
> (DirectAgent-398:ctx-55572862 10.223.250.131) Unable to execute NetworkUsage 
> command on DomR (10.223.250.17
> 4), domR may not be ready yet. failure due to Exception: 
> java.lang.OutOfMemoryError
> Message: Java heap space
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2219)
> at java.util.ArrayList.grow(ArrayList.java:242)
> at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
> at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
> at java.util.ArrayList.add(ArrayList.java:440)
> at 
> com.sun.xml.internal.ws.model.wsdl.WSDLOperationImpl.addFault(WSDLOperationImpl.java:126)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperationFault(RuntimeWSDLParser.java:740)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortTypeOperation(RuntimeWSDLParser.java:727)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parsePortType(RuntimeWSDLParser.java:697)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:340)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:301)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseImport(RuntimeWSDLParser.java:677)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parseWSDL(RuntimeWSDLParser.java:336)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:157)
> at 
> com.sun.xml.internal.ws.wsdl.parser.RuntimeWSDLParser.parse(RuntimeWSDLParser.java:120)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.parseWSDL(WSServiceDelegate.java:257)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:220)
> at 
> com.sun.xml.internal.ws.client.WSServiceDelegate.(WSServiceDelegate.java:168)
> at 
> com.sun.xml.internal.ws.spi.ProviderImpl.createServiceDelegate(ProviderImpl.java:96)
> at javax.xml.ws.Service.(Service.java:77)
> at com.vmware.vim25.VimService.(VimService.java:46)
> at 
> com.cloud.hypervisor.vmware.util.VmwareClient.connect(VmwareClient.java:129)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.create(VmwareContextFactory.java:67)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareContextFactory.getContext(VmwareContextFactory.java:85)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6879)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getServiceContext(VmwareResource.java:6861)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:6578)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.getNetworkStats(VmwareResource.java:6596)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:672)
> at 
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:505)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> 2013-12-01 20:06:27,409 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatusMonitor-1:ctx-32e68e7c) Found 1 routers to update status.
> 2013-12-01 20:06:27,467 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
> (RouterStatus
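The stack trace above shows a fresh VimService being constructed (including a full WSDL parse) while handling a command, which is expensive in heap. The usual remedy is to cache and reuse the costly client object per endpoint. A minimal sketch of that pattern, using hypothetical stand-ins (`get_service_context` and its dict payload are illustrative, not the real VmwareContextFactory API):

```python
import functools

@functools.lru_cache(maxsize=None)
def get_service_context(host, username):
    # Stand-in for the expensive VimService/WSDL construction seen in the
    # stack trace; with the cache, it runs at most once per (host, username)
    # instead of once per command.
    return {"host": host, "user": username}

a = get_service_context("10.223.250.131", "root")
b = get_service_context("10.223.250.131", "root")
assert a is b  # the cached context is reused, not rebuilt per command
```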

[jira] [Resolved] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR Fail to add VPN connections

2013-12-05 Thread Sheng Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sheng Yang resolved CLOUDSTACK-5394.


Resolution: Invalid

In the provided setup, there is no customer gateway.

After I added one customer gateway, every VPC was able to see it.

Marking as INVALID.

> site-to-site VPN VR-to-VR Fail to add VPN connections
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Assignee: Sheng Yang
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> This is a regression blocker from the previous build 
> CloudPlatform-4.3-94-rhel6.4.tar.gz.
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }





[jira] [Commented] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR Fail to add VPN connections

2013-12-05 Thread Sheng Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840807#comment-13840807
 ] 

Sheng Yang commented on CLOUDSTACK-5394:


I will take a look, but I suspect it's caused by someone else's commit.

> site-to-site VPN VR-to-VR Fail to add VPN connections
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Assignee: Sheng Yang
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> This is a regression blocker from the previous build 
> CloudPlatform-4.3-94-rhel6.4.tar.gz.
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }





[jira] [Assigned] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR Fail to add VPN connections

2013-12-05 Thread Sheng Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sheng Yang reassigned CLOUDSTACK-5394:
--

Assignee: Sheng Yang

> site-to-site VPN VR-to-VR Fail to add VPN connections
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Assignee: Sheng Yang
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> This is a regression blocker from the previous build 
> CloudPlatform-4.3-94-rhel6.4.tar.gz.
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }





[jira] [Commented] (CLOUDSTACK-5112) [Baremetal]Make IPMI retry times configurable

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840800#comment-13840800
 ] 

ASF subversion and git services commented on CLOUDSTACK-5112:
-

Commit d3bff27ef9c7053babb9db76b03c1506e922dfae in branch refs/heads/4.3 from 
[~frank.zhang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=d3bff27 ]

CLOUDSTACK-5112
[Baremetal]Make IPMI retry times configurable

Conflicts:

server/src/com/cloud/configuration/Config.java
setup/db/db/schema-420to421.sql


> [Baremetal]Make IPMI retry times configurable
> -
>
> Key: CLOUDSTACK-5112
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5112
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Baremetal
>Affects Versions: 4.2.1
>Reporter: frank zhang
>Assignee: frank zhang
> Fix For: 4.2.1
>
>






[jira] [Commented] (CLOUDSTACK-4674) [baremetal] /usr/share/cloudstack-common/scripts/util/ipmi.py script need to recognize various ipmi version and BMC type of server

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840796#comment-13840796
 ] 

ASF subversion and git services commented on CLOUDSTACK-4674:
-

Commit fd5b9a278017c0d1dd9aeecbfa8a8e2ef76c5273 in branch refs/heads/4.3 from 
[~frank.zhang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fd5b9a2 ]

CLOUDSTACK-4674
[baremetal] /usr/share/cloudstack-common/scripts/util/ipmi.py script
need to recognize various ipmi version and BMC type of server


> [baremetal] /usr/share/cloudstack-common/scripts/util/ipmi.py script need to 
> recognize various ipmi version and BMC type of server
> --
>
> Key: CLOUDSTACK-4674
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4674
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: MS   4.2  campo
> baremetal 
>Reporter: angeline shen
>Assignee: frank zhang
>Priority: Critical
> Fix For: 4.2.1
>
>
> On MS:  ./usr/share/cloudstack-common/scripts/util/ipmi.py :
>  o = ipmitool("-H", hostname, "-U", usrname, "-P", password, "chassis", 
> "bootdev", dev)
> need to include  -l   lanplus option :
> o = ipmitool("-H", hostname, "-U", usrname, "-P", password, "-l", 
> "lanplus",  "chassis", "bootdev", dev)
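For reference, a small sketch of the intended invocation. One caveat: stock ipmitool selects its interface with `-I` (capital i), as in `-I lanplus`; the lowercase `-l` in the report text is likely meant to be `-I`. The helper name below is hypothetical, not part of ipmi.py:

```python
def ipmi_bootdev_argv(hostname, usrname, password, dev):
    # Build the argument vector for setting the chassis boot device over
    # the IPMI v2.0 "lanplus" interface. Note the capital "-I" flag.
    return ["ipmitool", "-I", "lanplus", "-H", hostname,
            "-U", usrname, "-P", password, "chassis", "bootdev", dev]

argv = ipmi_bootdev_argv("10.0.0.5", "admin", "secret", "pxe")
```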





[jira] [Commented] (CLOUDSTACK-4850) [UCS] using template instead of cloning profile

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840795#comment-13840795
 ] 

ASF subversion and git services commented on CLOUDSTACK-4850:
-

Commit ef6038f1b3fc8bcf64c54d38c0b53f0cd47d2ded in branch refs/heads/4.3 from 
[~frank.zhang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ef6038f ]

commit 8edaf63c4e4a054b17a2dfe4233d103fb2ee9e6a
Author: Frank.Zhang 
Date:   Thu Oct 10 14:45:03 2013 -0700

CLOUDSTACK-4850
[UCS] using template instead of cloning profile


> [UCS] using template instead of cloning profile
> ---
>
> Key: CLOUDSTACK-4850
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4850
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UCS
>Affects Versions: 4.2.1
>Reporter: frank zhang
>Assignee: Jessica Wang
> Fix For: 4.3.0
>
>






[jira] [Updated] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR Fail to add VPN connections

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-5394:
--

Description: 
This is a regression blocker from the previous build 
CloudPlatform-4.3-94-rhel6.4.tar.gz.

MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
host: XS 6.2 (10.223.51.3, 10.223.51.4)

1. Bring up CS in an advanced zone.
2. admin creates VPC A and d1user creates VPC B.
3. admin/user enables a VPN gateway on VPC A and VPC B.
4. admin/user creates a VPN customer gateway for VPC A and VPC B.
5. admin/user creates a VPN connection on the VPC.
Result:
UI FAILS to display the list of VPN customer gateways created in step 4.
 
client call:
http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474

response:
{ "listvpncustomergatewaysresponse" : { } }




  was:


MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
host: XS 6.2 (10.223.51.3, 10.223.51.4)

1. Bring up CS in an advanced zone.
2. admin creates VPC A and d1user creates VPC B.
3. admin/user enables a VPN gateway on VPC A and VPC B.
4. admin/user creates a VPN customer gateway for VPC A and VPC B.
5. admin/user creates a VPN connection on the VPC.
Result:
UI FAILS to display the list of VPN customer gateways created in step 4.
 
client call:
http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474

response:
{ "listvpncustomergatewaysresponse" : { } }




Summary: site-to-site VPN VR-to-VR Fail to add VPN connections  (was: 
site-to-site VPN VR-to-VR)

> site-to-site VPN VR-to-VR Fail to add VPN connections
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> This is a regression blocker from the previous build 
> CloudPlatform-4.3-94-rhel6.4.tar.gz.
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }





[jira] [Reopened] (CLOUDSTACK-4490) [Automation] ssh.close not happening leading to potential failures in Netscaler test scripts

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan reopened CLOUDSTACK-4490:
-


Reopening; how can we resolve this ticket with resolution "Later"?

> [Automation] ssh.close not happening leading to potential failures in 
> Netscaler test scripts
> 
>
> Key: CLOUDSTACK-4490
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4490
> Project: CloudStack
>  Issue Type: Test
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.2.0
> Environment: 4.2, Netscaler as public LB provider
>Reporter: Sowmya Krishnan
>Assignee: Girish Shilamkar
> Fix For: 4.3.0
>
>
> ssh connections opened through remoteSSHClient.py are not being closed, 
> causing potential failures in Netscaler scripts. This is generally not a 
> problem with VMs, since we clean up the account after the script ends, but 
> with ssh to external devices like Netscaler we end up with too many open 
> sessions, causing NS to refuse further connections.
> We end up with the following error in NS:
> Error: Connection limit to CFE exceeded 
> (This happens to be a known issue with certain versions of NS.)
> In any case, I think we should either fix remoteSSHClient or add an explicit 
> close in the test scripts.





[jira] [Assigned] (CLOUDSTACK-5387) RemoteVPNonVPC : Unable to remotely access a VM in a VPC after enabling S2S VPN on the VPC VR

2013-12-05 Thread Sheng Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sheng Yang reassigned CLOUDSTACK-5387:
--

Assignee: Sheng Yang

> RemoteVPNonVPC :  Unable to remotely access a VM in a VPC after enabling S2S 
> VPN on the VPC VR
> --
>
> Key: CLOUDSTACK-5387
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5387
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
>Reporter: Chandan Purushothama
>Assignee: Sheng Yang
>Priority: Critical
> Fix For: 4.3.0
>
>
> 
> Steps to Reproduce:
> 
> 1. Deploy a VPC with a network tier in it. Deploy a VM in the network tier. 
> Locate the router/public IP for the VPC and enable remote access VPN on it.
> 2. Note the preshared key.
> 3. Create a VPN user using the addVpnUser API (with a valid username and 
> password).
> 4. From a standalone Linux machine, configure a VPN client to point to the 
> public IP address from step 1.
> 5. Add an ALLOW ACL rule on ALL protocols to the network tier's ACL list such 
> that it blocks ssh access to the client's network.
> 6. ssh (using putty or any other terminal client) to the VM in the network 
> tier provisioned earlier.
> 7. Create an S2S VPN connection on this VPC where the VPC VR is the passive 
> end of the connection.
> 8. Establish the S2S VPN connection from another VPC to this VPC.
> 9. Observe that remote access to the VM no longer works.





[jira] [Updated] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-5394:
--

Description: 


MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
host: XS 6.2 (10.223.51.3, 10.223.51.4)

1. Bring up CS in an advanced zone.
2. admin creates VPC A and d1user creates VPC B.
3. admin/user enables a VPN gateway on VPC A and VPC B.
4. admin/user creates a VPN customer gateway for VPC A and VPC B.
5. admin/user creates a VPN connection on the VPC.
Result:
UI FAILS to display the list of VPN customer gateways created in step 4.
 
client call:
http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474

response:
{ "listvpncustomergatewaysresponse" : { } }




  was:


MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
host: XS 6.2 (10.223.51.3, 10.223.51.4)

1. Bring up CS in an advanced zone.
2. admin creates VPC A and d1user creates VPC B.
3. admin/user enables a VPN gateway on VPC A and VPC B.
4. admin/user creates a VPN customer gateway for VPC A and VPC B.
5. admin/user creates a VPN connection on the VPC.
Result:
UI FAILS to display the list of VPN customer gateways created in step 4.
 
client call:
http://10.223.130.107:8080/client/api?command=listVpnConnections&listAll=true&page=1&pagesize=20&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&vpcid=a2ca6837-4988-4e84-b61a-2b902100e7f9&_=1386287945687

response:
{ "listvpnconnectionsresponse" : { } }




> site-to-site VPN VR-to-VR
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnCustomerGateways&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&listAll=true&_=1386288970474
> response:
> { "listvpncustomergatewaysresponse" : { } }





[jira] [Resolved] (CLOUDSTACK-5279) UI - Not able to list detail view of volumes.

2013-12-05 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang resolved CLOUDSTACK-5279.
--

Resolution: Fixed

It's a browser cache issue; just confirmed with Sangeetha.
After clearing the browser cache, the bug was gone from Sangeetha's environment.

> UI - Not able to list detail view of volumes.
> -
>
> Key: CLOUDSTACK-5279
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5279
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: test.rar
>
>
> UI - Not able to list the detail view of volumes.
> From Storage -> list Volumes, select any volume to list its detail view.
> The UI keeps spinning forever.
> The following error is seen:
> TypeError: args.context.volumes is undefined
> url: createURL("listVolumes&id=" + args.context.volumes[0].id),





[jira] [Updated] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-5394:
--

Attachment: management-server.log.gz

> site-to-site VPN VR-to-VR
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: 
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
>Reporter: angeline shen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: management-server.log.gz, v10.htm, v11.htm, v12.htm, 
> v13.htm, v14.htm
>
>
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> host: XS 6.2 (10.223.51.3, 10.223.51.4)
> 1. Bring up CS in an advanced zone.
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/user enables a VPN gateway on VPC A and VPC B.
> 4. admin/user creates a VPN customer gateway for VPC A and VPC B.
> 5. admin/user creates a VPN connection on the VPC.
> Result:
> UI FAILS to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnConnections&listAll=true&page=1&pagesize=20&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&vpcid=a2ca6837-4988-4e84-b61a-2b902100e7f9&_=1386287945687
> response:
> { "listvpnconnectionsresponse" : { } }





[jira] [Updated] (CLOUDSTACK-5354) CLONE - UI - normal users are not allowed to edit their own iso

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5354:


Priority: Critical  (was: Major)

> CLONE - UI - normal users are not allowed to edit their own iso
> ---
>
> Key: CLOUDSTACK-5354
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5354
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.3.0
>
>
> Repro steps:
> 1. Create a domain.
> 2. Create an account under that domain.
> 3. Create an ISO as that account under the non-root domain.
> 4. Edit the ISO.
> Bug:
> Gets the message: 
> Only ROOT admins are allowed to modify this attribute.
> API:
> http://10.147.38.141:8080/client/api?command=updateIsoPermissions&response=json&sessionkey=8rczMjm4sfljFOEi6dL2xT631sc%3D&id=2b8c87a0-4325-418d-80af-ce6f691edcd7&zoneid=bfdf7ac5-16c3-491e-aabd-f7ad696612b8&ispublic=false&isfeatured=false&isextractable=false&_=1372941865923
> response:
> { "updateisopermissionsresponse" : 
> {"uuidList":[],"errorcode":431,"cserrorcode":4350,"errortext":"Only ROOT 
> admins are allowed to modify this attribute."} }
> This may be because, in the case of Edit ISO, we show the extractable and 
> featured fields as editable to a normal user, which a normal user is not 
> allowed to modify, and the API passes these as parameters.
> In the case of templates these fields are shown as non-editable, so the API 
> call does not contain the isfeatured and isextractable fields.
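The template behavior suggests the fix: when building the updateIsoPermissions request for a normal user, omit the ROOT-admin-only attributes rather than sending them back with their current values. A sketch under that assumption; the field names match the API parameters quoted above, but the helper itself is hypothetical:

```python
# Attributes that only ROOT admins may modify via updateIsoPermissions.
ADMIN_ONLY_FIELDS = {"isfeatured", "isextractable"}

def update_iso_params(fields, is_root_admin):
    # Normal users keep only the fields they are allowed to change,
    # avoiding the "Only ROOT admins are allowed to modify this
    # attribute" error shown above.
    if is_root_admin:
        return dict(fields)
    return {k: v for k, v in fields.items() if k not in ADMIN_ONLY_FIELDS}

params = update_iso_params(
    {"ispublic": "false", "isfeatured": "false", "isextractable": "false"},
    is_root_admin=False)
# Only "ispublic" survives for a normal user.
```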





[jira] [Updated] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-5394:
--

Attachment: v14.htm
v13.htm
v12.htm
v11.htm
v10.htm

> site-to-site VPN VR-to-VR
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS   10.223.130.107build   
> CloudPlatform-4.3-97-rhel6.4.tar.gz
> host   XS 6.210.223.51.310.223.51.4
>Reporter: angeline shen
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: v10.htm, v11.htm, v12.htm, v13.htm, v14.htm
>
>
> MS   10.223.130.107build   CloudPlatform-4.3-97-rhel6.4.tar.gz
> host   XS 6.210.223.51.310.223.51.4
> 1.Bring up CS in advanced zone
> 2. admin  creates VPC A and d1user creates VPC B.
> 3. admin/User enables VPN gateway on VPC A, and VPC B.
> 4. admin/User creates VPN customer gateway for VPC A and VPC B.
> 5. admin/User create VPN connection on VPC.  
> Result:
> UI FAIL to display list of VPN customer gateway created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnConnections&listAll=true&page=1&pagesize=20&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&vpcid=a2ca6837-4988-4e84-b61a-2b902100e7f9&_=1386287945687
> response:
> { "listvpnconnectionsresponse" : { } }





[jira] [Updated] (CLOUDSTACK-3364) normal users are not allowed to edit their own iso

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-3364:


Priority: Critical  (was: Major)

> normal users are not allowed to edit their own iso
> --
>
> Key: CLOUDSTACK-3364
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3364
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: API
>Affects Versions: 4.2.0
>Reporter: shweta agarwal
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Repro steps:
> 1. Create a domain.
> 2. Create an account under that domain.
> 3. Create an ISO as that account under the non-root domain.
> 4. Edit the ISO.
> Bug:
> Gets the message: 
> Only ROOT admins are allowed to modify this attribute.
> API:
> http://10.147.38.141:8080/client/api?command=updateIsoPermissions&response=json&sessionkey=8rczMjm4sfljFOEi6dL2xT631sc%3D&id=2b8c87a0-4325-418d-80af-ce6f691edcd7&zoneid=bfdf7ac5-16c3-491e-aabd-f7ad696612b8&ispublic=false&isfeatured=false&isextractable=false&_=1372941865923
> response:
> { "updateisopermissionsresponse" : 
> {"uuidList":[],"errorcode":431,"cserrorcode":4350,"errortext":"Only ROOT 
> admins are allowed to modify this attribute."} }
> This may be because in case of edit ISO we show  extractable and featured 
> field as editable to normal user , which normal user is not allowed to do  
> and api passes these as parameters
> In case of template these fields are shown as non editable hence API passed 
> does not contain isfeatured and isextractable fields
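The client-side fix suggested above can be sketched as follows — a minimal illustration (function and variable names are hypothetical, not the actual CloudStack UI code) of building the updateIsoPermissions parameter map while dropping the ROOT-only flags for a normal user:

```python
# Hypothetical sketch: filter out attributes only ROOT admins may modify
# (isfeatured, isextractable) before issuing updateIsoPermissions.
ROOT_ONLY_PARAMS = {"isfeatured", "isextractable"}

def build_update_iso_params(iso_id, is_root_admin, **flags):
    """Return API parameters for updateIsoPermissions, omitting flags
    that the caller is not permitted to modify."""
    params = {"command": "updateIsoPermissions", "id": iso_id, "response": "json"}
    for name, value in flags.items():
        if name in ROOT_ONLY_PARAMS and not is_root_admin:
            continue  # a normal user may not send these; the API rejects them
        params[name] = value
    return params

# A normal user editing their own ISO: restricted flags are silently dropped.
user_params = build_update_iso_params(
    "2b8c87a0-4325-418d-80af-ce6f691edcd7", is_root_admin=False,
    ispublic="false", isfeatured="false", isextractable="false")
print(sorted(user_params))
```

With this filtering the request would match what the template edit form already sends, avoiding the 431 error.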





[jira] [Reopened] (CLOUDSTACK-5268) [Automation] There is no option to create snapshot from volume of running vm

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan reopened CLOUDSTACK-5268:
-


kvm.snapshot.enabled=true is already set; I am still seeing this issue.

> [Automation] There is no option to create snapshot from volume of running vm 
> -
>
> Key: CLOUDSTACK-5268
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5268
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot
>Affects Versions: 4.3.0
> Environment: KVM
> Branch 4.3
>Reporter: Rayees Namathponnan
>Assignee: Min Chen
>Priority: Blocker
> Fix For: 4.3.0
>
>
> Steps to reproduce 
> Step 1 : Deploy VM
> Step 2 : Once VM is up, select the root volume 
> Step 3 : Create snapshot
> Actual Result 
> There is no option to create a snapshot from the volume; you need to stop the 
> VM first to create a snapshot.





[jira] [Updated] (CLOUDSTACK-5394) site-to-site VPN VR-to-VR

2013-12-05 Thread angeline shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angeline shen updated CLOUDSTACK-5394:
--

Summary: site-to-site VPN VR-to-VR  (was: site-to-site VPNJ VR-to-VR)

> site-to-site VPN VR-to-VR
> -
>
> Key: CLOUDSTACK-5394
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> hosts: XS 6.2, 10.223.51.3, 10.223.51.4
>Reporter: angeline shen
>Priority: Blocker
> Fix For: 4.3.0
>
>
> MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
> hosts: XS 6.2, 10.223.51.3, 10.223.51.4
> 1. Bring up CS in an advanced zone
> 2. admin creates VPC A and d1user creates VPC B.
> 3. admin/User enables VPN gateway on VPC A and VPC B.
> 4. admin/User creates VPN customer gateway for VPC A and VPC B.
> 5. admin/User creates VPN connection on VPC.
> Result:
> UI fails to display the list of VPN customer gateways created in step 4.
>  
> client call:
> http://10.223.130.107:8080/client/api?command=listVpnConnections&listAll=true&page=1&pagesize=20&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&vpcid=a2ca6837-4988-4e84-b61a-2b902100e7f9&_=1386287945687
> response:
> { "listvpnconnectionsresponse" : { } }





[jira] [Created] (CLOUDSTACK-5394) site-to-site VPNJ VR-to-VR

2013-12-05 Thread angeline shen (JIRA)
angeline shen created CLOUDSTACK-5394:
-

 Summary: site-to-site VPNJ VR-to-VR
 Key: CLOUDSTACK-5394
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5394
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
 Environment: MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
hosts: XS 6.2, 10.223.51.3, 10.223.51.4

Reporter: angeline shen
Priority: Blocker
 Fix For: 4.3.0




MS: 10.223.130.107, build: CloudPlatform-4.3-97-rhel6.4.tar.gz
hosts: XS 6.2, 10.223.51.3, 10.223.51.4

1. Bring up CS in an advanced zone
2. admin creates VPC A and d1user creates VPC B.
3. admin/User enables VPN gateway on VPC A and VPC B.
4. admin/User creates VPN customer gateway for VPC A and VPC B.
5. admin/User creates VPN connection on VPC.
Result:
UI fails to display the list of VPN customer gateways created in step 4.
 
client call:
http://10.223.130.107:8080/client/api?command=listVpnConnections&listAll=true&page=1&pagesize=20&response=json&sessionkey=D7d9EOsz2gP15yau2QD2lSHBOgc%3D&vpcid=a2ca6837-4988-4e84-b61a-2b902100e7f9&_=1386287945687

response:
{ "listvpnconnectionsresponse" : { } }
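The empty reply can be detected programmatically; a minimal sketch (using only the response body captured above — the session and API details are as in the report) that parses the JSON and returns the connection list, empty in the buggy case:

```python
import json

# The reply captured in this report: an empty listvpnconnectionsresponse.
raw = '{ "listvpnconnectionsresponse" : { } }'

def vpn_connections(response_text):
    """Return the vpnconnection entries from a listVpnConnections reply,
    or an empty list when the response carries none (the bug seen here)."""
    body = json.loads(response_text).get("listvpnconnectionsresponse", {})
    return body.get("vpnconnection", [])

print(vpn_connections(raw))  # → []
```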







[jira] [Closed] (CLOUDSTACK-5349) [Automation] VOLUME.CREATE missing in events table

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan closed CLOUDSTACK-5349.
---


This issue was not found in the latest runs.

> [Automation] VOLUME.CREATE missing in events table
> --
>
> Key: CLOUDSTACK-5349
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5349
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: ALL
> branch 4.3
>Reporter: Rayees Namathponnan
>Assignee: Nitin Mehta
>Priority: Blocker
> Fix For: 4.3.0
>
>
> Validate the following:
> 1. Create a VM. Verify the usage_events table contains VM.create, VM.start, 
> Network.offering.assign and Volume.create events.
> 2. Stop the VM. Verify the usage_events table contains 
> network.offerings.remove and VM.stop events for the created account.
> 3. Destroy the VM after some time. Verify the usage_events table contains 
> VM.Destroy and volume.delete events for the created account.
> 4. Delete the account.
> VOLUME.CREATE is missing in the events table; the below test case fails due to this:
> integration.component.test_usage.TestVmUsage.test_01_vm_usage 
> mysql> select type from usage_event where account_id = '103';
> +-+
> | type|
> +-+
> | VM.CREATE   |
> | NETWORK.OFFERING.ASSIGN |
> | VM.START|
> | SG.ASSIGN   |
> | VM.STOP |
> | NETWORK.OFFERING.REMOVE |
> | SG.REMOVE   |
> | VM.DESTROY  |
> | VOLUME.DELETE   |
> +-+
> 9 rows in set (0.00 sec)
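The gap is easy to see by diffing the expected event set against the rows the query returned — a minimal sketch, with both sets taken directly from the report above:

```python
# Events the test expects after create/stop/destroy, versus the rows
# actually returned by the usage_event query in this report.
expected = {
    "VM.CREATE", "VM.START", "NETWORK.OFFERING.ASSIGN", "VOLUME.CREATE",
    "VM.STOP", "NETWORK.OFFERING.REMOVE", "VM.DESTROY", "VOLUME.DELETE",
}
observed = {
    "VM.CREATE", "NETWORK.OFFERING.ASSIGN", "VM.START", "SG.ASSIGN",
    "VM.STOP", "NETWORK.OFFERING.REMOVE", "SG.REMOVE", "VM.DESTROY",
    "VOLUME.DELETE",
}

missing = expected - observed
print(missing)  # → {'VOLUME.CREATE'}
```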





[jira] [Updated] (CLOUDSTACK-5393) [Automation] Failed to create snapshot from ROOT volume in KVM

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-5393:


Attachment: management-server.rar
ssvm.rar
agent2.rar
agent1.rar

> [Automation] Failed to create snapshot from ROOT volume in KVM
> --
>
> Key: CLOUDSTACK-5393
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5393
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Snapshot
>Affects Versions: 4.3.0
> Environment: KVM (RHEL 6.3)
> Branch : 4.3
>Reporter: Rayees Namathponnan
> Fix For: 4.3.0
>
> Attachments: agent1.rar, agent2.rar, management-server.rar, ssvm.rar
>
>
> Steps to reproduce 
> 1) Create advanced zone in KVM
> 2) Deploy VM
> 3) Stop VM
> 4) Create snapshot from root volume
> Snapshot creation failed with the below exception:
> 2013-12-05 15:39:44,194 DEBUG [c.c.a.t.Request] (Job-Executor-64:ctx-1b710e00 
> ctx-3cf3df4e) Seq 1-1244399478: Received:  { Ans: , MgmtId: 29066118877352, 
> via: 1, Ver: v1, Flags: 10, { CreateObjectAnswer } }
> 2013-12-05 15:39:44,284 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] 
> (Job-Executor-64:ctx-1b710e00 ctx-3cf3df4e) copyAsync inspecting src type 
> SNAPSHOT copyAsync inspecting dest type SNAPSHOT
> 2013-12-05 15:39:44,332 DEBUG [c.c.a.t.Request] (Job-Executor-64:ctx-1b710e00 
> ctx-3cf3df4e) Seq 2-332864214: Sending  { Cmd , MgmtId: 29066118877352, via: 
> 2(Rack2Host12.lab.vmops.com), Ver: v1, Flags: 100011, [{"org.apache.cl
> oudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215/8f7d268f-1a96-4d70-9c48-cb7f6cc935a8"
> ,"volume":{"uuid":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesyst
> em","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08
> ca01d9"}},"name":"ROOT-919","size":8589934592,"path":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeId":988,"vmName":"i-2-919-QA","accountId":2,"format":"QCOW2","id":988,"deviceId":0,"hypervisorType":"KVM"},"parentSnapshotPath
> ":"/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215/d43b61bc-4ade-4753-b1d9-c0398388147d","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08ca01d9"}},"vmName":"i-2-919-QA","name":"QA-eb54bbfa-12d5-49b5-808f-864dddedc1fd_ROOT-919_20131205233943","hypervisorType":"KVM","id":54,"quiescevm":false}},"destTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"snapshots/2/988","volume":{"uuid":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08ca01d9"}},"name":"ROOT-919","size":8589934592,"path":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeId":988,"vmName":"i-2-919-QA","accountId":2,"format":"QCOW2","id":988,"deviceId":0,"hypervisorType":"KVM"},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.223.110.232:/export/home/rayees/SC_QA_AUTO4/secondary","_role":"Image"}},"vmName":"i-2-919-QA","name":"QA-eb54bbfa-12d5-49b5-808f-864dddedc1fd_ROOT-919_20131205233943","hypervisorType":"KVM","id":54,"quiescevm":false}},"executeInSequence":false,"wait":21600}}]
>  }
> 2013-12-05 15:39:44,501 DEBUG [c.c.a.t.Request] 
> (StatsCollector-3:ctx-3ac54662) Seq 1-1244399477: Received:  { Ans: , MgmtId: 
> 29066118877352, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
> 2013-12-05 15:39:44,900 DEBUG [c.c.a.t.Request] (AgentManager-Handler-1:null) 
> Seq 2-332864214: Processing:  { Ans: , MgmtId: 29066118877352, via: 2, Ver: 
> v1, Flags: 10, 
> [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"/usr/share/cloudstack-common/scripts/storage/qcow2/managesnapshot.sh:
>  

[jira] [Updated] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5391:


Priority: Critical  (was: Major)

> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.
> Host has 4 physical CPU cores. 
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering.
> Similarly for speed.





[jira] [Created] (CLOUDSTACK-5393) [Automation] Failed to create snapshot from ROOT volume in KVM

2013-12-05 Thread Rayees Namathponnan (JIRA)
Rayees Namathponnan created CLOUDSTACK-5393:
---

 Summary: [Automation] Failed to create snapshot from ROOT volume 
in KVM
 Key: CLOUDSTACK-5393
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5393
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Snapshot
Affects Versions: 4.3.0
 Environment: KVM (RHEL 6.3)
Branch : 4.3
Reporter: Rayees Namathponnan
 Fix For: 4.3.0


Steps to reproduce 

1) Create advanced zone in KVM
2) Deploy VM
3) Stop VM
4) Create snapshot from root volume

Snapshot creation failed with the below exception:


2013-12-05 15:39:44,194 DEBUG [c.c.a.t.Request] (Job-Executor-64:ctx-1b710e00 
ctx-3cf3df4e) Seq 1-1244399478: Received:  { Ans: , MgmtId: 29066118877352, 
via: 1, Ver: v1, Flags: 10, { CreateObjectAnswer } }
2013-12-05 15:39:44,284 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] 
(Job-Executor-64:ctx-1b710e00 ctx-3cf3df4e) copyAsync inspecting src type 
SNAPSHOT copyAsync inspecting dest type SNAPSHOT
2013-12-05 15:39:44,332 DEBUG [c.c.a.t.Request] (Job-Executor-64:ctx-1b710e00 
ctx-3cf3df4e) Seq 2-332864214: Sending  { Cmd , MgmtId: 29066118877352, via: 
2(Rack2Host12.lab.vmops.com), Ver: v1, Flags: 100011, [{"org.apache.cl
oudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215/8f7d268f-1a96-4d70-9c48-cb7f6cc935a8"
,"volume":{"uuid":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesyst
em","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08
ca01d9"}},"name":"ROOT-919","size":8589934592,"path":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeId":988,"vmName":"i-2-919-QA","accountId":2,"format":"QCOW2","id":988,"deviceId":0,"hypervisorType":"KVM"},"parentSnapshotPath
":"/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215/d43b61bc-4ade-4753-b1d9-c0398388147d","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08ca01d9"}},"vmName":"i-2-919-QA","name":"QA-eb54bbfa-12d5-49b5-808f-864dddedc1fd_ROOT-919_20131205233943","hypervisorType":"KVM","id":54,"quiescevm":false}},"destTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"snapshots/2/988","volume":{"uuid":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"fff90cb5-06dd-33b3-8815-d78c08ca01d9","id":1,"poolType":"NetworkFilesystem","host":"10.223.110.232","path":"/export/home/rayees/SC_QA_AUTO4/primary","port":2049,"url":"NetworkFilesystem://10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary/?ROLE=Primary&STOREUUID=fff90cb5-06dd-33b3-8815-d78c08ca01d9"}},"name":"ROOT-919","size":8589934592,"path":"a44ddf93-9fa2-497f-aff6-cf4951128215","volumeId":988,"vmName":"i-2-919-QA","accountId":2,"format":"QCOW2","id":988,"deviceId":0,"hypervisorType":"KVM"},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.223.110.232:/export/home/rayees/SC_QA_AUTO4/secondary","_role":"Image"}},"vmName":"i-2-919-QA","name":"QA-eb54bbfa-12d5-49b5-808f-864dddedc1fd_ROOT-919_20131205233943","hypervisorType":"KVM","id":54,"quiescevm":false}},"executeInSequence":false,"wait":21600}}]
 }
2013-12-05 15:39:44,501 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-3ac54662) 
Seq 1-1244399477: Received:  { Ans: , MgmtId: 29066118877352, via: 1, Ver: v1, 
Flags: 10, { GetVmStatsAnswer } }
2013-12-05 15:39:44,900 DEBUG [c.c.a.t.Request] (AgentManager-Handler-1:null) 
Seq 2-332864214: Processing:  { Ans: , MgmtId: 29066118877352, via: 2, Ver: v1, 
Flags: 10, 
[{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"/usr/share/cloudstack-common/scripts/storage/qcow2/managesnapshot.sh:
 line 178: 26840 Floating point exception(core dumped) $qemu_img convert -f 
qcow2 -O qcow2 -s $snapshotname $disk $destPath/$destName &>/dev/nullFailed to 
backup 8f7d268f-1a96-4d70-9c48-cb7f6cc935a8 for disk 
/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215 
to /mnt/ffdee37d-66e9-371d-8e6e-dee20b7f7433/snapshots/2/988","wait":0}}] }
2013-12-05 15:39:44,900 DEBUG [c.c.a.t.Request] (Job-Executor-64:ctx-1b710e00 
c
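The failing line in managesnapshot.sh runs `qemu-img convert` with the internal snapshot name. A minimal sketch of the equivalent invocation, built as an argv list (paths and names are taken from the log above; this only constructs the command and does not require qemu to run):

```python
def build_backup_cmd(snapshot_name, disk_path, dest_path, dest_name):
    """Build the qemu-img argv that managesnapshot.sh uses to back up an
    internal qcow2 snapshot (-s) to secondary storage."""
    return ["qemu-img", "convert", "-f", "qcow2", "-O", "qcow2",
            "-s", snapshot_name, disk_path, "%s/%s" % (dest_path, dest_name)]

# Values from the failing log line in this report.
cmd = build_backup_cmd(
    "8f7d268f-1a96-4d70-9c48-cb7f6cc935a8",
    "/mnt/fff90cb5-06dd-33b3-8815-d78c08ca01d9/a44ddf93-9fa2-497f-aff6-cf4951128215",
    "/mnt/ffdee37d-66e9-371d-8e6e-dee20b7f7433/snapshots/2/988",
    "8f7d268f-1a96-4d70-9c48-cb7f6cc935a8")
print(" ".join(cmd))
```

In the log this very command dies with a floating point exception inside qemu-img, so the failure is in the conversion step itself, not in the CloudStack orchestration around it.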

[jira] [Commented] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840693#comment-13840693
 ] 

ASF subversion and git services commented on CLOUDSTACK-5391:
-

Commit ed1f3d9ed67cf805e14821d91fc6e612bd232555 in branch refs/heads/master 
from [~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ed1f3d9 ]

CLOUDSTACK-5391:
check for host cpu capability while stop starting a vm on the same host. Also 
changed the FirstFitAllocator to use the same method.
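The fix described in the commit amounts to validating the new offering against the host's physical capability before starting the VM on the same host. A minimal sketch of that check (function and parameter names are illustrative, not the actual FirstFitAllocator code):

```python
def host_has_cpu_capability(host_cores, host_speed_mhz,
                            offering_cores, offering_speed_mhz):
    """Return True only if the host can satisfy the service offering's
    core count and per-core speed (the check this commit adds)."""
    return host_cores >= offering_cores and host_speed_mhz >= offering_speed_mhz

# The scenario from the report: a 4-core host cannot honor a 5-core offering.
print(host_has_cpu_capability(host_cores=4, host_speed_mhz=2000,
                              offering_cores=5, offering_speed_mhz=1000))  # → False
```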


> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.
> Host has 4 physical CPU cores. 
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering.
> Similarly for speed.





[jira] [Resolved] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta resolved CLOUDSTACK-5391.
-

Resolution: Fixed

> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.
> Host has 4 physical CPU cores. 
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering.
> Similarly for speed.





[jira] [Commented] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840686#comment-13840686
 ] 

ASF subversion and git services commented on CLOUDSTACK-5391:
-

Commit c06e69db1973fcabe404b3d94501bcdcea2f21b5 in branch refs/heads/4.3 from 
[~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=c06e69d ]

CLOUDSTACK-5391:
check for host cpu capability while stop starting a vm on the same host. Also 
changed the FirstFitAllocator to use the same method.


> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.
> Host has 4 physical CPU cores. 
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering.
> Similarly for speed.





[jira] [Updated] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5391:


Description: 
Change service offering of a stopped vm and then starting it should check host 
cpu capability with the new service offering.
Host has 4 physical CPU cores. 
Create a service offering of 5 CPU cores and scaled up existing VM with this 
service offering.
Similarly for speed.

  was:Change service offering of a stopped vm and then starting it should check 
host cpu capability with the new service offering.


> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.
> Host has 4 physical CPU cores. 
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering.
> Similarly for speed.





[jira] [Resolved] (CLOUDSTACK-4880) VM's CPUs getting scaled up above host capacity (Without any migration to other host)

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta resolved CLOUDSTACK-4880.
-

Resolution: Fixed

> VM's CPUs getting scaled up above host capacity (Without any migration to 
> other host)
> -
>
> Key: CLOUDSTACK-4880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.3.0
> Environment: Observed only on XenServer so far.
>Reporter: Gaurav Aradhye
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Host has 4 physical CPU cores.
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering. The operation was successful.
> I was even able to reboot the instance.
> However no new instance could be launched using this service offering which 
> seems to be a valid behavior.





[jira] [Commented] (CLOUDSTACK-4880) VM's CPUs getting scaled up above host capacity (Without any migration to other host)

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840648#comment-13840648
 ] 

ASF subversion and git services commented on CLOUDSTACK-4880:
-

Commit 98ee087d310f8f84cb579730209bf212c8503808 in branch refs/heads/master 
from [~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=98ee087 ]

CLOUDSTACK-4880:
check for host cpu capability while dynamic scaling a vm on the same host


> VM's CPUs getting scaled up above host capacity (Without any migration to 
> other host)
> -
>
> Key: CLOUDSTACK-4880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.3.0
> Environment: Observed only on XenServer so far.
>Reporter: Gaurav Aradhye
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Host has 4 physical CPU cores.
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering. The operation was successful.
> I was even able to reboot the instance.
> However no new instance could be launched using this service offering which 
> seems to be a valid behavior.





[jira] [Created] (CLOUDSTACK-5392) Multiple Secondary Store - There is no retry happening on snapshot failures when one of the secondary stores is not reachable.

2013-12-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-5392:
---

 Summary: Multiple Secondary Store - There is no retry happening on 
snapshot failures when one of the secondary stores is not reachable.
 Key: CLOUDSTACK-5392
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5392
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Priority: Critical
 Fix For: 4.3.0


Multiple Secondary Store - There is no retry happening on snapshot failures 
when one of the secondary stores is not reachable.

Steps to reproduce the problem:

Set up:
Advanced zone set up with 2 Xenserver hosts.
2 secondary NFS stores - ss1 and ss2.

Bring down ss1.

Deployed 3 VMs.

Create snapshots for ROOT volume of these 3 VMs.

Out of the 3 snapshot requests, 2 were sent to ss1 and 1 to ss2.

The 2 createSnapshot commands that were sent to ss1 failed during 
"org.apache.cloudstack.storage.command.CopyCommand", 
but there was no retry on ss2.

Expected behavior:
On failure to back up to one secondary store, we should attempt the other 
secondary stores.
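The expected fallback can be sketched as a loop over the available stores — an illustrative sketch only (function names are hypothetical, not CloudStack's actual snapshot strategy code):

```python
# Illustrative sketch of the expected behavior: try each secondary store
# in turn and fall back to the next when a backup (CopyCommand) fails.
def backup_snapshot(snapshot, stores, copy_to_store):
    """Attempt the snapshot backup on each store until one succeeds."""
    errors = []
    for store in stores:
        try:
            return copy_to_store(snapshot, store)
        except IOError as e:
            errors.append((store, e))  # record the failure, retry on next store
    raise IOError("backup failed on all secondary stores: %r" % errors)

# ss1 is down, ss2 is healthy: the backup should still succeed via ss2.
def fake_copy(snapshot, store):
    if store == "ss1":
        raise IOError("store unreachable")
    return (store, snapshot)

print(backup_snapshot("ROOT-vol-snap", ["ss1", "ss2"], fake_copy))
```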








[jira] [Commented] (CLOUDSTACK-4880) VM's CPUs getting scaled up above host capacity (Without any migration to other host)

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840636#comment-13840636
 ] 

ASF subversion and git services commented on CLOUDSTACK-4880:
-

Commit 25e51a571651fbcd40f7fc8f621eaf3ee8c66e40 in branch refs/heads/4.3 from 
[~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=25e51a5 ]

CLOUDSTACK-4880:
check for host cpu capability while dynamic scaling a vm on the same host


> VM's CPUs getting scaled up above host capacity (Without any migration to 
> other host)
> -
>
> Key: CLOUDSTACK-4880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.3.0
> Environment: Observed only on XenServer so far.
>Reporter: Gaurav Aradhye
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Host has 4 physical CPU cores.
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering. The operation was successful.
> I was even able to reboot the instance.
> However no new instance could be launched using this service offering which 
> seems to be a valid behavior.





[jira] [Commented] (CLOUDSTACK-4880) VM's CPUs getting scaled up above host capacity (Without any migration to other host)

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840631#comment-13840631
 ] 

ASF subversion and git services commented on CLOUDSTACK-4880:
-

Commit 1cdc064c438d155bedf451fab37b76a4b0a9773b in branch refs/heads/4.3 from 
[~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=1cdc064 ]

CLOUDSTACK-4880:
check for host cpu capability while dynamic scaling a vm on the same host


> VM's CPUs getting scaled up above host capacity (Without any migration to 
> other host)
> -
>
> Key: CLOUDSTACK-4880
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4880
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Automation
>Affects Versions: 4.3.0
> Environment: Observed only on XenServer so far.
>Reporter: Gaurav Aradhye
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> Host has 4 physical CPU cores.
> Create a service offering of 5 CPU cores and scaled up existing VM with this 
> service offering. The operation was successful.
> I was even able to reboot the instance.
> However no new instance could be launched using this service offering which 
> seems to be a valid behavior.





[jira] [Commented] (CLOUDSTACK-5278) Egress Firewall rules clarifications

2013-12-05 Thread Will Stevens (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840623#comment-13840623
 ] 

Will Stevens commented on CLOUDSTACK-5278:
--

@Jayapal Reddy: I would recommend that we do not attempt to fix these issues in 
4.3.  I think changing this could potentially break other providers.  We can 
look at this in 4.4.  I have been able to work around all of these issues for 
now, so this is no longer a blocker for me to fully support egress rules in 
4.3.  

I will be submitting a patch to my plugin to fix it tomorrow...

> Egress Firewall rules clarifications
> 
>
> Key: CLOUDSTACK-5278
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5278
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.3.0
>Reporter: Will Stevens
>Assignee: Jayapal Reddy
>Priority: Critical
> Fix For: 4.3.0
>
>
> These issues may also exist in the 4.2 branch, but I am currently 
> testing/working on the 4.3 branch.
> I believe these bugs were introduced with the change to the Network Service 
> Offering to add the 'Default egress policy' dropdown.
> https://issues.apache.org/jira/browse/CLOUDSTACK-1578
> I am trying to resolve the bugs this change introduced in the Palo Alto 
> plugin.
> There are two types of Egress rules (from what I can tell).
> - FirewallRule.FirewallRuleType.System : this appears to be set up by the 
> system on network creation to correspond to the global network default 
> allow/deny egress rule.
> - FirewallRule.FirewallRuleType.User : any rule that a user creates through 
> the UI will get this type.
> There are bugs associated with both of the options in the dropdown (allow and 
> deny).
> Case: 'deny'
> - When the network is set up, it does not try to create the global deny rule 
> for the network, but it appears to register that it exists.  Instead, when 
> the first egress rule is created by a user, the system sees both the 'system' 
> and 'user' rules, so it creates both rules then.
> Case: both 'allow' and 'deny'
> - The clean-up of the network global 'system' egress rules is never done.  
> So when a network is deleted, it will leave an orphaned egress rule 
> associated with the previous network's CIDR.  This is bound to cause many 
> issues.
> - Even worse, it appears that the ID for the network global 'system' egress 
> rule is hardcoded to '0'.  Every time I try to spin up a new network it will 
> attempt to create a rule with a '0' ID, but since one already exists with 
> that ID, there is a config collision.  In my case (Palo Alto), the second 
> rule with the same ID gets ignored because it checks to see if the rule 
> exists and it gets a 'yes' back because the previous network has an egress 
> rule with that ID already.
> Let me know if you have additional questions...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5391:


Fix Version/s: 4.3.0

> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta reassigned CLOUDSTACK-5391:
---

Assignee: Nitin Mehta

> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5391:


Affects Version/s: 4.2.0

> Change service offering of a stopped vm and then starting it should check 
> host cpu capability
> -
>
> Key: CLOUDSTACK-5391
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>
> Change service offering of a stopped vm and then starting it should check 
> host cpu capability with the new service offering.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5391) Change service offering of a stopped vm and then starting it should check host cpu capability

2013-12-05 Thread Nitin Mehta (JIRA)
Nitin Mehta created CLOUDSTACK-5391:
---

 Summary: Change service offering of a stopped vm and then starting 
it should check host cpu capability
 Key: CLOUDSTACK-5391
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5391
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Nitin Mehta


Change service offering of a stopped vm and then starting it should check host 
cpu capability with the new service offering.
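The requested check might look roughly like the sketch below. All names are 
hypothetical (the real check would live in CloudStack's VM start/deployment 
path); this only illustrates the comparison being asked for:

```java
public class HostCpuCapabilityCheck {
    // Hypothetical sketch: before starting a stopped VM whose service
    // offering was changed, verify the host can satisfy the new offering's
    // core count and per-core speed.
    public static boolean hostCanRunOffering(int hostCores, int hostCoreSpeedMhz,
                                             int offeringCores, int offeringSpeedMhz) {
        return hostCores >= offeringCores && hostCoreSpeedMhz >= offeringSpeedMhz;
    }
}
```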



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5390) listNetworks: pageSize and page parameters are not applied properly

2013-12-05 Thread Alena Prokharchyk (JIRA)
Alena Prokharchyk created CLOUDSTACK-5390:
-

 Summary: listNetworks: pageSize and page parameters are not 
applied properly
 Key: CLOUDSTACK-5390
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5390
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Affects Versions: 4.3.0
Reporter: Alena Prokharchyk
Assignee: Alena Prokharchyk
 Fix For: 4.3.0


The listNetworks call makes numerous calls to the DB to get different kinds of 
networks based on the search criteria (Isolated and Shared). The result sets 
are combined and returned to the API. As the page/pageSize parameters are 
passed only to the individual DB calls, they are not respected while generating 
the final set.

There can be 2 ways to fix the problem:

1) Generate only one call to the DB,
or
2) After the result set is finalized, apply the pagination to it. 

I would go with #2, as changing the DB call can introduce regressions; plus 
it's very hard to achieve given the number of joins happening based on the 
search criteria. 
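A minimal sketch of option #2 (illustrative names only, not the actual 
CloudStack code): combine the per-type result sets first, then apply 
page/pageSize to the final list:

```java
import java.util.ArrayList;
import java.util.List;

public class ListNetworksPagination {
    // Apply page/pageSize to the already-combined result set.
    // page is 1-based, matching the API parameter.
    public static <T> List<T> applyPagination(List<T> combined, long page, long pageSize) {
        long start = (page - 1) * pageSize;
        if (start < 0 || start >= combined.size()) {
            return new ArrayList<>(); // page past the end: empty result
        }
        long end = Math.min(combined.size(), start + pageSize);
        return new ArrayList<>(combined.subList((int) start, (int) end));
    }
}
```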



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5389) [Automation] Race Condition : Failed to find storage pool during router deployment in KVM

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-5389:


Attachment: Mslog.rar
Agent2.rar
Agent1.rar

> [Automation] Race Condition : Failed to find storage pool during router 
> deployment in KVM
> -
>
> Key: CLOUDSTACK-5389
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5389
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Branch : 4.3
> Environment : KVM (RHEL 6.3) 
>Reporter: Rayees Namathponnan
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: Agent1.rar, Agent2.rar, Mslog.rar
>
>
> This issue is observed during automation runs; 6 test cases are executing 
> in parallel in the automation environment; random deployment failures are 
> observed due to "No suitable storagePools found under this Cluster: 1".
> 2 primary storages are already configured in zone1, and they have enough 
> space.
> The below error is observed in the MS log:
> 2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Found a 
> suitable host, adding to list: 2
> 2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Host 
> Allocator returning 2 suitable hosts
> 2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Checking suitable pools for 
> volume (Id, Type): (1291,ROOT)
> 2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) We need to allocate new 
> storagepool for this volume
> 2013-12-05 10:10:36,277 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Calling StoragePoolAllocators to 
> find suitable pools
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) LocalStoragePoolAllocator trying 
> to find storage pool to fit the vm
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ClusterScopeStoragePoolAllocator 
> looking for storage pool
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Looking for pools in dc: 1  pod:1 
>  cluster:1 having tags:[host1]
> 2013-12-05 10:10:36,282 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No storage pools available for 
> shared volume allocation, returning
> 2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order 
> of number of volumes for account id: 683 is: [1, 4, 2]
> 2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ZoneWideStoragePoolAllocator to 
> find storage pool
> 2013-12-05 10:10:36,290 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order 
> of number of volumes for account id: 683 is: []
> 2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found for 
> volume: Vol[1291|vm=1179|ROOT] under cluster: 1
> 2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found
> 2013-12-05 10:10:36,291 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable storagePools found 
> under this Cluster: 1
> 2013-12-05 10:10:36,294 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Could not find suitable 
> Deployment Destination for this VM under any clusters, returning.
> 2013-12-05 10:10:36,294 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Searching all possible resources 
> under this Zone: 1
> 2013-12-05 10:10:36,295 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Listing clusters in order of 
> aggregate capacity, that have (atleast one host with) enough CPU and RAM 
> capacity under this Zone: 1
> 2013-12-05 10:10:36,298 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Removing from the clusterId list 
> these clusters from avoid set: [1]
> 2013-12-05 10:10:36,300 DEBUG [c.c.d.FirstFitPlanner

[jira] [Created] (CLOUDSTACK-5389) [Automation] Race Condition : Failed to find storage pool during router deployment in KVM

2013-12-05 Thread Rayees Namathponnan (JIRA)
Rayees Namathponnan created CLOUDSTACK-5389:
---

 Summary: [Automation] Race Condition : Failed to find storage pool 
during router deployment in KVM
 Key: CLOUDSTACK-5389
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5389
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Branch : 4.3
Environment : KVM (RHEL 6.3) 
Reporter: Rayees Namathponnan
 Fix For: 4.3.0


This issue is observed during automation runs; 6 test cases are executing in 
parallel in the automation environment; random deployment failures are observed 
due to "No suitable storagePools found under this Cluster: 1".

2 primary storages are already configured in zone1, and they have enough space.

The below error is observed in the MS log:


2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Found a 
suitable host, adding to list: 2
2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Host 
Allocator returning 2 suitable hosts
2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Checking suitable pools for volume 
(Id, Type): (1291,ROOT)
2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) We need to allocate new storagepool 
for this volume
2013-12-05 10:10:36,277 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Calling StoragePoolAllocators to 
find suitable pools
2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) LocalStoragePoolAllocator trying to 
find storage pool to fit the vm
2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ClusterScopeStoragePoolAllocator 
looking for storage pool
2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Looking for pools in dc: 1  pod:1  
cluster:1 having tags:[host1]
2013-12-05 10:10:36,282 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No storage pools available for 
shared volume allocation, returning
2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order of 
number of volumes for account id: 683 is: [1, 4, 2]
2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ZoneWideStoragePoolAllocator to 
find storage pool
2013-12-05 10:10:36,290 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order of 
number of volumes for account id: 683 is: []
2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found for volume: 
Vol[1291|vm=1179|ROOT] under cluster: 1
2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found
2013-12-05 10:10:36,291 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable storagePools found 
under this Cluster: 1
2013-12-05 10:10:36,294 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Could not find suitable Deployment 
Destination for this VM under any clusters, returning.
2013-12-05 10:10:36,294 DEBUG [c.c.d.FirstFitPlanner] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Searching all possible resources 
under this Zone: 1
2013-12-05 10:10:36,295 DEBUG [c.c.d.FirstFitPlanner] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Listing clusters in order of 
aggregate capacity, that have (atleast one host with) enough CPU and RAM 
capacity under this Zone: 1
2013-12-05 10:10:36,298 DEBUG [c.c.d.FirstFitPlanner] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Removing from the clusterId list 
these clusters from avoid set: [1]
2013-12-05 10:10:36,300 DEBUG [c.c.d.FirstFitPlanner] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No clusters found after removing 
disabled clusters and clusters in avoid list, returning.
2013-12-05 10:10:36,303 DEBUG [c.c.v.UserVmManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Destroying vm 
VM[User|QA-35aa7053-f3d7-4b16-86d3-487bcce7c9bb] as it failed to create on Host 
with Id:null
2013-12-05 10:10:36,325 DEBUG [c.c.c.CapacityManagerImpl] 
(Job-Executor-98:ctx-fe814e55 ctx-e2685be8) VM state transitted from :Stopped 
to Error with event: OperationFai

[jira] [Updated] (CLOUDSTACK-5389) [Automation] Race Condition : Failed to find storage pool during router deployment in KVM

2013-12-05 Thread Rayees Namathponnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rayees Namathponnan updated CLOUDSTACK-5389:


Priority: Critical  (was: Major)

> [Automation] Race Condition : Failed to find storage pool during router 
> deployment in KVM
> -
>
> Key: CLOUDSTACK-5389
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5389
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Branch : 4.3
> Environment : KVM (RHEL 6.3) 
>Reporter: Rayees Namathponnan
>Priority: Critical
> Fix For: 4.3.0
>
>
> This issue is observed during automation runs; 6 test cases are executing 
> in parallel in the automation environment; random deployment failures are 
> observed due to "No suitable storagePools found under this Cluster: 1".
> 2 primary storages are already configured in zone1, and they have enough 
> space.
> The below error is observed in the MS log:
> 2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Found a 
> suitable host, adding to list: 2
> 2013-12-05 10:10:36,274 DEBUG [c.c.a.m.a.i.FirstFitAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8 FirstFitRoutingAllocator) Host 
> Allocator returning 2 suitable hosts
> 2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Checking suitable pools for 
> volume (Id, Type): (1291,ROOT)
> 2013-12-05 10:10:36,275 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) We need to allocate new 
> storagepool for this volume
> 2013-12-05 10:10:36,277 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Calling StoragePoolAllocators to 
> find suitable pools
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) LocalStoragePoolAllocator trying 
> to find storage pool to fit the vm
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ClusterScopeStoragePoolAllocator 
> looking for storage pool
> 2013-12-05 10:10:36,278 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Looking for pools in dc: 1  pod:1 
>  cluster:1 having tags:[host1]
> 2013-12-05 10:10:36,282 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No storage pools available for 
> shared volume allocation, returning
> 2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order 
> of number of volumes for account id: 683 is: [1, 4, 2]
> 2013-12-05 10:10:36,287 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) ZoneWideStoragePoolAllocator to 
> find storage pool
> 2013-12-05 10:10:36,290 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) List of pools in ascending order 
> of number of volumes for account id: 683 is: []
> 2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found for 
> volume: Vol[1291|vm=1179|ROOT] under cluster: 1
> 2013-12-05 10:10:36,290 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable pools found
> 2013-12-05 10:10:36,291 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No suitable storagePools found 
> under this Cluster: 1
> 2013-12-05 10:10:36,294 DEBUG [c.c.d.DeploymentPlanningManagerImpl] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Could not find suitable 
> Deployment Destination for this VM under any clusters, returning.
> 2013-12-05 10:10:36,294 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Searching all possible resources 
> under this Zone: 1
> 2013-12-05 10:10:36,295 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Listing clusters in order of 
> aggregate capacity, that have (atleast one host with) enough CPU and RAM 
> capacity under this Zone: 1
> 2013-12-05 10:10:36,298 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) Removing from the clusterId list 
> these clusters from avoid set: [1]
> 2013-12-05 10:10:36,300 DEBUG [c.c.d.FirstFitPlanner] 
> (Job-Executor-98:ctx-fe814e55 ctx-e2685be8) No clusters found after removing 
> disabled clusters

[jira] [Commented] (CLOUDSTACK-5386) Secondary Storage does not accept SSL certs/domain other than from "realhostip.com"

2013-12-05 Thread Demetrius Tsitrelis (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840547#comment-13840547
 ] 

Demetrius Tsitrelis commented on CLOUDSTACK-5386:
-

Thank you for the patch.

If the DownloadManagerImpl class (or just the code which references the 
certificate) is no longer used, would you please remove the obsolete code that 
writes the log message indicating that non-realhostip certs are not supported?  
I see that UploadMonitorImpl.configure() has the same code as well.

> Secondary Storage does not accept SSL certs/domain other than from 
> "realhostip.com"
> ---
>
> Key: CLOUDSTACK-5386
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5386
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Demetrius Tsitrelis
>Assignee: Wei Zhou
>
> The "sec.storage.ssl.cert.domain" should allow for certificates other than 
> realhostip.com to be used.  One use case would be for using a self-signed 
> certificate for S3 storage.
> DownloadManagerImpl.configure() contains the following code (generic type 
> parameters restored; they were stripped by the HTML rendering):
> @Override
> public boolean configure(String name, Map<String, Object> params) {
>     final Map<String, String> configs = _configDao.getConfiguration("ManagementServer", params);
>     _sslCopy = Boolean.parseBoolean(configs.get("secstorage.encrypt.copy"));
>     _proxy = configs.get(Config.SecStorageProxy.key());
>     String cert = configs.get("secstorage.ssl.cert.domain");
>     if (!"realhostip.com".equalsIgnoreCase(cert)) {
>         s_logger.warn("Only realhostip.com ssl cert is supported, ignoring self-signed and other certs");
>     }
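A minimal sketch of the direction suggested above, honoring whatever cert 
domain is configured instead of warning on non-realhostip certs. The class and 
method names are illustrative, not the actual patch:

```java
public class SecStorageSslConfig {
    // Illustrative only: fall back to the legacy default when nothing is
    // configured, but accept any other configured cert domain (e.g. a
    // self-signed cert used for S3 storage) instead of warning on it.
    public static String resolveCertDomain(String configured) {
        if (configured == null || configured.trim().isEmpty()) {
            return "realhostip.com"; // legacy default
        }
        return configured;
    }
}
```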



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5388) Volume Snapshot UI does not provide option of adding quiesce vm parameter

2013-12-05 Thread Chris Suich (JIRA)
Chris Suich created CLOUDSTACK-5388:
---

 Summary: Volume Snapshot UI does not provide option of adding 
quiesce vm parameter
 Key: CLOUDSTACK-5388
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5388
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.3.0
Reporter: Chris Suich
Assignee: Chris Suich
Priority: Minor
 Fix For: 4.3.0


The Volume Snapshot UI does not provide the option of adding the quiesce vm 
parameter even when the underlying storage supports the option.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5387) RemoteVPNonVPC : Unable to remotely access a VM in a VPC after enabling S2S VPN on the VPC VR

2013-12-05 Thread Chandan Purushothama (JIRA)
Chandan Purushothama created CLOUDSTACK-5387:


 Summary: RemoteVPNonVPC :  Unable to remotely access a VM in a VPC 
after enabling S2S VPN on the VPC VR
 Key: CLOUDSTACK-5387
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5387
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
Reporter: Chandan Purushothama
Priority: Critical
 Fix For: 4.3.0




Steps to Reproduce:


1. Deploy a VPC with a network tier in it. Deploy a VM in the network tier. 
Locate the router/public IP for the VPC and enable Remote Access VPN on it.
2. Note the preshared key.
3. Create a VPN user using the addVpnUser API (with a valid username and 
password).
4. From a standalone Linux machine, configure a VPN client to point to the 
public IP address from Step 1.
5. Add an ALLOW ACL rule on ALL protocols to the network tier's ACL list so 
that ssh access from the client's network is allowed.
6. ssh (using putty or any other terminal client) to the VM in the network 
tier provisioned earlier.
7. Create a S2S VPN connection on this VPC where the VPC VR is the passive end 
of the connection.
8. Establish the S2S VPN connection from another VPC to this VPC.
9. Observe that the Remote Access to the VM no longer works.







--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CLOUDSTACK-5386) Secondary Storage does not accept SSL certs/domain other than from "realhostip.com"

2013-12-05 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou resolved CLOUDSTACK-5386.
--

Resolution: Fixed
  Assignee: Wei Zhou

DownloadManagerImpl has no effect in CloudStack 4.2, as the storage framework 
was changed by Edison Su.

I have committed a patch for this issue:
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=commit;h=9be402cb0693d8aeb779aa7f5ccbe7070c2f03de

> Secondary Storage does not accept SSL certs/domain other than from 
> "realhostip.com"
> ---
>
> Key: CLOUDSTACK-5386
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5386
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Storage Controller
>Affects Versions: 4.2.0
>Reporter: Demetrius Tsitrelis
>Assignee: Wei Zhou
>
> The "sec.storage.ssl.cert.domain" should allow for certificates other than 
> realhostip.com to be used.  One use case would be for using a self-signed 
> certificate for S3 storage.
> DownloadManagerImpl.configure() contains the following code (generic type 
> parameters restored; they were stripped by the HTML rendering):
> @Override
> public boolean configure(String name, Map<String, Object> params) {
>     final Map<String, String> configs = _configDao.getConfiguration("ManagementServer", params);
>     _sslCopy = Boolean.parseBoolean(configs.get("secstorage.encrypt.copy"));
>     _proxy = configs.get(Config.SecStorageProxy.key());
>     String cert = configs.get("secstorage.ssl.cert.domain");
>     if (!"realhostip.com".equalsIgnoreCase(cert)) {
>         s_logger.warn("Only realhostip.com ssl cert is supported, ignoring self-signed and other certs");
>     }



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5352) CPU cap calculated incorrectly for VMs on XenServer hosts

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-5352:


Affects Version/s: 4.3.0

> CPU cap calculated incorrectly for VMs on XenServer hosts
> -
>
> Key: CLOUDSTACK-5352
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5352
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0, 4.3.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>
> The CPU cap assigned to VMs on XenServer hosts (via VCPUs-params parameter) 
> is not calculated correctly. The assigned values are too low and can result 
> in performance problems. This seems related to CPU overprovisioning. The 
> assigned CPU cap is approximately the expected cap / CPU overprovisioning 
> value. The customer is using CloudStack 4.2.0 with XenServer 6.1. On the 
> customer environment they have several VMs that were created before upgrading 
> to 4.2.0 from 3.0.6 and never rebooted, and those VMs appear to have the 
> expected CPU cap.
> I see similar results on a CS 4.2.1 setup with a XS 6.2 host with 1x E31220L 
> CPU – 2x physical cores / 4x logical cores (with hyperthreading) at 2.20GHz – 
> 8800 MHz total (confirmed in op_host_capacity), a Compute Offering with 2200 
> MHz and 4 cores gives a VM with:
> [root@csdemo-xen2 ~]# xe vm-list params=name-label,uuid,VCPUs-params 
> name-label=i-2-87-VM
> uuid ( RO) : 7cd5893e-728a-a0f3-c2cf-f3464cb8b9cb
> name-label ( RW): i-2-87-VM
> VCPUs-params (MRW): weight: 84; cap: 131
> And with a Compute Offering with 2200 MHz and 1 core gives a VM with:
> [root@csdemo-xen2 ~]# xe vm-list params=name-label,uuid,VCPUs-params 
> name-label=i-2-87-VM
> uuid ( RO) : c17cd63a-f6d5-8f76-d7f1-eb34d574e0dd
> name-label ( RW): i-2-87-VM
> VCPUs-params (MRW): weight: 84; cap: 32
> The configured cap does not make sense in either example. In this 
> environment, cpu.overprovisioning.factor is 3 for the cluster and 1 in Global 
> Settings. In example 1 the cap should be:
> 2200 * 0.99 * 4 / 2200 * 100
> = 396
> But it is:
> 2200 * 0.99 * 4 / (3*2200) * 100
> = 132
> For example 2 it should be:
> 2200 * 0.99 * 1 / 2200 * 100
> = 99
> But it is:
> 2200 * 0.99 * 1 / (3*2200) * 100
> = 33
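The arithmetic above can be checked with a short sketch. The 0.99 factor and 
the division by the overprovisioning factor are taken from the report's 
examples; this is not the XenServer resource code itself:

```java
public class XenCpuCapCheck {
    // cap = offeringMhz * 0.99 * cores / (overprovision * hostCoreMhz) * 100
    // With overprovision = 1 this is the expected cap; with the cluster's
    // cpu.overprovisioning.factor (3 in the report) it reproduces the
    // miscalculated caps described above.
    public static long cap(int offeringMhz, int cores, int hostCoreMhz, double overprovision) {
        return Math.round(offeringMhz * 0.99 * cores / (overprovision * hostCoreMhz) * 100);
    }
}
```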



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-3658:


Fix Version/s: (was: 4.3.0)
   Future

> [DB Upgrade] - Deprecate several old object storage tables and columns as a 
> part of 41-42 db upgrade
> 
>
> Key: CLOUDSTACK-3658
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup, Storage Controller
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: Future
>
> Attachments: cloud-after-upgrade.dmp
>
>
> We should deprecate the following db tables and table columns as a part of 
> the 41-42 db upgrade due to the recent object storage refactoring:
> -Upload
> -s3
> -swift
> -template_host_ref
> -template_s3_ref
> -template_swift_ref
> -volume_host_ref
> -columns (s3_id, swift_id, sechost_id) from the snapshots table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2013-12-05 Thread Nitin Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794599#comment-13794599
 ] 

Nitin Mehta edited comment on CLOUDSTACK-3658 at 12/5/13 8:01 PM:
--

Don't think this is critical for 4.2.1, so punting it to Future as there is no 
loss/breaking of functionality


was (Author: nitinme):
Dont think this is critical for 4.2.1 so punt it for 4.3 as there is no 
loss/breaking of functionality

> [DB Upgrade] - Deprecate several old object storage tables and columns as a 
> part of 41-42 db upgrade
> 
>
> Key: CLOUDSTACK-3658
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup, Storage Controller
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: Future
>
> Attachments: cloud-after-upgrade.dmp
>
>
> We should deprecate the following db tables and table columns as a part of 
> the 41-42 db upgrade due to the recent object storage refactoring:
> -Upload
> -s3
> -swift
> -template_host_ref
> -template_s3_ref
> -template_swift_ref
> -volume_host_ref
> -columns (s3_id, swift_id, sechost_id) from the snapshots table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-4201) listServiceOfferings API needs to be able to take virtualmachineid of SystemVM and return service offerings available for the vm to change service offering

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-4201:


Fix Version/s: (was: 4.3.0)
   Future

> listServiceOfferings API needs to be able to take virtualmachineid of 
> SystemVM and return service offerings available for the vm to change service 
> offering
> ---
>
> Key: CLOUDSTACK-4201
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4201
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: listServiceOfferings API needs to be able to take the 
> virtualmachineid of a SystemVM and return the service offerings available for 
> the VM to change its service offering. If the VM is running, only scale-up 
> service offerings should be presented. If the VM is stopped, all service 
> offerings should be shown.
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: Future
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CLOUDSTACK-4201) listServiceOfferings API needs to be able to take virtualmachineid of SystemVM and return service offerings available for the vm to change service offering

2013-12-05 Thread Nitin Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840417#comment-13840417
 ] 

Nitin Mehta edited comment on CLOUDSTACK-4201 at 12/5/13 7:57 PM:
--

This is a good-to-have but not critical. Downgrading the priority to Major 
and moving it to Future.


was (Author: nitinme):
This is a good-to-have but not critical. Downgrading the priority to Major.

> listServiceOfferings API needs to be able to take virtualmachineid of 
> SystemVM and return service offerings available for the vm to change service 
> offering
> ---
>
> Key: CLOUDSTACK-4201
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4201
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: listServiceOfferings API needs to be able to take the 
> virtualmachineid of a SystemVM and return the service offerings available for 
> the VM to change its service offering. If the VM is running, only scale-up 
> service offerings should be presented. If the VM is stopped, all service 
> offerings should be shown.
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: Future
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5386) Secondary Storage does not accept SSL certs/domain other than from "realhostip.com"

2013-12-05 Thread Demetrius Tsitrelis (JIRA)
Demetrius Tsitrelis created CLOUDSTACK-5386:
---

 Summary: Secondary Storage does not accept SSL certs/domain other 
than from "realhostip.com"
 Key: CLOUDSTACK-5386
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5386
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Storage Controller
Affects Versions: 4.2.0
Reporter: Demetrius Tsitrelis


The "secstorage.ssl.cert.domain" setting should allow certificates other than 
realhostip.com to be used.  One use case would be using a self-signed 
certificate for S3 storage.

DownloadManagerImpl.configure() contains the following code:

    @Override
    public boolean configure(String name, Map<String, Object> params) {
        final Map<String, String> configs =
                _configDao.getConfiguration("ManagementServer", params);
        _sslCopy = Boolean.parseBoolean(configs.get("secstorage.encrypt.copy"));
        _proxy = configs.get(Config.SecStorageProxy.key());

        String cert = configs.get("secstorage.ssl.cert.domain");
        if (!"realhostip.com".equalsIgnoreCase(cert)) {
            s_logger.warn("Only realhostip.com ssl cert is supported, ignoring self-signed and other certs");
        }
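A minimal sketch of the kind of change the report asks for — accepting a configurable set of certificate domains instead of hard-coding realhostip.com. The class, method, and allow-list below are hypothetical, not the actual CloudStack fix:

```java
import java.util.Locale;
import java.util.Set;

// Hypothetical relaxation of the hard-coded check in configure():
// instead of warning on anything other than realhostip.com, accept any
// domain present in an operator-supplied allow list.
public class CertDomainCheck {

    static boolean isAllowedCertDomain(String certDomain, Set<String> allowed) {
        if (certDomain == null) {
            return false;
        }
        // Case-insensitive match, consistent with the original
        // equalsIgnoreCase() comparison
        return allowed.contains(certDomain.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        Set<String> allowed = Set.of("realhostip.com", "storage.example.org");
        System.out.println(isAllowedCertDomain("RealHostIP.com", allowed)); // true
        System.out.println(isAllowedCertDomain("other.example.net", allowed)); // false
    }
}
```

With a check like this, a self-signed S3 certificate domain would only need to be added to the allow list rather than being rejected outright.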




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5384) UI dataProviders are unable to differentiate between load and refresh context

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840443#comment-13840443
 ] 

ASF subversion and git services commented on CLOUDSTACK-5384:
-

Commit ee607646c9c447d40e0bf319fdb7164688528e71 in branch refs/heads/master 
from [~csuich2]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=ee607646 ]

Added load vs refresh context for dataProviders

CLOUDSTACK-5384


> UI dataProviders are unable to differentiate between load and refresh context
> -
>
> Key: CLOUDSTACK-5384
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5384
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.3.0
>Reporter: Chris Suich
>Assignee: Chris Suich
>  Labels: ui
> Fix For: 4.3.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> UI dataProviders are invoked for both loading listViews and refreshing 
> listViews, however, they are unable to tell the difference between the two 
> invocations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5384) UI dataProviders are unable to differentiate between load and refresh context

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840441#comment-13840441
 ] 

ASF subversion and git services commented on CLOUDSTACK-5384:
-

Commit 93b8511f6e564835797150071799da50157b3a20 in branch refs/heads/4.3 from 
[~csuich2]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=93b8511 ]

Added load vs refresh context for dataProviders

CLOUDSTACK-5384


> UI dataProviders are unable to differentiate between load and refresh context
> -
>
> Key: CLOUDSTACK-5384
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5384
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: UI
>Affects Versions: 4.3.0
>Reporter: Chris Suich
>Assignee: Chris Suich
>  Labels: ui
> Fix For: 4.3.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> UI dataProviders are invoked for both loading listViews and refreshing 
> listViews, however, they are unable to tell the difference between the two 
> invocations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (CLOUDSTACK-5279) UI - Not able to list detail view of volumes.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan reopened CLOUDSTACK-5279:
-


Jessica,
I am providing you with the DB dump.
I don't have the screenshot since the account was removed in this setup.

Let me know if you are able to see this issue.
-Thanks
Sangeetha

> UI - Not able to list detail view of volumes.
> -
>
> Key: CLOUDSTACK-5279
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5279
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: test.rar
>
>
> UI - Not able to list detail view of volumes.
> From Storage -> list Volumes, select any volume to list the detail view of 
> the volume.
> The UI keeps spinning forever.
> The following error is seen:
> TypeError: args.context.volumes is undefined
> url: createURL("listVolumes&id=" + args.context.volumes[0].id),



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5090) Anti-Affinity: VM fails to start on a cluster belonging to a different pod.

2013-12-05 Thread Prachi Damle (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840438#comment-13840438
 ] 

Prachi Damle commented on CLOUDSTACK-5090:
--

Changing this to Major as it is not a blocking use case. The change may need 
wider testing due to the possibility of regressions.

> Anti-Affinity: VM fails to start on a cluster belonging to a different pod.
> ---
>
> Key: CLOUDSTACK-5090
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5090
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
>Reporter: Chandan Purushothama
>Assignee: Prachi Damle
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: mysql_cloudstack_db_dumps.zip
>
>
> Test scenario:
> The setup has 3 clusters with 1 host each. One of the clusters belongs to a 
> different pod:
> 1. As regular user U1, create multiple anti-affinity groups.
> 2. Deploy 2 VMs, say Vm11 (host1) and Vm12 (host2), using affinity group A1.
> 3. Deploy 1 VM, say Vm21 (host1), using affinity group A2.
> 4. Stop Vm11.
> 5. Update the list of affinityGroups for this VM to "A1" and "A2".
> 6. Start the VM.
> ===
> Start VM Job:
> ===
> 2013-11-07 14:43:15,755 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (catalina-exec-7:null) submit async job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ], details: AsyncJobVO {id:40, userId: 
> 3, accountId: 3, sessionKey: null, instanceType: VirtualMachine, instanceId: 
> 9, cmd: org.apache.cloudstack.api.command.user.vm.StartVMCmd, cmdOriginator: 
> null, cmdInfo: 
> {"response":"json","id":"02e909e8-f28f-40b7-9830-ea68e44aa0ed","sessionkey":"Xj52fOTZWqAUf0nnCSUZlybSVfI\u003d","cmdEventType":"VM.START","ctxUserId":"3","httpmethod":"GET","_":"1383864701141","ctxAccountId":"3","ctxStartEventId":"138"},
>  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
> processStatus: 0, resultCode: 0, result: null, initMsid: 7471666038533, 
> completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> 2013-11-07 14:43:15,756 DEBUG [cloud.api.ApiServlet] (catalina-exec-7:null) 
> ===END===  10.214.4.75 -- GET  
> command=startVirtualMachine&id=02e909e8-f28f-40b7-9830-ea68e44aa0ed&response=json&sessionkey=Xj52fOTZWqAUf0nnCSUZlybSVfI%3D&_=1383864701141
> 2013-11-07 14:43:15,758 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Executing 
> org.apache.cloudstack.api.command.user.vm.StartVMCmd for job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]
> 2013-11-07 14:43:15,774 DEBUG [cloud.user.AccountManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Access to 
> VM[User|TestVM-1] granted to Acct[609f0727-6b59-45aa-9bc2-14877b39b4e1-test] 
> by DomainChecker_EnhancerByCloudStack_a4e5904f
> 2013-11-07 14:43:15,784 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,788 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,811 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A1 for VM Id: 9
> 2013-11-07 14:43:15,813 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 1 to avoid set, since VM 11 is present on the host
> 2013-11-07 14:43:15,814 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A2 for VM Id: 9
> 2013-11-07 14:43:15,816 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 6 to avoid set, since VM 12 is present on the host
> 2013-11-07 14:43:15,835 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Deploy 
> avoids pods: [], clusters: [], hosts: [1, 6]
> 2013-11-07 14:43:15,836 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> DeploymentPlanner allocation algorithm: 
> com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_9c110ba0@49591f8a
> 2013-11-07 14:43:15,837 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-

[jira] [Updated] (CLOUDSTACK-5279) UI - Not able to list detail view of volumes.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5279:


Attachment: test.rar

> UI - Not able to list detail view of volumes.
> -
>
> Key: CLOUDSTACK-5279
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5279
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: test.rar
>
>
> UI - Not able to list detail view of volumes.
> From Storage -> list Volumes, select any volume to list the detail view of 
> the volume.
> The UI keeps spinning forever.
> The following error is seen:
> TypeError: args.context.volumes is undefined
> url: createURL("listVolumes&id=" + args.context.volumes[0].id),



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-5090) Anti-Affinity: VM fails to start on a cluster belonging to a different pod.

2013-12-05 Thread Prachi Damle (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prachi Damle updated CLOUDSTACK-5090:
-

Priority: Major  (was: Critical)

> Anti-Affinity: VM fails to start on a cluster belonging to a different pod.
> ---
>
> Key: CLOUDSTACK-5090
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5090
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
>Reporter: Chandan Purushothama
>Assignee: Prachi Damle
> Fix For: 4.3.0
>
> Attachments: mysql_cloudstack_db_dumps.zip
>
>
> Test scenario:
> The setup has 3 clusters with 1 host each. One of the clusters belongs to a 
> different pod:
> 1. As regular user U1, create multiple anti-affinity groups.
> 2. Deploy 2 VMs, say Vm11 (host1) and Vm12 (host2), using affinity group A1.
> 3. Deploy 1 VM, say Vm21 (host1), using affinity group A2.
> 4. Stop Vm11.
> 5. Update the list of affinityGroups for this VM to "A1" and "A2".
> 6. Start the VM.
> ===
> Start VM Job:
> ===
> 2013-11-07 14:43:15,755 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (catalina-exec-7:null) submit async job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ], details: AsyncJobVO {id:40, userId: 
> 3, accountId: 3, sessionKey: null, instanceType: VirtualMachine, instanceId: 
> 9, cmd: org.apache.cloudstack.api.command.user.vm.StartVMCmd, cmdOriginator: 
> null, cmdInfo: 
> {"response":"json","id":"02e909e8-f28f-40b7-9830-ea68e44aa0ed","sessionkey":"Xj52fOTZWqAUf0nnCSUZlybSVfI\u003d","cmdEventType":"VM.START","ctxUserId":"3","httpmethod":"GET","_":"1383864701141","ctxAccountId":"3","ctxStartEventId":"138"},
>  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
> processStatus: 0, resultCode: 0, result: null, initMsid: 7471666038533, 
> completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> 2013-11-07 14:43:15,756 DEBUG [cloud.api.ApiServlet] (catalina-exec-7:null) 
> ===END===  10.214.4.75 -- GET  
> command=startVirtualMachine&id=02e909e8-f28f-40b7-9830-ea68e44aa0ed&response=json&sessionkey=Xj52fOTZWqAUf0nnCSUZlybSVfI%3D&_=1383864701141
> 2013-11-07 14:43:15,758 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Executing 
> org.apache.cloudstack.api.command.user.vm.StartVMCmd for job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]
> 2013-11-07 14:43:15,774 DEBUG [cloud.user.AccountManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Access to 
> VM[User|TestVM-1] granted to Acct[609f0727-6b59-45aa-9bc2-14877b39b4e1-test] 
> by DomainChecker_EnhancerByCloudStack_a4e5904f
> 2013-11-07 14:43:15,784 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,788 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,811 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A1 for VM Id: 9
> 2013-11-07 14:43:15,813 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 1 to avoid set, since VM 11 is present on the host
> 2013-11-07 14:43:15,814 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A2 for VM Id: 9
> 2013-11-07 14:43:15,816 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 6 to avoid set, since VM 12 is present on the host
> 2013-11-07 14:43:15,835 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Deploy 
> avoids pods: [], clusters: [], hosts: [1, 6]
> 2013-11-07 14:43:15,836 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> DeploymentPlanner allocation algorithm: 
> com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_9c110ba0@49591f8a
> 2013-11-07 14:43:15,837 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Trying to 
> allocate a host and storage pools from dc:1, pod:1,cluster:3, requested cpu: 
> 500, requested ram: 536870912
> 2013-11-07 14:43:15,837 DEBUG [cloud.deploy.

[jira] [Commented] (CLOUDSTACK-5090) Anti-Affinity: VM fails to start on a cluster belonging to a different pod.

2013-12-05 Thread Prachi Damle (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840433#comment-13840433
 ] 

Prachi Damle commented on CLOUDSTACK-5090:
--

Discussed with Will and Alex:

Earlier we limited a VM to its pod in a basic zone, and storage migration was 
not available to migrate volumes. Neither of these is a limitation any longer, 
so we should be able to move a VM out of its pod.

However, no longer saving the podId in the VM entry may cause other 
regressions in the system, since so far we have assumed that podId is 
non-null.

We also need to distinguish the case where a VM's podId is set during VM 
creation by passing in a deployment plan; in that case the podId should always 
be used for further restarts.
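The distinction in the last paragraph — a podId pinned by an explicit deployment plan versus one merely recorded by the system — could look roughly like this. The helper and its names are hypothetical, not the actual DeploymentPlanningManagerImpl logic:

```java
// Hypothetical sketch: decide which pod, if any, a restart must stay in.
// A podId fixed by an explicit deployment plan at creation time is binding;
// a podId merely recorded by the system is not, so the planner may search
// all pods (allowing anti-affinity to place the VM in a different pod).
public class PodSelection {

    static Long podToUseForRestart(Long plannerPinnedPodId, Long lastRecordedPodId) {
        if (plannerPinnedPodId != null) {
            return plannerPinnedPodId; // deployment plan pinned the pod
        }
        // lastRecordedPodId is deliberately ignored here: a system-recorded
        // pod is a hint, not a constraint
        return null; // no restriction: any pod may be considered
    }

    public static void main(String[] args) {
        System.out.println(podToUseForRestart(3L, 1L));   // 3
        System.out.println(podToUseForRestart(null, 1L)); // null
    }
}
```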





> Anti-Affinity: VM fails to start on a cluster belonging to a different pod.
> ---
>
> Key: CLOUDSTACK-5090
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5090
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
>Reporter: Chandan Purushothama
>Assignee: Prachi Damle
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: mysql_cloudstack_db_dumps.zip
>
>
> Test scenario:
> The setup has 3 clusters with 1 host each. One of the clusters belongs to a 
> different pod:
> 1. As regular user U1, create multiple anti-affinity groups.
> 2. Deploy 2 VMs, say Vm11 (host1) and Vm12 (host2), using affinity group A1.
> 3. Deploy 1 VM, say Vm21 (host1), using affinity group A2.
> 4. Stop Vm11.
> 5. Update the list of affinityGroups for this VM to "A1" and "A2".
> 6. Start the VM.
> ===
> Start VM Job:
> ===
> 2013-11-07 14:43:15,755 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (catalina-exec-7:null) submit async job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ], details: AsyncJobVO {id:40, userId: 
> 3, accountId: 3, sessionKey: null, instanceType: VirtualMachine, instanceId: 
> 9, cmd: org.apache.cloudstack.api.command.user.vm.StartVMCmd, cmdOriginator: 
> null, cmdInfo: 
> {"response":"json","id":"02e909e8-f28f-40b7-9830-ea68e44aa0ed","sessionkey":"Xj52fOTZWqAUf0nnCSUZlybSVfI\u003d","cmdEventType":"VM.START","ctxUserId":"3","httpmethod":"GET","_":"1383864701141","ctxAccountId":"3","ctxStartEventId":"138"},
>  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
> processStatus: 0, resultCode: 0, result: null, initMsid: 7471666038533, 
> completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> 2013-11-07 14:43:15,756 DEBUG [cloud.api.ApiServlet] (catalina-exec-7:null) 
> ===END===  10.214.4.75 -- GET  
> command=startVirtualMachine&id=02e909e8-f28f-40b7-9830-ea68e44aa0ed&response=json&sessionkey=Xj52fOTZWqAUf0nnCSUZlybSVfI%3D&_=1383864701141
> 2013-11-07 14:43:15,758 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Executing 
> org.apache.cloudstack.api.command.user.vm.StartVMCmd for job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]
> 2013-11-07 14:43:15,774 DEBUG [cloud.user.AccountManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Access to 
> VM[User|TestVM-1] granted to Acct[609f0727-6b59-45aa-9bc2-14877b39b4e1-test] 
> by DomainChecker_EnhancerByCloudStack_a4e5904f
> 2013-11-07 14:43:15,784 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,788 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,811 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A1 for VM Id: 9
> 2013-11-07 14:43:15,813 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 1 to avoid set, since VM 11 is present on the host
> 2013-11-07 14:43:15,814 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A2 for VM Id: 9
> 2013-11-07 14:43:15,816 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 6 to avoid set, since VM 12 is present on the host
> 2013-11-07 14:43:15,835 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16

[jira] [Commented] (CLOUDSTACK-5090) Anti-Affinity: VM fails to start on a cluster belonging to a different pod.

2013-12-05 Thread Prachi Damle (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840429#comment-13840429
 ] 

Prachi Damle commented on CLOUDSTACK-5090:
--

This is not a critical issue. We do not guarantee host anti-affinity. 
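The avoid-set behaviour visible in the logs quoted in this issue — hosts 1 and 6 excluded because anti-affinity group members run there — can be sketched as follows. The class and method names are illustrative, not the real HostAntiAffinityProcessor:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of host anti-affinity: every host that already runs a
// member of any of the VM's anti-affinity groups goes into the avoid set.
public class AntiAffinitySketch {

    static Set<Long> buildAvoidSet(List<Set<Long>> groupMemberVmIds,
                                   Map<Long, Long> vmIdToHostId) {
        Set<Long> avoid = new HashSet<>();
        for (Set<Long> group : groupMemberVmIds) {
            for (Long vmId : group) {
                Long hostId = vmIdToHostId.get(vmId); // null if the VM is stopped
                if (hostId != null) {
                    avoid.add(hostId);
                }
            }
        }
        return avoid;
    }

    public static void main(String[] args) {
        // Mirrors the scenario in this issue: Vm11 on host 1, Vm12 on host 6
        Set<Long> avoid = buildAvoidSet(
                List.of(Set.of(11L, 12L), Set.of(21L)),
                Map.of(11L, 1L, 12L, 6L, 21L, 1L));
        System.out.println(avoid); // contains 1 and 6
    }
}
```

Note that nothing here constrains which pod the remaining hosts belong to — which is exactly why the planner can end up on a cluster in a different pod.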

> Anti-Affinity: VM fails to start on a cluster belonging to a different pod.
> ---
>
> Key: CLOUDSTACK-5090
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5090
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.1
>Reporter: Chandan Purushothama
>Assignee: Prachi Damle
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: mysql_cloudstack_db_dumps.zip
>
>
> Test scenario:
> The setup has 3 clusters with 1 host each. One of the clusters belongs to a 
> different pod:
> 1. As regular user U1, create multiple anti-affinity groups.
> 2. Deploy 2 VMs, say Vm11 (host1) and Vm12 (host2), using affinity group A1.
> 3. Deploy 1 VM, say Vm21 (host1), using affinity group A2.
> 4. Stop Vm11.
> 5. Update the list of affinityGroups for this VM to "A1" and "A2".
> 6. Start the VM.
> ===
> Start VM Job:
> ===
> 2013-11-07 14:43:15,755 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (catalina-exec-7:null) submit async job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ], details: AsyncJobVO {id:40, userId: 
> 3, accountId: 3, sessionKey: null, instanceType: VirtualMachine, instanceId: 
> 9, cmd: org.apache.cloudstack.api.command.user.vm.StartVMCmd, cmdOriginator: 
> null, cmdInfo: 
> {"response":"json","id":"02e909e8-f28f-40b7-9830-ea68e44aa0ed","sessionkey":"Xj52fOTZWqAUf0nnCSUZlybSVfI\u003d","cmdEventType":"VM.START","ctxUserId":"3","httpmethod":"GET","_":"1383864701141","ctxAccountId":"3","ctxStartEventId":"138"},
>  cmdVersion: 0, callbackType: 0, callbackAddress: null, status: 0, 
> processStatus: 0, resultCode: 0, result: null, initMsid: 7471666038533, 
> completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> 2013-11-07 14:43:15,756 DEBUG [cloud.api.ApiServlet] (catalina-exec-7:null) 
> ===END===  10.214.4.75 -- GET  
> command=startVirtualMachine&id=02e909e8-f28f-40b7-9830-ea68e44aa0ed&response=json&sessionkey=Xj52fOTZWqAUf0nnCSUZlybSVfI%3D&_=1383864701141
> 2013-11-07 14:43:15,758 DEBUG [cloud.async.AsyncJobManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Executing 
> org.apache.cloudstack.api.command.user.vm.StartVMCmd for job-40 = [ 
> c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]
> 2013-11-07 14:43:15,774 DEBUG [cloud.user.AccountManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Access to 
> VM[User|TestVM-1] granted to Acct[609f0727-6b59-45aa-9bc2-14877b39b4e1-test] 
> by DomainChecker_EnhancerByCloudStack_a4e5904f
> 2013-11-07 14:43:15,784 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,788 DEBUG [cloud.network.NetworkModelImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Service 
> SecurityGroup is not supported in the network id=205
> 2013-11-07 14:43:15,811 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A1 for VM Id: 9
> 2013-11-07 14:43:15,813 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 1 to avoid set, since VM 11 is present on the host
> 2013-11-07 14:43:15,814 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> Processing affinity group A2 for VM Id: 9
> 2013-11-07 14:43:15,816 DEBUG [cloudstack.affinity.HostAntiAffinityProcessor] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Added 
> host 6 to avoid set, since VM 12 is present on the host
> 2013-11-07 14:43:15,835 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Deploy 
> avoids pods: [], clusters: [], hosts: [1, 6]
> 2013-11-07 14:43:15,836 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) 
> DeploymentPlanner allocation algorithm: 
> com.cloud.deploy.FirstFitPlanner_EnhancerByCloudStack_9c110ba0@49591f8a
> 2013-11-07 14:43:15,837 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] 
> (Job-Executor-31:job-40 = [ c6b67100-24d5-4eaf-8104-a93c3c75ce16 ]) Trying to 
> allocate a host and storage pools

[jira] [Updated] (CLOUDSTACK-5385) Management server is not able to start when there are ~15 snapshot policies.

2013-12-05 Thread Sangeetha Hariharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Hariharan updated CLOUDSTACK-5385:


Attachment: test.rar

> Management server is not able to start when there are ~15 snapshot policies.
> --
>
> Key: CLOUDSTACK-5385
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5385
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Priority: Blocker
> Fix For: 4.3.0
>
> Attachments: test.rar
>
>
> Management server is not able to start when there are ~15 snapshot policies.
> The management server was up and running fine.
> I had snapshot policies configured for 15 ROOT volumes.
> I stopped and started the management server.
> The management server does not start up successfully.
> The following is what I see in the management server logs:
> It is stuck after this:
> 2013-12-04 20:35:24,132 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server 112516401760401 is being started
> 2013-12-04 20:35:24,138 INFO  [c.c.c.ClusterManagerImpl] (main:null) 
> Management server (host id : 1) is being started at 10.223.49.5:9090
> 2013-12-04 20:35:24,152 INFO  [c.c.c.ClusterManagerImpl] (main:null) Cluster 
> manager was started successfully
> 2013-12-04 20:35:24,153 INFO  [c.c.s.s.SecondaryStorageManagerImpl] 
> (main:null) Start secondary storage vm manager
> 2013-12-04 20:35:24,159 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-0:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-1:null) Starting work
> 2013-12-04 20:35:24,165 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-4:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-3:null) Starting work
> 2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
> (HA-Worker-2:null) Starting work
> 2013-12-04 20:35:24,236 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 1 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,297 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 2 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,314 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 3 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,334 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 4 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,354 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 5 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,379 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 6 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,434 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 7 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,454 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 8 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,472 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 9 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,493 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 10 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,510 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 11 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,526 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 13 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,543 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 14 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:24,565 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
> Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 15 is 
> 2013-12-05 01:40:00 GMT
> 2013-12-04 20:35:37,364 INFO  [c.c.u.c.ComponentContext] (main:null) 
> Configuring 
> com.cloud.bridge.persist.dao.OfferingDaoImpl_EnhancerByCloudStack_e5b26cda
> 2013-12-0

[jira] [Updated] (CLOUDSTACK-4201) listServiceOfferings API needs to be able to take virtualmachineid of SystemVM and returns service offerings available for the vm to change service offering

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-4201:


Priority: Major  (was: Critical)

> listServiceOfferings API needs to be able to take virtualmachineid of 
> SystemVM and returns service offerings available for the vm to change service 
> offering
> 
>
> Key: CLOUDSTACK-4201
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4201
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: listServiceOfferings API needs to be able to take 
> virtualmachineid of SystemVM and returns service offerings available for the 
> vm to change service offering. If vm is running only scale up service 
> offering should be presented. If vm is stopped all service offering should be 
> shown
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-4201) listServiceOfferings API needs to be able to take virtualmachineid of SystemVM and return service offerings available for the vm to change service offering

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-4201:


Summary: listServiceOfferings API needs to be able to take virtualmachineid 
of SystemVM and return service offerings available for the vm to change service 
offering  (was: listServiceOfferings API needs to be able to take 
virtualmachineid of SystemVM and returns service offerings available for the vm 
to change service offering)

> listServiceOfferings API needs to be able to take virtualmachineid of 
> SystemVM and return service offerings available for the vm to change service 
> offering
> ---
>
> Key: CLOUDSTACK-4201
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4201
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: listServiceOfferings API needs to be able to take 
> virtualmachineid of SystemVM and returns service offerings available for the 
> vm to change service offering. If vm is running only scale up service 
> offering should be presented. If vm is stopped all service offering should be 
> shown
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
> Fix For: 4.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-4201) listServiceOfferings API needs to be able to take virtualmachineid of SystemVM and returns service offerings available for the vm to change service offering

2013-12-05 Thread Nitin Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840417#comment-13840417
 ] 

Nitin Mehta commented on CLOUDSTACK-4201:
-

This is a good-to-have, but not critical. Downgrading the priority to Major.

> listServiceOfferings API needs to be able to take virtualmachineid of 
> SystemVM and returns service offerings available for the vm to change service 
> offering
> 
>
> Key: CLOUDSTACK-4201
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4201
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.2.0
> Environment: listServiceOfferings API needs to be able to take 
> virtualmachineid of SystemVM and returns service offerings available for the 
> vm to change service offering. If vm is running only scale up service 
> offering should be presented. If vm is stopped all service offering should be 
> shown
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
>




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CLOUDSTACK-5279) UI - Not able to list detail view of volumes.

2013-12-05 Thread Jessica Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jessica Wang resolved CLOUDSTACK-5279.
--

Resolution: Incomplete

Sangeetha,

Could you please provide a screenshot and a database dump?
Thank you.

Jessica

> UI - Not able to list detail view of volumes.
> -
>
> Key: CLOUDSTACK-5279
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5279
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.0
> Environment: Build from 4.3
>Reporter: Sangeetha Hariharan
>Assignee: Jessica Wang
>Priority: Critical
> Fix For: 4.3.0
>
>
> UI - Not able to list detail view of volumes.
> From storage-> list Volume , select any volume to list detail view of the 
> volume.
> UI keeps spinning forever.
> Following error seen:
> TypeError: args.context.volumes is undefined
> url: createURL("listVolumes&id=" + args.context.volumes[0].id),
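The error indicates the detail view is opened with a context that lacks a `volumes` array. A minimal defensive sketch of guarding the URL construction (hypothetical helper names; `createURL` below is a stand-in, not the actual CloudStack UI implementation):

```javascript
// Stand-in for the CloudStack UI's createURL helper.
function createURL(params) {
  return '/client/api?command=' + params;
}

// Build the listVolumes detail URL only when the context actually carries
// a selected volume; otherwise avoid the TypeError and request the full list.
function volumeDetailUrl(context) {
  var volumes = context && context.volumes;
  if (Array.isArray(volumes) && volumes.length > 0 && volumes[0].id) {
    return createURL('listVolumes&id=' + volumes[0].id);
  }
  return createURL('listVolumes');
}
```

With this guard, `volumeDetailUrl({})` returns the unfiltered list URL instead of throwing `TypeError: args.context.volumes is undefined`.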



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CLOUDSTACK-5303) "snapshot" count and "secondary_storage" count are not correct in resource_count table

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840405#comment-13840405
 ] 

ASF subversion and git services commented on CLOUDSTACK-5303:
-

Commit fa43987e43162c6f87ca8e341afe55e4f45ef058 in branch refs/heads/4.2 from 
[~weizhou]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=fa43987 ]

CLOUDSTACK-5303: fix incorrect resource count (snapshot, secondary_storage)


> "snapshot" count and "secondary_storage" count  are not correct in 
> resource_count table
> ---
>
> Key: CLOUDSTACK-5303
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5303
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.2.0, 4.3.0
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: 4.2.1, 4.3.0
>
>
> After doing some testing on snapshots (take, delete, create template), the 
> values in resource_count look wrong.
> mysql> select * from resource_count where type="snapshot";
> +-----+------------+-----------+----------+-------+
> | id  | account_id | domain_id | type     | count |
> +-----+------------+-----------+----------+-------+
> |   8 |       NULL |         1 | snapshot |     2 |
> |  40 |       NULL |         2 | snapshot |     0 |
> |  52 |       NULL |         3 | snapshot |     0 |
> | 100 |       NULL |         4 | snapshot |     0 |
> | 184 |       NULL |         5 | snapshot |     0 |
> | 220 |       NULL |         6 | snapshot |     1 |
> | 280 |       NULL |         7 | snapshot |     1 |
> | 412 |       NULL |         8 | snapshot |     0 |
> |  16 |          1 |      NULL | snapshot |     0 |
> |  28 |          2 |      NULL | snapshot |     0 |
> |  64 |          3 |      NULL | snapshot |     0 |
> |  76 |          4 |      NULL | snapshot |     0 |
> | 172 |         11 |      NULL | snapshot |     0 |
> | 196 |         12 |      NULL | snapshot |     0 |
> | 208 |         13 |      NULL | snapshot |     0 |
> | 232 |         14 |      NULL | snapshot |     1 |
> | 244 |         15 |      NULL | snapshot |     0 |
> | 292 |         18 |      NULL | snapshot |     1 |
> | 304 |         19 |      NULL | snapshot |     0 |
> | 316 |         20 |      NULL | snapshot |     0 |
> | 328 |         21 |      NULL | snapshot |     0 |
> | 340 |         22 |      NULL | snapshot |     0 |
> | 352 |         23 |      NULL | snapshot |     0 |
> | 376 |         25 |      NULL | snapshot |     0 |
> | 388 |         26 |      NULL | snapshot |     0 |
> | 400 |         27 |      NULL | snapshot |     0 |
> | 424 |         28 |      NULL | snapshot |     1 |
> | 436 |         29 |      NULL | snapshot |    -1 |
> | 448 |         30 |      NULL | snapshot |     0 |
> | 460 |         31 |      NULL | snapshot |     0 |
> | 472 |         32 |      NULL | snapshot |     0 |
> +-----+------------+-----------+----------+-------+
> 31 rows in set (0.00 sec)
> mysql> select * from resource_count where type="secondary_storage";
> +-----+------------+-----------+-------------------+--------------+
> | id  | account_id | domain_id | type              | count        |
> +-----+------------+-----------+-------------------+--------------+
> |   4 |       NULL |         1 | secondary_storage | 567941002752 |
> |  48 |       NULL |         2 | secondary_storage |            0 |
> |  60 |       NULL |         3 | secondary_storage |            0 |
> | 108 |       NULL |         4 | secondary_storage |            0 |
> | 192 |       NULL |         5 | secondary_storage |            0 |
> | 228 |       NULL |         6 | secondary_storage | 300647710720 |
> | 288 |       NULL |         7 | secondary_storage | 128849018880 |
> | 420 |       NULL |         8 | secondary_storage |            0 |
> |  24 |          1 |      NULL | secondary_storage |   8866096640 |
> |  36 |          2 |      NULL | secondary_storage | 172527849472 |
> |  72 |          3 |      NULL | secondary_storage |            0 |
> |  84 |          4 |      NULL | secondary_storage |            0 |
> | 180 |         11 |      NULL | secondary_storage |            0 |
> | 204 |         12 |      NULL | secondary_storage |            0 |
> | 216 |         13 |      NULL | secondary_storage |            0 |
> | 240 |         14 |      NULL | secondary_storage | 257698037760 |
> | 252 |         15 |      NULL | secondary_storage |            0 |
> | 300 |         18 |      NULL | secondary_storage | 128849018880 |
> | 312 |         19 |      NULL | secondary_storage |            0 |
> | 324 |         20 |      NULL | secondary_storage |            0 |
> | 336 |         21 |      NULL | secondary_storage |            0 |
> | 348 |         22 |

[jira] [Created] (CLOUDSTACK-5385) Management server not able to start when there are ~15 snapshot policies.

2013-12-05 Thread Sangeetha Hariharan (JIRA)
Sangeetha Hariharan created CLOUDSTACK-5385:
---

 Summary: Management server not able to start when there are ~15 snapshot 
policies.
 Key: CLOUDSTACK-5385
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5385
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.0
 Environment: Build from 4.3
Reporter: Sangeetha Hariharan
Priority: Blocker
 Fix For: 4.3.0


Management server is not able to start when there are ~15 snapshot policies.

Management server was up and running fine.
I had snapshot policies configured for 15 ROOT volumes.
Stopped and started the management server.

Management server does not start up successfully.

Following is what I see in the management server logs:
It is stuck after this:


2013-12-04 20:35:24,132 INFO  [c.c.c.ClusterManagerImpl] (main:null) Management 
server 112516401760401 is being started
2013-12-04 20:35:24,138 INFO  [c.c.c.ClusterManagerImpl] (main:null) Management 
server (host id : 1) is being started at 10.223.49.5:9090
2013-12-04 20:35:24,152 INFO  [c.c.c.ClusterManagerImpl] (main:null) Cluster 
manager was started successfully
2013-12-04 20:35:24,153 INFO  [c.c.s.s.SecondaryStorageManagerImpl] (main:null) 
Start secondary storage vm manager
2013-12-04 20:35:24,159 INFO  [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-0:null) Starting work
2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-1:null) Starting work
2013-12-04 20:35:24,165 INFO  [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-4:null) Starting work
2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-3:null) Starting work
2013-12-04 20:35:24,162 INFO  [c.c.h.HighAvailabilityManagerImpl] 
(HA-Worker-2:null) Starting work
2013-12-04 20:35:24,236 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 1 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,297 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 2 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,314 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 3 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,334 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 4 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,354 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 5 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,379 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 6 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,434 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 7 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,454 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 8 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,472 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 9 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,493 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 10 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,510 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 11 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,526 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 13 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,543 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 14 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:24,565 DEBUG [c.c.s.s.SnapshotSchedulerImpl] (main:null) 
Current time is 2013-12-05 01:35:03 GMT. NextScheduledTime of policyId 15 is 
2013-12-05 01:40:00 GMT
2013-12-04 20:35:37,364 INFO  [c.c.u.c.ComponentContext] (main:null) 
Configuring 
com.cloud.bridge.persist.dao.OfferingDaoImpl_EnhancerByCloudStack_e5b26cda
2013-12-04 20:35:37,384 INFO  [c.c.u.c.ComponentContext] (main:null) 
Configuring 
com.cloud.bridge.persist.dao.CloudStackAccountDaoImpl_EnhancerByCloudStack_50a7fd80
2013-12-04 20:35:37,387 INFO  [c.c.u.c.ComponentContext] (main:null) 
Configuring 
com.cloud.bridge.persist.dao.SMetaDaoImpl_EnhancerByCloudStack_2595cd2
2013-12-04 20:35:37,388 INFO  [c.c.u.c.ComponentContext] (main:null) 
Configuring

[jira] [Commented] (CLOUDSTACK-999) Plugin to provide Hyper-V 2012 support

2013-12-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13840397#comment-13840397
 ] 

ASF subversion and git services commented on CLOUDSTACK-999:


Commit db8e2e5552c108e45e7a54c2f38bdb87ffadf837 in branch refs/heads/master 
from [~jessicawang]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=db8e2e5 ]

CLOUDSTACK-999: HyperV - UI > Infrastructure > zone detail > physical network > 
Guest > Details tab > add HyperV Traffic Label field.


> Plugin to provide Hyper-V 2012 support
> --
>
> Key: CLOUDSTACK-999
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-999
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, Template, UI
>Affects Versions: Future
> Environment: Hyper-V 2012 is available on releases of the Windows 
> operating system from 2012 onwards, e.g. Windows Server 2012 and Hyper-V 
> Server 2012.
> The plugin will execute at least in part on the CloudStack management server.
>Reporter: Donal Lafferty
>Assignee: Donal Lafferty
>  Labels: Hyper-V, newbie
> Fix For: 4.3.0
>
> Attachments: Jessica_UI_change_1.PNG, Jessica_UI_change_2.PNG, 
> jessica_UI_change_3.jpg, jessica_hyperv_edit_traffic_type_of_guest.PNG, 
> jessica_hyperv_edit_traffic_type_of_management.PNG, 
> jessica_hyperv_edit_traffic_type_of_public.PNG
>
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hyper-V+2012+%283.0%29+Support
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Original+Feature+Spec
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/BootArgs+Support+for+Hyper-V+with+KVP+Data+Exchange
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/CIFS+Support
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Progress



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5384) UI dataProviders are unable to differentiate between load and refresh context

2013-12-05 Thread Chris Suich (JIRA)
Chris Suich created CLOUDSTACK-5384:
---

 Summary: UI dataProviders are unable to differentiate between load 
and refresh context
 Key: CLOUDSTACK-5384
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5384
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.3.0
Reporter: Chris Suich
Assignee: Chris Suich
 Fix For: 4.3.0


UI dataProviders are invoked both when a listView is first loaded and when it 
is refreshed; however, they are unable to tell the difference between the two 
invocations.
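One conceivable way to disambiguate the two cases (purely a sketch with hypothetical names; the actual listView widget API may differ) is to wrap the dataProvider so each invocation is tagged with its reason:

```javascript
// Wrap a dataProvider so every call carries an explicit `reason`:
// 'load' for the first invocation, 'refresh' for all subsequent ones.
// The wrapped provider can then, e.g., skip expensive work on refresh.
function makeDataProvider(fetch) {
  var loaded = false;
  return function (args) {
    var reason = loaded ? 'refresh' : 'load';
    loaded = true;
    return fetch(Object.assign({}, args, { reason: reason }));
  };
}
```

With this wrapper, the provider's first call sees `reason: 'load'` and later calls see `reason: 'refresh'`.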



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CLOUDSTACK-3658) [DB Upgrade] - Deprecate several old object storage tables and columns as a part of 41-42 db upgrade

2013-12-05 Thread Nitin Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin Mehta updated CLOUDSTACK-3658:


Summary: [DB Upgrade] - Deprecate several old object storage tables and 
columns as a part of 41-42 db upgrade  (was: [DB Upgrade] - Deprecate several 
old object storage tables and columes as a part of 41-42 db upgrade)

> [DB Upgrade] - Deprecate several old object storage tables and columns as a 
> part of 41-42 db upgrade
> 
>
> Key: CLOUDSTACK-3658
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3658
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Install and Setup, Storage Controller
>Affects Versions: 4.2.0
>Reporter: Nitin Mehta
>Assignee: Nitin Mehta
>Priority: Critical
> Fix For: 4.3.0
>
> Attachments: cloud-after-upgrade.dmp
>
>
> We should deprecate the following db tables and table columns as a part of 
> the 41-42 db upgrade due to the recent object storage refactoring:
> -Upload
> -s3
> -swift
> -template_host_ref
> -template_s3_ref
> -template_swift_ref
> -volume_host_ref
> -columns (s3_id, swift_id, sechost_id) from snapshots table.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CLOUDSTACK-5383) Multiselect actions are not reset when a multiselect action is performed

2013-12-05 Thread Chris Suich (JIRA)
Chris Suich created CLOUDSTACK-5383:
---

 Summary: Multiselect actions are not reset when a multiselect 
action is performed
 Key: CLOUDSTACK-5383
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5383
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: UI
Affects Versions: 4.3.0
Reporter: Chris Suich
Priority: Minor


When a multiselect action is performed in the UI, the header is not reset to 
only show non-multiselect actions once the action completes.

For example, on UI instances:
1) Select multiple items (note that 'Add Instance' is removed and 'Take VM 
Snapshot' is added)
2) Take a snapshot of the VMs
3) When the operation completes, notice that the header still shows 'Take VM 
Snapshot' even though no rows are selected.
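The reset described in the steps above can be sketched as follows (hypothetical names, not the actual listView implementation): when a multiselect action completes, the selection is cleared and the visible header actions are recomputed from the now-empty selection.

```javascript
// Return the header actions that should be visible for a given selection.
// Multiselect actions (e.g. 'Take VM Snapshot') appear only while rows are
// selected; non-multiselect actions (e.g. 'Add Instance') only while none are.
function headerActions(selectedRows, actions) {
  var multiselect = selectedRows.length > 0;
  return actions
    .filter(function (a) { return a.isMultiSelect === multiselect; })
    .map(function (a) { return a.label; });
}

// After a multiselect action completes, clear the selection and recompute
// the header; the bug is that this recomputation never happens.
function onActionComplete(state) {
  state.selectedRows = [];
  state.header = headerActions(state.selectedRows, state.actions);
  return state;
}
```

After `onActionComplete`, the header again shows only the non-multiselect actions, matching the expected behavior in step 3.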



--
This message was sent by Atlassian JIRA
(v6.1#6144)

