[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061853#comment-15061853
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


Github user resmo commented on the pull request:

https://github.com/apache/cloudstack/pull/1248#issuecomment-165410616
  
@sureshanaparti Ok, I see. Then I would indeed use old, but suggest 
`oldest`: `findExistentHypervisorHostInCluster`. Would you mind rebasing to 
get a clean PR without merge commits and reverts?


> [VMware DRS] Adding new host to DRS cluster does not participate in load 
> balancing
> --
>
> Key: CLOUDSTACK-9175
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9175
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VMware
>Affects Versions: 4.5.2
>Reporter: Suresh Kumar Anaparti
>Assignee: Suresh Kumar Anaparti
>
> When a new VMware host is added into a cluster, CloudStack, by default, 
> doesn't create all the port groups present in the cluster. And since it 
> doesn't have all the necessary networking port groups (existing VMs' port 
> groups), it is not eligible to participate in DRS load balancing or HA.
> Steps:
> 1. Have a DRS and HA cluster in fully automated mode, with two hosts H1 and 
> H2 created in vCenter.
> 2. Configure this cluster in CloudStack and create a couple of VMs.
> 3. Start stressing the hosts by running CPU-hogging scripts in each of 
> the VMs.
> 4. Enable maintenance mode on one of the hosts, say H1, from CloudStack.
> 5. Also, quickly enable maintenance mode on host H1 from vCenter.
> (This should migrate all the VMs to host H2.) Make sure none of the VMs are 
> present on host H1.
> 6. Add host H3 into the DRS cluster from vCenter and from CloudStack as well.
> 7. At this point, the load is definitely imbalanced. This can be verified 
> from vCenter (click on the cluster -> go to the Summary tab -> under the vSphere DRS 
> section, it should show 'Load imbalanced').
> Now, as per DRS rules, the load should be balanced across all the available 
> hosts.
> In this case, even after adding the new host, the load is imbalanced. 
> The reason for the load imbalance is that VMs (created from CloudStack) are not 
> eligible to migrate to the new host, because the networks or cloud port groups are 
> not available on the new host H3 (except for the private one).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061860#comment-15061860
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


Github user sureshanaparti commented on the pull request:

https://github.com/apache/cloudstack/pull/1248#issuecomment-165412102
  
@resmo, You mean findOldestExistentHypervisorHostInCluster. I'm OK with it. 
Shall I create a new PR without all these merges?




[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061864#comment-15061864
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


Github user resmo commented on the pull request:

https://github.com/apache/cloudstack/pull/1248#issuecomment-165413590
  
@sureshanaparti That would be great. I appreciate your work on the VMware parts!




[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061867#comment-15061867
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


Github user sureshanaparti commented on the pull request:

https://github.com/apache/cloudstack/pull/1248#issuecomment-165414155
  
@resmo Sure. I'll do that. Thanks.




[jira] [Commented] (CLOUDSTACK-9185) [VMware DRS] VM sync failed with exception due to out-of-band changes

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061881#comment-15061881
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9185:


Github user resmo commented on the pull request:

https://github.com/apache/cloudstack/pull/1256#issuecomment-165420030
  
LGTM


> [VMware DRS] VM sync failed with exception due to out-of-band changes
> -
>
> Key: CLOUDSTACK-9185
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9185
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, VMware
>Affects Versions: 4.5.2
>Reporter: Suresh Kumar Anaparti
>
> 1. Configure a VMware advanced zone.
> 2. Create a cluster with 2 hosts.
> 3. Enable DRS in the cluster from vCenter.
> 4. Configure a DRS rule – VMs grouped together – for the SSVM and the Router VM.
> 5. Make sure both the SSVM and the Router VM are on the same host.
> 6. Migrate the Router VM from CloudStack to another host.
> 7. Check the VM states.
> Actual result:
> After the VM migrated to another host successfully, DRS in the background moved the 
> Router VM back to the original host, as per the configured DRS rule (VMs together).
> CS identified the out-of-band change and failed with the exception below:
> 2015-11-24 00:45:06,298 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-343:ctx-4d811a39) (logid:46ceb8da) VM state report. host: 
> 5, vm id: 48, power state: PowerOn
> 2015-11-24 00:45:06,304 DEBUG [c.c.v.VirtualMachinePowerStateSyncImpl] 
> (DirectAgentCronJob-343:ctx-4d811a39) (logid:46ceb8da) VM state report is 
> updated. host: 5, vm id: 48, power state: PowerOn
> 2015-11-24 00:45:06,313 INFO [c.c.v.VirtualMachineManagerImpl] 
> (DirectAgentCronJob-343:ctx-4d811a39) (logid:46ceb8da) Detected out of band 
> VM migration from host 1 to host 5
> 2015-11-24 00:45:06,324 ERROR [o.a.c.f.m.MessageDispatcher] 
> (DirectAgentCronJob-343:ctx-4d811a39) (logid:46ceb8da) Unexpected exception 
> when calling 
> com.cloud.vm.ClusteredVirtualMachineManagerImpl.HandlePowerStateReport
> java.lang.reflect.InvocationTargetException
> at sun.reflect.GeneratedMethodAccessor228.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.dispatch(MessageDispatcher.java:75)
> at 
> org.apache.cloudstack.framework.messagebus.MessageDispatcher.onPublishMessage(MessageDispatcher.java:45)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase$SubscriptionNode.notifySubscribers(MessageBusBase.java:441)
> at 
> org.apache.cloudstack.framework.messagebus.MessageBusBase.publish(MessageBusBase.java:178)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processReport(VirtualMachinePowerStateSyncImpl.java:87)
> at 
> com.cloud.vm.VirtualMachinePowerStateSyncImpl.processHostVmStatePingReport(VirtualMachinePowerStateSyncImpl.java:70)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.processCommands(VirtualMachineManagerImpl.java:2840)
> at 
> com.cloud.agent.manager.AgentManagerImpl.handleCommands(AgentManagerImpl.java:309)
> at 
> com.cloud.agent.manager.DirectAgentAttache$PingTask.runInContext(DirectAgentAttache.java:192)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4b366ec2 
> rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@7f5b2a5e[Terminated, pool 
> size = 0, active thread

[jira] [Commented] (CLOUDSTACK-9099) SecretKey is returned from the APIs

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061916#comment-15061916
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9099:


Github user kansal commented on the pull request:

https://github.com/apache/cloudstack/pull/1152#issuecomment-165426036
  
Have updated this PR. Instead of directly removing the secret key from the 
response, I have deprecated it, as many regression tests were using the secret key 
from those APIs for authentication. Maybe from the next major release we can remove 
it. 

@DaanHoogland  Marvin test cases are on the way!


> SecretKey is returned from the APIs
> ---
>
> Key: CLOUDSTACK-9099
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9099
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Kshitij Kansal
>Assignee: Kshitij Kansal
>
> The secretKey parameter is returned from the following APIs:
> createAccount
> createUser
> disableAccount
> disableUser
> enableAccount
> enableUser
> listAccounts
> listUsers
> lockAccount
> lockUser
> registerUserKeys
> updateAccount
> updateUser





[jira] [Commented] (CLOUDSTACK-8302) Cleanup snapshot on KVM with RBD

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062103#comment-15062103
 ] 

ASF GitHub Bot commented on CLOUDSTACK-8302:


Github user voloshanenko commented on the pull request:

https://github.com/apache/cloudstack/pull/1230#issuecomment-165465027
  
Guys, I see that the builds hang... Can you please re-run them?


> Cleanup snapshot on KVM with RBD
> 
>
> Key: CLOUDSTACK-8302
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8302
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: KVM, Snapshot, Storage Controller
>Affects Versions: 4.4.0, 4.4.1, 4.4.2
> Environment: CloudStack 4.4.2 + KVM on CentOS 6.6 + Ceph/RBD 0.80.8
>Reporter: Star Guo
>Assignee: Wido den Hollander
>Priority: Critical
>
> I just built a lab with CloudStack 4.4.2 + CentOS 6.6 KVM + Ceph/RBD 0.80.8.
> I deployed an instance on RBD and created snapshots of the ROOT volume. When 
> deleting a snapshot, the UI shows OK, but the snapshot of the volume in the RBD 
> pool still exists.
> And I found this code in 
> com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java: 
> …
> @Override
> public Answer deleteSnapshot(DeleteCommand cmd) {
> return new Answer(cmd);
> }
> …
> deleteSnapshot() is not implemented. And I also found this code:
> ...
> @Override
> public Answer createTemplateFromSnapshot(CopyCommand cmd) {
> return null;  //To change body of implemented methods use File | 
> Settings | File Templates.
> }
> ...
> Neither is createTemplateFromSnapshot(). I looked for it in the MASTER branch, but 
> it is not done there yet. Does the CloudStack Dev Team plan to do that? Thanks.
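For RBD-backed volumes, an implemented deleteSnapshot() would also have to remove the snapshot on the Ceph side, not just answer the command. As a hedged illustration of what that involves, the sketch below only builds the equivalent `rbd` CLI invocation; the real fix would more likely go through the rados-java bindings, and the class and method names here are illustrative, not CloudStack's actual code.

```java
import java.util.Arrays;
import java.util.List;

public class RbdSnapshotCleanup {
    // Builds the CLI command that removes one snapshot of an RBD image,
    // e.g. "rbd snap rm cloudstack/root-volume-48@snap-1".
    public static List<String> snapRemoveCommand(String pool, String image, String snapshot) {
        return Arrays.asList("rbd", "snap", "rm", pool + "/" + image + "@" + snapshot);
    }
}
```

The resulting argument list could be handed to a ProcessBuilder on the KVM agent; either way, the point is that the snapshot must be deleted in the RBD pool itself, which the stubbed method never does.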





[jira] [Commented] (CLOUDSTACK-9174) Quota Service: When an account/user is deleted with low quota, quota service still tries to alert the user resulting in NPE

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062345#comment-15062345
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9174:


Github user resmo commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1254#discussion_r47931786
  
--- Diff: 
plugins/database/quota/src/org/apache/cloudstack/api/command/QuotaSummaryCmd.java
 ---
@@ -59,7 +59,7 @@ public QuotaSummaryCmd() {
 public void execute() {
 Account caller = CallContext.current().getCallingAccount();
 List responses;
-if (caller.getAccountId() <= 2) { //non root admin or system
+if (caller.getAccountId() <= 2) { // root admin or system
--- End diff --

To assume that an account with id <= 2 has root admin privileges is just naive. 
There must be a more solid way to do this. Just my 2 cents.
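The concern can be made concrete: authorize on the caller's role rather than on a hard-coded account id, which only happens to match the built-in system/admin accounts on a default install. This is a hypothetical sketch; the `Role` enum and helper below are illustrative, not CloudStack's actual Account/RoleType API.

```java
public class QuotaAccess {
    public enum Role { ROOT_ADMIN, SYSTEM, DOMAIN_ADMIN, USER }

    // Decide access from an explicit role, not from "accountId <= 2":
    // the id comparison silently grants admin rights to whichever
    // accounts happen to occupy the low ids.
    public static boolean canListAllQuotaSummaries(Role callerRole) {
        return callerRole == Role.ROOT_ADMIN || callerRole == Role.SYSTEM;
    }
}
```

With a check like this, the branch in QuotaSummaryCmd.execute() would read from the caller's role instead of its numeric account id.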


> Quota Service: When an account/user is deleted with low quota, quota service 
> still tries to alert the user, resulting in NPE
> --
>
> Key: CLOUDSTACK-9174
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9174
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.7.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.8.0
>
>






[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062541#comment-15062541
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


GitHub user sureshanaparti opened a pull request:

https://github.com/apache/cloudstack/pull/1257

CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not 
participate in load balancing.

Summary: When a new host is added to a cluster, CloudStack doesn't create 
all the port groups present in the cluster (created earlier by CloudStack on other 
hosts). Since the new host doesn't have all the necessary CloudStack networking port 
groups, it is not eligible to participate in DRS load balancing 
or HA.
Solution: When adding a host to the cluster in CloudStack, use the VMware API 
to find the list of unique port groups on a previously added host (the oldest host 
in the cluster), if one exists, and then create them on the new host.
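The solution described here is essentially a set reconciliation: collect the port-group names present on a reference (oldest) host, then create whichever ones the new host lacks. A minimal sketch of that difference step, assuming hypothetical names (`PortGroupSync` and `missingPortGroups` are illustrative, not the actual CloudStack helpers):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class PortGroupSync {
    // Returns the port groups present on the reference (older) host
    // but absent from the newly added host, preserving encounter order.
    public static Set<String> missingPortGroups(Set<String> referenceHostGroups,
                                                Set<String> newHostGroups) {
        Set<String> missing = new LinkedHashSet<>(referenceHostGroups);
        missing.removeAll(newHostGroups);
        return missing;
    }
}
```

In the actual patch, the reference host would be located by a helper along the lines of the findOldestExistentHypervisorHostInCluster name discussed earlier in the thread, and each missing port group would then be created on the new host through the vSphere API.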

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sureshanaparti/cloudstack CLOUDSTACK-9175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1257.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1257


commit 9091829ed9423fbb4db4202bcdfc379003b3a082
Author: Suresh Kumar Anaparti 
Date:   2015-12-17T18:47:38Z

CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not 
participate in load balancing.

Summary: When a new host is added to a cluster, CloudStack doesn't create 
all the port groups present in the cluster (created earlier by CloudStack on other 
hosts). Since the new host doesn't have all the necessary CloudStack networking port 
groups, it is not eligible to participate in DRS load balancing 
or HA.
Solution: When adding a host to the cluster in CloudStack, use the VMware API 
to find the list of unique port groups on a previously added host (the oldest host 
in the cluster), if one exists, and then create them on the new host.






[jira] [Commented] (CLOUDSTACK-9175) [VMware DRS] Adding new host to DRS cluster does not participate in load balancing

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062550#comment-15062550
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9175:


Github user sureshanaparti commented on the pull request:

https://github.com/apache/cloudstack/pull/1248#issuecomment-165547302
  
@resmo Raised a clean PR: https://github.com/apache/cloudstack/pull/1257.
Shall I close this one?




[jira] [Commented] (CLOUDSTACK-8927) [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing whenever there is a configuration change in VR

2015-12-17 Thread Andrei Mikhailovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063313#comment-15063313
 ] 

Andrei Mikhailovsky commented on CLOUDSTACK-8927:
-

I am experiencing this issue on 3 out of 22 virtual routers. This started 
happening after upgrading from 4.5.2 to 4.6.2 tonight.

> [VPC]Executing command in VR: /opt/cloud/bin/router_proxy.sh is failing 
> whenever there is a configuration change in VR
> --
>
> Key: CLOUDSTACK-8927
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8927
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.6.0
>Reporter: manasaveloori
>Assignee: Wilder Rodrigues
>Priority: Blocker
> Fix For: 4.6.0
>
> Attachments: management-server.rar, management-server.site-site.gz
>
>
> Whenever there is a configuration change in the VPC VR, connectivity 
> issues with the VR are observed.
> Case 1:
> Created a VPC and a tier network with default allow.
> Now created a new ACL list and rules. Changed the ACL list for the tier 
> network. Rebooted the VR.
> 2015-09-30 04:35:39,553 ERROR [c.c.u.s.SshHelper] 
> (DirectAgent-336:ctx-b9e5cdf1) SSH execution of command 
> /opt/cloud/bin/router_proxy.sh update_config.py 169.254.3.89 
> guest_network.json has an error status code in return. result output:
> 2015-09-30 04:35:39,554 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-336:ctx-b9e5cdf1) Processing ScriptConfigItem, executing 
> update_config.py guest_network.json took 21165ms
> 2015-09-30 04:35:39,554 WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-336:ctx-b9e5cdf1) Expected 1 answers while executing 
> SetupGuestNetworkCommand but received 2
> 2015-09-30 04:35:45,769 ERROR [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-94:ctx-56b18174 job-227/job-228 ctx-f92247d7) Failed to 
> start instance VM[DomainRouter|r-22-VM]
> com.cloud.utils.exception.ExecutionException: Unable to start 
> VM[DomainRouter|r-22-VM] due to error in finalizeStart, not retrying
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
> at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4732)
> at 
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> at 
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:537)
> at 
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> at 
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at org.apache.cloudstack.managed.context.impl.Def
> Case 2:
> Reboot the VR with remote access VPN enabled on the VPC VR:
> Created a VPC, enabled VPN and rebooted the VR.
> ERROR in logs:
> 2015-09-30 04:46:18,663 ERROR [c.c.u.s.SshHelper] 
> (DirectAgent-46:ctx-3c355a22) SSH execution of command 
> /opt/cloud/bin/router_proxy.sh update_config.py 169.254.0.95 
> vpn_user_list.json has an error status code in return. result output:
> 2015-09-30 04:46:18,664 DEBUG [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-46:ctx-3c355a22) Processing ScriptConfigItem, executing 
> update_config.py vpn_user_list.json took 21168ms
> 2015-09-30 04:46:18,664 WARN  [c.c.a.r.v.VirtualRoutingResource] 
> (DirectAgent-46:ctx-3c355a22) Expected 1 answers while executing 
> VpnUsersCfgCommand but received 2
> 015-09-30 04:46:24,821 ERROR [c.c.v.VirtualMachineManagerImpl] 
> (Work-Job-Executor-101:ctx-fecf4919 job-240/job-242 ctx-44fde71b) Failed to 
> start instance VM[DomainRouter|r-23-VM]
> com.cloud.utils.exception.ExecutionException: Unable to start 
> VM[DomainRouter|r-23-VM] due to error in finalizeStart, not retrying
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1083)
> at 
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4576)
> at sun.reflect.GeneratedMethodAccessor382.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(Delega

[jira] [Commented] (CLOUDSTACK-9174) Quota Service: When an account/user is deleted with low quota, quota service still tries to alert the user resulting in NPE

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063348#comment-15063348
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9174:


Github user agneya2001 commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1254#discussion_r47990946
  
--- Diff: 
plugins/database/quota/src/org/apache/cloudstack/api/command/QuotaSummaryCmd.java
 ---
@@ -59,7 +59,7 @@ public QuotaSummaryCmd() {
 public void execute() {
 Account caller = CallContext.current().getCallingAccount();
 List responses;
-if (caller.getAccountId() <= 2) { //non root admin or system
+if (caller.getAccountId() <= 2) { // root admin or system
--- End diff --

rightly pointed
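For context on why the corrected comment reads "root admin or system": in a default CloudStack deployment the seeded account id 1 is the system account and id 2 is the built-in root admin, so `getAccountId() <= 2` selects exactly those two. The class below is a purely illustrative sketch of that predicate (the class and method names are hypothetical, not CloudStack code):

```java
// Hypothetical sketch of the ownership check discussed in the diff above.
// Assumption: default CloudStack seed data, where account id 1 is the
// system account and account id 2 is the built-in root admin. That is why
// "getAccountId() <= 2" means root admin or system, and why the old
// comment ("non root admin or system") was wrong.
public class QuotaSummaryCheckSketch {
    static final long SYSTEM_ACCOUNT_ID = 1L;      // assumed default seed value
    static final long ROOT_ADMIN_ACCOUNT_ID = 2L;  // assumed default seed value

    // Mirrors the predicate from the diff: true for system or root admin.
    static boolean isRootAdminOrSystem(long accountId) {
        return accountId <= ROOT_ADMIN_ACCOUNT_ID;
    }

    public static void main(String[] args) {
        System.out.println(isRootAdminOrSystem(1L)); // system account
        System.out.println(isRootAdminOrSystem(2L)); // root admin account
        System.out.println(isRootAdminOrSystem(3L)); // ordinary user account
    }
}
```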


> Quota Service: When an account/user is deleted with low quota, quota service 
> still tries to alert the user resulting in NPE
> --
>
> Key: CLOUDSTACK-9174
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9174
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Affects Versions: 4.7.0
>Reporter: Abhinandan Prateek
>Assignee: Abhinandan Prateek
>Priority: Critical
> Fix For: 4.8.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8746) VM Snapshotting implementation for KVM

2015-12-17 Thread haijiao (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063377#comment-15063377
 ] 

haijiao commented on CLOUDSTACK-8746:
-

Can we get two LGTMs to have this in 4.7.1?

> VM Snapshotting implementation for KVM
> --
>
> Key: CLOUDSTACK-8746
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8746
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Wei Zhou
>Assignee: Wei Zhou
>
> Currently it is not supported.
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Snapshots



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9186) Root admin cannot see VPC created by Domain admin user

2015-12-17 Thread Nitin Kumar Maharana (JIRA)
Nitin Kumar Maharana created CLOUDSTACK-9186:


 Summary: Root admin cannot see VPC created by Domain admin user
 Key: CLOUDSTACK-9186
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9186
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: Nitin Kumar Maharana


Issue:
======
Root admin cannot see LB rules and Public LB IP addresses created by a 
domain admin in the UI, and therefore cannot manage them.

Reproducible Steps:
===================
1.  Log in as a domain-admin account and create a VPC with the VPC virtual 
router as the public load balancer provider.
2.  Click on the newly created VPC -> click on the VPC tier -> click 
Internal LB.
3.  Add an internal LB.
4.  Log off the domain admin and log in as root admin.
5.  Navigate to the VPC created previously and click Internal LB; the 
internal LB does not show up.

The same steps apply for Public LB IP addresses, except that the matching 
network offering must be selected while creating the tier.

Expected Behaviour:
===================
Root admin should be able to manage a VPC created by a domain admin user.

Actual Behaviour:
=================
Root admin cannot see a VPC created by a domain admin user and hence is 
not able to manage it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-9186) Root admin cannot see VPC created by Domain admin user

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063649#comment-15063649
 ] 

ASF GitHub Bot commented on CLOUDSTACK-9186:


GitHub user nitin-maharana opened a pull request:

https://github.com/apache/cloudstack/pull/1258

CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user

Issue:
======
Root admin cannot see LB rules and Public LB IP addresses created by a 
domain admin in the UI, and therefore cannot manage them.

Reproducible Steps:
===================
1.  Log in as a domain-admin account and create a VPC with the VPC virtual 
router as the public load balancer provider.
2.  Click on the newly created VPC -> click on the VPC tier -> click 
Internal LB.
3.  Add an internal LB.
4.  Log off the domain admin and log in as root admin.
5.  Navigate to the VPC created previously and click Internal LB; the 
internal LB does not show up.

The same steps apply for Public LB IP addresses, except that the matching 
network offering must be selected while creating the tier.

Expected Behaviour:
===================
Root admin should be able to manage a VPC created by a domain admin user.

Actual Behaviour:
=================
Root admin cannot see a VPC created by a domain admin user and hence is 
not able to manage it.

Fix:
===
Added the parameter listAll=true in case of Internal LB as well as Public 
LB IP addresses.
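The described fix amounts to adding `listAll=true` to the list-API requests the UI issues, so that a root admin's request also returns resources owned by other accounts. `listLoadBalancerRules` and `listPublicIpAddresses` are real CloudStack API commands, but the query-building helper below is purely an illustrative sketch, not the actual UI code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: shows the shape of a CloudStack list-API query
// before and after adding the listAll=true parameter described in the fix.
// The buildQuery helper is hypothetical; the real fix lives in the UI's
// request-building code.
public class ListAllParamSketch {
    // Assemble a command plus its parameters into a query string.
    static String buildQuery(String command, Map<String, String> params) {
        StringBuilder sb = new StringBuilder("command=").append(command);
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append('&').append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("listAll", "true"); // the one parameter the fix adds
        System.out.println(buildQuery("listLoadBalancerRules", params));
        // -> command=listLoadBalancerRules&listAll=true
    }
}
```

Without `listAll=true`, a list call scopes results to the caller's own account, which is why the root admin could not see the domain admin's rules.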

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nitin-maharana/CloudStack CloudStack-Nitin3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1258


commit fb882aa909f52bfd5e590e967bf4e0d2b5d936e7
Author: Nitin Kumar Maharana 
Date:   2015-12-18T07:53:50Z

CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user

Added the parameter listAll=true in case of Internal LB as well as Public 
LB IP addresses.




> Root admin cannot see VPC created by Domain admin user
> --
>
> Key: CLOUDSTACK-9186
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9186
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: Nitin Kumar Maharana
>
> Issue:
> ======
> Root admin cannot see LB rules and Public LB IP addresses created by a 
> domain admin in the UI, and therefore cannot manage them.
> Reproducible Steps: 
> ===================
> 1. Log in as a domain-admin account and create a VPC with the VPC virtual 
> router as the public load balancer provider.
> 2. Click on the newly created VPC -> click on the VPC tier -> click 
> Internal LB.
> 3. Add an internal LB.
> 4. Log off the domain admin and log in as root admin.
> 5. Navigate to the VPC created previously and click Internal LB; the 
> internal LB does not show up.
> The same steps apply for Public LB IP addresses, except that the matching 
> network offering must be selected while creating the tier.
> Expected Behaviour: 
> ===================
> Root admin should be able to manage a VPC created by a domain admin user.
> Actual Behaviour:
> =================
> Root admin cannot see a VPC created by a domain admin user and hence is 
> not able to manage it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)