Dudi Maroshi has uploaded a new change for review.

Change subject: engine: forbid single (vm mem) > (host mem)
......................................................................

engine: forbid single (vm mem) > (host mem)

Forbid running a single VM whose memory exceeds the host's physical
memory. We still allow running several VMs whose total memory exceeds
the host's physical memory (relying on ballooning). That may be unwise,
yet it can occur after migration and/or host downsizing. Whenever
(vm mem) > (host mem), the host starts swapping and performance
degrades. One can imagine scenarios where a VM was allocated a large
amount of memory, is not using it, and its memory cannot be downsized.

Fix swap handling and refactor reused messages.

If running a VM fails due to insufficient memory, report the available
memory. Added a unit test in SlaValidatorTest.

Change-Id: Ia5f5280c43820732a36a235024ec5c887c9fcb98
Bug-Url: https://bugzilla.redhat.com/1180071
Signed-off-by: Dudi Maroshi <[email protected]>
---
M 
backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/SlaValidator.java
M 
backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/MemoryPolicyUnit.java
A 
backend/manager/modules/bll/src/test/java/org/ovirt/engine/core/bll/scheduling/SlaValidatorTest.java
M backend/manager/modules/dal/src/main/resources/bundles/AppErrors.properties
M 
frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/AppErrors.java
M 
frontend/webadmin/modules/userportal-gwtp/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
M 
frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
7 files changed, 107 insertions(+), 19 deletions(-)


  git pull ssh://gerrit.ovirt.org:29418/ovirt-engine refs/changes/99/38399/1
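Before the diff itself, here is a minimal standalone sketch of the check this change introduces (hypothetical class, method, and parameter names; not the actual ovirt-engine `SlaValidator`/`VDS` entities): when the host runs no other VMs, the overcommit percentage is ignored, so a single VM can never be scheduled with more memory than the host physically has.

```java
// Sketch of the scheduling memory check, under assumed simplified inputs.
public class MemCheckSketch {

    // Memory limit in MB: with no VMs running, overcommit is disregarded
    // (treated as 100%), so the limit equals physical memory.
    static double memLimitMb(int physicalMemMb, int maxOverCommitPercent, int runningVmCount) {
        int effectivePercent = (runningVmCount == 0) ? 100 : maxOverCommitPercent;
        return effectivePercent * physicalMemMb / 100.0;
    }

    // The VM fits if current host memory in use plus the VM's minimum
    // allocated memory stays within the (possibly overcommitted) limit.
    static boolean hasMemoryToRunVm(double currentMemInUseMb, int vmMinAllocatedMemMb,
                                    int physicalMemMb, int maxOverCommitPercent,
                                    int runningVmCount) {
        return currentMemInUseMb + vmMinAllocatedMemMb
                <= memLimitMb(physicalMemMb, maxOverCommitPercent, runningVmCount);
    }

    public static void main(String[] args) {
        // 10000 MB host, 1111 MB already in use, 200% overcommit configured.
        // First VM on the host: limit is 10000 MB, not 20000 MB.
        System.out.println(hasMemoryToRunVm(1111, 8800, 10000, 200, 0));  // true
        System.out.println(hasMemoryToRunVm(1111, 10000, 10000, 200, 0)); // false
        // With another VM already running, overcommit applies: limit is 20000 MB.
        System.out.println(hasMemoryToRunVm(1111, 10000, 10000, 200, 1)); // true
    }
}
```

These inputs mirror the values used in the new SlaValidatorTest below.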

diff --git 
a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/SlaValidator.java
 
b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/SlaValidator.java
index bb88a37..39014ab 100644
--- 
a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/SlaValidator.java
+++ 
b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/SlaValidator.java
@@ -19,26 +19,43 @@
     public boolean hasMemoryToRunVM(VDS curVds, VM vm) {
         boolean retVal = false;
         if (curVds.getMemCommited() != null && curVds.getPhysicalMemMb() != 
null && curVds.getReservedMem() != null) {
-            double vdsCurrentMem =
-                    curVds.getMemCommited() + curVds.getPendingVmemSize() + 
curVds.getGuestOverhead() + curVds
-                            .getReservedMem() + vm.getMinAllocatedMem();
-            double vdsMemLimit = curVds.getMaxVdsMemoryOverCommit() * 
curVds.getPhysicalMemMb() / 100.0;
-            if (log.isDebugEnabled()) {
-                log.debugFormat("hasMemoryToRunVM: host {0} pending vmem size 
is : {1} MB",
-                        curVds.getName(),
-                        curVds.getPendingVmemSize());
-                log.debugFormat("Host Mem Conmmitted: {0}, Host Reserved Mem: 
{1}, Host Guest Overhead {2}, VM Min Allocated Mem {3}",
-                        curVds.getMemCommited(),
-                        curVds.getReservedMem(),
-                        curVds.getGuestOverhead(),
-                        vm.getMinAllocatedMem());
-                log.debugFormat("{0} <= ???  {1}", vdsCurrentMem, vdsMemLimit);
-            }
+            double vdsCurrentMem = getVdsCurrentMemoryInUse(curVds) + 
vm.getMinAllocatedMem();
+            double vdsMemLimit = getVdsMemLimit(curVds);
+            log.debugFormat("hasMemoryToRunVM: host '{0}' physical vmem size 
is : {1} MB",
+                    curVds.getName(),
+                    curVds.getPhysicalMemMb());
+            log.debugFormat("Host Mem Committed: '{0}', pending vmem size is 
: {1}, Host Guest Overhead {2}, Host Reserved Mem: {3}, VM Min Allocated Mem 
{4}",
+                    curVds.getMemCommited(),
+                    curVds.getPendingVmemSize(),
+                    curVds.getGuestOverhead(),
+                    curVds.getReservedMem(),
+                    vm.getMinAllocatedMem());
+            log.debugFormat("{0} <= ???  {1}", vdsCurrentMem, vdsMemLimit);
             retVal = (vdsCurrentMem <= vdsMemLimit);
         }
         return retVal;
     }
 
+    public int getHostAvailableMemoryLimit(VDS curVds) {
+        if (curVds.getMemCommited() != null && curVds.getPhysicalMemMb() != 
null && curVds.getReservedMem() != null) {
+            double vdsCurrentMem = getVdsCurrentMemoryInUse(curVds);
+            double vdsMemLimit = getVdsMemLimit(curVds);
+            return (int) (vdsMemLimit - vdsCurrentMem);
+        }
+        return 0;
+    }
+
+    private double getVdsMemLimit(VDS curVds) {
+        // if this VM would be the only one on the host, disregard memory overcommitment
+        int computedMemoryOverCommit = (curVds.getVmCount() == 0) ? 100 : 
curVds.getMaxVdsMemoryOverCommit();
+        return (computedMemoryOverCommit * curVds.getPhysicalMemMb() / 100.0);
+    }
+
+    private double getVdsCurrentMemoryInUse(VDS curVds) {
+        return curVds.getMemCommited() + curVds.getPendingVmemSize() + 
curVds.getGuestOverhead()
+                        + curVds.getReservedMem();
+    }
+
     public static Integer getEffectiveCpuCores(VDS vds) {
         VDSGroup vdsGroup = 
DbFacade.getInstance().getVdsGroupDao().get(vds.getVdsGroupId());
         return getEffectiveCpuCores(vds, vdsGroup != null ? 
vdsGroup.getCountThreadsAsCores() : false);
diff --git 
a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/MemoryPolicyUnit.java
 
b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/MemoryPolicyUnit.java
index 5ede7ef..ac553f2 100644
--- 
a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/MemoryPolicyUnit.java
+++ 
b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/MemoryPolicyUnit.java
@@ -6,6 +6,7 @@
 import java.util.Map;
 
 import org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl;
+import org.ovirt.engine.core.bll.scheduling.SlaValidator;
 import org.ovirt.engine.core.common.businessentities.NumaTuneMode;
 import org.ovirt.engine.core.common.businessentities.VDS;
 import org.ovirt.engine.core.common.businessentities.VM;
@@ -20,8 +21,11 @@
 import org.ovirt.engine.core.common.utils.Pair;
 import org.ovirt.engine.core.compat.Guid;
 import org.ovirt.engine.core.dal.dbbroker.DbFacade;
+import org.ovirt.engine.core.utils.log.Log;
+import org.ovirt.engine.core.utils.log.LogFactory;
 
 public class MemoryPolicyUnit extends PolicyUnitImpl {
+    private static final Log log = LogFactory.getLog(MemoryPolicyUnit.class);
 
     public MemoryPolicyUnit(PolicyUnit policyUnit) {
         super(policyUnit);
@@ -43,6 +47,11 @@
             }
             if (!memoryChecker.evaluate(vds, vm)) {
                 log.debugFormat("host '{0}' has insufficient memory to run the 
VM", vds.getName());
+                int hostAvailableMem = 
SlaValidator.getInstance().getHostAvailableMemoryLimit(vds);
+                log.debugFormat("Host '{0}' has {1} MB available. Insufficient 
memory to run the VM",
+                        vds.getName(),
+                        hostAvailableMem);
+                messages.addMessage(vds.getId(), String.format("$availableMem 
%1$d", hostAvailableMem));
                 messages.addMessage(vds.getId(), 
VdcBllMessages.VAR__DETAIL__NOT_ENOUGH_MEMORY.toString());
                 continue;
             }
diff --git 
a/backend/manager/modules/bll/src/test/java/org/ovirt/engine/core/bll/scheduling/SlaValidatorTest.java
 
b/backend/manager/modules/bll/src/test/java/org/ovirt/engine/core/bll/scheduling/SlaValidatorTest.java
new file mode 100644
index 0000000..93fb594
--- /dev/null
+++ 
b/backend/manager/modules/bll/src/test/java/org/ovirt/engine/core/bll/scheduling/SlaValidatorTest.java
@@ -0,0 +1,62 @@
+package org.ovirt.engine.core.bll.scheduling;
+
+import static org.junit.Assert.assertEquals;
+
+import org.junit.Test;
+import org.ovirt.engine.core.common.businessentities.VDS;
+import org.ovirt.engine.core.common.businessentities.VM;
+import org.ovirt.engine.core.compat.Guid;
+import org.ovirt.engine.core.compat.Version;
+
+public class SlaValidatorTest {
+
+    private VDS makeTestVds(Guid vdsId) {
+        VDS newVdsData = new VDS();
+        newVdsData.setHostName("BUZZ");
+        newVdsData.setVdsName("BAR");
+        newVdsData.setVdsGroupCompatibilityVersion(new Version("1.2.3"));
+        newVdsData.setVdsGroupId(Guid.newGuid());
+        newVdsData.setId(vdsId);
+        return newVdsData;
+    }
+
+    @Test
+    public void validateVmMemoryCanRunOnVds() {
+        Guid guid = Guid.newGuid();
+        VDS vds = makeTestVds(guid);
+        vds.setPhysicalMemMb(10000);
+        vds.setReservedMem(1000);
+        vds.setMemCommited(100);
+        vds.setPendingVmemSize(10);
+        vds.setGuestOverhead(1);
+
+        vds.setMaxVdsMemoryOverCommit(200); // 200% mem overcommit
+
+        VM vm = new VM();
+
+        // vmMem < hostMem (pass)
+        vm.setMinAllocatedMem(8800);
+        vds.setVmCount(0);
+        boolean vmPassedMemoryRequirement = 
SlaValidator.getInstance().hasMemoryToRunVM(vds, vm);
+        assertEquals(true, vmPassedMemoryRequirement);
+
+        // vmMem > hostMem (fail)
+        vm.setMinAllocatedMem(10000);
+        vds.setVmCount(0);
+        vmPassedMemoryRequirement = 
SlaValidator.getInstance().hasMemoryToRunVM(vds, vm);
+        assertEquals(false, vmPassedMemoryRequirement);
+
+        // vmMem > hostMem (pass) (2 or more running vms)
+        vm.setMinAllocatedMem(10000);
+        vds.setVmCount(1);
+        vmPassedMemoryRequirement = 
SlaValidator.getInstance().hasMemoryToRunVM(vds, vm);
+        assertEquals(true, vmPassedMemoryRequirement);
+
+        // vmMem >> hostMem (fail) (2 or more running vms)
+        vm.setMinAllocatedMem(20000);
+        vds.setVmCount(1);
+        vmPassedMemoryRequirement = 
SlaValidator.getInstance().hasMemoryToRunVM(vds, vm);
+        assertEquals(false, vmPassedMemoryRequirement);
+    }
+
+}
diff --git 
a/backend/manager/modules/dal/src/main/resources/bundles/AppErrors.properties 
b/backend/manager/modules/dal/src/main/resources/bundles/AppErrors.properties
index bbffa8b..01a0e32 100644
--- 
a/backend/manager/modules/dal/src/main/resources/bundles/AppErrors.properties
+++ 
b/backend/manager/modules/dal/src/main/resources/bundles/AppErrors.properties
@@ -1228,7 +1228,7 @@
 VAR__DETAIL__AFFINITY_FAILED_NEGATIVE=$detailMessage it matched negative 
affinity rules ${affinityRules}
 VAR__DETAIL__LOW_CPU_LEVEL=$detailMessage its CPU level ${hostCPULevel} is 
lower than the VM requires ${vmCPULevel}
 VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal
-VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM
+VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM (${availableMem} MB available)
 VAR__DETAIL__NOT_MEMORY_PINNED_NUMA=$detailMessage cannot accommodate memory 
of VM's pinned virtual NUMA nodes within host's physical NUMA nodes.
 VAR__DETAIL__NOT_ENOUGH_CORES=$detailMessage it does not have enough cores to 
run the VM
 VAR__DETAIL__NUMA_PINNING_FAILED=$detailMessage it has insufficient NUMA node 
free memory to run the VM
diff --git 
a/frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/AppErrors.java
 
b/frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/AppErrors.java
index b34e6aa..35adc8d 100644
--- 
a/frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/AppErrors.java
+++ 
b/frontend/webadmin/modules/frontend/src/main/java/org/ovirt/engine/ui/frontend/AppErrors.java
@@ -3276,7 +3276,7 @@
     @DefaultStringValue("$detailMessage its swap value was illegal")
     String VAR__DETAIL__SWAP_VALUE_ILLEGAL();
 
-    @DefaultStringValue("$detailMessage it has insufficient free memory to run 
the VM")
+    @DefaultStringValue("$detailMessage it has insufficient free memory to run 
the VM (${availableMem} MB available)")
     String VAR__DETAIL__NOT_ENOUGH_MEMORY();
 
     @DefaultStringValue("$detailMessage cannot accommodate memory of VM's 
pinned virtual NUMA nodes within host's physical NUMA nodes.")
diff --git 
a/frontend/webadmin/modules/userportal-gwtp/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
 
b/frontend/webadmin/modules/userportal-gwtp/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
index 55dfd79..1c4e062 100644
--- 
a/frontend/webadmin/modules/userportal-gwtp/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
+++ 
b/frontend/webadmin/modules/userportal-gwtp/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
@@ -1023,7 +1023,7 @@
 VAR__DETAIL__AFFINITY_FAILED_NEGATIVE=$detailMessage it matched negative 
affinity rules ${affinityRules}
 VAR__DETAIL__LOW_CPU_LEVEL=$detailMessage its CPU level ${hostCPULevel} is 
lower than the VM requires ${vmCPULevel}
 VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal
-VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM
+VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM (${availableMem} MB available)
 VAR__DETAIL__NOT_MEMORY_PINNED_NUMA=$detailMessage cannot accommodate memory 
of VM's pinned virtual NUMA nodes within host's physical NUMA nodes.
 VAR__DETAIL__NOT_ENOUGH_CORES=$detailMessage it does not have enough cores to 
run the VM
 VAR__DETAIL__NUMA_PINNING_FAILED=$detailMessage it has insufficient NUMA node 
free memory to run the VM
diff --git 
a/frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
 
b/frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
index 65cf3f6..9ba9ebe 100644
--- 
a/frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
+++ 
b/frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/frontend/AppErrors.properties
@@ -1191,7 +1191,7 @@
 VAR__DETAIL__AFFINITY_FAILED_NEGATIVE=$detailMessage it matched negative 
affinity rules ${affinityRules}
 VAR__DETAIL__LOW_CPU_LEVEL=$detailMessage its CPU level ${hostCPULevel} is 
lower than the VM requires ${vmCPULevel}
 VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal
-VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM
+VAR__DETAIL__NOT_ENOUGH_MEMORY=$detailMessage it has insufficient free memory 
to run the VM (${availableMem} MB available)
 VAR__DETAIL__NOT_MEMORY_PINNED_NUMA=$detailMessage cannot accommodate memory 
of VM's pinned virtual NUMA nodes within host's physical NUMA nodes.
 VAR__DETAIL__NOT_ENOUGH_CORES=$detailMessage it does not have enough cores to 
run the VM
 VAR__DETAIL__NUMA_PINNING_FAILED=$detailMessage it has insufficient NUMA node 
free memory to run the VM
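The error-message plumbing in this patch relies on the engine's `${...}` placeholder convention: `MemoryPolicyUnit` emits a `$availableMem <value>` variable line, and the `VAR__DETAIL__NOT_ENOUGH_MEMORY` template later interpolates `${availableMem}`. A minimal sketch of that substitution step (a hypothetical helper, not the engine's actual message resolver):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageSketch {

    // Parse a "$name value" variable line (as produced by messages.addMessage)
    // into a name -> value binding.
    static void addVariable(Map<String, String> vars, String line) {
        String[] parts = line.substring(1).split(" ", 2);
        vars.put(parts[0], parts[1]);
    }

    // Replace every ${name} occurrence in the template with its bound value;
    // unknown placeholders are left as-is.
    static String resolve(String template, Map<String, String> vars) {
        Matcher m = Pattern.compile("\\$\\{(\\w+)\\}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    vars.getOrDefault(m.group(1), m.group(0))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = new HashMap<>();
        // Same format string the policy unit uses for the variable line.
        addVariable(vars, String.format("$availableMem %1$d", 1234));
        String template = "it has insufficient free memory to run the VM "
                + "(${availableMem} MB available)";
        System.out.println(resolve(template, vars));
        // prints: it has insufficient free memory to run the VM (1234 MB available)
    }
}
```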


-- 
To view, visit https://gerrit.ovirt.org/38399
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia5f5280c43820732a36a235024ec5c887c9fcb98
Gerrit-PatchSet: 1
Gerrit-Project: ovirt-engine
Gerrit-Branch: ovirt-engine-3.5
Gerrit-Owner: Dudi Maroshi <[email protected]>