[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-10-25 Thread rhtyd
GitHub user rhtyd opened a pull request:

https://github.com/apache/cloudstack/pull/1729

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, the largest contributor to the memory
leak was found to be VmwareContextPool, which holds a registry (an ArrayList)
that grows indefinitely. The list itself is never consumed or used anywhere;
what is actually used is a HashMap (the pool) that returns a list of contexts
for an existing pool key (address/username).

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given pool key.
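
To make the idea concrete, here is a minimal sketch of a bounded per-key
idle pool built on Guava's EvictingQueue (an illustration only, not the
patched class; SimplePool and PooledContext are made-up names standing in
for VmwareContextPool and VmwareContext):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;

    import com.google.common.collect.EvictingQueue;

    public class SimplePool {
        private final Map<String, Queue<PooledContext>> pool = new HashMap<>();
        private final int maxIdleQueueLength;

        public SimplePool(final int maxIdleQueueLength) {
            this.maxIdleQueueLength = maxIdleQueueLength;
        }

        // Borrow an idle context for this address/username, or null if none is pooled.
        public synchronized PooledContext get(final String address, final String username) {
            final Queue<PooledContext> idle = pool.get(address + "@" + username);
            return idle == null ? null : idle.poll();
        }

        // Return a context to its per-key queue; the queue is capped, so idle
        // contexts can no longer accumulate without bound.
        public synchronized void giveBack(final String address, final String username, final PooledContext context) {
            final String poolKey = address + "@" + username;
            Queue<PooledContext> idle = pool.get(poolKey);
            if (idle == null) {
                // EvictingQueue silently drops its oldest element once full.
                idle = EvictingQueue.create(maxIdleQueueLength);
                pool.put(poolKey, idle);
            }
            idle.add(context);
        }

        public static class PooledContext { /* stands in for VmwareContext */ }
    }

Unlike this sketch, the actual patch also removes and closes the oldest
context before adding a new one, so its vCenter connection is released
rather than just dropped (see the review discussion below).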

@blueorangutan package

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack vmware-memleak-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1729.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1729


commit dfabcc8a5eb0f0b78003c2f849f32e899a74ad48
Author: Rohit Yadav 
Date:   2016-10-25T09:50:33Z

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, the largest contributor to the memory
leak was found to be VmwareContextPool, which holds a registry (an ArrayList)
that grows indefinitely. The list itself is never consumed or used anywhere;
what is actually used is a HashMap (the pool) that returns a list of contexts
for an existing pool key (address/username).

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given pool key.

Signed-off-by: Rohit Yadav 




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-10-25 Thread rhtyd
GitHub user rhtyd reopened a pull request:

https://github.com/apache/cloudstack/pull/1729

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, the largest contributor to the memory
leak was found to be VmwareContextPool, which holds a registry (an ArrayList)
that grows indefinitely. The list itself is never consumed or used anywhere;
what is actually used is a HashMap (the pool) that returns a list of contexts
for an existing pool key (address/username).

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given pool key.

@blueorangutan package

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shapeblue/cloudstack vmware-memleak-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/cloudstack/pull/1729.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1729


commit ed964aa2a61e3a36fcb9b5bd8fb80c3f1434
Author: Rohit Yadav 
Date:   2016-10-25T09:50:33Z

CLOUDSTACK-9564: Fix memory leaks in VmwareContextPool

In a recent management server crash, the largest contributor to the memory
leak was found to be VmwareContextPool, which holds a registry (an ArrayList)
that grows indefinitely. The list itself is never consumed or used anywhere;
what is actually used is a HashMap (the pool) that returns a list of contexts
for an existing pool key (address/username).

This fixes the issue by removing the ArrayList registry and limiting the
length of the context list for a given pool key.

Signed-off-by: Rohit Yadav 






[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-10-25 Thread rhtyd
Github user rhtyd closed the pull request at:

https://github.com/apache/cloudstack/pull/1729




[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-11-01 Thread abhinandanprateek
Github user abhinandanprateek commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1729#discussion_r85901583
  
--- Diff: vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContextPool.java ---
@@ -45,76 +43,74 @@ public VmwareContextPool() {
         this(DEFAULT_IDLE_QUEUE_LENGTH, DEFAULT_CHECK_INTERVAL);
     }
 
-    public VmwareContextPool(int maxIdleQueueLength) {
-        this(maxIdleQueueLength, DEFAULT_CHECK_INTERVAL);
-    }
-
     public VmwareContextPool(int maxIdleQueueLength, long idleCheckIntervalMs) {
-        _pool = new HashMap<String, List<VmwareContext>>();
+        _pool = new HashMap<String, Queue<VmwareContext>>();
 
         _maxIdleQueueLength = maxIdleQueueLength;
         _idleCheckIntervalMs = idleCheckIntervalMs;
 
         _timer.scheduleAtFixedRate(getTimerTask(), _idleCheckIntervalMs, _idleCheckIntervalMs);
     }
 
-    public void registerOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.add(context);
-        }
-    }
-
-    public void unregisterOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.remove(context);
+    public VmwareContext getContext(final String vCenterAddress, final String vCenterUserName) {
+        final String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
+        if (Strings.isNullOrEmpty(poolKey)) {
+            return null;
         }
-    }
-
-    public VmwareContext getContext(String vCenterAddress, String vCenterUserName) {
-        String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(poolKey);
-            if (l == null)
-                return null;
-
-            if (l.size() > 0) {
-                VmwareContext context = l.remove(0);
-                context.setPoolInfo(this, poolKey);
-
-                if (s_logger.isTraceEnabled())
-                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + l.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+            final Queue<VmwareContext> ctxList = _pool.get(poolKey);
+            if (ctxList != null && ctxList.size() > 0) {
+                final VmwareContext context = ctxList.remove();
+                if (context != null) {
+                    context.setPoolInfo(this, poolKey);
+                }
+                if (s_logger.isTraceEnabled()) {
+                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + ctxList.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+                }
                 return context;
             }
-
-            // TODO, we need to control the maximum number of outstanding VmwareContext object in the future
             return null;
         }
    }
 
-    public void returnContext(VmwareContext context) {
+    public void registerContext(final VmwareContext context) {
         assert (context.getPool() == this);
         assert (context.getPoolKey() != null);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(context.getPoolKey());
-            if (l == null) {
-                l = new ArrayList<VmwareContext>();
-                _pool.put(context.getPoolKey(), l);
+            Queue<VmwareContext> ctxList = _pool.get(context.getPoolKey());
+            if (ctxList == null) {
+                ctxList = EvictingQueue.create(_maxIdleQueueLength);
+                _pool.put(context.getPoolKey(), ctxList);
             }
 
-            if (l.size() < _maxIdleQueueLength) {
-                context.clearStockObjects();
-                l.add(context);
+            if (ctxList.size() > _maxIdleQueueLength) {
+                final VmwareContext oldestContext = ctxList.remove();
+                if (oldestContext != null) {
+                    try {
+                        oldestContext.close();
+                    } catch (Throwable t) {
+                        s_logger.error("Unexpected exception caught while trying to purge oldest VmwareContext", t);
+                    }
--- End diff --

Although we are using a bounded queue now and that will prevent memory
leaks, do we know why it should be throwing away old elements, or why the
contexts are not cleaned up programmatically instead of forcefully
restricting the size?



[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-11-01 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1729#discussion_r85902083
  
--- Diff: vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContextPool.java ---
@@ -45,76 +43,74 @@ public VmwareContextPool() {
         this(DEFAULT_IDLE_QUEUE_LENGTH, DEFAULT_CHECK_INTERVAL);
     }
 
-    public VmwareContextPool(int maxIdleQueueLength) {
-        this(maxIdleQueueLength, DEFAULT_CHECK_INTERVAL);
-    }
-
     public VmwareContextPool(int maxIdleQueueLength, long idleCheckIntervalMs) {
-        _pool = new HashMap<String, List<VmwareContext>>();
+        _pool = new HashMap<String, Queue<VmwareContext>>();
 
         _maxIdleQueueLength = maxIdleQueueLength;
         _idleCheckIntervalMs = idleCheckIntervalMs;
 
        _timer.scheduleAtFixedRate(getTimerTask(), _idleCheckIntervalMs, _idleCheckIntervalMs);
     }
 
-    public void registerOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.add(context);
-        }
-    }
-
-    public void unregisterOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.remove(context);
+    public VmwareContext getContext(final String vCenterAddress, final String vCenterUserName) {
+        final String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
+        if (Strings.isNullOrEmpty(poolKey)) {
+            return null;
         }
-    }
-
-    public VmwareContext getContext(String vCenterAddress, String vCenterUserName) {
-        String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(poolKey);
-            if (l == null)
-                return null;
-
-            if (l.size() > 0) {
-                VmwareContext context = l.remove(0);
-                context.setPoolInfo(this, poolKey);
-
-                if (s_logger.isTraceEnabled())
-                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + l.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+            final Queue<VmwareContext> ctxList = _pool.get(poolKey);
+            if (ctxList != null && ctxList.size() > 0) {
+                final VmwareContext context = ctxList.remove();
+                if (context != null) {
+                    context.setPoolInfo(this, poolKey);
+                }
+                if (s_logger.isTraceEnabled()) {
+                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + ctxList.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+                }
                 return context;
             }
-
-            // TODO, we need to control the maximum number of outstanding VmwareContext object in the future
             return null;
         }
    }
 
-    public void returnContext(VmwareContext context) {
+    public void registerContext(final VmwareContext context) {
         assert (context.getPool() == this);
         assert (context.getPoolKey() != null);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(context.getPoolKey());
-            if (l == null) {
-                l = new ArrayList<VmwareContext>();
-                _pool.put(context.getPoolKey(), l);
+            Queue<VmwareContext> ctxList = _pool.get(context.getPoolKey());
+            if (ctxList == null) {
+                ctxList = EvictingQueue.create(_maxIdleQueueLength);
--- End diff --

Yes. There may be a case where two threads are both trying to create the
evicting queue for the same poolKey.
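
For illustration, that create-if-absent race could also be closed without
the surrounding lock by making the step atomic, e.g. with
ConcurrentHashMap.computeIfAbsent (a hypothetical alternative, not what
this patch does; the patch relies on the synchronized block instead):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;

    import com.google.common.collect.EvictingQueue;

    class PoolKeyRaceFree {
        private final ConcurrentHashMap<String, Queue<Object>> pool = new ConcurrentHashMap<>();

        // At most one queue is ever installed per poolKey, even if two threads
        // race here. Note EvictingQueue itself is not thread-safe, so queue
        // mutations would still need their own synchronization.
        Queue<Object> queueFor(final String poolKey, final int cap) {
            return pool.computeIfAbsent(poolKey, k -> EvictingQueue.create(cap));
        }
    }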




[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-11-01 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1729#discussion_r85902494
  
--- Diff: vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContextPool.java ---
@@ -45,76 +43,74 @@ public VmwareContextPool() {
         this(DEFAULT_IDLE_QUEUE_LENGTH, DEFAULT_CHECK_INTERVAL);
     }
 
-    public VmwareContextPool(int maxIdleQueueLength) {
-        this(maxIdleQueueLength, DEFAULT_CHECK_INTERVAL);
-    }
-
     public VmwareContextPool(int maxIdleQueueLength, long idleCheckIntervalMs) {
-        _pool = new HashMap<String, List<VmwareContext>>();
+        _pool = new HashMap<String, Queue<VmwareContext>>();
 
         _maxIdleQueueLength = maxIdleQueueLength;
         _idleCheckIntervalMs = idleCheckIntervalMs;
 
         _timer.scheduleAtFixedRate(getTimerTask(), _idleCheckIntervalMs, _idleCheckIntervalMs);
     }
 
-    public void registerOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.add(context);
-        }
-    }
-
-    public void unregisterOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.remove(context);
+    public VmwareContext getContext(final String vCenterAddress, final String vCenterUserName) {
+        final String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
+        if (Strings.isNullOrEmpty(poolKey)) {
+            return null;
         }
-    }
-
-    public VmwareContext getContext(String vCenterAddress, String vCenterUserName) {
-        String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(poolKey);
-            if (l == null)
-                return null;
-
-            if (l.size() > 0) {
-                VmwareContext context = l.remove(0);
-                context.setPoolInfo(this, poolKey);
-
-                if (s_logger.isTraceEnabled())
-                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + l.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+            final Queue<VmwareContext> ctxList = _pool.get(poolKey);
+            if (ctxList != null && ctxList.size() > 0) {
+                final VmwareContext context = ctxList.remove();
+                if (context != null) {
+                    context.setPoolInfo(this, poolKey);
+                }
+                if (s_logger.isTraceEnabled()) {
+                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + ctxList.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+                }
                 return context;
             }
-
-            // TODO, we need to control the maximum number of outstanding VmwareContext object in the future
             return null;
         }
    }
 
-    public void returnContext(VmwareContext context) {
+    public void registerContext(final VmwareContext context) {
         assert (context.getPool() == this);
         assert (context.getPoolKey() != null);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(context.getPoolKey());
-            if (l == null) {
-                l = new ArrayList<VmwareContext>();
-                _pool.put(context.getPoolKey(), l);
+            Queue<VmwareContext> ctxList = _pool.get(context.getPoolKey());
+            if (ctxList == null) {
+                ctxList = EvictingQueue.create(_maxIdleQueueLength);
+                _pool.put(context.getPoolKey(), ctxList);
             }
 
-            if (l.size() < _maxIdleQueueLength) {
-                context.clearStockObjects();
-                l.add(context);
+            if (ctxList.size() > _maxIdleQueueLength) {
+                final VmwareContext oldestContext = ctxList.remove();
+                if (oldestContext != null) {
+                    try {
+                        oldestContext.close();
+                    } catch (Throwable t) {
+                        s_logger.error("Unexpected exception caught while trying to purge oldest VmwareContext", t);
+                    }
--- End diff --

In the `registerContext` method we check whether the queue is full; if it
is, we remove the oldest element and close it properly. I'll ping you on
the lines where it happens.



[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-11-01 Thread rhtyd
Github user rhtyd commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1729#discussion_r85903435
  
--- Diff: vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContextPool.java ---
@@ -45,76 +43,74 @@ public VmwareContextPool() {
         this(DEFAULT_IDLE_QUEUE_LENGTH, DEFAULT_CHECK_INTERVAL);
     }
 
-    public VmwareContextPool(int maxIdleQueueLength) {
-        this(maxIdleQueueLength, DEFAULT_CHECK_INTERVAL);
-    }
-
     public VmwareContextPool(int maxIdleQueueLength, long idleCheckIntervalMs) {
-        _pool = new HashMap<String, List<VmwareContext>>();
+        _pool = new HashMap<String, Queue<VmwareContext>>();
 
         _maxIdleQueueLength = maxIdleQueueLength;
         _idleCheckIntervalMs = idleCheckIntervalMs;
 
         _timer.scheduleAtFixedRate(getTimerTask(), _idleCheckIntervalMs, _idleCheckIntervalMs);
     }
 
-    public void registerOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.add(context);
-        }
-    }
-
-    public void unregisterOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.remove(context);
+    public VmwareContext getContext(final String vCenterAddress, final String vCenterUserName) {
+        final String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
+        if (Strings.isNullOrEmpty(poolKey)) {
+            return null;
        }
-    }
-
-    public VmwareContext getContext(String vCenterAddress, String vCenterUserName) {
-        String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(poolKey);
-            if (l == null)
-                return null;
-
-            if (l.size() > 0) {
-                VmwareContext context = l.remove(0);
-                context.setPoolInfo(this, poolKey);
-
-                if (s_logger.isTraceEnabled())
-                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + l.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+            final Queue<VmwareContext> ctxList = _pool.get(poolKey);
+            if (ctxList != null && ctxList.size() > 0) {
+                final VmwareContext context = ctxList.remove();
+                if (context != null) {
+                    context.setPoolInfo(this, poolKey);
+                }
+                if (s_logger.isTraceEnabled()) {
+                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + ctxList.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+                }
                 return context;
             }
-
-            // TODO, we need to control the maximum number of outstanding VmwareContext object in the future
             return null;
        }
    }
 
-    public void returnContext(VmwareContext context) {
+    public void registerContext(final VmwareContext context) {
         assert (context.getPool() == this);
         assert (context.getPoolKey() != null);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(context.getPoolKey());
-            if (l == null) {
-                l = new ArrayList<VmwareContext>();
-                _pool.put(context.getPoolKey(), l);
+            Queue<VmwareContext> ctxList = _pool.get(context.getPoolKey());
+            if (ctxList == null) {
+                ctxList = EvictingQueue.create(_maxIdleQueueLength);
+                _pool.put(context.getPoolKey(), ctxList);
             }
 
-            if (l.size() < _maxIdleQueueLength) {
-                context.clearStockObjects();
-                l.add(context);
+            if (ctxList.size() >= _maxIdleQueueLength) {
+                final VmwareContext oldestContext = ctxList.remove();
+                if (oldestContext != null) {
+                    try {
+                        oldestContext.close();
+                    } catch (Throwable t) {
--- End diff --

@abhinandanprateek ^^ here we close the oldest context before adding a new
one. The queue provides the FIFO semantics to purge old contexts and keep
new ones around, which is why I switched from the previously used ArrayList.
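
A small demo of those FIFO semantics (assuming Guava's EvictingQueue,
which the patch uses):

    import java.util.Queue;

    import com.google.common.collect.EvictingQueue;

    public class FifoEvictionDemo {
        public static void main(final String[] args) {
            final Queue<String> queue = EvictingQueue.create(2);
            queue.add("ctx-1");
            queue.add("ctx-2");
            queue.add("ctx-3");         // at capacity: head "ctx-1" is silently evicted
            System.out.println(queue);  // prints [ctx-2, ctx-3] -- newest contexts survive
        }
    }

Note the patch does not rely on this silent eviction for cleanup:
registerContext removes and closes the oldest context explicitly before
adding, so nothing is dropped with its connection still open.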



[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-11-01 Thread abhinandanprateek
Github user abhinandanprateek commented on a diff in the pull request:

https://github.com/apache/cloudstack/pull/1729#discussion_r85904768
  
--- Diff: vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContextPool.java ---
@@ -45,76 +43,74 @@ public VmwareContextPool() {
         this(DEFAULT_IDLE_QUEUE_LENGTH, DEFAULT_CHECK_INTERVAL);
     }
 
-    public VmwareContextPool(int maxIdleQueueLength) {
-        this(maxIdleQueueLength, DEFAULT_CHECK_INTERVAL);
-    }
-
     public VmwareContextPool(int maxIdleQueueLength, long idleCheckIntervalMs) {
-        _pool = new HashMap<String, List<VmwareContext>>();
+        _pool = new HashMap<String, Queue<VmwareContext>>();
 
         _maxIdleQueueLength = maxIdleQueueLength;
         _idleCheckIntervalMs = idleCheckIntervalMs;
 
         _timer.scheduleAtFixedRate(getTimerTask(), _idleCheckIntervalMs, _idleCheckIntervalMs);
     }
 
-    public void registerOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.add(context);
-        }
-    }
-
-    public void unregisterOutstandingContext(VmwareContext context) {
-        assert (context != null);
-        synchronized (this) {
-            _outstandingRegistry.remove(context);
+    public VmwareContext getContext(final String vCenterAddress, final String vCenterUserName) {
+        final String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
+        if (Strings.isNullOrEmpty(poolKey)) {
+            return null;
         }
-    }
-
-    public VmwareContext getContext(String vCenterAddress, String vCenterUserName) {
-        String poolKey = composePoolKey(vCenterAddress, vCenterUserName);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(poolKey);
-            if (l == null)
-                return null;
-
-            if (l.size() > 0) {
-                VmwareContext context = l.remove(0);
-                context.setPoolInfo(this, poolKey);
-
-                if (s_logger.isTraceEnabled())
-                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + l.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+            final Queue<VmwareContext> ctxList = _pool.get(poolKey);
+            if (ctxList != null && ctxList.size() > 0) {
+                final VmwareContext context = ctxList.remove();
+                if (context != null) {
+                    context.setPoolInfo(this, poolKey);
+                }
+                if (s_logger.isTraceEnabled()) {
+                    s_logger.trace("Return a VmwareContext from the idle pool: " + poolKey + ". current pool size: " + ctxList.size() + ", outstanding count: " + VmwareContext.getOutstandingContextCount());
+                }
                 return context;
             }
-
-            // TODO, we need to control the maximum number of outstanding VmwareContext object in the future
             return null;
        }
    }
 
-    public void returnContext(VmwareContext context) {
+    public void registerContext(final VmwareContext context) {
         assert (context.getPool() == this);
         assert (context.getPoolKey() != null);
         synchronized (this) {
-            List<VmwareContext> l = _pool.get(context.getPoolKey());
-            if (l == null) {
-                l = new ArrayList<VmwareContext>();
-                _pool.put(context.getPoolKey(), l);
+            Queue<VmwareContext> ctxList = _pool.get(context.getPoolKey());
+            if (ctxList == null) {
+                ctxList = EvictingQueue.create(_maxIdleQueueLength);
--- End diff --

Since all access is in synchronized blocks, this is not required. But a
synchronized EvictingQueue could have let you avoid some of the
synchronized blocks.

Again, this is not required as it is.
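
A sketch of that suggestion, assuming Guava's Queues.synchronizedQueue
wrapper is used; individual queue operations become thread-safe, but
compound check-then-act sequences (size check, remove, close, add) would
still need an explicit lock, which is why the patch keeps its synchronized
blocks:

    import java.util.Queue;

    import com.google.common.collect.EvictingQueue;
    import com.google.common.collect.Queues;

    class SynchronizedIdleQueue {
        // Wraps the bounded queue in a synchronized view.
        static <T> Queue<T> create(final int cap) {
            return Queues.synchronizedQueue(EvictingQueue.<T>create(cap));
        }
    }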




[GitHub] cloudstack pull request #1729: CLOUDSTACK-9564: Fix memory leaks in VmwareCo...

2016-12-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/cloudstack/pull/1729

