Re: [Xen-devel] [PATCH v3 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-10-07  Mike Latimer

On Wednesday, October 07, 2015 12:52:02 PM Ian Campbell wrote:
> Applied.
>
> Mike, FWIW for singleton patches it is normally ok to dispense with the
> 0/1 mail and to just send the patch by itself. If there is commentary
> which doesn't belong in the commit message you can put it below a "---"

[Xen-devel] [PATCH v3 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-10-02  Mike Latimer

once, and major and minor numbers from every vbd are checked against the
list. If a match is found, the mode of that vbd is checked for
compatibility with the mode of the device being attached.

Signed-off-by: Mike Latimer <mlati...@suse.com>
---
 tools/hotplug/Linux/block
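For illustration only: the patch itself modifies the tools/hotplug/Linux/block
shell script, but the algorithm it describes - one walk over the vbd backend
entries in xenstore, collecting each vbd's major:minor and mode, rather than
re-reading xenstore for every comparison - can be sketched in C with
libxenstore. The paths, the physical-device layout, and the helper name below
are assumptions for the sketch, not the patch's code:

    /* Minimal sketch, assuming the usual /local/domain/0/backend/vbd
     * layout and a "major:minor" string in each vbd's physical-device
     * node.  Returns 1 on a conflicting share, 0 otherwise. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xenstore.h>

    static int mode_conflict(struct xs_handle *xs, const char *devmm,
                             char new_mode)
    {
        unsigned int ndoms, ndevs, len, i, j;
        int conflict = 0;
        char path[256];
        char **doms = xs_directory(xs, XBT_NULL,
                                   "/local/domain/0/backend/vbd", &ndoms);

        if (!doms)
            return 0;
        for (i = 0; i < ndoms && !conflict; i++) {
            snprintf(path, sizeof(path),
                     "/local/domain/0/backend/vbd/%s", doms[i]);
            char **devs = xs_directory(xs, XBT_NULL, path, &ndevs);
            if (!devs)
                continue;
            for (j = 0; j < ndevs && !conflict; j++) {
                snprintf(path, sizeof(path),
                         "/local/domain/0/backend/vbd/%s/%s/physical-device",
                         doms[i], devs[j]);
                char *mm = xs_read(xs, XBT_NULL, path, &len);
                if (mm && !strcmp(mm, devmm)) {
                    snprintf(path, sizeof(path),
                             "/local/domain/0/backend/vbd/%s/%s/mode",
                             doms[i], devs[j]);
                    char *mode = xs_read(xs, XBT_NULL, path, &len);
                    /* sharing is only safe if both ends are read-only */
                    conflict = !(mode && mode[0] == 'r' && new_mode == 'r');
                    free(mode);
                }
                free(mm);
            }
            free(devs);
        }
        free(doms);
        return conflict;
    }

The point is the single walk: per the thread, the pre-patch script re-read
xenstore for every comparison, which is what made attach times grow with the
number of devices already attached.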

[Xen-devel] [PATCH v3 0/1] Block script performance with shared image files

2015-10-02  Mike Latimer

Hi,

V3 of this patch modifies the comments on check_sharing to document the
change in the return string. This change was necessary to allow the error
string in check_file_sharing to return the device causing the sharing
conflict.

Thanks,
Mike

Mike Latimer (1):
  tools/hotplug: Scan xenstore

Re: [Xen-devel] [PATCH 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-10-01  Mike Latimer

Hi George,

On Thursday, October 01, 2015 10:51:08 AM George Dunlap wrote:
> >        then
> > -          echo 'local'
> > +          echo "local $d"
> >            return
> >        fi
> >    fi
> > @@ -90,13 +107,13 @@ check_sharing()
> >    do
> >        d=$(xenstore_read_default

Re: [Xen-devel] [PATCH 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-10-01  Mike Latimer

Hi again,

On Thursday, October 01, 2015 10:51:08 AM George Dunlap wrote:
> > -          if [ "$d" = "$devmm" ]
> > +          if [[ "$devmm" == *"$d,"* ]]
>
> Style nit: using [[ instead of [. TBH I prefer [[, but it's probably
> better to be consistent with the rest of the file.

I was about to
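The hunk under discussion swaps an exact string comparison for a substring
match against a comma-delimited list of major:minor pairs; the comma folded
into the pattern is what keeps one entry from matching inside another (e.g.
8:1 inside 8:16). The same delimiter trick in C, as a sketch (the list
format and function name are illustrative, not the script's variables; a
leading comma is added here so only whole entries can match):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* "list" holds one comma-terminated entry per attached vbd, with a
     * leading comma as well, e.g. ",254:0,8:16,".  Searching for ",8:1,"
     * then matches neither "8:16" nor "18:1". */
    static bool devmm_in_list(const char *list, const char *devmm)
    {
        char needle[64];
        snprintf(needle, sizeof(needle), ",%s,", devmm);
        return strstr(list, needle) != NULL;
    }

    int main(void)
    {
        printf("%d\n", devmm_in_list(",254:0,8:16,", "8:1"));  /* 0 */
        printf("%d\n", devmm_in_list(",254:0,8:16,", "8:16")); /* 1 */
        return 0;
    }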

[Xen-devel] [PATCH v2 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-10-01  Mike Latimer

once, and major and minor numbers from every vbd are checked against the
list. If a match is found, the mode of that vbd is checked for
compatibility with the mode of the device being attached.

Signed-off-by: Mike Latimer <mlati...@suse.com>
---
 tools/hotplug/Linux/block

[Xen-devel] [PATCH v2 0/1] Block script performance with shared image files

2015-10-01  Mike Latimer

and 1:11. Finally, I added a more complete description of the problem to
the patch itself.

Thanks,
Mike

Mike Latimer (1):
  tools/hotplug: Scan xenstore once when attaching shared image files

 tools/hotplug/Linux/block | 76 +++
 1 file changed, 50

[Xen-devel] [PATCH 1/1] tools/hotplug: Scan xenstore once when attaching shared image files

2015-09-30  Mike Latimer

Signed-off-by: Mike Latimer <mlati...@suse.com>
---
 tools/hotplug/Linux/block | 67 +--
 1 file changed, 41 insertions(+), 26 deletions(-)

diff --git a/tools/hotplug/Linux/block b/tools/hotplug/Linux/block
index 8d2ee9d..aef051c 100644
--- a/tools/hotplug/Linux/block

Re: [Xen-devel] Shared image files and block script performance

2015-09-29  Mike Latimer

Hi Ian,

On Tuesday, September 29, 2015 10:25:32 AM Ian Campbell wrote:
> On Mon, 2015-09-28 at 17:14 -0600, Mike Latimer wrote:
> > Any better options or ideas?
>
> Is part of the problem that shell is a terrible choice for this kind of
> check?

There is some truth to th

[Xen-devel] Shared image files and block script performance

2015-09-28  Mike Latimer

Hi,

In an environment with read-only image files being shared across domains,
the block script becomes dramatically slower with every block attached:
each new attach rescans the xenstore entries of every existing vbd, so the
total cost of attaching N shared devices grows roughly quadratically.
While this is irritating with a few domains, it becomes very problematic
with hundreds of domains. Part of the issue was mentioned in a udev

Re: [Xen-devel] [PATCH 0/4] fix freemem loop

2015-03-05  Mike Latimer

On Thursday, March 05, 2015 05:49:35 PM Ian Campbell wrote:
> On Tue, 2015-03-03 at 11:08 +0000, Stefano Stabellini wrote:
> > Hi all,
> > this patch series fixes the freemem loop on machines with very large
> > amounts of memory, where the current wait time is not enough. In
> > order to be able to

Re: [Xen-devel] [PATCH 0/4] fix freemem loop

2015-03-04  Mike Latimer

On Tuesday, March 03, 2015 02:54:50 PM Mike Latimer wrote:
> Thanks for all the help and patience as we've worked through this. Ack
> to the whole series:
>
> Acked-by: Mike Latimer <mlati...@suse.com>

I guess the more correct response is:

Reviewed-by: Mike Latimer <mlati...@suse.com>
Tested

Re: [Xen-devel] [PATCH 0/4] fix freemem loop

2015-03-03  Mike Latimer

memory is freed. (Using dom0_mem is still a preferred option, as the
ballooning delay can be significant.)

Thanks for all the help and patience as we've worked through this. Ack to
the whole series:

Acked-by: Mike Latimer <mlati...@suse.com>

-Mike

Re: [Xen-devel] freemem-slack and large memory environments

2015-03-02  Mike Latimer

On Monday, March 02, 2015 06:04:11 AM Jan Beulich wrote:
> > Of course users could just use dom0_mem and get down with it.
>
> I don't think we should make this a requirement for correct operation.

Exactly. I think from a best practices perspective, dom0_mem is still the
recommended approach. It

Re: [Xen-devel] freemem-slack and large memory environments

2015-03-02  Mike Latimer

On Monday, March 02, 2015 04:15:41 PM Stefano Stabellini wrote:
> On Mon, 2 Mar 2015, Ian Campbell wrote:

"Continue as long as progress is being made" is exactly what 2563bca1154
("libxl: Wait for ballooning if free memory is increasing") was trying to
implement, so it certainly was the idea

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-27  Mike Latimer

On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> On Friday, February 27, 2015 10:52:17 AM Stefano Stabellini wrote:
> > On Thu, 26 Feb 2015, Mike Latimer wrote:
> > > libxl_set_memory_target = 1
> > >
> > > The new memory target is set for dom0 successfully.
> > >
> > > libxl_wait_for_free_memory

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-27  Mike Latimer

On Friday, February 27, 2015 11:29:12 AM Mike Latimer wrote:
> On Friday, February 27, 2015 08:28:49 AM Mike Latimer wrote:
> > After adding 2048aeec, dom0's target is lowered by the required
> > amount (e.g. 64GB), but as dom0 cannot balloon down fast enough,
> > libxl_wait_for_memory_target returns -5

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-26  Mike Latimer

On Wednesday, February 25, 2015 02:09:50 PM Stefano Stabellini wrote:
> > Is the upshot that Mike doesn't need to do anything further with his
> > patch (i.e. can drop it)? I think so?
>
> Yes, I think so. Maybe he could help out testing the patches I am going
> to write :-)

Sorry for not responding to

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-26  Mike Latimer

On Thursday, February 26, 2015 03:57:54 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 08:36 -0700, Mike Latimer wrote:
> > There is still one aspect of my original patch that is important. As
> > the code currently stands, the target for dom0 is set lower during
> > each iteration of the loop. Unless

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-26  Mike Latimer

(Sorry for the delayed response, dealing with ENOTIME.)

On Thursday, February 26, 2015 05:47:21 PM Ian Campbell wrote:
> On Thu, 2015-02-26 at 10:38 -0700, Mike Latimer wrote:
> >     rc = libxl_set_memory_target(ctx, 0, free_memkb - need_memkb, 1, 0);

I think so. In essence we just need to update
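For context: libxl_set_memory_target's fourth argument selects a relative
adjustment, so the quoted call lowers dom0's target by exactly the current
shortfall (free_memkb - need_memkb is negative while ballooning is still
needed). A simplified sketch of the surrounding loop, assuming the
4.5-era libxl signatures and with error handling trimmed - not the actual
xl source:

    #include <stdint.h>
    #include <libxl.h>

    /* Sketch: lower dom0's target once per iteration by only the amount
     * still missing, then give the balloon driver time to catch up. */
    static int freemem_sketch(libxl_ctx *ctx, uint32_t need_memkb)
    {
        uint32_t free_memkb;
        int retries = 3;

        do {
            if (libxl_get_free_memory(ctx, &free_memkb))
                return -1;
            if (free_memkb >= need_memkb)
                return 0;                      /* enough memory is free */

            /* relative (4th arg) negative adjustment: just the shortfall */
            if (libxl_set_memory_target(ctx, 0,
                                        (int32_t)(free_memkb - need_memkb),
                                        1, 0))
                return -1;

            /* wait up to 10s for the freed pages to actually show up */
            libxl_wait_for_free_memory(ctx, 0, need_memkb, 10);
        } while (--retries);

        return -1;
    }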

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-26  Mike Latimer

On Thursday, February 26, 2015 05:53:06 PM Stefano Stabellini wrote:
> What is the return value of libxl_set_memory_target and
> libxl_wait_for_free_memory in that case? Isn't it just a matter of
> properly handling the return values?

The return from libxl_set_memory_target is 0, as the assignment

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-26  Mike Latimer

On Thursday, February 26, 2015 01:45:16 PM Mike Latimer wrote:
> On Thursday, February 26, 2015 05:53:06 PM Stefano Stabellini wrote:
> > What is the return value of libxl_set_memory_target and
> > libxl_wait_for_free_memory in that case? Isn't it just a matter of
> > properly handling the return values

Re: [Xen-devel] [PATCH v3] libxl: Wait for ballooning if free memory is increasing

2015-02-13  Mike Latimer

Hi Wei,

On Friday, February 13, 2015 11:01:41 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 09:17:23PM -0700, Mike Latimer wrote:
> > Prior to my changes, this issue would only be noticed when starting
> > very large domains - due to the loop being limited to 3 iterations.
> > (For example, when

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-13  Mike Latimer

Hi Wei,

On Friday, February 13, 2015 11:13:50 AM Wei Liu wrote:
> On Tue, Feb 10, 2015 at 02:34:27PM -0700, Mike Latimer wrote:
> > On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> > > It seems that there are two approaches to resolve this:
> > >
> > > - Introduce a hard limit on freemem-slack

Re: [Xen-devel] [PATCH v3] libxl: Wait for ballooning if free memory is increasing

2015-02-10  Mike Latimer

On Thursday, February 05, 2015 12:45:53 PM Ian Campbell wrote:
> On Mon, 2015-02-02 at 08:17 -0700, Mike Latimer wrote:
> > On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> > > On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > > > During domain startup, all required memory ballooning

Re: [Xen-devel] freemem-slack and large memory environments

2015-02-10  Mike Latimer

On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> While testing commit 2563bca1, I found that libxl_get_free_memory
> returns 0 until there is more free memory than required for
> freemem-slack. This means that during the domain creation process,
> freed memory is first set aside

[Xen-devel] freemem-slack and large memory environments

2015-02-09  Mike Latimer

Hi,

While testing commit 2563bca1, I found that libxl_get_free_memory returns
0 until there is more free memory than required for freemem-slack. This
means that during the domain creation process, freed memory is first set
aside for freemem-slack, then marked as truly free for consumption. On
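That behaviour falls out of how the free-memory figure is computed: the
freemem-slack reservation is deducted before anything is reported as free.
A sketch of the effect (illustrative arithmetic, not the actual libxl
source):

    #include <stdint.h>

    /* Everything ballooned out of dom0 first refills the freemem-slack
     * reservation; only the remainder is reported as free. */
    static uint64_t reported_free_kb(uint64_t actually_free_kb,
                                     uint64_t slack_kb)
    {
        if (actually_free_kb <= slack_kb)
            return 0;  /* still feeding the slack: caller sees no progress */
        return actually_free_kb - slack_kb;
    }

This is why a caller polling for free memory can see 0 for a long time
even though ballooning is making steady progress.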

Re: [Xen-devel] [PATCH v3] libxl: Wait for ballooning if free memory is increasing

2015-02-02  Mike Latimer

On Monday, February 02, 2015 02:35:39 PM Ian Campbell wrote:
> On Fri, 2015-01-30 at 14:01 -0700, Mike Latimer wrote:
> > During domain startup, all required memory ballooning must complete
> > within a maximum window of 33 seconds (3 retries, 11 seconds of
> > delay). If not, domain creation is aborted

Re: [Xen-devel] [PATCH] libxl: Wait for ballooning if free memory is increasing

2015-01-30  Mike Latimer

On Thursday, January 29, 2015 10:14:26 AM Ian Campbell wrote:
> I'm thinking it would be clearer if the comment and the condition were
> logically inverted. e.g.:
>
>     /*
>      * If the amount of free mem has increased on this iteration (i.e.
>      * some progress has been made) then reset the
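Spelled out, the inverted check Ian is suggesting would look something like
the sketch below (MAX_RETRIES and the helper are illustrative, not the
committed code): reset the retry budget whenever ballooning made progress,
and only burn an attempt when it did not:

    #include <stdint.h>

    #define MAX_RETRIES 3

    /*
     * If the amount of free mem has increased on this iteration (i.e.
     * some progress has been made) then reset the retry counter;
     * otherwise consume one of the remaining attempts.
     */
    static int update_retries(uint32_t free_memkb,
                              uint32_t *prev_free_memkb, int retries)
    {
        if (free_memkb > *prev_free_memkb) {
            *prev_free_memkb = free_memkb;
            return MAX_RETRIES;
        }
        return retries - 1;
    }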

Re: [Xen-devel] [PATCH] libxl: Wait for ballooning if free memory is increasing

2015-01-28  Mike Latimer

On Wednesday, January 28, 2015 01:05:25 PM Ian Campbell wrote:
> On Wed, 2015-01-21 at 22:22 -0700, Mike Latimer wrote:
> Sorry for the delay.

No problem! Thanks for the comments.

> > @@ -2228,7 +2230,13 @@ static int freemem(uint32_t domid,
> > libxl_domain_build_info *b_info)
> >     if (rc < 0

Re: [Xen-devel] [PATCH] libxl: Wait for ballooning if free memory is increasing

2015-01-27  Mike Latimer

On Wednesday, January 21, 2015 10:22:53 PM Mike Latimer wrote:
> During domain startup, all required memory ballooning must complete
> within a maximum window of 33 seconds (3 retries, 11 seconds of delay).
> If not, domain creation is aborted with a 'failed to free memory'
> error. In order

[Xen-devel] [PATCH] libxl: Wait for ballooning if free memory is increasing

2015-01-21  Mike Latimer

During domain startup, all required memory ballooning must complete within
a maximum window of 33 seconds (3 retries, 11 seconds of delay). If not,
domain creation is aborted with a 'failed to free memory' error. In order
to accommodate large domains or slower hardware (which require
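The 33-second figure comes from the shape of the pre-patch loop: three
attempts, each waiting up to 10 seconds for free memory plus roughly 1 more
second for the target to settle, i.e. 3 x 11s = 33s. Schematically (the
helper names below are placeholders standing in for the libxl calls, not
real functions):

    /* Sketch of the pre-patch bound: 3 tries x (10s + 1s) = 33 seconds. */
    int enough_memory_is_free(void);     /* placeholder helpers */
    int balloon_dom0_down(void);
    void wait_for_free_memory(int secs);
    void wait_for_memory_target(int secs);

    int freemem_window_sketch(void)
    {
        int retries = 3;

        do {
            if (enough_memory_is_free())
                return 0;
            balloon_dom0_down();
            wait_for_free_memory(10);    /* up to 10s per attempt */
            wait_for_memory_target(1);   /* ~1s more per attempt  */
        } while (--retries);

        return -1;                       /* "failed to free memory" */
    }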

Re: [Xen-devel] xl only waits 33 seconds for ballooning to complete

2015-01-12  Mike Latimer

On Monday, January 12, 2015 05:29:25 PM George Dunlap wrote:
> When I said 10s seems very conservative, I meant, 10s should be by far
> long enough for something to happen. If you can't free up at least 1k
> in 30s, then there is certainly something very unusual with your
> system.

So I was

[Xen-devel] xl only waits 33 seconds for ballooning to complete

2015-01-06  Mike Latimer

Hi,

In a previous post (1), I mentioned issues seen while ballooning a large
amount of memory. In the current code, the ballooning process only has 33
seconds to complete, or the xl operation (i.e. domain create) will fail.
When a lot of ballooning is required, or the host is very slow to

[Xen-devel] Ballooning dom0: insufficient memory (libxl) or CPU soft lockups (libvirt)

2014-12-13  Mike Latimer

Hi,

I've recently been testing large memory (64GB - 1TB) domains, and
encountering CPU soft lockups while dom0 is ballooning down to free memory
for the domain. The root of the issue also exposes a difference between
libxl and libvirt. When creating a domain using xl, if ballooning is
enabled