On 2/22/19 7:02 PM, Alexander Duyck wrote:
> On Mon, Feb 4, 2019 at 1:47 PM Nitesh Narayan Lal wrote:
>> The following patch-set proposes an efficient mechanism for handing freed
>> memory between the guest and the host. It enables the guests with no page
>> cache to rapidly free and reclaims memory to and from the host respectively.
On Mon, Feb 4, 2019 at 1:47 PM Nitesh Narayan Lal wrote:
>
> The following patch-set proposes an efficient mechanism for handing freed
> memory between the guest and the host. It enables the guests with no page
> cache to rapidly free and reclaims memory to and from the host respectively.
>
> Be
>> I can't follow. We are talking about something as simple as a minimum
>> page granularity here that can easily be configured. Nothing that
>> screams for different implementations. But I get your point, we could
>> tune for different architectures.
>
> I was thinking about the guest side of t
On Tue, Feb 19, 2019 at 01:57:14PM -0800, Alexander Duyck wrote:
> On Tue, Feb 19, 2019 at 10:32 AM David Hildenbrand wrote:
> >
> > >>> This essentially just ends up being another trade-off of CPU versus
> > >>> memory though. Assuming we aren't using THP we are going to take a
> > >>> penalty in
On Tue, Feb 19, 2019 at 10:32 AM David Hildenbrand wrote:
>
> >>> This essentially just ends up being another trade-off of CPU versus
> >>> memory though. Assuming we aren't using THP we are going to take a
> >>> penalty in terms of performance but could then free individual pages
> >>> less than
On Tue, Feb 19, 2019 at 09:21:20PM +0100, David Hildenbrand wrote:
> On 19.02.19 21:17, Michael S. Tsirkin wrote:
> > On Tue, Feb 19, 2019 at 09:02:52PM +0100, David Hildenbrand wrote:
> >> On 19.02.19 20:58, Michael S. Tsirkin wrote:
> >>> On Tue, Feb 19, 2019 at 10:06:35AM -0800, Alexander Duyck
On 19.02.19 21:17, Michael S. Tsirkin wrote:
> On Tue, Feb 19, 2019 at 09:02:52PM +0100, David Hildenbrand wrote:
>> On 19.02.19 20:58, Michael S. Tsirkin wrote:
>>> On Tue, Feb 19, 2019 at 10:06:35AM -0800, Alexander Duyck wrote:
> I tend to like an asynchronous reporting approach as discussed
On Tue, Feb 19, 2019 at 09:02:52PM +0100, David Hildenbrand wrote:
> On 19.02.19 20:58, Michael S. Tsirkin wrote:
> > On Tue, Feb 19, 2019 at 10:06:35AM -0800, Alexander Duyck wrote:
> >>> I tend to like an asynchronous reporting approach as discussed in this
>>> thread, we would have to see if Nitesh could get it implemented.
On 19.02.19 20:58, Michael S. Tsirkin wrote:
> On Tue, Feb 19, 2019 at 10:06:35AM -0800, Alexander Duyck wrote:
>>> I tend to like an asynchronous reporting approach as discussed in this
>>> thread, we would have to see if Nitesh could get it implemented.
>>
>> I agree it would be great if it could work.
On Tue, Feb 19, 2019 at 10:06:35AM -0800, Alexander Duyck wrote:
> > I tend to like an asynchronous reporting approach as discussed in this
> > thread, we would have to see if Nitesh could get it implemented.
>
> I agree it would be great if it could work. However I have concerns
> given that work
>>> This essentially just ends up being another trade-off of CPU versus
>>> memory though. Assuming we aren't using THP we are going to take a
>>> penalty in terms of performance but could then free individual pages
>>> less than HUGETLB_PAGE_ORDER, but the CPU utilization is going to be
>>> much higher.
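To make that granularity knob concrete, a guest-side filter along the lines of the sketch below would only report chunks of at least a configurable order, trading finer reclaim for lower CPU cost. HINT_MIN_ORDER and report_free_range() are illustrative placeholders, not names from the posted series.

#include <linux/mm.h>
#include <linux/hugetlb.h>

/* Placeholder for whatever hypercall/virtqueue op the interface ends up using. */
extern void report_free_range(unsigned long pfn, unsigned int order);

/* Only hint on chunks of at least this order (2 MiB on x86-64). */
#define HINT_MIN_ORDER  HUGETLB_PAGE_ORDER

static void maybe_report_free_page(struct page *page, unsigned int order)
{
        if (order < HINT_MIN_ORDER)
                return;         /* too small: skip to keep CPU overhead down */

        report_free_range(page_to_pfn(page), order);
}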
On Mon, Feb 18, 2019 at 11:55 PM David Hildenbrand wrote:
>
> On 19.02.19 01:01, Alexander Duyck wrote:
> > On Mon, Feb 18, 2019 at 1:04 PM David Hildenbrand wrote:
> >>
> >> On 18.02.19 21:40, Nitesh Narayan Lal wrote:
> >>> On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 20
On Mon, Feb 18, 2019 at 6:46 PM Andrea Arcangeli wrote:
>
> Hello,
>
> On Mon, Feb 18, 2019 at 03:47:22PM -0800, Alexander Duyck wrote:
> > essentially fragmented them. I guess hugepaged went through and
> > started trying to reassemble the huge pages and as a result there have
> > been apps that
Also one reason why I am not a fan of working with anything less than
PMD order is because there have been issues in the past with false
memory leaks being created when hints were provided on THP pages that
essentially fragmented them. I guess hugepaged went through and
started trying to reassemble the huge pages and as a result there have
been apps that ended up consuming more memory than they would have
otherwise since
>>>
On 19.02.19 15:40, Michael S. Tsirkin wrote:
> On Tue, Feb 19, 2019 at 09:06:01AM +0100, David Hildenbrand wrote:
>> On 19.02.19 00:47, Alexander Duyck wrote:
>>> On Mon, Feb 18, 2019 at 9:42 AM David Hildenbrand wrote:
On 18.02.19 18:31, Alexander Duyck wrote:
> On Mon, Feb 18, 2019
On Tue, Feb 19, 2019 at 09:06:01AM +0100, David Hildenbrand wrote:
> On 19.02.19 00:47, Alexander Duyck wrote:
> > On Mon, Feb 18, 2019 at 9:42 AM David Hildenbrand wrote:
> >>
> >> On 18.02.19 18:31, Alexander Duyck wrote:
> >>> On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand
> >>> wrote:
> >
On 19.02.19 15:17, Nitesh Narayan Lal wrote:
> On 2/19/19 8:03 AM, David Hildenbrand wrote:
>> There are two main ways to avoid allocation:
>> 1. do not add extra data on top of each chunk passed
> If I am not wrong then this is close to what we have right now.
Yes, minus the kthread(s) and eventually with some sort of memory allocation for the request.
On 2/19/19 8:03 AM, David Hildenbrand wrote:
> There are two main ways to avoid allocation:
> 1. do not add extra data on top of each chunk passed
If I am not wrong then this is close to what we have right now.
>>> Yes, minus the kthread(s) and eventually with some sort of memory
>>> allocation for the request. Once you're asynchronous
There are two main ways to avoid allocation:
1. do not add extra data on top of each chunk passed
>>> If I am not wrong then this is close to what we have right now.
>> Yes, minus the kthread(s) and eventually with some sort of memory
>> allocation for the request. Once you're asynchronous
On 2/18/19 9:46 PM, Andrea Arcangeli wrote:
> Hello,
>
> On Mon, Feb 18, 2019 at 03:47:22PM -0800, Alexander Duyck wrote:
>> essentially fragmented them. I guess hugepaged went through and
>> started trying to reassemble the huge pages and as a result there have
>> been apps that ended up consumin
On 2/18/19 4:04 PM, David Hildenbrand wrote:
> On 18.02.19 21:40, Nitesh Narayan Lal wrote:
>> On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
>>> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
> So I'm fine with a simple implementation but the interface needs to
On 19.02.19 00:47, Alexander Duyck wrote:
> On Mon, Feb 18, 2019 at 9:42 AM David Hildenbrand wrote:
>>
>> On 18.02.19 18:31, Alexander Duyck wrote:
>>> On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand wrote:
On 18.02.19 17:49, Michael S. Tsirkin wrote:
> On Sat, Feb 16, 2019 at 10
On 19.02.19 01:01, Alexander Duyck wrote:
> On Mon, Feb 18, 2019 at 1:04 PM David Hildenbrand wrote:
>>
>> On 18.02.19 21:40, Nitesh Narayan Lal wrote:
>>> On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
>> So I'm fine w
On Mon, Feb 18, 2019 at 03:47:22PM -0800, Alexander Duyck wrote:
> > > So far with my patch set that hints at the PMD level w/ THP enabled I
> > > am not really seeing that much overhead for the hypercalls. The bigger
> > > piece that is eating up CPU time is all the page faults and page
> > > zeroing
Hello,
On Mon, Feb 18, 2019 at 03:47:22PM -0800, Alexander Duyck wrote:
> essentially fragmented them. I guess hugepaged went through and
> started trying to reassemble the huge pages and as a result there have
> been apps that ended up consuming more memory than they would have
> otherwise since
On Mon, Feb 18, 2019 at 1:04 PM David Hildenbrand wrote:
>
> On 18.02.19 21:40, Nitesh Narayan Lal wrote:
> > On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
> >> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
> So I'm fine with a simple implementation but the interface
On Mon, Feb 18, 2019 at 9:42 AM David Hildenbrand wrote:
>
> On 18.02.19 18:31, Alexander Duyck wrote:
> > On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand wrote:
> >>
> >> On 18.02.19 17:49, Michael S. Tsirkin wrote:
> >>> On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
> >>>
On 18.02.19 21:40, Nitesh Narayan Lal wrote:
> On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
>> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
So I'm fine with a simple implementation but the interface needs to
allow the hypervisor to process hints in parallel while guest is running.
On 18.02.19 21:31, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
>>> So I'm fine with a simple implementation but the interface needs to
>>> allow the hypervisor to process hints in parallel while guest is
>>> running. We can then fix an
On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
>>> So I'm fine with a simple implementation but the interface needs to
>>> allow the hypervisor to process hints in parallel while guest is
>>> running. We can then fix a
On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
> > So I'm fine with a simple implementation but the interface needs to
> > allow the hypervisor to process hints in parallel while guest is
> > running. We can then fix any issues on hypervisor without breaking
> >
> So I'm fine with a simple implementation but the interface needs to
> allow the hypervisor to process hints in parallel while guest is
> running. We can then fix any issues on hypervisor without breaking
> guests.
Yes, I am fine with defining an interface that theoretically
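In sketch form, an interface with that property could be a shared ring the guest only ever appends to, so the host can drain hints in parallel while the VCPU keeps running. The layout and publish_hint() below are assumptions for illustration, not the interface from this series.

#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

struct free_hint {
        u64 pfn;
        u32 order;
        u32 flags;
};

struct hint_ring {
        u32 prod;                       /* advanced by the guest */
        u32 cons;                       /* advanced by the host */
        struct free_hint ent[256];
};

/* Publish one hint and return immediately; the host consumes entries on
 * its own schedule, so the reporting VCPU never blocks on processing. */
static bool publish_hint(struct hint_ring *r, u64 pfn, u32 order)
{
        u32 next = (r->prod + 1) % ARRAY_SIZE(r->ent);

        if (next == READ_ONCE(r->cons))
                return false;           /* ring full, retry later */

        r->ent[r->prod] = (struct free_hint){ .pfn = pfn, .order = order };
        smp_wmb();                      /* entry visible before index update */
        WRITE_ONCE(r->prod, next);
        return true;
}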
On Mon, Feb 18, 2019 at 08:35:36PM +0100, David Hildenbrand wrote:
> On 18.02.19 20:16, Michael S. Tsirkin wrote:
> > On Mon, Feb 18, 2019 at 07:29:44PM +0100, David Hildenbrand wrote:
> >>>
> >
> > But really what business has something that is supposedly
> > an optimization blocking a
On 18.02.19 20:16, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 07:29:44PM +0100, David Hildenbrand wrote:
>>>
>
> But really what business has something that is supposedly
> an optimization blocking a VCPU? We are just freeing up
> lots of memory, why is it a good idea to slow that process down?
On Mon, Feb 18, 2019 at 07:29:44PM +0100, David Hildenbrand wrote:
> >
> >>>
> >>> But really what business has something that is supposedly
> >>> an optimization blocking a VCPU? We are just freeing up
> >>> lots of memory why is it a good idea to slow that
> >>> process down?
> >>
> >> I first w
On 18.02.19 18:54, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 05:59:06PM +0100, David Hildenbrand wrote:
>> On 18.02.19 17:49, Michael S. Tsirkin wrote:
>>> On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
It would be worth a try. My feeling is that a synchronous report after
e.g. 512 frees should be acceptable, as it seems to be acceptable on
s390x.
On Mon, Feb 18, 2019 at 09:31:13AM -0800, Alexander Duyck wrote:
> > Optimization of space comes with a price (here: execution time).
>
> One thing to keep in mind though is that if you are already having to
> pull pages in and out of swap on the host in order be able to provide
> enough memory fo
On Mon, Feb 18, 2019 at 05:59:06PM +0100, David Hildenbrand wrote:
> On 18.02.19 17:49, Michael S. Tsirkin wrote:
> > On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
> >> It would be worth a try. My feeling is that a synchronous report after
> >> e.g. 512 frees should be acceptable, as it seems to be acceptable on
> >> s390x.
On 18.02.19 18:31, Alexander Duyck wrote:
> On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand wrote:
>>
>> On 18.02.19 17:49, Michael S. Tsirkin wrote:
>>> On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
It would be worth a try. My feeling is that a synchronous report after
On Mon, Feb 18, 2019 at 8:59 AM David Hildenbrand wrote:
>
> On 18.02.19 17:49, Michael S. Tsirkin wrote:
> > On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
> >> It would be worth a try. My feeling is that a synchronous report after
> >> e.g. 512 frees should be acceptable, as
On 18.02.19 17:49, Michael S. Tsirkin wrote:
> On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
>> It would be worth a try. My feeling is that a synchronous report after
>> e.g. 512 frees should be acceptable, as it seems to be acceptable on
>> s390x. (basically always enabled, nobody complains).
On Sat, Feb 16, 2019 at 10:40:15AM +0100, David Hildenbrand wrote:
> It would be worth a try. My feeling is that a synchronous report after
> e.g. 512 frees should be acceptable, as it seems to be acceptable on
> s390x. (basically always enabled, nobody complains).
What slips under the radar on an
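The s390x-like batching being referred to boils down to something like the sketch below: each VCPU collects freed PFNs and makes one synchronous report per 512 entries. hypervisor_report_free() is a placeholder, and flushing of partially filled buffers is ignored here.

#include <linux/percpu.h>

#define HINT_BATCH      512     /* one synchronous report per 512 frees */

struct vcpu_hint_buf {
        unsigned long pfn[HINT_BATCH];
        unsigned int nr;
};

static DEFINE_PER_CPU(struct vcpu_hint_buf, hint_buf);

/* Placeholder for the synchronous report (hypercall or exit). */
extern void hypervisor_report_free(unsigned long *pfns, unsigned int nr);

/* Assumed to be called from the free path with preemption disabled. */
static void queue_free_hint(unsigned long pfn)
{
        struct vcpu_hint_buf *buf = this_cpu_ptr(&hint_buf);

        buf->pfn[buf->nr++] = pfn;
        if (buf->nr == HINT_BATCH) {
                hypervisor_report_free(buf->pfn, buf->nr);
                buf->nr = 0;
        }
}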
On 18.02.19 16:50, Nitesh Narayan Lal wrote:
>
> On 2/16/19 4:40 AM, David Hildenbrand wrote:
>> On 04.02.19 21:18, Nitesh Narayan Lal wrote:
>>
>> Hi Nitesh,
>>
>> I thought again about how s390x handles free page hinting. As that seems
>> to work just fine, I guess sticking to a similar model makes sense.
On 2/16/19 4:40 AM, David Hildenbrand wrote:
> On 04.02.19 21:18, Nitesh Narayan Lal wrote:
>
> Hi Nitesh,
>
> I thought again about how s390x handles free page hinting. As that seems
> to work just fine, I guess sticking to a similar model makes sense.
>
>
> I already explained in this thread how it works on s390x, a short summary:
On 02/18/2019 10:36 AM, Wei Wang wrote:
On 02/15/2019 05:41 PM, David Hildenbrand wrote:
On 15.02.19 10:05, Wang, Wei W wrote:
On Thursday, February 14, 2019 5:43 PM, David Hildenbrand wrote:
Yes indeed, that is the important bit. They must not be put back to the
buddy before they have been processed by the hypervisor.
On 02/15/2019 05:41 PM, David Hildenbrand wrote:
On 15.02.19 10:05, Wang, Wei W wrote:
On Thursday, February 14, 2019 5:43 PM, David Hildenbrand wrote:
Yes indeed, that is the important bit. They must not be put back to the
buddy before they have been processed by the hypervisor. But as the pag
On 04.02.19 21:18, Nitesh Narayan Lal wrote:
Hi Nitesh,
I thought again about how s390x handles free page hinting. As that seems
to work just fine, I guess sticking to a similar model makes sense.
I already explained in this thread how it works on s390x, a short summary:
1. Each VCPU has a buffer
On 2/15/19 4:05 AM, Wang, Wei W wrote:
> On Thursday, February 14, 2019 5:43 PM, David Hildenbrand wrote:
>> Yes indeed, that is the important bit. They must not be put back to the
>> buddy before they have been processed by the hypervisor. But as the pages
>> are not in the buddy, no one allocati
On 15.02.19 10:05, Wang, Wei W wrote:
> On Thursday, February 14, 2019 5:43 PM, David Hildenbrand wrote:
> >> Yes indeed, that is the important bit. They must not be put back to the
>> buddy before they have been processed by the hypervisor. But as the pages
>> are not in the buddy, no one allocating
On 15.02.19 10:15, Wang, Wei W wrote:
> On Thursday, February 14, 2019 6:01 PM, David Hildenbrand wrote:
>> And how to preload without locking?
>
> The memory is preloaded per-CPU. It's usually called outside the lock.
Right, that works as long as only a fixed amount of pages is needed. I
remember
On Thursday, February 14, 2019 6:01 PM, David Hildenbrand wrote:
> And how to preload without locking?
The memory is preloaded per-CPU. It's usually called outside the lock.
Best,
Wei
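The per-CPU preload pattern referred to here, modelled loosely on radix_tree_preload(): allocate a spare node outside the lock, then consume it under the lock without touching the allocator. xb_node and xb_preload() are stand-in names, not code from the series.

#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/preempt.h>

struct xb_node { unsigned long bits[8]; };      /* stand-in node type */

static DEFINE_PER_CPU(struct xb_node *, xb_spare);

/* Called outside the lock; may sleep. On success preemption stays
 * disabled so the spare cannot migrate away before the caller takes
 * the lock, consumes it, unlocks and calls preempt_enable(). */
static int xb_preload(gfp_t gfp)
{
        struct xb_node *node;

        preempt_disable();
        if (this_cpu_read(xb_spare))
                return 0;
        preempt_enable();

        node = kmalloc(sizeof(*node), gfp);     /* no lock held here */
        if (!node)
                return -ENOMEM;

        preempt_disable();
        if (this_cpu_read(xb_spare))
                kfree(node);                    /* raced with another preload */
        else
                this_cpu_write(xb_spare, node);
        return 0;
}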
On Thursday, February 14, 2019 5:43 PM, David Hildenbrand wrote:
> Yes indeed, that is the important bit. They must not be put back to the
> buddy before they have been processed by the hypervisor. But as the pages
> are not in the buddy, no one allocating a page will stumble over such a page
> and
On 2/14/19 3:48 AM, Wang, Wei W wrote:
> On Wednesday, February 13, 2019 8:07 PM, Nitesh Narayan Lal wrote:
>> Once the host frees the pages, all the isolated pages are returned back
>> to the buddy. (This is implemented in hyperlist_ready())
> This actually has the same issue: the isolated pages
On 14.02.19 11:00, David Hildenbrand wrote:
> On 14.02.19 10:08, Wang, Wei W wrote:
>> On Wednesday, February 13, 2019 5:19 PM, David Hildenbrand wrote:
>>> If you have to resize/alloc/coordinate who will report, you will need
>>> locking.
>>> Especially, I doubt that there is an atomic xbitmap (prove me wrong :) ).
On 14.02.19 10:08, Wang, Wei W wrote:
> On Wednesday, February 13, 2019 5:19 PM, David Hildenbrand wrote:
>> If you have to resize/alloc/coordinate who will report, you will need
>> locking.
>> Especially, I doubt that there is an atomic xbitmap (prove me wrong :) ).
>
> Yes, we need to change xbitmap to support it.
On 14.02.19 09:48, Wang, Wei W wrote:
> On Wednesday, February 13, 2019 8:07 PM, Nitesh Narayan Lal wrote:
>> Once the host frees the pages, all the isolated pages are returned back
>> to the buddy. (This is implemented in hyperlist_ready())
>
> This actually has the same issue: the isolated pages
On 14.02.19 10:12, Wang, Wei W wrote:
> On Thursday, February 14, 2019 1:22 AM, Nitesh Narayan Lal wrote:
>> In normal conditions, yes, we would not like to report any memory when the
>> guest is already under memory pressure.
>>
>> I am not sure about the scenario where both guest and the host are un
On Wednesday, February 13, 2019 5:19 PM, David Hildenbrand wrote:
> If you have to resize/alloc/coordinate who will report, you will need locking.
> Especially, I doubt that there is an atomic xbitmap (prove me wrong :) ).
Yes, we need to change xbitmap to support it.
Just thought of another option
On Wednesday, February 13, 2019 8:07 PM, Nitesh Narayan Lal wrote:
> Once the host frees the pages, all the isolated pages are returned back
> to the buddy. (This is implemented in hyperlist_ready())
This actually has the same issue: the isolated pages have to wait to return to
the buddy after th
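The ordering being debated, in sketch form: the pages stay isolated, i.e. off the free list, until the host has acted on the report, and only then go back to the buddy. hypervisor_process_hints() is a placeholder and the batch is simplified to order-0 pages.

#include <linux/mm.h>
#include <linux/list.h>

/* Placeholder for handing the batch to the host; may block until the
 * host has processed the hint. */
extern void hypervisor_process_hints(struct list_head *isolated);

static void report_isolated_pages(struct list_head *isolated)
{
        struct page *page, *next;

        /* Pages were pulled off the free list earlier, so nothing can
         * allocate them while the host is working on the report. */
        hypervisor_process_hints(isolated);

        /* Only now hand them back to the buddy. */
        list_for_each_entry_safe(page, next, isolated, lru) {
                list_del(&page->lru);
                __free_page(page);
        }
}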
On Wed, Feb 13, 2019 at 06:59:24PM +0100, David Hildenbrand wrote:
> >>>
> Nitesh uses MADV_FREE here (as far as I recall :) ), to only mark pages as
> candidates for removal and if the host is low on memory, only scanning the
> guest page tables is sufficient to free up memory.
>>>
Nitesh uses MADV_FREE here (as far as I recall :) ), to only mark pages as
candidates for removal and if the host is low on memory, only scanning the
guest page tables is sufficient to free up memory.
But both points might just be an implementation detail in the example
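On the host side, the MADV_FREE variant mentioned above amounts to the userspace sketch below: the backing pages are only marked lazily freeable, so nothing is dropped unless the host actually comes under memory pressure. gpa_to_hva() stands in for the VMM's own guest-to-host address translation.

#include <sys/mman.h>
#include <stdint.h>
#include <stddef.h>

/* Provided by the VMM; assumed here for illustration. */
void *gpa_to_hva(uint64_t gpa);

static int hint_range_free(uint64_t gpa, size_t len)
{
        void *hva = gpa_to_hva(gpa);

        /* Keep the mapping; let the kernel reclaim these pages lazily,
         * without swapping them out, when it needs the memory. */
        return madvise(hva, len, MADV_FREE);
}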
On 2/13/19 12:09 PM, Michael S. Tsirkin wrote:
> On Wed, Feb 13, 2019 at 07:17:13AM -0500, Nitesh Narayan Lal wrote:
>> On 2/13/19 4:19 AM, David Hildenbrand wrote:
>>> On 13.02.19 09:55, Wang, Wei W wrote:
On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
> Global means all
On Wed, Feb 13, 2019 at 10:19:05AM +0100, David Hildenbrand wrote:
> On 13.02.19 09:55, Wang, Wei W wrote:
> > On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
> >> Global means all VCPUs will be competing potentially for a single lock when
> >> freeing/allocating a page, no? What if
On Wed, Feb 13, 2019 at 07:17:13AM -0500, Nitesh Narayan Lal wrote:
>
> On 2/13/19 4:19 AM, David Hildenbrand wrote:
> > On 13.02.19 09:55, Wang, Wei W wrote:
> >> On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
> >>> Global means all VCPUs will be competing potentially for a singl
On 2/13/19 4:19 AM, David Hildenbrand wrote:
> On 13.02.19 09:55, Wang, Wei W wrote:
>> On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
>>> Global means all VCPUs will be competing potentially for a single lock when
>>> freeing/allocating a page, no? What if you have 64VCPUs alloca
On 2/13/19 4:00 AM, Wang, Wei W wrote:
> On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
>> The following patch-set proposes an efficient mechanism for handing freed
>> memory between the guest and the host. It enables the guests with no page
>> cache to rapidly free and reclaims m
On 13.02.19 09:55, Wang, Wei W wrote:
> On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
>> Global means all VCPUs will be competing potentially for a single lock when
>> freeing/allocating a page, no? What if you have 64VCPUs allocating/freeing
>> memory like crazy?
>
> I think the
On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
> The following patch-set proposes an efficient mechanism for handing freed
> memory between the guest and the host. It enables the guests with no page
> cache to rapidly free and reclaims memory to and from the host respectively.
>
>
On Tuesday, February 12, 2019 5:24 PM, David Hildenbrand wrote:
> Global means all VCPUs will be competing potentially for a single lock when
> freeing/allocating a page, no? What if you have 64VCPUs allocating/freeing
> memory like crazy?
I think the key point is that the 64 vcpus won't allocate/
On 12.02.19 18:24, Nitesh Narayan Lal wrote:
>
> On 2/12/19 4:24 AM, David Hildenbrand wrote:
>> On 12.02.19 10:03, Wang, Wei W wrote:
>>> On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
The following patch-set proposes an efficient mechanism for handing freed
memory betw
On 2/12/19 4:24 AM, David Hildenbrand wrote:
> On 12.02.19 10:03, Wang, Wei W wrote:
>> On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
>>> The following patch-set proposes an efficient mechanism for handing freed
>>> memory between the guest and the host. It enables the guests wit
On 12.02.19 10:03, Wang, Wei W wrote:
> On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
>> The following patch-set proposes an efficient mechanism for handing freed
>> memory between the guest and the host. It enables the guests with no page
>> cache to rapidly free and reclaims mem
On Tuesday, February 5, 2019 4:19 AM, Nitesh Narayan Lal wrote:
> The following patch-set proposes an efficient mechanism for handing freed
> memory between the guest and the host. It enables the guests with no page
> cache to rapidly free and reclaims memory to and from the host respectively.
>
>