On Tue, Apr 02, 2019 at 04:43:03PM -0700, Alexander Duyck wrote:
> Yes, but hopefully it should be a small enough amount that nobody will
> notice. In many cases devices such as NICs can consume much more than
> this regularly for just their Rx buffers and it is not an issue. There
> has to be a ce
On 03.04.19 01:43, Alexander Duyck wrote:
> On Tue, Apr 2, 2019 at 11:53 AM David Hildenbrand wrote:
>>
> Why do we need them running in parallel for a single guest? I don't
> think we need the hints so quickly that we would need to have multiple
> VCPUs running in parallel to provide hints.
On Tue, Apr 2, 2019 at 11:53 AM David Hildenbrand wrote:
>
> >>> Why do we need them running in parallel for a single guest? I don't
> >>> think we need the hints so quickly that we would need to have multiple
> >>> VCPUs running in parallel to provide hints. In addition as it
> >>> currently stands
On 02.04.19 21:49, Michael S. Tsirkin wrote:
> On Tue, Apr 02, 2019 at 08:21:30PM +0200, David Hildenbrand wrote:
>> The other extreme is a system that barely frees (MAX_ORDER - X) pages;
>> however, your thread will waste cycles scanning for such.
>
> I don't think we need to scan as such. An arch hook
> that queues a job to a wq only when there's work
On Tue, Apr 2, 2019 at 10:53 AM Michael S. Tsirkin wrote:
>
> On Tue, Apr 02, 2019 at 10:45:43AM -0700, Alexander Duyck wrote:
> > We went through this back in the day with
> > networking. Adding more buffers is not the solution. The solution is
> > to have a way to gracefully recover and keep our hinting latency and
> > buffer bloat to a minimum.
On Tue, Apr 02, 2019 at 08:21:30PM +0200, David Hildenbrand wrote:
> The other extreme is a system that barely frees (MAX_ORDER - X) pages;
> however, your thread will waste cycles scanning for such.
I don't think we need to scan as such. An arch hook
that queues a job to a wq only when there's work
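
For illustration, the wq-based scheme Michael Tsirkin describes could be
wired up roughly as below. This is a sketch only: arch_page_freed(),
page_hinting_process() and HINT_MIN_ORDER are made-up names for this
example, not from any posted patch.

#include <linux/workqueue.h>
#include <linux/mm.h>

#define HINT_MIN_ORDER (MAX_ORDER - 2)	/* illustrative threshold */

extern void page_hinting_process(void);	/* stand-in for the report step */

static void hinting_work_fn(struct work_struct *work);
static DECLARE_WORK(hinting_work, hinting_work_fn);

static void hinting_work_fn(struct work_struct *work)
{
	/* Drain whatever was recorded and report it to the host. */
	page_hinting_process();
}

/* Hook in the freeing path: only schedule when a high-order page shows
 * up, so an idle system never burns cycles scanning free lists. */
void arch_page_freed(struct page *page, unsigned int order)
{
	if (order < HINT_MIN_ORDER)
		return;
	queue_work(system_wq, &hinting_work);	/* no-op if already pending */
}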
>>> Why do we need them running in parallel for a single guest? I don't
>>> think we need the hints so quickly that we would need to have multiple
>>> VCPUs running in parallel to provide hints. In addition as it
>>> currently stands in order to get pages into and out of the buddy
>>> allocator we
On 02.04.19 19:45, Alexander Duyck wrote:
> On Tue, Apr 2, 2019 at 10:09 AM David Hildenbrand wrote:
>>
>> On 02.04.19 18:18, Alexander Duyck wrote:
>>> On Tue, Apr 2, 2019 at 8:57 AM David Hildenbrand wrote:
On 02.04.19 17:25, Michael S. Tsirkin wrote:
> On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
On Tue, Apr 02, 2019 at 10:45:43AM -0700, Alexander Duyck wrote:
> We went through this back in the day with
> networking. Adding more buffers is not the solution. The solution is
> to have a way to gracefully recover and keep our hinting latency and
> buffer bloat to a minimum.
That's an interesting
On Tue, Apr 2, 2019 at 10:09 AM David Hildenbrand wrote:
>
> On 02.04.19 18:18, Alexander Duyck wrote:
> > On Tue, Apr 2, 2019 at 8:57 AM David Hildenbrand wrote:
> >>
> >> On 02.04.19 17:25, Michael S. Tsirkin wrote:
> >>> On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
> Basically what we would be doing is providing a means for
> incrementally transitioning the buddy memory into the idle/offline
> state to reduce guest memory overhead.
On Tue, Apr 2, 2019 at 8:56 AM David Hildenbrand wrote:
>
> On 02.04.19 17:04, Alexander Duyck wrote:
> > On Tue, Apr 2, 2019 at 12:42 AM David Hildenbrand wrote:
> >>
> >> On 01.04.19 22:56, Alexander Duyck wrote:
> >>> On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
>
> On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
On 02.04.19 18:18, Alexander Duyck wrote:
> On Tue, Apr 2, 2019 at 8:57 AM David Hildenbrand wrote:
>>
>> On 02.04.19 17:25, Michael S. Tsirkin wrote:
>>> On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
Basically what we would be doing is providing a means for
incrementally transitioning the buddy memory into the idle/offline
state to reduce guest memory overhead.
On 02.04.19 17:04, Alexander Duyck wrote:
> On Tue, Apr 2, 2019 at 12:42 AM David Hildenbrand wrote:
>>
>> On 01.04.19 22:56, Alexander Duyck wrote:
>>> On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
>> The interesting thing is most probably: Will the hinting size usually be
>> reasonably small?
On Tue, Apr 2, 2019 at 8:57 AM David Hildenbrand wrote:
>
> On 02.04.19 17:25, Michael S. Tsirkin wrote:
> > On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
> >> Basically what we would be doing is providing a means for
> >> incrementally transitioning the buddy memory into the idle/offline
> >> state to reduce guest memory overhead.
On 02.04.19 17:25, Michael S. Tsirkin wrote:
> On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
>> Basically what we would be doing is providing a means for
>> incrementally transitioning the buddy memory into the idle/offline
>> state to reduce guest memory overhead. It would require one function
>> that would walk the free page list
On 02.04.19 17:04, Alexander Duyck wrote:
> On Tue, Apr 2, 2019 at 12:42 AM David Hildenbrand wrote:
>>
>> On 01.04.19 22:56, Alexander Duyck wrote:
>>> On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
>> The interesting thing is most probably: Will the hinting size usually be
>> reasonably small?
On Tue, Apr 02, 2019 at 08:04:00AM -0700, Alexander Duyck wrote:
> Basically what we would be doing is providing a means for
> incrementally transitioning the buddy memory into the idle/offline
> state to reduce guest memory overhead. It would require one function
> that would walk the free page list
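
Such a function could walk the zone free lists roughly as sketched below.
Assumptions are flagged: report_page_to_host() is hypothetical, and real
code would isolate pages and drop zone->lock before reporting rather than
hint under the lock as shown here.

#include <linux/mm.h>
#include <linux/mmzone.h>

extern void report_page_to_host(unsigned long pfn, unsigned int order);

/* Sketch: visit every high-order free page in every populated zone. */
static void walk_free_pages(unsigned int min_order)
{
	struct zone *zone;
	unsigned int order, type;

	for_each_populated_zone(zone) {
		unsigned long flags;
		struct page *page;

		spin_lock_irqsave(&zone->lock, flags);
		for (order = min_order; order < MAX_ORDER; order++)
			for (type = 0; type < MIGRATE_TYPES; type++)
				list_for_each_entry(page,
					&zone->free_area[order].free_list[type],
					lru)
					report_page_to_host(page_to_pfn(page),
							    order);
		spin_unlock_irqrestore(&zone->lock, flags);
	}
}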
On Tue, Apr 2, 2019 at 12:42 AM David Hildenbrand wrote:
>
> On 01.04.19 22:56, Alexander Duyck wrote:
> > On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
> >>
> >> On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
> The interesting thing is most probably: Will the hinting size usually be
> reasonably small?
On 01.04.19 22:56, Alexander Duyck wrote:
> On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
>>
>> On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
The interesting thing is most probably: Will the hinting size usually be
reasonably small? At least I guess a guest with 4TB of RAM will not
suddenly get a hinting size of hundreds of GB.
On Mon, Apr 1, 2019 at 7:47 AM Michael S. Tsirkin wrote:
>
> On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
> > > The interesting thing is most probably: Will the hinting size usually be
> > reasonably small? At least I guess a guest with 4TB of RAM will not
> > suddenly get a hinting size of hundreds of GB.
On 01.04.19 16:47, Michael S. Tsirkin wrote:
> On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
>>> The interesting thing is most probably: Will the hinting size usually be
>>> reasonably small? At least I guess a guest with 4TB of RAM will not
>>> suddenly get a hinting size of hundreds of GB.
On Mon, Apr 01, 2019 at 04:11:42PM +0200, David Hildenbrand wrote:
> > The interesting thing is most probably: Will the hinting size usually be
> > reasonably small? At least I guess a guest with 4TB of RAM will not
> > suddenly get a hinting size of hundreds of GB. Most probably also only
> > some
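
For a rough sense of scale (illustrative arithmetic, not numbers from the
thread): if hints are issued at MAX_ORDER - 1 granularity, i.e. 4 MiB
chunks on x86-64, then even 100 GiB of free memory amounts to
100 GiB / 4 MiB = 25,600 hints, and at 16 bytes per {pfn, order}
descriptor that is about 400 KiB of hint metadata in flight at worst,
independent of the guest being 4TB in total.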
On Mon, Apr 01, 2019 at 04:09:32PM +0200, David Hildenbrand wrote:
> >
> > When you say yield, I would guess that would involve config space access
> > to the balloon to flush out outstanding hints?
>
> I rather meant yielding your CPU to the hypervisor, so it can process
> hinting requests faster (
On 01.04.19 16:09, David Hildenbrand wrote:
>>> Thinking about your approach, there is one elementary thing to notice:
>>>
>>> Giving the guest pages from the buffer while hinting requests are being
>>> processed means that the guest can and will temporarily make use of more
>>> memory than desired.
>> Thinking about your approach, there is one elementary thing to notice:
>>
>> Giving the guest pages from the buffer while hinting requests are being
>> processed means that the guest can and will temporarily make use of more
>> memory than desired. Essentially up to the point where MADV_FREE is
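
On the host side, acting on one such hint boils down to a single
madvise() call on the userspace mapping backing the guest range. A
minimal sketch (hva and len would come from the hypervisor's gpa-to-hva
translation, which is elided here):

#include <sys/mman.h>
#include <stddef.h>

/* Sketch: QEMU-style handling of one guest free-range hint. */
static int process_free_page_hint(void *hva, size_t len)
{
	/*
	 * MADV_FREE marks the pages reclaimable but leaves them mapped:
	 * the host only takes them back under memory pressure, and a
	 * guest write before that point simply cancels the hint.
	 * MADV_DONTNEED would drop them immediately, at the cost of
	 * guaranteed refaults.
	 */
	return madvise(hva, len, MADV_FREE);
}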
On Mon, Apr 01, 2019 at 10:17:51AM +0200, David Hildenbrand wrote:
> On 29.03.19 17:51, Michael S. Tsirkin wrote:
> > On Fri, Mar 29, 2019 at 04:45:58PM +0100, David Hildenbrand wrote:
> >> On 29.03.19 16:37, David Hildenbrand wrote:
> >>> On 29.03.19 16:08, Michael S. Tsirkin wrote:
> On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
On 29.03.19 17:51, Michael S. Tsirkin wrote:
> On Fri, Mar 29, 2019 at 04:45:58PM +0100, David Hildenbrand wrote:
>> On 29.03.19 16:37, David Hildenbrand wrote:
>>> On 29.03.19 16:08, Michael S. Tsirkin wrote:
On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
>
> We had a very simple idea in mind: As long as a hinting request is
> pending, don't actually trigger any OOM activity, but wait for it to be
> processed.
On Fri, Mar 29, 2019 at 04:45:58PM +0100, David Hildenbrand wrote:
> On 29.03.19 16:37, David Hildenbrand wrote:
> > On 29.03.19 16:08, Michael S. Tsirkin wrote:
> >> On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
> >>>
> >>> We had a very simple idea in mind: As long as a hinting request is
> >>> pending, don't actually trigger any OOM activity, but wait for it to be
> >>> processed.
On Fri, Mar 29, 2019 at 04:37:46PM +0100, David Hildenbrand wrote:
> Just so we understand each other. What you mean by "appended to guest
> memory" is "append to the guest memory size", not actually "append
> memory via virtio-balloon", like adding memory regions and stuff.
>
> Instead of "-m 4
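
As a concrete example of that sizing scheme (numbers invented for
illustration): a guest that should see 4 GiB of usable memory while a
256 MiB hinting buffer is outstanding would simply be started with the
sum,

    qemu-system-x86_64 -m 4352M -device virtio-balloon ...

rather than with a 4 GiB "-m" plus a separately managed memory region.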
On 29.03.19 16:37, David Hildenbrand wrote:
> On 29.03.19 16:08, Michael S. Tsirkin wrote:
>> On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
>>>
>>> We had a very simple idea in mind: As long as a hinting request is
>>> pending, don't actually trigger any OOM activity, but wait for it to be
>>> processed.
On 29.03.19 16:08, Michael S. Tsirkin wrote:
> On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
>>
>> We had a very simple idea in mind: As long as a hinting request is
>> pending, don't actually trigger any OOM activity, but wait for it to be
>> processed. Can be done using a simple atomic variable.
On Fri, Mar 29, 2019 at 03:24:24PM +0100, David Hildenbrand wrote:
>
> We had a very simple idea in mind: As long as a hinting request is
> pending, don't actually trigger any OOM activity, but wait for it to be
> processed. Can be done using a simple atomic variable.
>
> This is a scenario that will
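
The atomic-variable scheme David describes could look roughly like this
on the guest side (a sketch only; the names and the busy-wait are
illustrative, real code would sleep with a timeout):

#include <linux/atomic.h>
#include <linux/types.h>
#include <asm/processor.h>

static atomic_t hints_in_flight = ATOMIC_INIT(0);

/* Hinting side: pages in a request stay unusable until the host acks. */
static void hint_request_start(void) { atomic_inc(&hints_in_flight); }
static void hint_request_done(void) { atomic_dec(&hints_in_flight); }

/* OOM path: if a hint is pending, memory is about to come back, so
 * wait for the ack and retry the allocation instead of killing tasks. */
bool hinting_defer_oom(void)
{
	if (!atomic_read(&hints_in_flight))
		return false;			/* genuine OOM, proceed */
	while (atomic_read(&hints_in_flight))
		cpu_relax();			/* illustrative busy-wait */
	return true;				/* caller should retry */
}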
On 29.03.19 14:26, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote:
>> The following patch-set proposes an efficient mechanism for handing freed
>> memory between the guest and the host. It enables guests with no page
>> cache to rapidly free and reclaim memory to and from the host
>> respectively.
On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote:
> The following patch-set proposes an efficient mechanism for handing freed
> memory between the guest and the host. It enables guests with no page
> cache to rapidly free and reclaim memory to and from the host respectively.
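
Condensing the cover letter, the guest half of the mechanism amounts to:
capture freed pages, batch them, hand the batch to the host. A rough
sketch follows (the struct layout, HINT_BATCH and virtio_send_hints()
are stand-ins for this example, not the series' actual ABI):

#include <linux/types.h>

#define HINT_BATCH 32	/* illustrative batch size */

struct hint {
	__u64 pfn;
	__u32 order;
};

/* Stand-in for queueing a batch on a virtio-balloon virtqueue and
 * kicking the host. */
extern void virtio_send_hints(struct hint *hints, unsigned int n);

static struct hint batch[HINT_BATCH];
static unsigned int batched;

/* Called from the freeing path for each hintable page. */
void on_page_freed(unsigned long pfn, unsigned int order)
{
	batch[batched].pfn = pfn;
	batch[batched].order = order;
	if (++batched == HINT_BATCH) {
		virtio_send_hints(batch, batched);
		batched = 0;
	}
}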