++
> .../mm/restrictedmem_testmod/Makefile | 21 +++
> .../restrictedmem_testmod.c | 89 +++
> tools/testing/selftests/mm/run_vmtests.sh | 6 +
> 18 files changed, 454 insertions(+), 27 deletions(-)
> create mode 100644 tools/te
then sure. But I want to see a strong justification for any
> more header file cleanups.
I agree. It usually takes an unexpected combination of config options to
uncover some nasty include dependencies. So these patches might break the
build while their additional value is quite questionable.
--
Michal Hocko
SUSE Labs
n is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
>
> Signed-off-by: Wei Wang
> Signe
On Mon 28-08-17 15:33:26, Michal Hocko wrote:
> On Mon 28-08-17 18:08:32, Wei Wang wrote:
> > This patch adds support to walk through the free page blocks in the
> > system and report them via a callback function. Some page blocks may
> > leave the free list after zone->lo
ly
iterate over remaining orders just to realize there is nothing to be
done for those...
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> + }
> + }
> + }
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
--
Michal Hocko
SUSE Labs
> 1. break out of list
> 2. remove page from the list
As I've said before this has to be a read only API. You cannot simply
fiddle with the page allocator internals under its feet.
> So I would make the callback bool, and I would use
> list_for_each_entry_safe.
If a bool return value would tell the walker to break out of the loop,
then I agree. This sounds useful.
--
Michal Hocko
SUSE Labs
On Mon 21-08-17 14:12:47, Wei Wang wrote:
> On 08/18/2017 09:46 PM, Michal Hocko wrote:
[...]
> >>+/**
> >>+ * walk_free_mem_block - Walk through the free page blocks in the system
> >>+ * @opaque1: the context passed from the caller
> >>+ * @min_order: t
't be the first time I have seen
something like that.
> Signed-off-by: Wei Wang
> Signed-off-by: Liang Li
> Cc: Michal Hocko
> Cc: Michael S. Tsirkin
> ---
> include/linux/mm.h | 6 ++
> mm/page_alloc.c | 44
> 2
cks tend to survive for
longer. So I assume you would only care about larger free blocks. This
will also make the call cheaper.
--
Michal Hocko
SUSE Labs
On Wed 26-07-17 10:22:23, Wei Wang wrote:
> On 07/25/2017 10:53 PM, Michal Hocko wrote:
> >On Tue 25-07-17 14:47:16, Wang, Wei W wrote:
> >>On Tuesday, July 25, 2017 8:42 PM, Michal Hocko wrote:
> >>>On Tue 25-07-17 19:56:24, Wei Wang wrote:
> >>>>On 07/25
On Tue 25-07-17 14:47:16, Wang, Wei W wrote:
> On Tuesday, July 25, 2017 8:42 PM, Michal Hocko wrote:
> > On Tue 25-07-17 19:56:24, Wei Wang wrote:
> > > On 07/25/2017 07:25 PM, Michal Hocko wrote:
> > > >On Tue 25-07-17 17:32:00, Wei Wang wrote:
> > > >
On Tue 25-07-17 19:56:24, Wei Wang wrote:
> On 07/25/2017 07:25 PM, Michal Hocko wrote:
> >On Tue 25-07-17 17:32:00, Wei Wang wrote:
> >>On 07/24/2017 05:00 PM, Michal Hocko wrote:
> >>>On Wed 19-07-17 20:01:18, Wei Wang wrote:
> >>>>On 07/19/2017 04:13
On Tue 25-07-17 17:32:00, Wei Wang wrote:
> On 07/24/2017 05:00 PM, Michal Hocko wrote:
> >On Wed 19-07-17 20:01:18, Wei Wang wrote:
> >>On 07/19/2017 04:13 PM, Michal Hocko wrote:
> >[...]
> >>>All you should need is the check for the page reference count, no
On Wed 19-07-17 20:01:18, Wei Wang wrote:
> On 07/19/2017 04:13 PM, Michal Hocko wrote:
[...]
> >All you should need is the check for the page reference count, no? I
> >assume you do some sort of pfn walk and so you should be able to get an
> >access to the struct page.
>
rence count, no? I
assume you do some sort of pfn walk and so you should be able to get an
access to the struct page.
--
Michal Hocko
SUSE Labs
On Fri 14-07-17 22:17:13, Michael S. Tsirkin wrote:
> On Fri, Jul 14, 2017 at 02:30:23PM +0200, Michal Hocko wrote:
> > On Wed 12-07-17 20:40:19, Wei Wang wrote:
> > > This patch adds support for reporting blocks of pages on the free list
> > > specified by the caller
On Fri 14-07-17 14:30:23, Michal Hocko wrote:
> On Wed 12-07-17 20:40:19, Wei Wang wrote:
> > This patch adds support for reporting blocks of pages on the free list
> > specified by the caller.
> >
> > As pages can leave the free list during this call or immediately
>
> static inline int zref_in_nodemask(struct zoneref *zref, nodemask_t *nodes)
> {
> --
> 2.7.4
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org
--
Michal Hocko
SUSE Labs
> + *page = list_next_entry((*page), lru);
> + ret = 0;
> +out:
> + spin_unlock_irqrestore(&this_zone->lock, flags);
> + return ret;
> +}
> +EXPORT_SYMBOL(report_unused_page_block);
> +
> +#endif
> +
> static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
> {
> zoneref->zone = zone;
> --
> 2.7.4
>
--
Michal Hocko
SUSE Labs
y as not MOVABLE, so DIMM might be
> temporally or permanently pinned by kernel allocations.
Yes, and that will always be the case as long as you allow kernel
allocations to use that memory. I do not know of any way to work around
this other than onlining the specific memory range as movable.
--
Michal Hocko
SUSE Labs