319: reenable ==> re-enable
>
> Signed-off-by: Shen Lichuan
Reviewed-by: Pankaj Gupta
> ---
> drivers/nvdimm/nd_virtio.c | 2 +-
> drivers/nvdimm/pfn_devs.c | 2 +-
> drivers/nvdimm/pmem.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>
>
+CC MST
> > Philip Chen wrote:
> > > Hi maintainers,
> > >
> > > Can anyone let me know if this patch makes sense?
> > > Any comment/feedback is appreciated.
> > > Thanks in advance!
> >
> > I'm not an expert on virtio but the code looks ok on the surface. I've
> > discussed this with Dan a bit a
Signed-off-by: Philip Chen
Looks good to me.
Acked-by: Pankaj Gupta
> ---
> v3:
> - Fix a typo in the comment (s/acticated/activated/)
>
> v2:
> - Remove change id from the patch description
> - Add more details to the patch description
>
> drivers/nvdimm/nd_virtio.c | 9
> Compute the numa information for a virtio_pmem device from the memory
> range of the device. Previously, the target_node was always 0 since
> the ndr_desc.target_node field was never explicitly set. The code for
> computing the numa node is taken from cxl_pmem_region_probe in
> drivers/cxl/pmem.c
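For reference, the computation the description refers to boils down to a probe-time fragment along these lines (a sketch; the dev_dbg message wording is illustrative):

	ndr_desc.res = &res;
	/*
	 * Derive numa information from the device's memory range rather
	 * than leaving ndr_desc.target_node at its default of 0.
	 */
	ndr_desc.numa_node = memory_add_physaddr_to_nid(res.start);
	ndr_desc.target_node = phys_to_target_node(res.start);
	if (ndr_desc.target_node == NUMA_NO_NODE) {
		ndr_desc.target_node = ndr_desc.numa_node;
		dev_dbg(&vdev->dev, "no target node found, using node %d\n",
			ndr_desc.numa_node);
	}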
> > > > single flush executes, thus adhers to flush coalscing logic. This is
> > >
> > > s/adhers/adheres/
> > >
> > > s/coalscing/coalescing/
> > >
> > > > important for maintaining the flush request order with request
> > > > coalscing.
> >
> > > > Return from "pmem_submit_bio" when asynchronous flush is
> > > > still in progress in another context.
> > > >
> > > > Signed-off-by: Pankaj Gupta
> > > > ---
> > > > drivers/nvdimm/pmem.c | 15 ++
s/coalscing/coalescing/
>
> > important for maintaining the flush request order with request coalscing.
>
> s/coalscing/coalescing/
o.k. Sorry for the spelling mistakes.
>
> >
> > Signed-off-by: Pankaj Gupta
> > ---
> > drivers/nvdimm/nd_virtio.c | 74
> >
> > Return from "pmem_submit_bio" when asynchronous flush is
> > still in progress in another context.
> >
> > Signed-off-by: Pankaj Gupta
> > ---
> > drivers/nvdimm/pmem.c | 15 ---
> > drivers/nvdimm/region_
> - Declare 'INIT_WORK' only once.
> - More testing and bug fix.
>
> [1] https://marc.info/?l=linux-kernel&m=157446316409937&w=2
>
> Pankaj Gupta (2):
> virtio-pmem: Async virtio-pmem flush
> pmem: enable pmem_submit_bio for asynchronous flush
>
> driv
Return from "pmem_submit_bio" when asynchronous flush is
still in progress in another context.
Signed-off-by: Pankaj Gupta
---
drivers/nvdimm/pmem.c | 15 ---
drivers/nvdimm/region_devs.c | 4 +++-
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/driv
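The diff body is cut off above; condensed, the pmem_submit_bio() change looks roughly like this (a sketch, not the verbatim patch; treating -EINPROGRESS as the "flush queued asynchronously" signal is an assumption of this sketch):

static blk_qc_t pmem_submit_bio(struct bio *bio)
{
	struct pmem_device *pmem = bio->bi_bdev->bd_disk->private_data;
	struct nd_region *nd_region = to_region(pmem);
	int ret = 0;

	if (bio->bi_opf & REQ_PREFLUSH)
		ret = nvdimm_flush(nd_region, bio);

	/* ... carry out the data transfer for each bio segment ... */

	/*
	 * The backend queued an asynchronous flush; bio_endio() will be
	 * called from the flush completion context, so don't complete
	 * the bio a second time here.
	 */
	if (ret == -EINPROGRESS)
		return BLK_QC_T_NONE;

	if (ret)
		bio->bi_status = errno_to_blk_status(ret);
	bio_endio(bio);
	return BLK_QC_T_NONE;
}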
queue). For all
the requests that come between the ongoing flush and the new flush start time,
only a single flush executes, thus adhering to the flush coalescing logic. This is
important for maintaining the flush request order with request coalescing.
Signed-off-by: Pankaj Gupta
---
drivers/nvdimm/nd_virtio.c | 74
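A minimal sketch of that coalescing scheme (struct fields and helper names here are illustrative, not the exact patch):

/* Illustrative state; the real driver keeps equivalents in its device struct. */
struct virtio_pmem {
	struct work_struct flush_work;
	spinlock_t lock;
	bool flush_in_progress;	/* a host flush is currently running */
	bool flush_again;	/* requests arrived after the flush started */
};

static void virtio_pmem_queue_flush(struct virtio_pmem *vpmem)
{
	unsigned long flags;

	spin_lock_irqsave(&vpmem->lock, flags);
	if (vpmem->flush_in_progress) {
		/* Coalesce: one future flush will cover all these requests. */
		vpmem->flush_again = true;
	} else {
		vpmem->flush_in_progress = true;
		queue_work(system_wq, &vpmem->flush_work);
	}
	spin_unlock_irqrestore(&vpmem->lock, flags);
}

static void virtio_pmem_flush_work(struct work_struct *ws)
{
	struct virtio_pmem *vpmem = container_of(ws, struct virtio_pmem,
						 flush_work);
	unsigned long flags;
	bool again;

	do {
		/* Synchronous host flush; illustrative helper name. */
		virtio_pmem_host_flush(vpmem);
		spin_lock_irqsave(&vpmem->lock, flags);
		again = vpmem->flush_again;
		vpmem->flush_again = false;
		if (!again)
			vpmem->flush_in_progress = false;
		spin_unlock_irqrestore(&vpmem->lock, flags);
	} while (again);
}

A request that arrives while a flush is running only sets flush_again, so exactly one extra flush runs afterwards and covers all of them.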
[1] https://marc.info/?l=linux-kernel&m=157446316409937&w=2
Pankaj Gupta (2):
virtio-pmem: Async virtio-pmem flush
pmem: enable pmem_submit_bio for asynchronous flush
drivers/nvdimm/nd_virtio.c | 74 +++-
drivers/nvdimm/pmem.c | 15 +++
From: Pankaj Gupta
Adding myself as virtio-pmem maintainer and also adding a virtualization
mailing list entry for the virtio-specific bits. This helps to get notified
about appropriate bug fixes & enhancements.
Signed-off-by: Pankaj Gupta
---
MAINTAINERS | 7 +++
1 file changed, 7 insertions(+)
Friendly ping!
Thanks,
Pankaj
On Thu, 19 Aug 2021 at 13:08, Pankaj Gupta wrote:
>
> Gentle ping.
>
> >
> > Jeff reported a preflush ordering issue with the existing implementation
> > of virtio pmem preflush. Dan suggested[1] to implement asynchronous flush
> > for virtio pmem using a work queue, as done in md/RAID.
> > > > > > Implement asynchronous flush for virtio pmem using work queue
> > > > > > to solve the preflush ordering issue. Also, coalesce the flush
> > > > > > requests when a flush is already in process.
> > > > > >
>
> > > > requests when a flush is already in process.
> > > >
> > > > Signed-off-by: Pankaj Gupta
> > > > ---
> > > > drivers/nvdimm/nd_virtio.c | 72
> > > > drivers/nvdimm/virtio_pmem.c | 10 -
> > > >
Hi Dan,
Thank you for the review. Please see my reply inline.
> > Implement asynchronous flush for virtio pmem using work queue
> > to solve the preflush ordering issue. Also, coalesce the flush
> > requests when a flush is already in process.
> >
>
flush ordering issue and also makes the flush
> > asynchronous from the submitting thread POV.
> >
> > Submitting this patch series for feedback; it is still a WIP. I have
> > done basic testing and currently doing more testing.
> >
> > Pankaj Gupta (2):
> >
0-28a1-4f7d-f944-cfd7d81c3...@redhat.com/
>
> Cc: Andrew Morton
> Cc: "K. Y. Srinivasan"
> Cc: Haiyang Zhang
> Cc: Stephen Hemminger
> Cc: Wei Liu
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Boris Ostrovsky
> Cc: Juergen Gross
>
> - vs = kzalloc(sizeof(*vs), GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL);
> - if (!vs) {
> - vs = vzalloc(sizeof(*vs));
> - if (!vs)
> - goto err_vs;
> - }
> + vs = kvzalloc(sizeof(*vs), GFP_KERNEL);
> + if (!vs)
> + goto err_vs;
>
> vqs = kmalloc_array(VHOST_SCSI_MAX_VQ, sizeof(*vqs), GFP_KERNEL);
> if (!vqs)
Acked-by: Pankaj Gupta
r_pages);
> + __dec_lruvec_state(from_vec, NR_ANON_THPS);
> + __inc_lruvec_state(to_vec, NR_ANON_THPS);
> }
>
> }
Acked-by: Pankaj Gupta
}
> return false;
> }
> +#else
> +static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
> +{
> + return false;
> +}
> +#endif
>
> #ifdef CONFIG_CMA
> static long check_and_migrate_cma_pages(struct mm_struct *mm,
Reviewed-by: Pankaj Gupta
> Although the ratio of the slab is one, we should still read the ratio
> from the related memory_stats instead of hard-coding it. And the local
> variable 'size' already holds the value of slab_unreclaimable, so we
> do not need to read it again.
>
> We can drop the ratio in struct memory_stat. This can ma
> *sb, const struct inode
> return inode;
> }
>
> -bool shmem_mapping(struct address_space *mapping)
> +inline bool shmem_mapping(struct address_space *mapping)
> {
> return mapping->a_ops == &shmem_aops;
> }
Reviewed-by: Pankaj Gupta
>max, nr_pages);
>
> - if (atomic_long_read(&counter->usage) <= usage)
> + if (page_counter_read(counter) <= usage)
> return 0;
>
> counter->max = old;
Reviewed-by: Pankaj Gupta
> +	 * Set batch and high values safe for a boot pageset. A true percpu
> +	 * pageset's initialization will update them subsequently. Here we don't
> +	 * need to be as careful as pageset_update() as nobody can access the
> +	 * pageset yet.
> +	 */
> + pcp->high = 0;
> + pcp->batch = 1;
> }
Acked-by: Pankaj Gupta
> mutex_unlock(&pcp_batch_high_lock);
> return ret;
> @@ -8746,7 +8740,7 @@ EXPORT_SYMBOL(free_contig_range);
> void __meminit zone_pcp_update(struct zone *zone)
> {
> mutex_lock(&pcp_batch_high_lock);
> - __zone_pcp_update(zone);
> + zone_set_pageset_high_and_batch(zone);
> mutex_unlock(&pcp_batch_high_lock);
> }
Acked-by: Pankaj Gupta
> + new_high = zone_managed_pages(zone) / percpu_pagelist_fraction;
> + new_batch = max(1UL, new_high / 4);
> + if ((new_high / 4) > (PAGE_SHIFT * 8))
> + new_batch = PAGE_SHIFT * 8;
> + } else {
> + new_batch = zone_batchsize(zone);
> + new_high = 6 * new_batch;
> + new_batch = max(1UL, 1 * new_batch);
> + }
> + pageset_update(&p->pcp, new_high, new_batch);
> }
>
> static void __meminit zone_pageset_init(struct zone *zone, int cpu)
Looks good to me.
Acked-by: Pankaj Gupta
Hi Paul,
> > This patch improves readability by using better variable names
> > in flush request coalescing logic.
>
> Please do not indent the commit message.
o.k
>
> > Signed-off-by: Pankaj Gupta
> > ---
> > drivers/md/md.c | 8
From: Pankaj Gupta
This patch improves readability by using better variable names
in flush request coalescing logic.
Signed-off-by: Pankaj Gupta
---
drivers/md/md.c | 8
drivers/md/md.h | 6 +++---
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/md/md.c b
From: Pankaj Gupta
Request coalescing logic is dependent on the flush time
update in another context. This patch adds comments
to help understand the code flow better.
Signed-off-by: Pankaj Gupta
---
drivers/md/md.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/md/md.c b/drivers/md
From: Pankaj Gupta
This patch series does some cleanups during my attempt to understand
the code.
Pankaj Gupta (3):
md: improve variable names in md_flush_request()
md: add comments in md_flush_request()
md: use current request time as base for ktime comparisons
drivers/md/md.c | 12
From: Pankaj Gupta
Request coalescing logic uses 'prev_flush_start' as the base to
compare the current request start time. 'prev_flush_start' is
updated in another context.
This patch changes this by using 'req_start' as the ktime comparison
base, for better readability of the code.
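With that, the coalescing check in md_flush_request() reads roughly as follows (condensed sketch; see drivers/md/md.c for the complete function, the empty-bio case is omitted here):

bool md_flush_request(struct mddev *mddev, struct bio *bio)
{
	ktime_t req_start = ktime_get_boottime();

	spin_lock_irq(&mddev->lock);
	/* Wait while a flush that started before this request is in flight. */
	wait_event_lock_irq(mddev->sb_wait,
			    !mddev->flush_bio ||
			    ktime_before(req_start, mddev->prev_flush_start),
			    mddev->lock);
	/* Start a new flush only if none completed after this request arrived. */
	if (ktime_after(req_start, mddev->prev_flush_start)) {
		WARN_ON(mddev->flush_bio);
		mddev->flush_bio = bio;
		bio = NULL;
	}
	spin_unlock_irq(&mddev->lock);

	if (!bio) {
		/* We own the flush: run submit_flushes from the workqueue. */
		INIT_WORK(&mddev->flush_work, submit_flushes);
		queue_work(md_wq, &mddev->flush_work);
		return true;
	}
	/* A flush that covers this request already ran while we waited. */
	bio->bi_opf &= ~REQ_PREFLUSH;
	return false;
}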
> > This looks good to me. This should solve "-EPERM" return by "__kvm_set_msr"
> > .
> >
> > A question I have, In the case of "kvm_emulate_rdmsr()", for "r" we
> > are injecting #GP.
> > Is there any possibility of this check to be hit and still result in #GP?
>
> When I wrote this patch series
A question I have, In the case of "kvm_emulate_rdmsr()", for "r" we
are injecting #GP.
Is there any possibility of this check to be hit and still result in #GP?
int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
{
	u32 ecx = kvm_rcx_read(vcpu);
	u64 data;
	int r;

	r = kvm_get_msr(vcpu, ecx, &data);

	/* MSR read failed? See if we should ask user space */
	if (r && kvm_get_msr_user_space(vcpu, ecx, r)) {
		/* Bounce to user space */
		return 0;
	}

	/* MSR read failed? Inject a #GP */
	if (r) {
		trace_kvm_msr_read_ex(ecx);
		kvm_inject_gp(vcpu, 0);
		return 1;
	}

	/* Success: load EDX:EAX with the MSR value and skip the instruction */
	trace_kvm_msr_read(ecx, data);
	kvm_rax_write(vcpu, data & -1u);
	kvm_rdx_write(vcpu, (data >> 32) & -1u);
	return kvm_skip_emulated_instruction(vcpu);
}
Apart from the question above, feel free to add:
Reviewed-by: Pankaj Gupta
> struct list_lru_node *nlru = &lru->node[nid];
>
> @@ -304,7 +304,7 @@ unsigned long list_lru_walk_node(struct list_lru *lru,
> int nid,
> nr_to_walk);
> spin_unlock(&nlru->lock);
>
> - if (*nr_to_walk <= 0)
> + if (!*nr_to_walk)
> break;
> }
> }
Acked-by: Pankaj Gupta
Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
> Cc: Oscar Salvador
> Cc: Mike Rapoport
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> mm/memory_hotplug.c | 11 ---
>
> The calculation is already complicated enough, let's limit it to one
> location.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 20 +++-
>
> Let's rename and move accordingly. While at it, rename sb_bitmap to
> "sb_states".
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 118 ++
"nb_mb_state" to "mb_count"
> - "set_mb_state" / "get_mb_state" vs. "mb_set_state" / "mb_get_state"
> - Don't use lengthy "enum virtio_mem_smb_mb_state", simply use "uint8_t"
>
>
>
> s/Device Block Mode (DBM)/Big Block Mode (BBM)/
>
Reviewed-by: Pankaj Gupta
> ... which now matches virtio_mem_fake_online(). We'll reuse this
> functionality soon.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 34 ++
> Avoid using memory block ids. While at it, use uint64_t for
> address/size.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 10 +++---
> 1 file changed,
ontrol *capc)
> goto check_drain;
> case ISOLATE_SUCCESS:
> update_cached = false;
> - last_migrated_pfn = start_pfn;
> - ;
> + last_migrated_pfn = iteration_start_pfn;
> }
>
> err = migrate_pages(&cc->migratepages, compaction_alloc,
Improves readability.
Acked-by: Pankaj Gupta
> No longer used, let's drop it.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 4
> 1 file changed, 4 deletions(-)
>
> diff --git a/drivers/v
> No harm done, but let's be consistent.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 8
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> Avoid using memory block ids. Rename it to virtio_mem_contains_range().
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 9 +
> 1 file changed, 5 insertion
> We actually need one byte less (next_mb_id is exclusive, first_mb_id is
> inclusive). Simplify.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 4 ++--
> 1 fi
> Let's determine the target nid only once in case we have none specified -
> usually, we'll end up with node 0 either way.
>
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> Cc: "Michael S. Tsirkin"
> Cc: Jason Wang
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> drivers/virtio/virtio_mem.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/virtio/virtio_mem.c b/drivers/vir
> + * We didn't actually touch any of the isolated pages, so place them
> +* to the tail of the freelist. This is an optimization for memory
> +* onlining - just onlined memory won't immediately be considered for
> +* allocation.
> */
> if (!isolated_page) {
> - nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> + nr_pages = move_freepages_block(zone, page, migratetype, true, NULL);
> __mod_zone_freepage_state(zone, nr_pages, migratetype);
> }
> set_pageblock_migratetype(page, migratetype);
Acked-by: Pankaj Gupta
> + if (fop_flags & FOP_TO_TAIL)
> + to_tail = true;
> + else if (is_shuffle_order(order))
> to_tail = shuffle_pick_tail();
> else
> to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
> @@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page,
> unsigned int order, int mt)
>
> /* Return isolated page to tail of freelist. */
> __free_one_page(page, page_to_pfn(page), zone, order, mt,
> - FOP_SKIP_REPORT_NOTIFY);
> + FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
> }
Reviewed-by: Pankaj Gupta
> if (migratetype >= MIGRATE_PCPTYPES) {
> if (unlikely(is_migrate_isolate(migratetype))) {
> - free_one_page(zone, page, pfn, 0, migratetype);
> + free_one_page(zone, page, pfn, 0, migratetype,
> + FOP_NONE);
> return;
> }
> migratetype = MIGRATE_MOVABLE;
> @@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page,
> unsigned int order)
> if (order == 0) /* Via pcp? */
> free_unref_page(page);
> else
> - __free_pages_ok(page, order);
> + __free_pages_ok(page, order, FOP_NONE);
> }
>
> void __free_pages(struct page *page, unsigned int order)
Acked-by: Pankaj Gupta
order, migratetype, FOP_NONE);
> spin_unlock(&zone->lock);
> }
>
> @@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page,
> unsigned int order, int mt)
> lockdep_assert_held(&zone->lock);
>
> /* Return isolated page to tail of freelist. */
> - __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> + __free_one_page(page, page_to_pfn(page), zone, order, mt,
> + FOP_SKIP_REPORT_NOTIFY);
> }
Reviewed-by: Pankaj Gupta
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Dan Williams
> Cc: Jason Gunthorpe
> Cc: Kees Cook
> Cc: Ard Biesheuvel
> Cc: Pankaj Gupta
> Cc: Baoquan He
> Cc: Wei Yang
> Signed-off-by: David Hildenbrand
> ---
>
> Based on next-20200915. Follow up on
> &quo
Reviewed-by: Pankaj Gupta
Looks good to me.
Reviewed-by: Pankaj Gupta
ed-by: Juergen Gross # Xen related part
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Dan Williams
> Cc: Jason Gunthorpe
> Cc: Pankaj Gupta
> Cc: Baoquan He
> Cc: Wei Yang
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: "Ra
end - index);
> /* drain pagevecs to help isolate_lru_page()
> */
> lru_add_drain();
> page = find_lock_page(mapping, index);
>
Acked-by: Pankaj Gupta
..006dace60b1a 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -216,7 +216,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
> return ERR_PTR(-EINVAL);
> }
> break;
> - case MEMORY_DEVICE_DEVDAX:
> + case MEMORY_DEVICE_GENERIC:
> need_devmap_managed = false;
> break;
> case MEMORY_DEVICE_PCI_P2PDMA:
Reviewed-by: Pankaj Gupta
> We make sure that we cannot have any memory holes right at the beginning
> of offline_pages(). We no longer need walk_system_ram_range() and can
> call test_pages_isolated() directly.
>
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Wei Yang
> Cc: Baoquan He
> Cc
> There is only a single user, offline_pages(). Let's inline, to make
> it look more similar to online_pages().
>
> Acked-by: Michal Hocko
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Wei Yang
> Cc: Baoquan He
> Cc: Pankaj Gupta
> Cc: Oscar Salvador
%d, type %lx (%pGp)\n",
> pfn, msg_page[huge], ret, page_count(page),
> page->flags, &page->flags);
> + ret = -EBUSY;
> }
> return ret;
> }
> --
Acked-by: Pankaj Gupta
> 2.11.0
>
>
> Cc: Thomas Gleixner
> Cc: "K. Y. Srinivasan"
> Cc: Haiyang Zhang
> Cc: Stephen Hemminger
> Cc: Wei Liu
> Cc: Boris Ostrovsky
> Cc: Juergen Gross
> Cc: Stefano Stabellini
> Cc: Roger Pau Monné
> Cc: Julien Grall
> Cc: Pankaj Gupta
> Cc: Baoquan H
gion *nd_region)
>
> return 0;
> }
> -EXPORT_SYMBOL_GPL(nvdimm_flush);
>
> /**
> * nvdimm_has_flush - determine write flushing requirements
> --
Reviewed-by: Pankaj Gupta
> 1.8.3
>
> - return kvm_arch_interrupt_allowed(vcpu);
> + if (!kvm_arch_interrupt_allowed(vcpu))
> + return false;
> +
> + /* Found gfn in error gfn cache. Force sync fault */
> + if (kvm_find_and_remove_error_gfn(vcpu, gfn))
> + return false;
> +
> + return true;
> }
>
> bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 68e84cf42a3f..677bb8269cd3 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -36,6 +36,7 @@ typedef u64 gpa_t;
> typedef u64 gfn_t;
>
> #define GPA_INVALID (~(gpa_t)0)
> +#define GFN_INVALID (~(gfn_t)0)
>
> typedef unsigned long hva_t;
> typedef u64 hpa_t;
> --
> 2.25.4
This patch looks good to me.
Reviewed-by: Pankaj Gupta
>
ry(struct xa_state *xas,
> if (dax_is_conflict(entry))
> goto fallback;
> if (!xa_is_value(entry)) {
> - xas_set_err(xas, EIO);
> + xas_set_err(xas, -EIO);
> goto ou
ne->lock held will likely trigger a
> * lockdep splat, so defer it here.
> */
> dump_page(unmovable, "unmovable page");
>
> - return ret;
> + return -EBUSY;
> }
>
> static void unset_migratetype_isolate(struct page *page, unsigned
> migratetype)
> --
This cleanup looks good to me.
Reviewed-by: Pankaj Gupta
> 2.26.2
>
>
> + if (is_migrate_isolate_page(page)) {
> + spin_unlock_irqrestore(&zone->lock, flags);
> + return -EBUSY;
> + }
>
> /*
> * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
> --
Reviewed-by: Pankaj Gupta
> 2.26.2
>
>
> -int memory_add_physaddr_to_nid(u64 addr)
> -{
> - /* Node 0 for now.. */
> - return 0;
> -}
> -EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
> -#endif
> -
> void arch_remove_memory(int nid, u64 start, u64 size,
> struct vmem_altmap *altmap)
> {
Reviewed-by: Pankaj Gupta
> - for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
> + for_each_node_mask_to_alloc(h, node, nodes_allowed) {
> if (h->surplus_huge_pages_node[node])
> goto found;
> }
> } else {
> - for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
> + for_each_node_mask_to_free(h, node, nodes_allowed) {
> if (h->surplus_huge_pages_node[node] <
> h->nr_huge_pages_node[node])
> goto found;
> --
> 2.20.1 (Apple Git-117)
Acked-by: Pankaj Gupta
>
>
> /* cpuset refresh routine should be here */
> }
> - vm_total_pages = nr_free_pagecache_pages();
> + /* Get the number of free pages beyond high watermark in all zones. */
> + vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
> /*
> * Disable grouping by mobility if the number of pages in the
> * system is too low to allow the mechanism to work. It would be
Reviewed-by: Pankaj Gupta
> @@ -170,11 +170,6 @@ struct scan_control {
> * From 0 .. 200. Higher means more swappy.
> */
> int vm_swappiness = 60;
> -/*
> - * The total number of pages which are beyond the high watermark within all
> - * zones.
> - */
> -unsigned long vm_total_pages;
>
> static void set_task_reclaim_state(struct task_struct *task,
>struct reclaim_state *rs)
Reviewed-by: Pankaj Gupta
> Let's add the status/info page, which is still under construction but
> already contains valuable documentation/information.
>
> Cc: "Michael S. Tsirkin"
> Cc: Pankaj Gupta
> Signed-off-by: David Hildenbrand
> ---
> MAINTAINERS | 1 +
> 1 fil
3ff : virtio0
> 14000-147ff : System RAM (virtio_mem)
> 33400-533ff : virtio1
> 33800-33fff : System RAM (virtio_mem)
> 34000-347ff : System RAM (virtio_mem)
> 34800-34fff : System RAM (virtio_mem)
> [...]
>
> @@ … @@ static void virtio_mem_delete_resource(struct virtio_mem *vm)
> static int virtio_mem_probe(struct virtio_device *vdev)
> {
> struct virtio_mem *vm;
> - int rc = -EINVAL;
> + int rc;
>
> BUILD_BUG_ON(sizeof(struct virtio_mem_req) != 24);
> BUILD_BUG_ON(sizeof(struct virtio_mem_resp) != 10);
Reviewed-by: Pankaj Gupta
> */
> static struct page *page_idle_get_page(unsigned long pfn)
> {
> - struct page *page;
> + struct page *page = pfn_to_online_page(pfn);
> pg_data_t *pgdat;
>
> - if (!pfn_valid(pfn))
> - return NULL;
> -
> -
> struct device *dev = kobj_to_dev(kobj);
> struct dev_dax *dev_dax = to_dev_dax(dev);
>
> if (a == &dev_attr_target_node.attr && dev_dax_target_node(dev_dax) <
> 0)
Reviewed-by: Pankaj Gupta
> return r;
> BUG_ON(r != 1);
> base = kmap_atomic(page);
> set_bit(bit, base);
> kunmap_atomic(base);
> - set_page_dirty_lock(page);
> - put_page(page);
> + unpin_user_pages_dirty_lock(&page, 1, true);
> return 0;
> }
Acked-by: Pankaj Gupta
Acked-by: Pankaj Gupta
On Thu, 28 May 2020 at 00:32, John Hubbard wrote:
>
> Introduce pin_user_pages_locked(), which is nearly identical to
> get_user_pages_locked() except that it sets FOLL_PIN and rejects
> FOLL_GET.
>
> Signed-off-by: John Hubbard
> ---
> include/
Acked-by: Pankaj Gupta
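For context, the wrapper in the quoted patch follows the established pin_user_pages*() pattern; roughly (a sketch from the gup.c internals of that era, not the verbatim patch):

long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   int *locked)
{
	/* FOLL_GET and FOLL_PIN are mutually exclusive; reject FOLL_GET. */
	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
		return -EINVAL;

	gup_flags |= FOLL_PIN;
	return __get_user_pages_locked(current, current->mm, start, nr_pages,
				       pages, NULL, locked,
				       gup_flags | FOLL_TOUCH);
}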
> if (flags & MF_ACTION_REQUIRED) {
> + if (t->mm == current->mm)
> + ret = force_sig_mceerr(BUS_MCEERR_AR,
> + (void __user *)tk->addr, addr_lsb);
> + /* send no signal to non-current processes */
> } else {
> /*
> * Don't use force here, it's convenient if the signal
> --
Looks good to me.
Acked-by: Pankaj Gupta
[...]
> 33400-3033ff : virtio1
> 33800-33fff : System RAM
> 34000-347ff : System RAM
> 34800-34fff : System RAM
> [...]
>
> Cc: "Michael S. Tsirkin"
> Cc: Pankaj Gup
Looks good to me.
Acked-by: Pankaj Gupta
rved)
> feffc000-ff00 (Reserved)
> fffc-0001 (Reserved)
> 00010000-00014000 (System RAM)
>
> kexec-tools already seem to basically ignore any System RAM that's not
> on top level when search
+3106,6 @@ void split_page(struct page *page, unsigned int order)
>
> int __isolate_free_page(struct page *page, unsigned int order)
> {
> - struct free_area *area = &page_zone(page)->free_area[order];
> unsigned long watermark;
> struct zone *zone;
> int mt;
> @@ -3139,7 +3131,7 @@ int __isolate_free_page(struct page *page, unsigned int
> order)
>
> /* Remove page from free list */
>
> - del_page_from_free_area(page, area);
> + del_page_from_free_list(page, zone, order);
>
> /*
> * Set the pageblock if the isolated page is at least half of a
> @@ -8560,7 +8552,7 @@ void zone_pcp_reset(struct zone *zone)
> pr_info("remove from free list %lx %d %lx\n",
> pfn, 1 << order, end_pfn);
> #endif
> - del_page_from_free_area(page, &zone->free_area[order]);
> + del_page_from_free_list(page, zone, order);
> for (i = 0; i < (1 << order); i++)
> SetPageReserved((page+i));
> pfn += (1 << order);
>
>
Reviewed-by: Pankaj Gupta
>
> On Thu, 2019-08-22 at 06:43 -0400, Pankaj Gupta wrote:
> > > This series provides an asynchronous means of reporting to a hypervisor
> > > that a guest page is no longer in use and can have the data associated
> > > with it dropped. To do this I have impleme
>
> This series provides an asynchronous means of reporting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as unused page reporting
>
> The functiona
tion in sysfs and display using ndctl.
Thanks,
Pankaj
> >
> > Signed-off-by: Pankaj Gupta
> > ---
> > drivers/nvdimm/namespace_devs.c | 6 +-
> > 1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/nvdimm/namespace_devs.c
>
https://lists.oasis-open.org/archives/virtio-dev/201908/msg00055.html
Pankaj Gupta (2):
virtio: decrement avail idx with buffer detach for packed ring
virtio_console: free unused buffers with port delete
char/virtio_console.c | 14 +++---
virtio/virtio_ring.c | 6 ++
2 files
attached with the port. Re-plugging the same port tries to allocate new
buffers in the virtqueue and results in this error if the queue is full.
This patch reverts this commit by removing the unused buffers in the vqs
when we unplug the port.
Reported-by: Xiaohui Li
Cc: sta...@vger.kernel.org
Signed-off-by: Pankaj Gupta
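The fix amounts to draining the unused buffers when the port itself goes away, e.g. (a sketch; the helper name and exact call sites are assumed from the description):

static void remove_unused_bufs(struct virtqueue *vq)
{
	struct port_buffer *buf;

	/* Detach and free every buffer the host never consumed. */
	while ((buf = virtqueue_detach_unused_buf(vq)))
		free_buf(buf, true);
}

static void remove_port_data(struct port *port)
{
	/* ... discard received data and reclaim consumed buffers ... */

	/* Previously deferred to device unplug; do it on port unplug. */
	remove_unused_bufs(port->in_vq);
	remove_unused_bufs(port->out_vq);
}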
detached from the vq.
Acked-by: Jason Wang
Signed-off-by: Pankaj Gupta
---
drivers/virtio/virtio_ring.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c8be1c4f5b55..7c69181113e2 100644
--- a/drivers/virtio/virtio_ring.c
Ping.
>
> This patch adds the prefix 'v' to the disk name for virtio-pmem.
> This differentiates virtio-pmem disks from regular pmem disks.
>
> Signed-off-by: Pankaj Gupta
> ---
> drivers/nvdimm/namespace_devs.c | 6 +-
> 1 file changed, 5 insertions(+), 1 del
>
> On 2019/8/9 2:48 PM, Pankaj Gupta wrote:
> > This patch decrements 'next_avail_idx' count when detaching a buffer
> > from vq for packed ring code. Split ring code already does this in
> > virtqueue_detach_unused_buf_split function. This updates the
>
> On Fri, Aug 09, 2019 at 12:18:46PM +0530, Pankaj Gupta wrote:
> > The commit a7a69ec0d8e4 ("virtio_console: free buffers after reset")
> > deferred detaching of unused buffer to virtio device unplug time.
> > This causes unplug/replug of single port in virt
> On Fri, Aug 09, 2019 at 12:18:47PM +0530, Pankaj Gupta wrote:
> > This patch decrements 'next_avail_idx' count when detaching a buffer
> > from vq for packed ring code. Split ring code already does this in
> > virtqueue_detach_unused_buf_split function. This upda
detached from the vq.
Signed-off-by: Pankaj Gupta
---
drivers/virtio/virtio_ring.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c8be1c4f5b55..7c69181113e2 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virt
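The diff body is truncated above; the change mirrors the split-ring behaviour inside virtqueue_detach_unused_buf_packed(), roughly as sketched below (the wrap-counter handling is an assumption of this sketch):

	for (i = 0; i < vq->packed.vring.num; i++) {
		if (!vq->packed.desc_state[i].data)
			continue;
		/* detach_buf_packed() clears data, so grab it first. */
		buf = vq->packed.desc_state[i].data;
		detach_buf_packed(vq, i, NULL);
		/*
		 * Hand the slot back to the available side, as the split
		 * ring does, so a re-added port can queue fresh buffers.
		 */
		if (vq->packed.next_avail_idx == 0) {
			vq->packed.next_avail_idx = vq->packed.vring.num - 1;
			vq->packed.avail_wrap_counter ^= 1;
		} else {
			vq->packed.next_avail_idx--;
		}
		END_USE(vq);
		return buf;
	}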