On Thu, Jun 03, 2021 at 12:10:09AM, Chaitanya Kulkarni wrote:
> > diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> > index d8e098f1e5b5..74fb2ec63219 100644
> > --- a/drivers/block/null_blk/main.c
> > +++ b/drivers/block/null_blk/main.c
> > @@ -1851,13 +1851,12 @@
Currently, the "auto-movable" online policy does not allow for hotplugged
KERNEL (ZONE_NORMAL) memory to increase the amount of MOVABLE memory we can
have, primarily, because there is no coordination across memory devices and
we don't want to create zone imbalances accidentally when unplugging
Use memory groups to improve our "auto-movable" onlining policy:
1. For static memory groups (e.g., a DIMM), online a memory block MOVABLE
only if all other memory blocks in the group are either MOVABLE or could
be onlined MOVABLE. A DIMM will either be MOVABLE or not, not a mixture.
2.
Let's use a single dynamic memory group.
Signed-off-by: David Hildenbrand
---
drivers/virtio/virtio_mem.c | 22 +++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index e327fb878143..6b9b8b7bf89d 100644
Let's group all memory we add for a single memory device - we want a
single node for that (which also seems to be the sane thing to do).
We won't care for now about memory that was already added to the system
(e.g., via e820) -- usually *all* memory of a memory device was already
added and we'll
We allocate + initialize everything from scratch. In case enabling the
device fails, we free all memory resources.
Signed-off-by: David Hildenbrand
---
drivers/acpi/acpi_memhotplug.c | 4
1 file changed, 4 deletions(-)
diff --git a/drivers/acpi/acpi_memhotplug.c
Let's track all present pages in each memory group. Especially, track
memory present in ZONE_MOVABLE and memory present in one of the kernel
zones (which really only is ZONE_NORMAL right now as memory groups only
apply to hotplugged memory) separately within a memory group, to prepare
for making
In our "auto-movable" memory onlining policy, we want to make decisions
across memory blocks of a single memory device. Examples of memory devices
include ACPI memory devices (in the simplest case a single DIMM) and
virtio-mem. For now, we don't have a connection between a single memory
block device
There is only a single user remaining. We can simply try to offline all
online nodes - which is fast, because nodes that still span pages can be
skipped right away.
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Dan Williams
The parameter is unused, let's remove it.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc:
When onlining without specifying a zone (using "online" instead of
"online_kernel" or "online_movable"), we currently select a zone such that
existing zones are kept contiguous. This online policy made sense in the
past, when contiguous zones were required.
We'd like to implement smarter
For implementing a new memory onlining policy, which determines when to
online memory blocks to ZONE_MOVABLE semi-automatically, we need the number
of present early (boot) pages -- present pages excluding hotplugged pages.
Let's track these pages per zone.
Pass a page instead of the zone to
Checkpatch complained on a follow-up patch that we are using "unsigned"
here, which defaults to "unsigned int"; checkpatch is correct.
Use "unsigned long" instead, just as we do in other places when handling
PFNs. This can bite us once we have physical addresses in the range of
multiple TB.
Hi,
this series aims at improving in-kernel auto-online support. It tackles the
fundamental problems that:
1) We can create zone imbalances when onlining all memory blindly to
ZONE_MOVABLE, in the worst case crashing the system. We have to know
upfront how much memory we are going to
Convert the virtio-mmio binding to DT schema format.
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Jean-Philippe Brucker
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Rob Herring
---
Jean-Philippe, hopefully you are okay with being listed as the
maintainer here. You're the only
Any comments? Thanks.
On Thu, May 27, 2021 at 9:01 PM Jiang Wang wrote:
>
> From: "jiang.wang"
>
> Add support for the datagram type for virtio-vsock. Datagram
> sockets are connectionless and unreliable. To avoid contention
> with stream and other sockets, add two more virtqueues and
> a new
On Sat, Jun 05, 2021 at 05:40:17PM -0500, michael.chris...@oracle.com wrote:
> On 6/3/21 5:16 PM, Mike Christie wrote:
> > On 6/3/21 9:37 AM, Stefan Hajnoczi wrote:
> >> On Tue, May 25, 2021 at 01:05:51PM -0500, Mike Christie wrote:
> >>> The following patches apply over linus's tree or mst's
On Sat, Jun 05, 2021 at 06:53:58PM -0500, michael.chris...@oracle.com wrote:
> On 6/3/21 9:30 AM, Stefan Hajnoczi wrote:
> >> + if (info->pid == VHOST_VRING_NEW_WORKER) {
> >> + worker = vhost_worker_create(dev);
> >
> > The maximum number of kthreads created is limited by
> >
On Mon, Jun 07, 2021 at 02:29:28PM +0300, Arseny Krasnov wrote:
On 07.06.2021 13:48, Stefano Garzarella wrote:
On Fri, Jun 04, 2021 at 09:00:14PM +0300, Arseny Krasnov wrote:
On 04.06.2021 18:06, Stefano Garzarella wrote:
On Thu, May 20, 2021 at 10:16:08PM +0300, Arseny Krasnov wrote:
Add
On Fri, Jun 04, 2021 at 09:03:26PM +0300, Arseny Krasnov wrote:
On 04.06.2021 18:03, Stefano Garzarella wrote:
On Fri, Jun 04, 2021 at 04:12:23PM +0300, Arseny Krasnov wrote:
On 03.06.2021 17:45, Stefano Garzarella wrote:
On Thu, May 20, 2021 at 10:17:58PM +0300, Arseny Krasnov wrote:
On Fri, Jun 04, 2021 at 09:00:14PM +0300, Arseny Krasnov wrote:
On 04.06.2021 18:06, Stefano Garzarella wrote:
On Thu, May 20, 2021 at 10:16:08PM +0300, Arseny Krasnov wrote:
Add receive loop for SEQPACKET. It looks like receive loop for
STREAM, but there are differences:
1) It doesn't call
On Sat, Jun 05, 2021 at 03:47:28PM +0300, Dan Carpenter wrote:
Return -ENOMEM if vp_modern_map_vq_notify() fails. Currently it
returns success.
Fixes: 11d8ffed00b2 ("vp_vdpa: switch to use vp_modern_map_vq_notify()")
Signed-off-by: Dan Carpenter
---
drivers/vdpa/virtio_pci/vp_vdpa.c | 1 +
1