On Thu, Jan 18, 2018 at 11:51 AM, David Hildenbrand wrote:
>
>> 1] Existing pmem driver & virtio for region discovery:
>> -
>> Use existing pmem driver which is tightly coupled with concepts of
>> namespaces, labels etc from ACPI region discovery and re-implement
>> these concepts with virtio so
>>
On Thu, Jan 18, 2018 at 11:36 AM, Pankaj Gupta wrote:
On Thu, Jan 18, 2018 at 10:54 AM, Pankaj Gupta wrote:
On Thu, Jan 18, 2018 at 9:48 AM, David Hildenbrand wrote:
>> I'd like to emphasize again, that I would prefer a virtio-pmem only
>> solution.
>>
>> There are architectures out there (e.g. s390x) that don't support
>> NVDIMMs - there is no HW interface to expose any such stuff.
>>
>> However, with virtio-pmem, we could make it work also on architectures
>>
On Thu, Jan 18, 2018 at 8:53 AM, David Hildenbrand wrote:
On 24.11.2017 13:40, Pankaj Gupta wrote:
Hi Dan,
Thanks for your reply.
On Fri, Jan 12, 2018 at 10:23 PM, Pankaj Gupta wrote:
Hello Dan,
> Not a flag, but a new "Address Range Type GUID". See section "5.2.25.2
> System Physical Address (SPA) Range Structure" in the ACPI 6.2A
> specification. Since it is a GUID we could define a Linux specific
> type for this case, but spec changes would allow non-Linux hypervisors
> to
On Fri, Nov 24, 2017 at 4:40 AM, Pankaj Gupta wrote:
[..]
> 1] Expose vNVDIMM memory range to KVM guest.
>
>- Add flag in ACPI NFIT table for this new memory type. Do we need NVDIMM
> spec
> changes for this?
Not a flag, but a new "Address Range Type GUID". See section "5.2.25.2
System
On 24/11/2017 14:02, Pankaj Gupta wrote:
>
> - Suggestion by Paolo & Stefan(previously) to use virtio-blk makes sense
>   if just want a flush vehicle to send guest commands to host and get
>   reply after asynchronous execution. There was previous discussion [1]
>   with Rik & Dan on this.
>
> [1] https
On 24/11/2017 13:40, Pankaj Gupta wrote:
Hello,
Thank you all for all the useful suggestions.
I want to summarize the discussions so far in the
thread. Please see below:
> >>
> >>> We can go with the "best" interface for what
> >>> could be a relatively slow flush (fsync on a
> >>> file on ssd/disk on the host), which requires
> >>> th
On 23/11/2017 17:14, Dan Williams wrote:
On Wed, Nov 22, 2017 at 8:05 PM, Xiao Guangrong wrote:
On 11/22/2017 02:19 AM, Rik van Riel wrote:
> We can go with the "best" interface for what could be a relatively slow
> flush (fsync on a file on ssd/disk on the host), which requires that the
> flushing task wait on completion asynchronously.

I'd like to clarify the interface of "wait on completi
On Tue, 2017-11-21 at 10:26 -0800, Dan Williams wrote:
On Tue, Nov 21, 2017 at 10:19 AM, Rik van Riel wrote:
On Fri, 2017-11-03 at 14:21 +0800, Xiao Guangrong wrote:
> On 11/03/2017 12:30 AM, Dan Williams wrote:
> >
> > Good point, I was assuming that the mmio flush interface would be
> > discovered separately from the NFIT-defined memory range. Perhaps
> > via
> > PCI in the guest? This piece of the pro
On Sun, Nov 5, 2017 at 11:57 PM, Pankaj Gupta wrote:
>
> [..]
> >> Yes, the GUID will specifically identify this range as "Virtio Shared
> >> Memory" (or whatever name survives after a bikeshed debate). The
> >> libnvdimm core then needs to grow a new region type that mostly
> >> behaves the same as a "pmem" region, but drivers/nvdimm/pmem.c grows a
On 11/03/2017 12:30 AM, Dan Williams wrote:
On Thu, Nov 2, 2017 at 1:50 AM, Xiao Guangrong wrote:
On 11/01/2017 11:20 PM, Dan Williams wrote:
> On 11/01/2017 12:25 PM, Dan Williams wrote:
[..]
>> It's not persistent memory if it requires a hypercall to make it
>> persistent. Unless memory writes can be made durable purely with cpu
>> instructions it's dangerous for it to be treated as a PMEM range.
>> Consider a guest that tried to map i
On 11/01/2017 12:25 PM, Dan Williams wrote:
On Tue, Oct 31, 2017 at 8:43 PM, Xiao Guangrong wrote:
On 10/31/2017 10:20 PM, Dan Williams wrote:
On Tue, Oct 31, 2017 at 12:13 AM, Xiao Guangrong wrote:
On 07/27/2017 08:54 AM, Dan Williams wrote:
> At that point, would it make sense to expose these special
> virtio-pmem areas to the guest in a slightly different way,
> so the regions that need virtio flushing are not bound by
> the regular driver, and the regular driver can continue to
> work for memor
On Wed, Jul 26, 2017 at 4:46 PM, Rik van Riel wrote:
On Wed, 2017-07-26 at 14:40 -0700, Dan Williams wrote:
On Wed, Jul 26, 2017 at 2:27 PM, Rik van Riel wrote:
On Wed, 2017-07-26 at 09:47 -0400, Pankaj Gupta wrote:
> Just want to summarize here(high level):
>
> This will require implementing new 'virtio-pmem' device which presents
> a DAX address range(like pmem) to guest with read/write(direct access)
> & device flush functionality. Also, qemu
On Tue, 2017-07-25 at 07:46 -0700, Dan Williams wrote:
> On Tue, Jul 25, 2017 at 7:27 AM, Pankaj Gupta
> wrote:
> >
> > Looks like only way to send flush(blk dev) from guest to host with
> > nvdimm is using flush hint addresses. Is this the correct interface I
> > am looking?
> >
> > blkdev_
On Tue, Jul 25, 2017 at 7:27 AM, Pankaj Gupta wrote:
> Subject: Re: KVM "fake DAX" flushing interface - discussion
>
On Mon, Jul 24, 2017 at 8:48 AM, Jan Kara wrote:
> On Mon 24-07-17 08:10:05, Dan Williams wrote:
>> On Mon, Jul 24, 2017 at 5:37 AM, Jan Kara wrote:
[..]
>> This approach would turn into a full fsync on the host. The question
>> in my mind is whether there is any optimization to be had by trappin
On Mon 24-07-17 08:10:05, Dan Williams wrote:
On Mon, Jul 24, 2017 at 5:37 AM, Jan Kara wrote:
On Mon 24-07-17 08:06:07, Pankaj Gupta wrote:
On Sun 23-07-17 13:10:34, Dan Williams wrote:
On Sun, Jul 23, 2017 at 11:10 AM, Rik van Riel wrote:
On Sun, 2017-07-23 at 09:01 -0700, Dan Williams wrote:
> [ adding Ross and Jan ]
>
> On Sun, Jul 23, 2017 at 7:04 AM, Rik van Riel
> wrote:
> >
> > The goal is to increase density of guests, by moving page
> > cache into the host (where it can be easily reclaimed).
> >
> > If we assume the gues
[ adding Ross and Jan ]
On Sun, Jul 23, 2017 at 7:04 AM, Rik van Riel wrote:
On Sat, 2017-07-22 at 12:34 -0700, Dan Williams wrote:
> On Fri, Jul 21, 2017 at 8:58 AM, Stefan Hajnoczi wrote:
> >
> > Maybe the NVDIMM folks can comment on this idea.
>
> I think it's unworkable to use the flush hints as a guest-to-host
> fsync mechanism. That mechanism was designed to flush
On Fri, Jul 21, 2017 at 8:58 AM, Stefan Hajnoczi wrote:
On Fri, Jul 21, 2017 at 09:29:15AM -0400, Pankaj Gupta wrote:
On Fri, 2017-07-21 at 09:29 -0400, Pankaj Gupta wrote:
> > > - Flush hint address traps from guest to host and do an entire fsync
> > >   on backing file which itself is costly.
> > >
> > > - Can be used to flush specific pages on host backing disk. We can
> > >
> > A] Problems to solve:
> > --
> >
> > 1] We are considering two approaches for 'fake DAX flushing interface'.
> >
> > 1.1] fake dax with NVDIMM flush hints & KVM async page fault
> >
> > - Existing interface.
> >
> > - The approach to use flush hint address is
On Fri, Jul 21, 2017 at 02:56:34AM -0400, Pankaj Gupta wrote:
On 07/21/17 02:56 -0400, Pankaj Gupta wrote:
> We shared a proposal for 'KVM fake DAX flushing interface'.
> https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg02478.html
In above link,
"Overall goal of project is to increase the number of virtual machines
that can be run on a physic
Hello,
We shared a proposal for 'KVM fake DAX flushing interface'.
https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg02478.html
We did initial POC in which we used 'virtio-blk' device to perform
a device flush on pmem fsync on ext4 filesystem. There are a few hacks
to make things work. We