On 6/7/21 11:08 PM, Stefan Hajnoczi wrote:
On Mon, Jun 07, 2021 at 09:32:52PM +0800, zhenwei pi wrote:
In 2020 I started developing a userspace NVMF initiator library:
https://github.com/bytedance/libnvmf
and recently released v0.1.
I also developed a block driver for the QEMU side:
On Mon, Jun 07, 2021 at 04:22:27PM -0500, Eric Blake wrote:
[replying to myself]
> > Here is a simpler reproducer:
> >
> > # Create a qcow2 image with a raw backing file:
> > $ qemu-img create base.raw $((4*64*1024))
> > $ qemu-img create -f qcow2 -b base.raw -F raw top.qcow2
> >
> >
When zeroing a cluster in an image with a backing file, qemu-img and
qemu-nbd report the area as a hole. This does not affect the guest,
since the area reads as zero, but it breaks code trying to reconstruct
the image chain based on qemu-img map or the qemu-nbd block status response.
Here is a simpler
On Wed, May 05, 2021 at 10:50:01AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Now, when all drivers are updated by previous commit, we can drop the
Now that all drivers are updated by the previous commit,
> last limiter on pdiscard path: INT_MAX in bdrv_co_pdiscard().
>
> Now everything is
On Wed, May 05, 2021 at 10:50:00AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are generally moving to int64_t for both offset and bytes parameters
> on all I/O paths.
>
> The main motivation is implementing a 64-bit write_zeroes operation for
> fast zeroing of large disk chunks, up to the whole
In downstream, we want to use a different name for the QEMU binary,
and some people might also use the docs for non-x86 binaries; that's
why we already created the |qemu_system| placeholder in the past.
Use it now in the live-block-operations doc, too.
Signed-off-by: Thomas Huth
---
On 6/5/21 10:08 AM, Vladimir Sementsov-Ogievskiy wrote:
04.06.2021 19:39, John Snow wrote:
Since the iotests are such heavy and prominent users of the Python qemu.qmp
and qemu.machine packages, it would be convenient if the Python linting
suite also checked this client for any possible regressions
On 6/5/21 10:27 AM, Vladimir Sementsov-Ogievskiy wrote:
04.06.2021 19:39, John Snow wrote:
Refactor the core function of the linting configuration out of 297 and
into a new file called linters.py.
Now, linters.py represents an invocation of the linting scripts that
more resembles a "normal"
Am 03.06.2021 um 09:38 hat Paolo Bonzini geschrieben:
> On 02/06/21 14:21, Kevin Wolf wrote:
> > Am 02.06.2021 um 11:13 hat Stefan Hajnoczi geschrieben:
> > > On Fri, May 28, 2021 at 05:16:26PM +0300, Vladimir Sementsov-Ogievskiy
> > > wrote:
> > > > Hi all!
> > > >
> > > > This is my suggestion
On Wed, May 05, 2021 at 10:49:59AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> We are going to support 64 bit discard requests. Now update the
> limit variable. It's absolutely safe. The variable is set in some
> drivers, and used in bdrv_co_pdiscard().
>
> Also update the max_pdiscard variable in
On Wed, May 05, 2021 at 10:49:58AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Now, when all drivers are updated by previous commit, we can drop two
s/Now, when/Now that/
> last limiters on write-zeroes path: INT_MAX in
> bdrv_co_do_pwrite_zeroes() and bdrv_check_request32() in
>
On Fri, Jun 04, 2021 at 11:25:16AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> So, there are some ways to improve the situation:
My personal preference (although I'm fine with any of your listed
options, if others speak up in favor of a different one):
> 2. Take this patch and also convert
On Tue, May 11, 2021 at 11:39:49AM -0400, Paolo Bonzini wrote:
> Most block device commands do not require a fully constructed machine.
> Allow running them before machine initialization has concluded.
>
> Signed-off-by: Paolo Bonzini
> ---
> hmp-commands.hx | 14 +
>
On 11/05/21 17:39, Paolo Bonzini wrote:
Most block device commands do not require a fully constructed machine.
Allow running them before machine initialization has concluded.
Signed-off-by: Paolo Bonzini
---
hmp-commands.hx | 14 +
qapi/block-core.json | 117
On Thu, 2021-06-03 at 15:37 +0200, Paolo Bonzini wrote:
> Hi Kevin,
>
> this is a combination of two series that both affect host block device
> support in block/file-posix.c. Joelle's series is unchanged, while
> mine was adjusted according to your review of v2.
>
> v1->v2: add missing patch
>
In 2020 I started developing a userspace NVMF initiator library:
https://github.com/bytedance/libnvmf
and recently released v0.1.
I also developed a block driver for the QEMU side:
https://github.com/pizhenwei/qemu/tree/block-nvmf
Tested with a Linux kernel NVMF target (TCP), QEMU gets about 220K IOPS,
Add a new QEMU block driver which uses libnvmf as a userspace NVMe over
Fabrics initiator.
Currently QEMU uses 4 NVMF I/O queues with a round-robin policy; tested
with a Linux kernel NVMF target, QEMU gets about 220K IOPS.
Thanks to Famz for several suggestions.
Signed-off-by: zhenwei pi
---
block/meson.build
On May 31 21:39, Klaus Jensen wrote:
On May 31 15:42, Niklas Cassel wrote:
On Fri, May 28, 2021 at 01:22:38PM +0200, Klaus Jensen wrote:
On May 28 11:05, Niklas Cassel wrote:
From: Niklas Cassel
In the Zoned Namespace Command Set Specification, chapter
2.5.1 Managing resources
"The
On Jun 7 09:58, Niklas Cassel wrote:
On Mon, Jun 07, 2021 at 11:54:02AM +0200, Klaus Jensen wrote:
On Jun 1 07:30, Niklas Cassel wrote:
> On Mon, May 31, 2021 at 09:39:20PM +0200, Klaus Jensen wrote:
> > On May 31 15:42, Niklas Cassel wrote:
> > > On Fri, May 28, 2021 at 01:22:38PM +0200,
On Jun 7 10:11, Vladimir Sementsov-Ogievskiy wrote:
07.06.2021 09:17, Klaus Jensen wrote:
On Jun 7 08:14, Vladimir Sementsov-Ogievskiy wrote:
04.06.2021 09:52, Klaus Jensen wrote:
I've kept the RFC since I'm still new to using the block layer like
this. I was hoping that Stefan could find
From: Klaus Jensen
Qiang Liu reported that an access to an unknown address is triggered in
memory_region_set_enabled because a check on CAP.PMRS is missing for the
PMRCTL register write when no PMR is configured.
Cc: qemu-sta...@nongnu.org
Fixes: 75c3c9de961d ("hw/block/nvme: disable PMR at
On 04/06/21 12:07, Emanuele Giuseppe Esposito wrote:
+    WITH_QEMU_LOCK_GUARD(&s->lock) {
+        new_state = s->state;
+        QLIST_FOREACH_SAFE(rule, &s->rules[event], next, next) {
+            process_rule(bs, rule, actions_count, &new_state);
+        }
+        s->state = new_state;
+    }
On 04/06/21 18:16, Eric Blake wrote:
On Fri, Jun 04, 2021 at 12:07:36PM +0200, Emanuele Giuseppe Esposito wrote:
Extract to a separate function. Do not rely on FOREACH_SAFE, which is
only "safe" if the *current* node is removed---not if another node is
removed. Instead, just walk the entire
03.06.2021 16:37, Paolo Bonzini wrote:
Even though it was only called for devices that have bs->sg set (which
must be character devices), sg_get_max_segments looked at /sys/dev/block,
which only works for block devices.
I assume you keep the /sys/dev/block code branch here only for following
Hi Vladimir,
Thanks for taking the time to look through this!
I'll try to comment on all your observations below.
On Jun 7 08:14, Vladimir Sementsov-Ogievskiy wrote:
04.06.2021 09:52, Klaus Jensen wrote:
From: Klaus Jensen
This series reimplements flush, dsm, copy, zone reset and format