");
> return false;
> }
>
>
> Is anyone aware of this issue?
>
> Br, David Cohen
CCing Cameron and Roman, just in case.
Best regards,
Christian Schoenebeck
On Freitag, 21. August 2020 15:13:35 CEST Paolo Bonzini wrote:
> On 20/08/20 14:00, Christian Schoenebeck wrote:
> > One way would be a recursive type and logging a warning, which you
> > obviously don't like; another option would be an assertion fault instead
> > to make d
On Samstag, 22. August 2020 02:16:23 CEST Geoffrey McRae wrote:
> On 2020-08-22 03:47, Paolo Bonzini wrote:
> > On 21/08/20 19:34, Christian Schoenebeck wrote:
> >>> static void qjack_fini_out(HWVoiceOut *hw)
> >>> {
> >>>
> >>> QJ
> +    qemu_mutex_unlock(&c->shutdown_lock);
> }
>
> static void qjack_fini_out(HWVoiceOut *hw)
> {
>     QJackOut *jo = (QJackOut *)hw;
>     qjack_client_fini(&jo->c);
> +
> +    qemu_bh_delete(jo->c.shutdown_bh);
Paolo wrapped that qemu_bh_delete() call inside the lock as well. So I guess
it makes a difference for the BH API?
> +    qemu_mutex_destroy(&jo->c.shutdown_lock);
> }
Hmmm, is this qemu_mutex_destroy() safe at this point?
>
> static void qjack_fini_in(HWVoiceIn *hw)
> {
>     QJackIn *ji = (QJackIn *)hw;
>     qjack_client_fini(&ji->c);
> +
> +    qemu_bh_delete(ji->c.shutdown_bh);
> +    qemu_mutex_destroy(&ji->c.shutdown_lock);
> }
>
> static void qjack_enable_out(HWVoiceOut *hw, bool enable)
Best regards,
Christian Schoenebeck
On Freitag, 21. August 2020 15:08:09 CEST Paolo Bonzini wrote:
> On 21/08/20 13:12, Christian Schoenebeck wrote:
> > There is a golden rule with recursive locks: You always have to preserve a
> > clear hierarchy. Say you have the following recursive mutexes:
> >
>
it guards a clear ownership relation and hence helps to prevent many
mistakes.
Best regards,
Christian Schoenebeck
> > qjack_client_connect_ports(&jo->c);
> > return qjack_buffer_write(&jo->c.fifo, buf, len);
> >
> > }
> >
> > So you are ensuring to reconnect the JACK ports in every cycle. Isn't
> > that a
> > bit often?
>
> No, please see the implementation of qjack_client_connect_ports.
Ah, you mean this entry check:
static void qjack_client_connect_ports(QJackClient *c)
{
    if (!c->connect_ports || !c->opt->connect_ports) {
        return;
    }
    ...
It's okay. However, in the long term I would consider moving that away from
the audio thread, as most JACK API functions are not deterministic, i.e. they
could lead to audio dropouts if executed on the audio thread.
Best regards,
Christian Schoenebeck
d-safe, it can of
course race with the loop that executes bottom halves unless you are
holding the iothread mutex. This makes it mostly useless if you are not
holding the mutex."
So this lock was not about driver internal data protection, but rather about
dealing with the BH API correctly.
Best regards,
Christian Schoenebeck
not initialized as 'recursive' lock type. Does that make
sense? I.e. shouldn't there be a
qemu_rec_mutex_init(&qemu_global_mutex);
in softmmu/cpus.c for safety reasons, to prevent nested locks from the same
thread causing misbehaviour?
CCing Paolo to clarify.
Best regards,
Christian Schoenebeck
, size_t len)
{
    QJackOut *jo = (QJackOut *)hw;
    ++jo->c.packets;
    if (jo->c.state != QJACK_STATE_RUNNING) {
        qjack_client_recover(&jo->c);
        return len;
    }
    qjack_client_connect_ports(&jo->c);
    return qjack_buffer_write(&jo->c.fifo, buf, len);
}
So you are making sure the JACK ports are reconnected in every cycle. Isn't
that a bit often?
Best regards,
Christian Schoenebeck
holding the iothread mutex. This makes it mostly useless if you are
not
holding the mutex."
>     switch (c->state) {
>     case QJACK_STATE_RUNNING:
>         jack_deactivate(c->client);
> @@ -564,6 +574,7 @@ static void qjack_client_fini(QJackClient *c)
>
>     case QJACK_STATE_SHUTDOWN:
>         jack_client_close(c->client);
> +        c->client = NULL;
>         /* fallthrough */
>
>     case QJACK_STATE_DISCONNECTED:
Best regards,
Christian Schoenebeck
quite a while with 9pfs. :)
And I am working on the performance issues actually.
Best regards,
Christian Schoenebeck
> > Very weird that there is no jack_client_version() in the shared weak
> > API (i.e.
> > missing on JACK1 side).
>
> I raised this as an issue today:
> https://github.com/jackaudio/jack2/issues/628
> The developer there seems to feel that allowing the application to know
> the jack client version is a bad thing.
Yeah, I raised my voice there as well now.
Best regards,
Christian Schoenebeck
here, because the JACK API is very clear that it is the
client's responsibility to free itself.
So it looks like a JACK2-only-bug to me.
Very weird that there is no jack_client_version() in the shared weak API (i.e.
missing on JACK1 side).
Best regards,
Christian Schoenebeck
would be sufficient for the use case?
Best regards,
Christian Schoenebeck
s JACK1 vs. JACK2 and despite their names, they are in fact
completely separate implementations and there are people who prefer one over
the other. Your change would affect JACK1 as well.
Best regards,
Christian Schoenebeck
Fixed in QEMU 5.1 release.
** Changed in: qemu
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1877688
Title:
9p virtfs device reports error when
response size. So this will not introduce a performance penalty on
another end.
Also: No longer seek initial directory position in v9fs_readdir(),
as this is now handled (more consistently) by
v9fs_co_readdir_many() instead.
Signed-off-by: Christian Schoenebeck
Message-Id:
Signed-off-by: Christian
already being bottom half focused in regards
to fetching directory entries that is).
Signed-off-by: Christian Schoenebeck
Message-Id:
<9a2ddc347e533b0d801866afd9dfac853d2d4106.1596012787.git.qemu_...@crudebyte.com>
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 21 ++-
.
Christian Schoenebeck (7):
tests/virtio-9p: added split readdir tests
9pfs: make v9fs_readdir_response_size() public
9pfs: split out fs driver core of v9fs_co_readdir()
9pfs: add new function v9fs_co_readdir_many()
9pfs: T_readdir latency optimization
9pfs
As we just fixed a severe performance issue with Treaddir request
handling, clarify this overall issue as a comment on
v9fs_co_run_in_worker() with the intention to hopefully prevent
such performance mistakes in future (and fixing other yet
outstanding ones).
Signed-off-by: Christian Schoenebeck
hat this function is used for.
Signed-off-by: Christian Schoenebeck
Reviewed-by: Greg Kurz
Message-Id:
<3668ebc7d5b929a0e4f1357457060d96f50f76f4.1596012787.git.qemu_...@crudebyte.com>
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 10 --
hw/9pfs/9p.h | 1 +
2 files changed, 9 i
latencies of several hundred ms or even more. So
using v9fs_co_readdir_many() instead of v9fs_co_readdir() will
provide significant performance improvements.
Signed-off-by: Christian Schoenebeck
Message-Id:
<73dc827a12ef577ae7e644dcf34a5c0e443ab42f.1596012787.git.qemu_...@crudebyte.com>
.
This is just a preparatory patch for the subsequent patch, with the
purpose to avoid the next patch to clutter the overall diff.
Signed-off-by: Christian Schoenebeck
Reviewed-by: Greg Kurz
Message-Id:
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/codir.c | 38
with count=256
3. Split readdir test with count=128
This test case sequence is chosen because the smaller the 'count' value,
the higher the chance of errors in case of implementation bugs on server
side.
Signed-off-by: Christian Schoenebeck
Reviewed-by: Greg Kurz
Message-Id
The status of this issue is unchanged in QEMU, i.e. user.virtfs.* is
still filtered out.
If someone wants to see this changed, please use the common way for sending the
patch via ML:
https://wiki.qemu.org/Contribute/SubmitAPatch
--
You received this bug notification because you are a member of
On Donnerstag, 30. Juli 2020 12:08:33 CEST Christian Schoenebeck wrote:
> > @@ -52,6 +56,167 @@ int coroutine_fn v9fs_co_readdir(V9fsPDU *pdu,
> > V9fsFidState *fidp, return err;
> >
> > }
> >
> > +/*
> > + * This is solely executed on a
On Mittwoch, 29. Juli 2020 10:12:33 CEST Christian Schoenebeck wrote:
> The newly added function v9fs_co_readdir_many() retrieves multiple
> directory entries with a single fs driver request. It is intended to
> replace uses of v9fs_co_readdir(), the latter only retrieves a single
> dir
On Mittwoch, 29. Juli 2020 18:02:56 CEST Greg Kurz wrote:
> On Wed, 29 Jul 2020 10:11:54 +0200
>
> Christian Schoenebeck wrote:
> > The implementation of v9fs_co_readdir() has two parts: the outer
> > part is executed by main I/O thread, whereas the inner part is
>
On Mittwoch, 29. Juli 2020 17:42:54 CEST Greg Kurz wrote:
> On Wed, 29 Jul 2020 10:10:23 +0200
>
> Christian Schoenebeck wrote:
> > The previous, already existing 'basic' readdir test simply used a
> > 'count' parameter big enough to retrieve all directory entries with
As we just fixed a severe performance issue with Treaddir request
handling, clarify this overall issue as a comment on
v9fs_co_run_in_worker() with the intention to hopefully prevent
such performance mistakes in future (and fixing other yet
outstanding ones).
Signed-off-by: Christian Schoenebeck
.
This is just a preparatory patch for the subsequent patch, with the
purpose to avoid the next patch to clutter the overall diff.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/codir.c | 37 +++--
1 file changed, 23 insertions(+), 14 deletions(-)
diff --git
latencies of several hundred ms or even more. So
using v9fs_co_readdir_many() instead of v9fs_co_readdir() will
provide significant performance improvements.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.h| 22 +++
hw/9pfs/codir.c | 165
with count=256
3. Split readdir test with count=128
This test case sequence is chosen because the smaller the 'count' value,
the higher the chance of errors in case of implementation bugs on server
side.
Signed-off-by: Christian Schoenebeck
---
tests/qtest/virtio-9p-test.c | 108
already being bottom half focused in regards
to fetching directory entries that is).
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 21 ++---
hw/9pfs/9p.h | 27 ++-
2 files changed, 40 insertions(+), 8 deletions(-)
diff --git a/hw/9pfs/9p.c b/hw/9pfs
Message-ID of previous version (v7):
cover.1595166227.git.qemu_...@crudebyte.com
Message-ID of version with performance benchmark (v4):
cover.1579567019.git.qemu_...@crudebyte.com
Christian Schoenebeck (7):
tests/virtio-9p: added split readdir tests
9pfs: make v9fs_readdir_response_size() public
9
response size. So this will not introduce a performance penalty on
another end.
Also: No longer seek initial directory position in v9fs_readdir(),
as this is now handled (more consistently) by
v9fs_co_readdir_many() instead.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 132
hat this function is used for.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 10 --
hw/9pfs/9p.h | 1 +
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 2ffd96ade9..7a228c4828 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -2313,7 +2313
On Sonntag, 19. Juli 2020 15:20:11 CEST Christian Schoenebeck wrote:
> Previous patch suggests that it might make sense to use a different mutex
> type now while handling readdir requests, depending on the precise
> protocol variant, as v9fs_do_readdir_with_stat() (used by 9P200
On Dienstag, 28. Juli 2020 10:46:00 CEST Greg Kurz wrote:
> > So I'll prepare a v8 with this patch here split into two.
> >
> > But this is it. I don't see another chunk in this patch set that could be
> > split further down in an useful way.
> >
> > Bes
On Sonntag, 19. Juli 2020 14:29:13 CEST Christian Schoenebeck wrote:
> The newly added function v9fs_co_readdir_many() retrieves multiple
> directory entries with a single fs driver request. It is intended to
> replace uses of v9fs_co_readdir(), the latter only retrieves a single
> dir
last option: simply taking the risk by ignoring this potential
issue. The SASL headers are still made available by Apple, which suggests that
they don't break SASL 'too often'.
Your choice. ;-)
Best regards,
Christian Schoenebeck
ber/001254.html
The common recommendation is: "Ship your macOS app bundled with the preferred
version of these libs."
Best regards,
Christian Schoenebeck
error.
>
> Fix it by using the "official" built-in functions instead (see e.g.
> https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html).
> We are not using the *_8 variants in QEMU anyway.
>
> Suggested-by: Christian Schoenebeck
> Signed-off-by: Thomas Huth
>
hat this function is used for.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 10 --
hw/9pfs/9p.h | 1 +
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 2ffd96ade9..7a228c4828 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -2313,7 +2313
of
fetching directory entries is located at (9P2000.u still being more top
half focused, while 9P2000.L already being bottom half focused in regards
to fetching directory entries that is).
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 4 ++--
hw/9pfs/9p.h | 27
response size. So this will not introduce a performance penalty on
another end.
Also: No longer seek initial directory position in v9fs_readdir(),
as this is now handled (more consistently) by
v9fs_co_readdir_many() instead.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.c | 132
(v4):
cover.1579567019.git.qemu_...@crudebyte.com
Christian Schoenebeck (6):
tests/virtio-9p: added split readdir tests
9pfs: make v9fs_readdir_response_size() public
9pfs: add new function v9fs_co_readdir_many()
9pfs: T_readdir latency optimization
9pfs: differentiate readdir lock betw
As we just fixed a severe performance issue with Treaddir request
handling, clarify this overall issue as a comment on
v9fs_co_run_in_worker() with the intention to hopefully prevent
such performance mistakes in future (and fixing other yet
outstanding ones).
Signed-off-by: Christian Schoenebeck
with count=256
3. Split readdir test with count=128
This test case sequence is chosen because the smaller the 'count' value,
the higher the chance of errors in case of implementation bugs on server
side.
Signed-off-by: Christian Schoenebeck
---
tests/qtest/virtio-9p-test.c | 108
latencies of several hundred ms or even more. So
using v9fs_co_readdir_many() instead of v9fs_co_readdir() will
provide significant performance improvements.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/9p.h| 22 ++
hw/9pfs/codir.c | 196
data truncation and/or ending up in wrong
registers on ABI level, etc.
So __atomic_*_8() -> __atomic_*() (and including ) is probably
the best choice. Even though __atomic_*() uses a different data type, it's just
a couple of lines of changes in this case fortunately, as the actual QEMU code
base is not affected.
Best regards,
Christian Schoenebeck
) instead of __atomic_load_8(), etc.
That's what the actual QEMU code base is using anyway.
Best regards,
Christian Schoenebeck
g to either put 9pfs into 'high' or 'low', because another observed problem
with 9pfs is that the degree of participation is so low that if you try to
impose certain formal minimum requirements on contributors, you usually never
hear from them again.
And BTW Prasad actually suggested t
nly b) the average use cases (as far as we know) for 9pfs are above a certain
trust level.
However b) does not imply 9pfs being 'unsafe', nor that we want users to
refrain from using it in a security relevant environment. So 9pfs would
actually be somewhere in between.
Best regards,
Christian Schoenebeck
pite the details, that concept in general has the limitation of being a
somewhat nondeterministic runtime feature; i.e. it might abort immediately
(good) or who knows when (bad). Hence being able to also associate a security
level with runtime parameters would be beneficial, to cause the abort to
happen immediately instead.
Best regards,
Christian Schoenebeck
On Freitag, 3. Juli 2020 18:08:21 CEST Greg Kurz wrote:
> On Fri, 03 Jul 2020 10:08:09 +0200
>
> Christian Schoenebeck wrote:
> > On Donnerstag, 2. Juli 2020 19:23:35 CEST Christian Schoenebeck wrote:
> > > > > Back to the actual topic: so what do we do about the
> > > > Plus if there are really users caring for 9p2000.u, I can gladly
> > > > assist
> > > > them to address these issues for 9p2000.u as well, provided that they
> > > > help at least with testing the required 9p2000.u changes.
> > >
> > > I would personally do the opposite... again ;)
> > >
> > > Like you say we essentially care for 9p2000.L and we only do limited
> > > support for 9p2000.u. If we have a change that we really want for
> > > 9p2000.L but it has an impact on 9p2000.u because of shared code,
> > > it is fine to do the changes anyway, including changes to the 9p2000.u
> > > specific code. Very basic testing of 9p2000.u (mount/ls or find/umount)
> > > or maybe running pjd-fstest is enough IMHO. If we break someone's setup
> > > and he complains, then we can ask him to test a fix before we merge it.
> >
> > Well, so you want to handle the relevant 9p2000.u readdir optimizations by
> > yourself, and you would send it as follow-up patch set (after this set is
> > through), including test cases?
>
> Ah it wasn't my point. I was just saying that any valuable change for
> 9p2000.L prevails and if you have to change some common code (eg.
> locking) that could impact the 9p2000.u experience, you can do it
> anyway, even if you only do smoke testing with 9p2000.u.
Ah I see, so you would. But you don't. ;-)
And yes, I could. But I won't either, for the following reasons:
1. I would have to write readdir test cases for 9p2000.u as well, because the
9p2000.L tests are incompatible. -> My time
2. I have to change the 9p2000.u server code -> My time
3. This patch set grows substantially, in both lines and number of patches ->
My time and probably +10 versions more of this patch series.
4. I would have to do at least some basic testing of 9p2000.u behaviour
-> My time
All these things would bring me +x months farther away from reaching my actual
goal: making 9p usable by Linux clients. So no, I won't waste time on
9p2000.u optimizations before I see somebody actually caring about 9p2000.u
efficiency.
Best regards,
Christian Schoenebeck
On Donnerstag, 2. Juli 2020 19:23:35 CEST Christian Schoenebeck wrote:
> > > Back to the actual topic: so what do we do about the mutex then? CoMutex
> > > for 9p2000.u and Mutex for 9p2000.L? I know you find that ugly, but it
> > > would just be a transitional measure.
p2000.u changes.
>
> I would personally do the opposite... again ;)
>
> Like you say we essentially care for 9p2000.L and we only do limited
> support for 9p2000.u. If we have a change that we really want for
> 9p2000.L but it has an impact on 9p2000.u because of share
On Mittwoch, 1. Juli 2020 17:12:40 CEST Greg Kurz wrote:
> On Wed, 01 Jul 2020 13:47:12 +0200
>
> Christian Schoenebeck wrote:
> > On Mittwoch, 1. Juli 2020 12:09:24 CEST Greg Kurz wrote:
> > > No I'm talking about code that isn't changed by this series:
> >
ple
> worker threads...
Well, your Mutex -> CoMutex change was intended to fix a deadlock with the old
readdir implementation. That deadlock could not happen with the new readdir
implementation, which suggests that it probably makes sense to revert this
change (i.e. CoMutex -> Mutex) for this new implementation.
Best regards,
Christian Schoenebeck
On Dienstag, 30. Juni 2020 18:39:57 CEST Greg Kurz wrote:
> On Tue, 30 Jun 2020 17:16:40 +0200
>
> Christian Schoenebeck wrote:
> > On Montag, 29. Juni 2020 18:39:02 CEST Greg Kurz wrote:
> > > On Wed, 03 Jun 2020 19:16:08 +0200
> > >
> > > Christian
On Montag, 29. Juni 2020 18:39:02 CEST Greg Kurz wrote:
> On Wed, 03 Jun 2020 19:16:08 +0200
>
> Christian Schoenebeck wrote:
> > On Sonntag, 19. April 2020 17:06:17 CEST Christian Schoenebeck wrote:
> > > Make top half really top half and bottom half really bottom
That's the behaviour on macOS that I would expect ATM. So it's not a
bug.
Your macOS build of QEMU was compiled without virtfs support; that's why qemu
does not even offer you these options.
Even though 9P is a network protocol, you still need support by host OS
and guest OS for some kind of
Planning/5.0 (by clicking the
> star/"add to watchlist" icon), then you'll get notifications when
> additional release/freeze dates are added. Generally it will be updated
> shortly before the patch round-up gets posted to qemu-stable.
Good idea! Will do that.
Thanks Michael!
Best regards,
Christian Schoenebeck
On Dienstag, 16. Juni 2020 17:14:40 CEST Greg Kurz wrote:
> Cc'ing co-maintainer Christian Schoenebeck.
>
> Christian,
>
> If there are some more commits you think are worth being cherry picked
> for QEMU 4.2.1, please inform Michael before freeze on 2020-06-22.
Indeed, f
On Sonntag, 19. April 2020 17:06:17 CEST Christian Schoenebeck wrote:
> Make top half really top half and bottom half really bottom half:
>
> Each T_readdir request handling is hopping between threads (main
> I/O thread and background I/O driver threads) several times for
> e
Fix is now committed on master as SHA-1
cf45183b718f02b1369e18c795dc51bc1821245d, which actually just reverted the
mentioned commit that was leading to this broken behavior:
https://github.com/qemu/qemu/commit/cf45183b718f02b1369e18c795dc51bc1821245d
The original Xen transport bug that
Already reviewed by me; no changes, so just to make it clear:
Reviewed-by: Christian Schoenebeck
> hw/9pfs/9p.c | 33 +++--
> hw/9pfs/9p.h | 2 +-
> hw/9pfs/virtio-9p-device.c | 11 ---
> hw/9pfs/xen-9p-backend.c | 15 +
s in v2:
> - patch added
> ---
Reviewed-by: Christian Schoenebeck
> hw/9pfs/xen-9p-backend.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index 3c84c86ab8..a969fcc54c 100644
> --- a/hw/9pfs/x
On Mittwoch, 20. Mai 2020 19:35:00 CEST Stefano Stabellini wrote:
> On Wed, 20 May 2020, Christian Schoenebeck wrote:
> > On Mittwoch, 20. Mai 2020 03:47:12 CEST Stefano Stabellini wrote:
> > > From: Stefano Stabellini
> > >
> > > Instead of truncating repli
Good! Then just for the case ...
Compiling the 9pfs test cases:
cd build
make tests/qtest/qos-test
Running the test cases:
export QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64
tests/qtest/qos-test
All 9pfs test cases are in:
tests/qtest/virtio-9p-test.c
The 9pfs test cases are using a
On Mittwoch, 20. Mai 2020 03:47:11 CEST Stefano Stabellini wrote:
> From: Stefano Stabellini
>
> This reverts commit 16724a173049ac29c7b5ade741da93a0f46edff7.
> It causes https://bugs.launchpad.net/bugs/1877688.
>
> Signed-off-by: Stefano Stabellini
> ---
Reviewed-by:
qemu_coroutine_enter_if_inactive(ring->co);
... correct me if I am wrong, but AFAIK qemu_coroutine_enter_if_inactive()
will simply run the coroutine directly on the caller's thread; it will not
dispatch the coroutine onto the thread which yielded the coroutine before.
> +}
> xen_9pfs_receive(ring);
> }
AFAICS you have not addressed the problem of msize >> Xen ring buffer size, in
which case I would expect the Xen driver to loop forever. Am I missing
something or have you postponed addressing this?
Best regards,
Christian Schoenebeck
> 9pfs: Fix potential deadlock of QEMU mainloop
>
> https://patchwork.ozlabs.org/project/qemu-devel/patch/158826201391.1344781.9
> 403916162733181811.st...@bahia.lan/
Good move!
Reviewed-by: Christian Schoenebeck
Yes, that compile error with QEMU + recent kernel headers is a bit
annoying, and AFAICS it is not fixed in Debian yet.
Would you mind writing a test case for this bug that you fixed, to
prevent this accidentally being broken in future again?
Please note that 9pfs is currently only been taken
_open_fd;
Usually I would say that should be wrapped in some OS conditional way, but as
usage of XATTR_SIZE_MAX is currently not in 9p code either, it's Ok for now.
Reviewed-by: Christian Schoenebeck
Best regards,
Christian Schoenebeck
serious to let it rest for too long.
In case Stefano comes up with a fix for Xen soon, you might just ignore
this patch and just revert SHA-1 16724a173049ac29c7b5ade741da93a0f46edff7
entirely instead of course.
Christian Schoenebeck (1):
virtio-9pfs: don't truncate response
hw/9pfs/virtio-9p
-by: Christian Schoenebeck
---
hw/9pfs/virtio-9p-device.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/hw/9pfs/virtio-9p-device.c b/hw/9pfs/virtio-9p-device.c
index 536447a355..bb6154945a 100644
--- a/hw/9pfs/virtio-9p-device.c
+++ b/hw/9pfs/virtio-9p-device.c
@@ -154,16 +154,13
dmin know that Xen's transport buffer should be increased
to avoid this performance issue.
Likewise call warn_report_once() if server's Tversion request handler had to
reduce msize.
Best regards,
Christian Schoenebeck
On Donnerstag, 14. Mai 2020 17:51:27 CEST Stefano Stabellini wrote:
> On Thu, 14 May 2020, Christian Schoenebeck wrote:
> > Looks like this issue will still take quite some time to be fixed with
> > Xen. If you don't mind I'll send out a patch to revert truncation on
>
BTW, the obvious step would be to tremendously increase the Xen transport
capacity in the first place.
Looks like this issue will still take quite some time to be fixed with Xen. If
you don't mind I'll send out a patch to revert truncation on virtio side, so
that at least this bug is fixed with virtio ASAP.
Best regards,
Christian Schoenebeck
ize large. If that's the case, then we don't have to worry about
anything else and can drop this transport truncation code.
> > > > Fixes: 16724a173049ac29c7b5ade741da93a0f46edff7
> > > > Fixes: https://bugs.launchpad.net/bugs/1877688
> > > > Signed-off-by: C
On Dienstag, 12. Mai 2020 00:09:46 CEST Stefano Stabellini wrote:
> On Sun, 10 May 2020, Christian Schoenebeck wrote:
> > Commit SHA-1 16724a173049ac29c7b5ade741da93a0f46edff7 introduced
> > truncating the response to the currently available transport buffer
> > size, whic
On Sonntag, 10. Mai 2020 19:18:21 CEST Christian Schoenebeck wrote:
> Commit SHA-1 16724a173049ac29c7b5ade741da93a0f46edff7 introduced
> truncating the response to the currently available transport buffer
> size, which was supposed to fix an 9pfs error on Xen boot where
> transport
If delivery of some 9pfs response fails for some reason, log the
error message by mentioning the 9P protocol reply type, not by
client's request type. The latter could be misleading that the
error occurred already when handling the request input.
Signed-off-by: Christian Schoenebeck
---
hw/9pfs
Stefano, looks like your original patch needs some more fine tuning:
https://bugs.launchpad.net/bugs/1877688
Please check if the assumptions I made about Xen are correct, and please
also test whether these changes still work for you with Xen as intended by
you.
Christian Schoenebeck (2):
xen
.
Fixes: 16724a173049ac29c7b5ade741da93a0f46edff7
Fixes: https://bugs.launchpad.net/bugs/1877688
Signed-off-by: Christian Schoenebeck
---
hw/9pfs/virtio-9p-device.c | 35 +++
hw/9pfs/xen-9p-backend.c | 38 +-
2 files changed, 56
** Changed in: qemu
Status: New => In Progress
** Changed in: qemu
Assignee: (unassigned) => Christian Schoenebeck (schoenebeck)
Looks like being introduced by this change:
https://patchwork.kernel.org/patch/11319993/
More specifically this one exactly:
-    if (buf_size < size) {
+    if (buf_size < P9_IOHDRSZ) {
Since the report is about overlayfs being involved, could you please try if
the following patch makes a difference?
https://github.com/gkurz/qemu/commit/f7f5a1b01307af1c7b6c94672f2ce75c36f10565
It's not yet on master, but will be soon.
t; > >
> > > I could possibly split this in two patches, one for returning a copy
> > > and one for moving the locking around, but...
> > >
> > > > related with each other, nor is the code simplification you are aiming
> > > > trivial
> > >
> > > ... this assertion is somewhat wrong: moving the locking to
> > > v9fs_co_readdir() really requires it returns a copy.
> >
> > Yeah, I am also not sure whether a split would make it more trivial enough
> > in this case to be worth the hassle. If you find an acceptable solution,
> > good, if not then leave it one patch.
>
> Another option would be to g_malloc() the dirent in v9fs_co_readdir() and
> g_free() in the callers. This would cause less churn since we could keep
> the same function signature.
I was actually just going to suggest the same. So yes, looks like a less
invasive change to me.
Best regards,
Christian Schoenebeck
On Montag, 4. Mai 2020 11:18:34 CEST Greg Kurz wrote:
> On Fri, 01 May 2020 16:04:41 +0200
>
> Christian Schoenebeck wrote:
> > On Donnerstag, 30. April 2020 15:30:49 CEST Greg Kurz wrote:
> > > > > I agree that a client that issues concurrent readdir requ
On Mittwoch, 6. Mai 2020 19:54:15 CEST Greg Kurz wrote:
> On Wed, 06 May 2020 15:36:16 +0200
>
> Christian Schoenebeck wrote:
> > On Mittwoch, 6. Mai 2020 15:05:23 CEST Christian Schoenebeck wrote:
> > > > diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> > > >
e. I would rather use sizeof(struct dirent). It is also
> > more human friendly to read IMO.
>
> Hmm... I believe it's the opposite actually: with sizeof(*dent), memcpy
> will always copy the number of bytes that are expected to fit in *dent,
> no matter the type.
Yes, but what you intend is to flat-copy a structure, not pointers. So no
matter how the type is going to be changed, what you always actually wanted
(semantically) is
copy(sizeof(struct dirent), nelements)
Right now it is nelements=1, in future it might also be nelements>1, but what
you certainly don't ever want here is
copy(sizeof(void*), nelements)
> But yes, since memcpy() doesn't do any type checking for us, I think
> I'll just turn this into:
>
> *dent = *entry;
Ok
Best regards,
Christian Schoenebeck
On Mittwoch, 6. Mai 2020 15:05:23 CEST Christian Schoenebeck wrote:
> > diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> > index 9e046f7acb51..ac84ae804496 100644
> > --- a/hw/9pfs/9p.c
> > +++ b/hw/9pfs/9p.c
> > @@ -2170,7 +2170,7 @@ static int coroutine_fn
> > v9
s_rwalk(req, NULL, NULL);
> +
> +    req = v9fs_tlopen(v9p, 1, O_WRONLY, 0);
> +    v9fs_req_wait_for_reply(req, NULL);
> +    v9fs_rlopen(req, NULL, NULL);
> +
> +    /*
> +     * The first request will remain pending in the backend until
> +     * a request with the magic offset is received. This deadlocks
> +     * the mainloop of an older QEMU that still handles the critical
> +     * section around readdir() in the frontend code.
> +     */
> +    req = v9fs_treaddir(v9p, 1, 0, 128, 0);
> +    req2 = v9fs_treaddir(v9p, 1, QTEST_V9FS_SYNTH_READDIR_DEADLOCK_OFFSET,
> +                         128, 1);
> +    v9fs_req_wait_for_reply(req, NULL);
> +    v9fs_req_free(req);
> +    v9fs_req_free(req2);
> +
> +    g_free(wnames[0]);
> +}
> +
> static void register_virtio_9p_test(void)
> {
>     qos_add_test("config", "virtio-9p", pci_config, NULL);
> @@ -810,6 +842,7 @@ static void register_virtio_9p_test(void)
>     qos_add_test("fs/flush/ignored", "virtio-9p", fs_flush_ignored,
>                  NULL);
>     qos_add_test("fs/readdir/basic", "virtio-9p", fs_readdir, NULL);
> +    qos_add_test("fs/readdir/deadlock", "virtio-9p", fs_readdir_deadlock,
> +                 NULL);
> }
>
> libqos_init(register_virtio_9p_test);
Best regards,
Christian Schoenebeck
_many() is currently simply calling v9fs_readdir_response_size() directly to
decide whether to terminate the loop, instead of taking some complicated
general-purpose loop-end "predicate" structure or callback as function
argument).
But when it comes to the structure of the code that I have to add NOW, then I
indeed take potential future changes into account, yes! And this applies
specifically to the two changes you requested here inside readdir_many().
Because I already know I would need to revert those 2 changes that you
requested later on. And I don't see any issue whatsoever in retaining the
current version concerning those 2.
Best regards,
Christian Schoenebeck
at the "st"
> > member is optional and may be NULL. So if there's an error inside if
> > (dostat) {} then caller still has a valid "dent" field at least and it's
> > up to caller whether or not it's a problem for its purpose that "st" is
> > empty. For that reason I would not move the block forward.
>
> Hrm... the comment is:
>
> /*
> * optional (may be NULL): A full stat of each directory entry is just
> * done if explicitly told to fs driver.
> */
>
> I don't read that it is optional for the fs driver to populate "st"
> if this was required by the caller.
Well, if you find that ambiguous, I could add an additional sentence "It might
also be NULL if stat failed.".
> Also, looking at the next patch
> I see that the condition for calling stat() is V9FS_REMAP_INODES and
> the code explicitly requires "st" to be available in this case.
Yes, but there is also an if (!st) { ... } in the subsequent patch already. So
I don't see an issue here.
Changing the order in readdir_many() would prevent the future applications of
that function that I described. Or let's say, the order would need to be
reverted again later on.
Best regards,
Christian Schoenebeck