Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations

2023-04-12 Thread Peter Maydell
On Wed, 12 Apr 2023 at 19:22, David Woodhouse  wrote:
>
> On Tue, 2023-04-11 at 19:07 +0100, Peter Maydell wrote:
> >
> >
> > > +static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
> > > +{
> > > +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL, xs_be_watch_cb, w);
> >
> > Coverity points out that this is the only call to xs_impl_unwatch()
> > where we don't check the return value. Is there some useful way
> > we can report the error, or is it a "we're closing everything down
> > anyway, no way to report anything" situation? (This particular
> > Coverity heuristic is quite prone to false positives, so if that's
> > the way it is I'll just mark it as a f-p in the coverity UI.)
>
> This is because the Xen libxenstore API doesn't return an error, and
> this is the ops function which emulates that same API. I suppose we
> could explicitly cast to void with a comment to that effect, to avoid
> having it linger in Coverity? I think that's sufficient to make
> Coverity shut up, isn't it?

I've just marked it a false-positive in the UI. Coverity's generally
good at not resurfacing old false-positives, so don't bother
changing the code unless you think it would improve clarity for
a human reader.

thanks
-- PMM



Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations

2023-04-12 Thread David Woodhouse
On Tue, 2023-04-11 at 19:07 +0100, Peter Maydell wrote:
> 
> 
> > +static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
> > +{
> > +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL, xs_be_watch_cb, w);
> 
> Coverity points out that this is the only call to xs_impl_unwatch()
> where we don't check the return value. Is there some useful way
> we can report the error, or is it a "we're closing everything down
> anyway, no way to report anything" situation? (This particular
> Coverity heuristic is quite prone to false positives, so if that's
> the way it is I'll just mark it as a f-p in the coverity UI.)

This is because the Xen libxenstore API doesn't return an error, and
this is the ops function which emulates that same API. I suppose we
could explicitly cast to void with a comment to that effect, to avoid
having it linger in Coverity? I think that's sufficient to make
Coverity shut up, isn't it?
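
Concretely, the suggestion above would amount to something like the following sketch; the (void) cast and the comment are the hypothetical tweak being discussed here, not code from the applied patch:

static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
{
    /*
     * The API this op emulates has no way to report failure, so there is
     * nothing useful to do with the error; discard it explicitly so the
     * ignored return value is clearly intentional.
     */
    (void)xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL,
                          xs_be_watch_cb, w);

    h->watches = g_list_remove(h->watches, w);
    g_list_free_full(w->events, (GDestroyNotify)free_watch_event);
    g_free(w->path);
    g_free(w);
}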




Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations

2023-04-11 Thread Peter Maydell
On Tue, 7 Mar 2023 at 18:27, David Woodhouse  wrote:
>
> From: David Woodhouse 
>
> Now that we have an internal implementation of XenStore, we can populate
> the xenstore_backend_ops to allow PV backends to talk to it.
>
> Watches can't be processed with immediate callbacks because that would
> call back into XenBus code recursively. Defer them to a QEMUBH to be run
> as appropriate from the main loop. We use a QEMUBH per XS handle, and it
> walks all the watches (there shouldn't be many per handle) to fire any
> which have pending events. We *could* have done it differently but this
> allows us to use the same struct watch_event as we have for the guest
> side, and keeps things relatively simple.
>
> Signed-off-by: David Woodhouse 
> Reviewed-by: Paul Durrant 



> +static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
> +{
> +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL, xs_be_watch_cb, w);

Coverity points out that this is the only call to xs_impl_unwatch()
where we don't check the return value. Is there some useful way
we can report the error, or is it a "we're closing everything down
anyway, no way to report anything" situation? (This particular
Coverity heuristic is quite prone to false positives, so if that's
the way it is I'll just mark it as a f-p in the coverity UI.)


> +    h->watches = g_list_remove(h->watches, w);
> +    g_list_free_full(w->events, (GDestroyNotify)free_watch_event);
> +    g_free(w->path);
> +    g_free(w);
> +}

thanks
-- PMM



Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations

2023-04-11 Thread Peter Maydell
On Tue, 7 Mar 2023 at 18:27, David Woodhouse  wrote:
>
> From: David Woodhouse 
>
> Now that we have an internal implementation of XenStore, we can populate
> the xenstore_backend_ops to allow PV backends to talk to it.
>
> Watches can't be processed with immediate callbacks because that would
> call back into XenBus code recursively. Defer them to a QEMUBH to be run
> as appropriate from the main loop. We use a QEMUBH per XS handle, and it
> walks all the watches (there shouldn't be many per handle) to fire any
> which have pending events. We *could* have done it differently but this
> allows us to use the same struct watch_event as we have for the guest
> side, and keeps things relatively simple.


> +static struct qemu_xs_handle *xs_be_open(void)
> +{
> +    XenXenstoreState *s = xen_xenstore_singleton;
> +    struct qemu_xs_handle *h;
> +
> +    if (!s && !s->impl) {

Coverity points out that this will dereference a NULL pointer
if you hand it one, and will happily let through a XenXenstoreState
where s->impl is NULL.
Should be "!s || !s->impl" I guess ?
CID 1508131.

> +        errno = -ENOSYS;
> +        return NULL;
> +    }
> +
> +    h = g_new0(struct qemu_xs_handle, 1);
> +    h->impl = s->impl;
> +
> +    h->watch_bh = aio_bh_new(qemu_get_aio_context(), be_watch_bh, h);
> +
> +    return h;
> +}
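
In other words, the suggested correction is just the flipped operator in the guard, nothing else:

    if (!s || !s->impl) {
        errno = -ENOSYS;
        return NULL;
    }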

thanks
-- PMM



[PULL 22/27] hw/xen: Add emulated implementation of XenStore operations

2023-03-07 Thread David Woodhouse
From: David Woodhouse 

Now that we have an internal implementation of XenStore, we can populate
the xenstore_backend_ops to allow PV backends to talk to it.

Watches can't be processed with immediate callbacks because that would
call back into XenBus code recursively. Defer them to a QEMUBH to be run
as appropriate from the main loop. We use a QEMUBH per XS handle, and it
walks all the watches (there shouldn't be many per handle) to fire any
which have pending events. We *could* have done it differently but this
allows us to use the same struct watch_event as we have for the guest
side, and keeps things relatively simple.

Signed-off-by: David Woodhouse 
Reviewed-by: Paul Durrant 
---
 hw/i386/kvm/xen_xenstore.c | 273 -
 1 file changed, 269 insertions(+), 4 deletions(-)
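
As a reading aid for the diff below: the per-handle QEMUBH described in the commit message boils down to roughly the following shape. This is a simplified sketch reusing the names introduced by the patch (struct qemu_xs_handle, struct qemu_xs_watch, free_watch_event), not the verbatim be_watch_bh implementation, which appears further down in the patch:

static void be_watch_bh(void *_h)
{
    struct qemu_xs_handle *h = _h;
    GList *l;

    /* Walk every watch registered on this handle... */
    for (l = h->watches; l; l = l->next) {
        struct qemu_xs_watch *w = l->data;

        /* ...and deliver any events queued for it, oldest first. */
        while (w->events) {
            struct watch_event *ev = w->events->data;

            w->fn(w->opaque, ev->path);

            w->events = g_list_remove(w->events, ev);
            free_watch_event(ev);
        }
    }
}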

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 35898e9b37..bf466c71ed 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -49,7 +49,7 @@ struct XenXenstoreState {
     /*< public >*/
 
     XenstoreImplState *impl;
-    GList *watch_events;
+    GList *watch_events; /* for the guest */
 
     MemoryRegion xenstore_page;
     struct xenstore_domain_interface *xs;
@@ -73,6 +73,8 @@ struct XenXenstoreState *xen_xenstore_singleton;
 static void xen_xenstore_event(void *opaque);
 static void fire_watch_cb(void *opaque, const char *path, const char *token);
 
+static struct xenstore_backend_ops emu_xenstore_backend_ops;
+
 static void G_GNUC_PRINTF (4, 5) relpath_printf(XenXenstoreState *s,
                                                 GList *perms,
                                                 const char *relpath,
@@ -169,6 +171,8 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
     relpath_printf(s, perms, "feature", "%s", "");
 
     g_list_free_full(perms, g_free);
+
+    xen_xenstore_ops = &emu_xenstore_backend_ops;
 }
 
 static bool xen_xenstore_is_needed(void *opaque)
@@ -1306,6 +1310,15 @@ struct watch_event {
     char *token;
 };
 
+static void free_watch_event(struct watch_event *ev)
+{
+    if (ev) {
+        g_free(ev->path);
+        g_free(ev->token);
+        g_free(ev);
+    }
+}
+
 static void queue_watch(XenXenstoreState *s, const char *path,
                         const char *token)
 {
@@ -1352,9 +1365,7 @@ static void process_watch_events(XenXenstoreState *s)
     deliver_watch(s, ev->path, ev->token);
 
     s->watch_events = g_list_remove(s->watch_events, ev);
-    g_free(ev->path);
-    g_free(ev->token);
-    g_free(ev);
+    free_watch_event(ev);
 }
 
 static void xen_xenstore_event(void *opaque)
@@ -1444,3 +1455,257 @@ int xen_xenstore_reset(void)
 
     return 0;
 }
+
+struct qemu_xs_handle {
+    XenstoreImplState *impl;
+    GList *watches;
+    QEMUBH *watch_bh;
+};
+
+struct qemu_xs_watch {
+    struct qemu_xs_handle *h;
+    char *path;
+    xs_watch_fn fn;
+    void *opaque;
+    GList *events;
+};
+
+static char *xs_be_get_domain_path(struct qemu_xs_handle *h, unsigned int domid)
+{
+    return g_strdup_printf("/local/domain/%u", domid);
+}
+
+static char **xs_be_directory(struct qemu_xs_handle *h, xs_transaction_t t,
+                              const char *path, unsigned int *num)
+{
+    GList *items = NULL, *l;
+    unsigned int i = 0;
+    char **items_ret;
+    int err;
+
+    err = xs_impl_directory(h->impl, DOMID_QEMU, t, path, NULL, &items);
+    if (err) {
+        errno = err;
+        return NULL;
+    }
+
+    items_ret = g_new0(char *, g_list_length(items) + 1);
+    *num = 0;
+    for (l = items; l; l = l->next) {
+        items_ret[i++] = l->data;
+        (*num)++;
+    }
+    g_list_free(items);
+    return items_ret;
+}
+
+static void *xs_be_read(struct qemu_xs_handle *h, xs_transaction_t t,
+                        const char *path, unsigned int *len)
+{
+    GByteArray *data = g_byte_array_new();
+    bool free_segment = false;
+    int err;
+
+    err = xs_impl_read(h->impl, DOMID_QEMU, t, path, data);
+    if (err) {
+        free_segment = true;
+        errno = err;
+    } else {
+        if (len) {
+            *len = data->len;
+        }
+        /* The xen-bus-helper code expects to get NUL terminated string! */
+        g_byte_array_append(data, (void *)"", 1);
+    }
+
+    return g_byte_array_free(data, free_segment);
+}
+
+static bool xs_be_write(struct qemu_xs_handle *h, xs_transaction_t t,
+                        const char *path, const void *data, unsigned int len)
+{
+    GByteArray *gdata = g_byte_array_new();
+    int err;
+
+    g_byte_array_append(gdata, data, len);
+    err = xs_impl_write(h->impl, DOMID_QEMU, t, path, gdata);
+    g_byte_array_unref(gdata);
+    if (err) {
+        errno = err;
+        return false;
+    }
+    return true;
+}
+
+static bool xs_be_create(struct qemu_xs_handle *h, xs_transaction_t t,
+                         unsigned int owner, unsigned int domid,
+                         unsigned int perms, const char *path)
+{
+