Hi,

On 27-11-17 20:46, Larry Finger wrote:
On 11/26/2017 09:12 AM, Hans de Goede wrote:
This commit adds a driver for the Virtual Box Guest PCI device used in
Virtual Box virtual machines. Enabling this driver will add support for
Virtual Box Guest integration features such as copy-and-paste, seamless
mode and OpenGL pass-through.

This driver also offers vboxguest IPC functionality which is needed
for the vboxfs driver which offers folder sharing support.

Signed-off-by: Hans de Goede <hdego...@redhat.com>

Reviewed-by: Larry Finger <Larry.Finger@lwfinger.net>


Smatch lists the following:

   CHECK   drivers/virt/vboxguest/vboxguest_core.c
drivers/virt/vboxguest/vboxguest_core.c:604 vbg_set_session_event_filter() 
error: we previously assumed 'req' could be null (see line 585)
drivers/virt/vboxguest/vboxguest_core.c:606 vbg_set_session_event_filter() 
warn: variable dereferenced before check 'req' (see line 604)
drivers/virt/vboxguest/vboxguest_core.c:698 vbg_set_session_capabilities() 
error: we previously assumed 'req' could be null (see line 679)
drivers/virt/vboxguest/vboxguest_core.c:700 vbg_set_session_capabilities() 
warn: variable dereferenced before check 'req' (see line 698)

Good idea to run smatch, thank you for catching these.

vbox_utils.c is clean.

The reasons for the above errors, and other comments, are inline below.

---
Changes in v2:
-Change all uapi headers to kernel coding style: Drop struct and enum typedefs,
  make type and struct-member names all lowercase, enum values all uppercase.
-Remove unused struct type declarations from some headers (shaving off another
  1000 lines)
-Remove or fixup doxygen style comments
-Get rid of CHECK macros, use a function taking in_ and out_size args instead
-Some other small coding style fixes
-Split into multiple patches
---
  drivers/virt/Kconfig                       |    1 +
  drivers/virt/Makefile                      |    1 +
  drivers/virt/vboxguest/Kconfig             |   16 +
  drivers/virt/vboxguest/Makefile            |    3 +
  drivers/virt/vboxguest/vboxguest_core.c    | 1577 ++++++++++++++++++++++++++++
  drivers/virt/vboxguest/vboxguest_core.h    |  187 ++++
  drivers/virt/vboxguest/vboxguest_linux.c   |  469 +++++++++
  drivers/virt/vboxguest/vboxguest_version.h |   18 +
  8 files changed, 2272 insertions(+)
  create mode 100644 drivers/virt/vboxguest/Kconfig
  create mode 100644 drivers/virt/vboxguest/Makefile
  create mode 100644 drivers/virt/vboxguest/vboxguest_core.c
  create mode 100644 drivers/virt/vboxguest/vboxguest_core.h
  create mode 100644 drivers/virt/vboxguest/vboxguest_linux.c
  create mode 100644 drivers/virt/vboxguest/vboxguest_version.h

diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
index 99ebdde590f8..8d9cdfbd6bcc 100644
--- a/drivers/virt/Kconfig
+++ b/drivers/virt/Kconfig
@@ -30,4 +30,5 @@ config FSL_HV_MANAGER
            4) A kernel interface for receiving callbacks when a managed
           partition shuts down.
+source "drivers/virt/vboxguest/Kconfig"
  endif
diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
index c47f04dd343b..d3f7b2540890 100644
--- a/drivers/virt/Makefile
+++ b/drivers/virt/Makefile
@@ -3,3 +3,4 @@
  #
  obj-$(CONFIG_FSL_HV_MANAGER)    += fsl_hypervisor.o
+obj-y                += vboxguest/
diff --git a/drivers/virt/vboxguest/Kconfig b/drivers/virt/vboxguest/Kconfig
new file mode 100644
index 000000000000..e88ee46c31d4
--- /dev/null
+++ b/drivers/virt/vboxguest/Kconfig
@@ -0,0 +1,16 @@
+config VBOXGUEST
+    tristate "Virtual Box Guest integration support"
+    depends on X86 && PCI && INPUT
+    help
+      This is a driver for the Virtual Box Guest PCI device used in
+      Virtual Box virtual machines. Enabling this driver will add
+      support for Virtual Box Guest integration features such as
+      copy-and-paste, seamless mode and OpenGL pass-through.
+
+      This driver also offers vboxguest IPC functionality which is needed
+      for the vboxfs driver which offers folder sharing support.
+
+      Although it is possible to build this module in, it is advised
+      to build this driver as a module, so that it can be updated
+      independently of the kernel. Select M to build this driver as a
+      module.

This Kconfig allows vboxguest to be built even though vboxvideo is not being 
built. That does not seem to be a useful combination.

As discussed in the cover-letter thread, I agree this is not useful, but
adding a "depends on" is also not really the answer, so I've added this
line to the help section:

          If you enable this driver you should also enable the VBOXVIDEO option.


diff --git a/drivers/virt/vboxguest/Makefile b/drivers/virt/vboxguest/Makefile
new file mode 100644
index 000000000000..203b8f465817
--- /dev/null
+++ b/drivers/virt/vboxguest/Makefile
@@ -0,0 +1,3 @@
+vboxguest-y := vboxguest_linux.o vboxguest_core.o vboxguest_utils.o
+
+obj-$(CONFIG_VBOXGUEST) += vboxguest.o
diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c
new file mode 100644
index 000000000000..4927c0d3e336
--- /dev/null
+++ b/drivers/virt/vboxguest/vboxguest_core.c
@@ -0,0 +1,1577 @@
+/*
+ * vboxguest core guest-device handling code, VBoxGuest.cpp in upstream svn.
+ *
+ * Copyright (C) 2007-2016 Oracle Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * The contents of this file may alternatively be used under the terms
+ * of the Common Development and Distribution License Version 1.0
+ * (CDDL) only, in which case the provisions of the CDDL are applicable
+ * instead of those of the GPL.
+ *
+ * You may elect to license modified versions of this file under the
+ * terms and conditions of either the GPL or the CDDL or both.
+ */
+
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+#include <linux/vbox_err.h>
+#include <linux/vbox_utils.h>
+#include <linux/vmalloc.h>
+#include "vboxguest_core.h"
+#include "vboxguest_version.h"
+
+/* Get the pointer to the first HGCM parameter. */
+#define VBG_IOCTL_HGCM_CALL_PARMS(a) \
+    ((struct vmmdev_hgcm_function_parameter *)( \
+        (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call)))
+/* Get the pointer to the first HGCM parameter in a 32-bit request. */
+#define VBG_IOCTL_HGCM_CALL_PARMS32(a) \
+    ((struct vmmdev_hgcm_function_parameter32 *)( \
+        (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call)))
+
+#define GUEST_MAPPINGS_TRIES    5
+
+/**
+ * Reserves memory in which the VMM can relocate any guest mappings
+ * that are floating around.
+ *
+ * This operation is a little bit tricky since the VMM might not accept
+ * just any address because of address clashes between the three contexts
+ * it operates in, so we try several times.
+ *
+ * Failure to reserve the guest mappings is ignored.
+ *
+ * @gdev:        The Guest extension device.
+ */
+static void vbg_guest_mappings_init(struct vbg_dev *gdev)
+{
+    struct vmmdev_hypervisorinfo *req;
+    void *guest_mappings[GUEST_MAPPINGS_TRIES];
+    struct page **pages = NULL;
+    u32 size, hypervisor_size;
+    int i, rc;
+
+    /* Query the required space. */
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_GET_HYPERVISOR_INFO);
+    if (!req)
+        return;
+
+    req->hypervisor_start = 0;
+    req->hypervisor_size = 0;
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0)
+        goto out;
+
+    /*
+     * The VMM will report back if there is nothing it wants to map, like
+     * for instance in VT-x and AMD-V mode.
+     */
+    if (req->hypervisor_size == 0)
+        goto out;
+
+    hypervisor_size = req->hypervisor_size;
+    /* Add 4M so that we can align the vmap to 4MiB as the host requires. */
+    size = PAGE_ALIGN(req->hypervisor_size) + SZ_4M;
+
+    pages = kmalloc(sizeof(*pages) * (size >> PAGE_SHIFT), GFP_KERNEL);
+    if (!pages)
+        goto out;
+
+    gdev->guest_mappings_dummy_page = alloc_page(GFP_HIGHUSER);
+    if (!gdev->guest_mappings_dummy_page)
+        goto out;
+
+    for (i = 0; i < (size >> PAGE_SHIFT); i++)
+        pages[i] = gdev->guest_mappings_dummy_page;
+
+    /* Try several times, the host can be picky about certain addresses. */

This comment should repeat that address clashes could be due to the different 
contexts. When I read this the first time, I had missed the comment above, and 
I expect that other readers might make the same mistake.

Ok, fixed for v3.

+    for (i = 0; i < GUEST_MAPPINGS_TRIES; i++) {
+        guest_mappings[i] = vmap(pages, (size >> PAGE_SHIFT),
+                     VM_MAP, PAGE_KERNEL_RO);
+        if (!guest_mappings[i])
+            break;
+
+        req->header.request_type = VMMDEVREQ_SET_HYPERVISOR_INFO;
+        req->header.rc = VERR_INTERNAL_ERROR;
+        req->hypervisor_size = hypervisor_size;
+        req->hypervisor_start =
+            (unsigned long)PTR_ALIGN(guest_mappings[i], SZ_4M);
+
+        rc = vbg_req_perform(gdev, req);
+        if (rc >= 0) {
+            gdev->guest_mappings = guest_mappings[i];
+            break;
+        }
+    }
+
+    /* Free vmap's from failed attempts. */
+    while (--i >= 0)
+        vunmap(guest_mappings[i]);
+
+    /* On failure free the dummy-page backing the vmap */
+    if (!gdev->guest_mappings) {
+        __free_page(gdev->guest_mappings_dummy_page);
+        gdev->guest_mappings_dummy_page = NULL;
+    }
+
+out:
+    kfree(req);
+    kfree(pages);
+}
+
+/**
+ * Undo what vbg_guest_mappings_init did.
+ *
+ * @gdev:        The Guest extension device.
+ */
+static void vbg_guest_mappings_exit(struct vbg_dev *gdev)
+{
+    struct vmmdev_hypervisorinfo *req;
+    int rc;
+
+    if (!gdev->guest_mappings)
+        return;
+
+    /*
+     * Tell the host that we're going to free the memory we reserved for
+     * it, then free it up. (Leak the memory if anything goes wrong here.)
+     */
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_HYPERVISOR_INFO);
+    if (!req)
+        return;
+
+    req->hypervisor_start = 0;
+    req->hypervisor_size = 0;
+
+    rc = vbg_req_perform(gdev, req);
+
+    kfree(req);
+
+    if (rc < 0) {
+        vbg_err("%s error: %d\n", __func__, rc);
+        return;
+    }
+
+    vunmap(gdev->guest_mappings);
+    gdev->guest_mappings = NULL;
+
+    __free_page(gdev->guest_mappings_dummy_page);
+    gdev->guest_mappings_dummy_page = NULL;
+}
+
+/**
+ * Report the guest information to the host.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ */
+static int vbg_report_guest_info(struct vbg_dev *gdev)
+{
+    /*
+     * Allocate and fill in the two guest info reports.
+     */
+    struct vmmdev_guest_info *req1 = NULL;
+    struct vmmdev_guest_info2 *req2 = NULL;
+    int rc, ret = -ENOMEM;
+
+    req1 = vbg_req_alloc(sizeof(*req1), VMMDEVREQ_REPORT_GUEST_INFO);
+    req2 = vbg_req_alloc(sizeof(*req2), VMMDEVREQ_REPORT_GUEST_INFO2);
+    if (!req1 || !req2)
+        goto out_free;
+
+    req1->interface_version = VMMDEV_VERSION;
+    req1->os_type = VMMDEV_OSTYPE_LINUX26;
+#if __BITS_PER_LONG == 64
+    req1->os_type |= VMMDEV_OSTYPE_X64;
+#endif
+
+    req2->additions_major = VBG_VERSION_MAJOR;
+    req2->additions_minor = VBG_VERSION_MINOR;
+    req2->additions_build = VBG_VERSION_BUILD;
+    req2->additions_revision = VBG_SVN_REV;
+    /* (no features defined yet) */
+    req2->additions_features = 0;
+    strlcpy(req2->name, VBG_VERSION_STRING,
+        sizeof(req2->name));
+
+    /*
+     * There are two protocols here:
+     *      1. INFO2 + INFO1. Supported by >=3.2.51.
+     *      2. INFO1 and optionally INFO2. The old protocol.
+     *
+     * We try protocol 2 first.  It will fail with VERR_NOT_SUPPORTED
+     * if not supported by the VMMDev (message ordering requirement).
+     */
+    rc = vbg_req_perform(gdev, req2);
+    if (rc >= 0) {
+        rc = vbg_req_perform(gdev, req1);
+    } else if (rc == VERR_NOT_SUPPORTED || rc == VERR_NOT_IMPLEMENTED) {
+        rc = vbg_req_perform(gdev, req1);
+        if (rc >= 0) {
+            rc = vbg_req_perform(gdev, req2);
+            if (rc == VERR_NOT_IMPLEMENTED)
+                rc = VINF_SUCCESS;
+        }
+    }
+    ret = vbg_status_code_to_errno(rc);
+
+out_free:
+    kfree(req2);
+    kfree(req1);
+    return ret;
+}
+
+/**
+ * Report the guest driver status to the host.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ * @active:        Flag whether the driver is now active or not.
+ */
+static int vbg_report_driver_status(struct vbg_dev *gdev, bool active)
+{
+    struct vmmdev_guest_status *req;
+    int rc;
+
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_REPORT_GUEST_STATUS);
+    if (!req)
+        return -ENOMEM;
+
+    req->facility = VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER;
+    if (active)
+        req->status = VBOXGUEST_FACILITY_STATUS_ACTIVE;
+    else
+        req->status = VBOXGUEST_FACILITY_STATUS_INACTIVE;
+    req->flags = 0;
+
+    rc = vbg_req_perform(gdev, req);
+    if (rc == VERR_NOT_IMPLEMENTED)    /* Compatibility with older hosts. */
+        rc = VINF_SUCCESS;
+
+    kfree(req);
+
+    return vbg_status_code_to_errno(rc);
+}
+
+/**
+ * Inflate the balloon by one chunk. The caller owns the balloon mutex.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ * @chunk_idx:        Index of the chunk.
+ */
+static int vbg_balloon_inflate(struct vbg_dev *gdev, u32 chunk_idx)
+{
+    struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req;
+    struct page **pages;
+    int i, rc, ret;
+
+    pages = kmalloc(sizeof(*pages) * VMMDEV_MEMORY_BALLOON_CHUNK_PAGES,
+            GFP_KERNEL | __GFP_NOWARN);
+    if (!pages)
+        return -ENOMEM;
+
+    req->header.size = sizeof(*req);
+    req->inflate = true;
+    req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES;
+
+    for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) {
+        pages[i] = alloc_page(GFP_KERNEL | __GFP_NOWARN);
+        if (!pages[i]) {
+            ret = -ENOMEM;
+            goto out_error;
+        }
+
+        req->phys_page[i] = page_to_phys(pages[i]);
+    }
+
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0) {
+        vbg_err("%s error, rc: %d\n", __func__, rc);
+        ret = vbg_status_code_to_errno(rc);
+        goto out_error;
+    }
+
+    gdev->mem_balloon.pages[chunk_idx] = pages;
+
+    return 0;
+
+out_error:
+    while (--i >= 0)
+        __free_page(pages[i]);
+    kfree(pages);
+
+    return ret;
+}
+
+/**
+ * Deflate the balloon by one chunk. The caller owns the balloon mutex.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ * @chunk_idx:        Index of the chunk.
+ */
+static int vbg_balloon_deflate(struct vbg_dev *gdev, u32 chunk_idx)
+{
+    struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req;
+    struct page **pages = gdev->mem_balloon.pages[chunk_idx];
+    int i, rc;
+
+    req->header.size = sizeof(*req);
+    req->inflate = false;
+    req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES;
+
+    for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++)
+        req->phys_page[i] = page_to_phys(pages[i]);
+
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0) {
+        vbg_err("%s error, rc: %d\n", __func__, rc);
+        return vbg_status_code_to_errno(rc);
+    }
+
+    for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++)
+        __free_page(pages[i]);
+    kfree(pages);
+    gdev->mem_balloon.pages[chunk_idx] = NULL;
+
+    return 0;
+}
+
+/**
+ * Respond to VMMDEV_EVENT_BALLOON_CHANGE_REQUEST events, query the size
+ * the host wants the balloon to be and adjust accordingly.
+ */
+static void vbg_balloon_work(struct work_struct *work)
+{
+    struct vbg_dev *gdev =
+        container_of(work, struct vbg_dev, mem_balloon.work);
+    struct vmmdev_memballoon_info *req = gdev->mem_balloon.get_req;
+    u32 i, chunks;
+    int rc, ret;
+
+    /*
+     * Setting this bit means that we request the value from the host and
+     * change the guest memory balloon according to the returned value.
+     */
+    req->event_ack = VMMDEV_EVENT_BALLOON_CHANGE_REQUEST;
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0) {
+        vbg_err("%s error, rc: %d\n", __func__, rc);
+        return;
+    }
+
+    /*
+     * The host always returns the same maximum amount of chunks, so
+     * we do this once.
+     */
+    if (!gdev->mem_balloon.max_chunks) {
+        gdev->mem_balloon.pages =
+            devm_kcalloc(gdev->dev, req->phys_mem_chunks,
+                     sizeof(struct page **), GFP_KERNEL);
+        if (!gdev->mem_balloon.pages)
+            return;
+
+        gdev->mem_balloon.max_chunks = req->phys_mem_chunks;
+    }
+
+    chunks = req->balloon_chunks;
+    if (chunks > gdev->mem_balloon.max_chunks) {
+        vbg_err("%s: illegal balloon size %u (max=%u)\n",
+            __func__, chunks, gdev->mem_balloon.max_chunks);
+        return;
+    }
+
+    if (chunks > gdev->mem_balloon.chunks) {
+        /* inflate */
+        for (i = gdev->mem_balloon.chunks; i < chunks; i++) {
+            ret = vbg_balloon_inflate(gdev, i);
+            if (ret < 0)
+                return;
+
+            gdev->mem_balloon.chunks++;
+        }
+    } else {
+        /* deflate */
+        for (i = gdev->mem_balloon.chunks; i-- > chunks;) {
+            ret = vbg_balloon_deflate(gdev, i);
+            if (ret < 0)
+                return;
+
+            gdev->mem_balloon.chunks--;
+        }
+    }
+}
+
+/**
+ * Callback for heartbeat timer.
+ */
+static void vbg_heartbeat_timer(struct timer_list *t)
+{
+    struct vbg_dev *gdev = from_timer(gdev, t, heartbeat_timer);
+
+    vbg_req_perform(gdev, gdev->guest_heartbeat_req);
+    mod_timer(&gdev->heartbeat_timer,
+          msecs_to_jiffies(gdev->heartbeat_interval_ms));
+}
+
+/**
+ * Configure the host to check guest's heartbeat
+ * and get heartbeat interval from the host.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ * @enabled:        Set true to enable guest heartbeat checks on host.
+ */
+static int vbg_heartbeat_host_config(struct vbg_dev *gdev, bool enabled)
+{
+    struct vmmdev_heartbeat *req;
+    int rc;
+
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_HEARTBEAT_CONFIGURE);
+    if (!req)
+        return -ENOMEM;
+
+    req->enabled = enabled;
+    req->interval_ns = 0;
+    rc = vbg_req_perform(gdev, req);
+    do_div(req->interval_ns, 1000000); /* ns -> ms */
+    gdev->heartbeat_interval_ms = req->interval_ns;
+    kfree(req);
+
+    return vbg_status_code_to_errno(rc);
+}
+
+/**
+ * Initializes the heartbeat timer. This feature may be disabled by the host.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ */
+static int vbg_heartbeat_init(struct vbg_dev *gdev)
+{
+    int ret;
+
+    /* Make sure that heartbeat checking is disabled if we fail. */
+    ret = vbg_heartbeat_host_config(gdev, false);
+    if (ret < 0)
+        return ret;
+
+    ret = vbg_heartbeat_host_config(gdev, true);
+    if (ret < 0)
+        return ret;
+
+    /*
+     * Preallocate the request to use it from the timer callback because:
+     *    1) on Windows vbg_req_alloc must be called at IRQL <= APC_LEVEL
+     *       and the timer callback runs at DISPATCH_LEVEL;
+     *    2) avoid repeated allocations.
+     */
+    gdev->guest_heartbeat_req = vbg_req_alloc(
+                    sizeof(*gdev->guest_heartbeat_req),
+                    VMMDEVREQ_GUEST_HEARTBEAT);
+    if (!gdev->guest_heartbeat_req)
+        return -ENOMEM;
+
+    vbg_info("%s: Setting up heartbeat to trigger every %d milliseconds\n",
+         __func__, gdev->heartbeat_interval_ms);
+    mod_timer(&gdev->heartbeat_timer, 0);
+
+    return 0;
+}
+
+/**
+ * Cleanup heartbeat code, stop HB timer and disable host heartbeat checking.
+ * @gdev:        The Guest extension device.
+ */
+static void vbg_heartbeat_exit(struct vbg_dev *gdev)
+{
+    del_timer_sync(&gdev->heartbeat_timer);
+    vbg_heartbeat_host_config(gdev, false);
+    kfree(gdev->guest_heartbeat_req);
+}
+
+/**
+ * Applies a change to the bit usage tracker.
+ * Return: true if the mask changed, false if not.
+ * @tracker:        The bit usage tracker.
+ * @changed:        The bits to change.
+ * @previous:        The previous value of the bits.
+ */
+static bool vbg_track_bit_usage(struct vbg_bit_usage_tracker *tracker,
+                u32 changed, u32 previous)
+{
+    bool global_change = false;
+
+    while (changed) {
+        u32 bit = ffs(changed) - 1;
+        u32 bitmask = BIT(bit);
+
+        if (bitmask & previous) {
+            tracker->per_bit_usage[bit] -= 1;
+            if (tracker->per_bit_usage[bit] == 0) {
+                global_change = true;
+                tracker->mask &= ~bitmask;
+            }
+        } else {
+            tracker->per_bit_usage[bit] += 1;
+            if (tracker->per_bit_usage[bit] == 1) {
+                global_change = true;
+                tracker->mask |= bitmask;
+            }
+        }
+
+        changed &= ~bitmask;
+    }
+
+    return global_change;
+}
+
+/**
+ * Init and termination worker for resetting the (host) event filter on the host.
+ * Return: 0 or negative errno value.
+ * @gdev:           The Guest extension device.
+ * @fixed_events:       Fixed events (init time).
+ */
+static int vbg_reset_host_event_filter(struct vbg_dev *gdev,
+                       u32 fixed_events)
+{
+    struct vmmdev_mask *req;
+    int rc;
+
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK);
+    if (!req)
+        return -ENOMEM;
+
+    req->not_mask = U32_MAX & ~fixed_events;
+    req->or_mask = fixed_events;
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0)
+        vbg_err("%s error, rc: %d\n", __func__, rc);
+
+    kfree(req);
+    return vbg_status_code_to_errno(rc);
+}
+
+/**
+ * Changes the event filter mask for the given session.
+ *
+ * This is called in response to VBG_IOCTL_CHANGE_FILTER_MASK as well as to
+ * do session cleanup. Takes the session spinlock.
+ *
+ * Return: 0 or negative errno value.
+ * @gdev:            The Guest extension device.
+ * @session:            The session.
+ * @or_mask:            The events to add.
+ * @not_mask:            The events to remove.
+ * @session_termination:    Set if we're called by the session cleanup code.
+ *                This tweaks the error handling so we perform
+ *                proper session cleanup even if the host
+ *                misbehaves.
+ */
+static int vbg_set_session_event_filter(struct vbg_dev *gdev,
+                    struct vbg_session *session,
+                    u32 or_mask, u32 not_mask,
+                    bool session_termination)
+{
+    struct vmmdev_mask *req;
+    u32 changed, previous;
+    int rc, ret = 0;
+
+    /* Allocate a request buffer before taking the spinlock */
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK);
+    if (!req) {
+        if (!session_termination)
+            return -ENOMEM;
+        /* Ignore failure, we must do session cleanup. */

The comment should say "Ignore allocation failure ...", but that leads to 
problems below.

Ok, fixed for v3.

+    }
+
+    mutex_lock(&gdev->session_mutex);
+
+    /* Apply the changes to the session mask. */
+    previous = session->event_filter;
+    session->event_filter |= or_mask;
+    session->event_filter &= ~not_mask;
+
+    /* If anything actually changed, update the global usage counters. */
+    changed = previous ^ session->event_filter;
+    if (!changed)
+        goto out;
+
+    vbg_track_bit_usage(&gdev->event_filter_tracker, changed, previous);
+    req->or_mask = gdev->fixed_events | gdev->event_filter_tracker.mask;

At this point, req could be NULL. I'm not sure what session cleanup is needed 
if req is NULL and session_termination is not, but it needs to be split out.

Right, I've solved this by using a local or_mask variable and ...

+
+    if (gdev->event_filter_host == req->or_mask || !req)
+        goto out;
+
+    gdev->event_filter_host = req->or_mask;
+    req->not_mask = ~req->or_mask;

Assigning that to req->or_mask here, after the !req check.

+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0) {
+        ret = vbg_status_code_to_errno(rc);
+
+        /* Failed, roll back (unless it's session termination time). */
+        gdev->event_filter_host = U32_MAX;
+        if (session_termination)
+            goto out;
+
+        vbg_track_bit_usage(&gdev->event_filter_tracker, changed,
+                    session->event_filter);
+        session->event_filter = previous;
+    }
+
+out:
+    mutex_unlock(&gdev->session_mutex);
+    kfree(req);
+
+    return ret;
+}
+
+/**
+ * Init and termination worker for set guest capabilities to zero on the host.
+ * Return: 0 or negative errno value.
+ * @gdev:        The Guest extension device.
+ */
+static int vbg_reset_host_capabilities(struct vbg_dev *gdev)
+{
+    struct vmmdev_mask *req;
+    int rc;
+
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES);
+    if (!req)
+        return -ENOMEM;
+
+    req->not_mask = U32_MAX;
+    req->or_mask = 0;
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0)
+        vbg_err("%s error, rc: %d\n", __func__, rc);
+
+    kfree(req);
+    return vbg_status_code_to_errno(rc);
+}
+
+/**
+ * Sets the guest capabilities for a session. Takes the session spinlock.
+ * Return: 0 or negative errno value.
+ * @gdev:            The Guest extension device.
+ * @session:            The session.
+ * @or_mask:            The capabilities to add.
+ * @not_mask:            The capabilities to remove.
+ * @session_termination:    Set if we're called by the session cleanup code.
+ *                This tweaks the error handling so we perform
+ *                proper session cleanup even if the host
+ *                misbehaves.
+ */
+static int vbg_set_session_capabilities(struct vbg_dev *gdev,
+                    struct vbg_session *session,
+                    u32 or_mask, u32 not_mask,
+                    bool session_termination)
+{
+    struct vmmdev_mask *req;
+    u32 changed, previous;
+    int rc, ret = 0;
+
+    /* Allocate a request buffer before taking the spinlock */
+    req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES);
+    if (!req) {
+        if (!session_termination)
+            return -ENOMEM;
+        /* Ignore failure, we must do session cleanup. */

This comment should also be changed.

Ack, also fixed.

+    }
+
+    mutex_lock(&gdev->session_mutex);
+
+    /* Apply the changes to the session mask. */
+    previous = session->guest_caps;
+    session->guest_caps |= or_mask;
+    session->guest_caps &= ~not_mask;
+
+    /* If anything actually changed, update the global usage counters. */
+    changed = previous ^ session->guest_caps;
+    if (!changed)
+        goto out;
+
+    vbg_track_bit_usage(&gdev->guest_caps_tracker, changed, previous);
+    req->or_mask = gdev->guest_caps_tracker.mask;

req can be NULL here.

Ack, fixed in the same way as above.

+
+    if (gdev->guest_caps_host == req->or_mask || !req)
+        goto out;
+
+    gdev->guest_caps_host = req->or_mask;
+    req->not_mask = ~req->or_mask;
+    rc = vbg_req_perform(gdev, req);
+    if (rc < 0) {
+        ret = vbg_status_code_to_errno(rc);
+
+        /* Failed, roll back (unless it's session termination time). */
+        gdev->guest_caps_host = U32_MAX;
+        if (session_termination)
+            goto out;
+
+        vbg_track_bit_usage(&gdev->guest_caps_tracker, changed,
+                    session->guest_caps);
+        session->guest_caps = previous;
+    }
+
+out:
+    mutex_unlock(&gdev->session_mutex);
+    kfree(req);
+
+    return ret;
+}

<snip>

Regards,

Hans
