The previous concern with using drm_clflush_sg() was that we do not know
what the sg_table is pointing to, hence the use of wbinvd_on_all_cpus()
to flush everything at once out of paranoia.

To make i915 more architecture-neutral and less heavy-handed, attempt to
use drm_clflush_sg() to flush only the object's pages when the GPU wants
to read from main memory.
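
For reference, a minimal sketch of the resulting flush decision (the
helper name below is hypothetical; in the driver the check lives inline
in i915_gem_object_get_pages_dmabuf(), as in the diff that follows):

	/* Hypothetical helper illustrating the new flush path: flush only
	 * the object's own sg_table with drm_clflush_sg() instead of
	 * invalidating every CPU cache with wbinvd_on_all_cpus().
	 */
	static void flush_pages_for_gpu_read(struct drm_i915_gem_object *obj,
					     struct sg_table *pages)
	{
		struct drm_i915_private *i915 = to_i915(obj->base.dev);

		if (i915_gem_object_can_bypass_llc(obj) ||
		    (!HAS_LLC(i915) && !IS_DG1(i915)))
			drm_clflush_sg(pages);
	}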

Signed-off-by: Michael Cheng <michael.ch...@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index f5062d0c6333..b0a5baaebc43 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -8,6 +8,7 @@
 #include <linux/highmem.h>
 #include <linux/dma-resv.h>
 #include <linux/module.h>
+#include <drm/drm_cache.h>
 
 #include <asm/smp.h>
 
@@ -250,16 +251,10 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
         * DG1 is special here since it still snoops transactions even with
         * CACHE_NONE. This is not the case with other HAS_SNOOP platforms. We
         * might need to revisit this as we add new discrete platforms.
-        *
-        * XXX: Consider doing a vmap flush or something, where possible.
-        * Currently we just do a heavy handed wbinvd_on_all_cpus() here since
-        * the underlying sg_table might not even point to struct pages, so we
-        * can't just call drm_clflush_sg or similar, like we do elsewhere in
-        * the driver.
         */
        if (i915_gem_object_can_bypass_llc(obj) ||
            (!HAS_LLC(i915) && !IS_DG1(i915)))
-               wbinvd_on_all_cpus();
+               drm_clflush_sg(pages);
 
        sg_page_sizes = i915_sg_dma_sizes(pages->sgl);
        __i915_gem_object_set_pages(obj, pages, sg_page_sizes);
-- 
2.25.1
