From: Ross Zwisler <ross.zwis...@linux.intel.com>

The current algorithm used in clflush_cache_range() can cause the last
cache line of the buffer to be flushed twice. Fix the algorithm so that
each cache line is flushed only once.

Signed-off-by: Ross Zwisler <ross.zwis...@linux.intel.com>
Reported-by: H. Peter Anvin <h...@zytor.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: x86-ml <x...@kernel.org>
Link: http://lkml.kernel.org/r/1430259192-18802-1-git-send-email-ross.zwis...@linux.intel.com
Signed-off-by: Borislav Petkov <b...@suse.de>
---
 arch/x86/mm/pageattr.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 89af288ec674..338e507f95b8 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -129,16 +129,15 @@ within(unsigned long addr, unsigned long start, unsigned long end)
  */
 void clflush_cache_range(void *vaddr, unsigned int size)
 {
-       void *vend = vaddr + size - 1;
+       unsigned long clflush_mask = boot_cpu_data.x86_clflush_size - 1;
+       char *vend = (char *)vaddr + size;
+       char *p;
 
        mb();
 
-       for (; vaddr < vend; vaddr += boot_cpu_data.x86_clflush_size)
-               clflushopt(vaddr);
-       /*
-        * Flush any possible final partial cacheline:
-        */
-       clflushopt(vend);
+       for (p = (char *)((unsigned long)vaddr & ~clflush_mask);
+            p < vend; p += boot_cpu_data.x86_clflush_size)
+               clflushopt(p);
 
        mb();
 }
-- 
2.3.5
