From: Mikulas Patocka <mpato...@redhat.com>

commit a1cd6c2ae47ee10ff21e62475685d5b399e2ed4a upstream.
If we copy less than 8 bytes and if the destination crosses a cache line,
__copy_user_flushcache would invalidate only the first cache line.  This
patch makes it invalidate the second cache line as well.

Fixes: 0aed55af88345b ("x86, uaccess: introduce copy_from_iter_flushcache for pmem / cache-bypass operations")
Signed-off-by: Mikulas Patocka <mpato...@redhat.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Reviewed-by: Dan Williams <dan.j.wiilli...@intel.com>
Cc: Jan Kara <j...@suse.cz>
Cc: Jeff Moyer <jmo...@redhat.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Christoph Hellwig <h...@lst.de>
Cc: Toshi Kani <toshi.k...@hpe.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: Al Viro <v...@zeniv.linux.org.uk>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Matthew Wilcox <mawil...@microsoft.com>
Cc: Ross Zwisler <ross.zwis...@linux.intel.com>
Cc: Ingo Molnar <mi...@elte.hu>
Cc: <sta...@vger.kernel.org>
Link: https://lkml.kernel.org/r/alpine.lrh.2.02.2009161451140.21...@file01.intranet.prod.int.rdu2.redhat.com
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 arch/x86/lib/usercopy_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -118,7 +118,7 @@ long __copy_user_flushcache(void *dst, c
 	 */
 	if (size < 8) {
 		if (!IS_ALIGNED(dest, 4) || size != 4)
-			clean_cache_range(dst, 1);
+			clean_cache_range(dst, size);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);