From: Matt Sealey <[email protected]>

Enable faster 8-byte copies on arm64. arm64 handles unaligned 64-bit
accesses efficiently, so extend the COPY8 macro in lib/lzo, previously
enabled only on x86_64, to arm64 as well.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Matt Sealey <[email protected]>
Signed-off-by: Dave Rodgman <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Markus F.X.J. Oberhumer <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Nitin Gupta <[email protected]>
Cc: Richard Purdie <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Sonny Rao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Stephen Rothwell <[email protected]>
---

 lib/lzo/lzodefs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/lzo/lzodefs.h b/lib/lzo/lzodefs.h
index c8965dc181df..06fa83a38e0a 100644
--- a/lib/lzo/lzodefs.h
+++ b/lib/lzo/lzodefs.h
@@ -15,7 +15,7 @@

 #define COPY4(dst, src) \
 		put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))
-#if defined(CONFIG_X86_64)
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
 #define COPY8(dst, src) \
 		put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
 #else
-- 
2.17.1
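
For context, here is a minimal user-space sketch of what the COPY8 path
amounts to on architectures with efficient unaligned access. The kernel's
get_unaligned()/put_unaligned() helpers are modeled here with memcpy(),
which compilers lower to a single 8-byte load/store on x86_64 and arm64;
the helper names below are illustrative, not kernel API.

#include <stdint.h>
#include <string.h>

/* Read a u64 from a possibly unaligned address. */
static inline uint64_t get_unaligned_u64(const void *p)
{
	uint64_t v;
	memcpy(&v, p, sizeof(v));	/* one 8-byte load after optimization */
	return v;
}

/* Write a u64 to a possibly unaligned address. */
static inline void put_unaligned_u64(uint64_t v, void *p)
{
	memcpy(p, &v, sizeof(v));	/* one 8-byte store after optimization */
}

/* Rough equivalent of COPY8(dst, src): 8 bytes in one load/store pair. */
static inline void copy8(void *dst, const void *src)
{
	put_unaligned_u64(get_unaligned_u64(src), dst);
}

On configurations where COPY8 is not enabled, the #else branch in
lzodefs.h falls back to a pair of 4-byte COPY4 operations, so this change
roughly halves the number of copy operations in LZO's output loops on
arm64.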

