When explicitly hashing the end of a string with the word-at-a-time interface, we have to be careful which end of the word we pick up.

On big-endian CPUs, the upper bits will contain the data we're after, so ensure we generate our masks accordingly (and avoid hashing whatever random junk may have been sitting after the string).

Cc: Al Viro <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
 fs/dcache.c | 4 ++++
 fs/namei.c  | 9 ++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 4bdb300b16e2..60c7264163bc 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -192,7 +192,11 @@ static inline int dentry_string_cmp(const unsigned char *cs, const unsigned char
 		if (!tcount)
 			return 0;
 	}
+#ifdef __BIG_ENDIAN
+	mask = ~(~0ul >> tcount*8);
+#else
 	mask = ~(~0ul << tcount*8);
+#endif
 	return unlikely(!!((a ^ b) & mask));
 }

diff --git a/fs/namei.c b/fs/namei.c
index c53d3a9547f9..ac35646b3da6 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1598,11 +1598,6 @@ static inline int nested_symlink(struct path *path, struct nameidata *nd)
  * do a "get_unaligned()" if this helps and is sufficiently
  * fast.
  *
- * - Little-endian machines (so that we can generate the mask
- *   of low bytes efficiently). Again, we *could* do a byte
- *   swapping load on big-endian architectures if that is not
- *   expensive enough to make the optimization worthless.
- *
  * - non-CONFIG_DEBUG_PAGEALLOC configurations (so that we
  *   do not trap on the (extremely unlikely) case of a page
  *   crossing operation.
@@ -1646,7 +1641,11 @@ unsigned int full_name_hash(const unsigned char *name, unsigned int len)
 		if (!len)
 			goto done;
 	}
+#ifdef __BIG_ENDIAN
+	mask = ~(~0ul >> len*8);
+#else
 	mask = ~(~0ul << len*8);
+#endif
 	hash += mask & a;
 done:
 	return fold_hash(hash);
-- 
1.8.2.2

