On Thu, Oct 31, 2013 at 01:38:57PM -0500, Tom wrote:
> From: Tom Musta <tommu...@gmail.com>
> 
> This patch modifies the unaligned access routines of the sstep.c
> module so that it properly reverses the bytes of storage operands
> in the little-endian kernel.

This has rather a lot of #ifdefs inside function definitions, and for
little-endian it does the unaligned accesses one byte at a time.  You
could avoid all the #ifdefs if you define the combining function in an
endian-dependent way and make read_mem_unaligned look something like
this:

/*
 * x = value accumulated so far, b = piece just read,
 * c = size of b in bytes, nd = bytes already combined into x.
 */
#ifdef __LITTLE_ENDIAN__
#define combine_pieces(x, b, c, nd)     ((x) + ((b) << (8 * (nd))))
#else
#define combine_pieces(x, b, c, nd)     (((x) << (8 * (c))) + (b))
#endif

static int __kprobes read_mem_unaligned(unsigned long *dest, unsigned long ea,
                                        int nb, struct pt_regs *regs)
{
        int err;
        int nd;
        unsigned long x, b, c;

        /* unaligned, do this in pieces */
        x = 0;
        for (nd = 0; nd < nb; nd += c) {
                c = max_align(ea);
                if (c > nb - nd)
                        c = max_align(nb - nd);
                err = read_mem_aligned(&b, ea, c);
                if (err)
                        return err;
                x = combine_pieces(x, b, c, nd);
                ea += c;
        }
        *dest = x;
        return 0;
}

and do something analogous for write_mem_unaligned().
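
For instance, with an extract_piece() macro (a name made up here just to
mirror combine_pieces()), and assuming write_mem_aligned() stores only the
low c bytes of the value it is passed, the write side could look roughly
like this (untested sketch):

#ifdef __LITTLE_ENDIAN__
#define extract_piece(val, c, nb, nd)   ((val) >> (8 * (nd)))
#else
#define extract_piece(val, c, nb, nd)   ((val) >> (8 * ((nb) - (nd) - (c))))
#endif

static int __kprobes write_mem_unaligned(unsigned long val, unsigned long ea,
                                         int nb, struct pt_regs *regs)
{
        int err;
        int nd;
        unsigned long c;

        /* unaligned, do this in pieces */
        for (nd = 0; nd < nb; nd += c) {
                c = max_align(ea);
                if (c > nb - nd)
                        c = max_align(nb - nd);
                err = write_mem_aligned(extract_piece(val, c, nb, nd), ea, c);
                if (err)
                        return err;
                ea += c;
        }
        return 0;
}

The big-endian variant peels off the most significant remaining bytes of the
operand first, while the little-endian variant starts from the least
significant end, so each aligned piece lands in memory in the right order.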

Paul.