On Fri, 2016-01-29 at 09:27 +0100, Ingo Molnar wrote:
> * Toshi Kani <[email protected]> wrote:
:
> > ---
> >  arch/x86/lib/copy_user_64.S | 44 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 33 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
> > index 982ce34..84b5578 100644
> > --- a/arch/x86/lib/copy_user_64.S
> > +++ b/arch/x86/lib/copy_user_64.S
> > @@ -232,12 +232,17 @@ ENDPROC(copy_user_enhanced_fast_string)
> >
> >  /*
> >   * copy_user_nocache - Uncached memory copy with exception handling
> > - * This will force destination/source out of cache for more performance.
> > + * This will force destination out of cache for more performance.
> > + *
> > + * Note: Cached memory copy is used when destination or size is not
> > + * naturally aligned. That is:
> > + *  - Require 8-byte alignment when size is 8 bytes or larger.
> > + *  - Require 4-byte alignment when size is 4 bytes.
> >   */
> > ENTRY(__copy_user_nocache)
:
> So at minimum this patch needs to add quite a few comments to explain the
> alignment dependent control flow.
>
> Assembly code is hard enough to read as-is. Adding 20 more lines with
> zero in-line comments is a mistake.
>
> Btw., while at it, please add comments for the control flow of the whole
> function. Above a certain complexity that is a must for assembly
> functions.
Agreed. I will add comments for the whole function.

Thanks,
-Toshi
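[Editor's note: the alignment rule stated in the patched comment can be illustrated with a small C sketch. The helper name `nocache_copy_allowed` is hypothetical and not part of the kernel patch; it only models the decision the assembly makes between the non-temporal (nocache) path and the cached fallback, under the assumptions stated in the comment above.]

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical illustration of the alignment rule in the patched
 * comment for __copy_user_nocache:
 *  - sizes of 8 bytes or more require an 8-byte-aligned destination,
 *  - a size of exactly 4 bytes requires a 4-byte-aligned destination,
 *  - anything else falls back to the cached copy path.
 */
static bool nocache_copy_allowed(const void *dst, size_t size)
{
	uintptr_t d = (uintptr_t)dst;

	if (size >= 8)
		return (d & 7) == 0;	/* 8-byte alignment for size >= 8 */
	if (size == 4)
		return (d & 3) == 0;	/* 4-byte alignment for size == 4 */
	return false;			/* cached copy otherwise */
}
```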

