On Tuesday 14 February 2017 11:55 AM, Michael Ellerman wrote:
"Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com> writes:

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index b3f45e413a60..08ac27eae408 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -37,7 +37,16 @@
  #include <asm/hugetlb.h>
static DEFINE_SPINLOCK(slice_convert_lock);
-
+/*
+ * One bit per slice. We have lower slices which cover 256MB segments
+ * upto 4G range. That gets us 16 low slices. For the rest we track slices
+ * in 1TB size.
Can we tighten this comment up a bit?

What about:

+ * One bit per slice. The low slices cover the range 0 - 4GB, each
+ * slice being 256MB in size, for 16 low slices. The high slices
+ * cover the rest of the address space at 1TB granularity, with the
+ * exception of high slice 0 which covers the range 4GB - 1TB.
OK?


good.


+ * 64 below is actually SLICE_NUM_HIGH to fixup complie errros
That line is bogus AFAICS; it refers to the old hardcoded value (prior
to the change to 512). I'll drop it.


Thanks


-aneesh
