On Wed, 16 Jan 2019 at 04:37, Yueyi Li <liyu...@live.com> wrote:
>
> OK, thanks. But it seems this mail was ignored; do I need to re-send the patch?
>
> On 2018/12/26 21:49, Ard Biesheuvel wrote:
> > On Tue, 25 Dec 2018 at 03:30, Yueyi Li <liyu...@live.com> wrote:
> >> Hi Ard,
> >>
> >>
> >> On 2018/12/24 17:45, Ard Biesheuvel wrote:
> >>> Does the following change fix your issue as well?
> >>>
> >>> index 9b432d9fcada..9dcf0ff75a11 100644
> >>> --- a/arch/arm64/mm/init.c
> >>> +++ b/arch/arm64/mm/init.c
> >>> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
> >>>                    * memory spans, randomize the linear region as well.
> >>>                    */
> >>>                   if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> >>> -                       range = range / ARM64_MEMSTART_ALIGN + 1;
> >>> +                       range /= ARM64_MEMSTART_ALIGN;
> >>>                           memstart_addr -= ARM64_MEMSTART_ALIGN *
> >>>                                            ((range * memstart_offset_seed) >> 16);
> >>>                   }
> >> Yes, that fixes it as well. I just think modifying the first *range*
> >> calculation would be easier to grasp; what do you think?
> >>
> > I don't think there is a difference, to be honest, but I will leave it
> > up to the maintainers to decide which approach they prefer.
>

No, it has been merged already. It is in v5.0-rc2, I think.
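
For context, here is a minimal, compilable sketch of the arithmetic
difference between the two variants in the hunk above. The value of
ARM64_MEMSTART_ALIGN and the sample range/seed below are illustrative
assumptions, not taken from the thread or any real configuration; the
point is only that the old "range / ARM64_MEMSTART_ALIGN + 1" lets a
worst-case seed push the offset to the full slack, while the fixed
"range /= ARM64_MEMSTART_ALIGN" always leaves at least one
ARM64_MEMSTART_ALIGN of headroom at the top of the linear region.

/* Standalone sketch; not kernel code. The ARM64_MEMSTART_ALIGN value
 * and the sample inputs are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define ARM64_MEMSTART_ALIGN (1ULL << 30)	/* assume 1 GiB for illustration */

/* Offset as computed before the fix: range / ALIGN + 1 "buckets". */
static uint64_t offset_old(uint64_t range, uint16_t seed)
{
	uint64_t buckets = range / ARM64_MEMSTART_ALIGN + 1;
	return ARM64_MEMSTART_ALIGN * ((buckets * seed) >> 16);
}

/* Offset as computed after the fix: range / ALIGN "buckets". */
static uint64_t offset_new(uint64_t range, uint16_t seed)
{
	uint64_t buckets = range / ARM64_MEMSTART_ALIGN;
	return ARM64_MEMSTART_ALIGN * ((buckets * seed) >> 16);
}

int main(void)
{
	uint64_t range = 4 * ARM64_MEMSTART_ALIGN;	/* slack in the linear region */
	uint16_t seed = 0xffff;				/* worst-case 16-bit seed */

	/* Prints 4 GiB for the old form (offset == range, no headroom left)
	 * and 3 GiB for the new form (one ARM64_MEMSTART_ALIGN of headroom).
	 */
	printf("old max offset: %llu GiB\n",
	       (unsigned long long)(offset_old(range, seed) >> 30));
	printf("new max offset: %llu GiB\n",
	       (unsigned long long)(offset_new(range, seed) >> 30));
	return 0;
}

Either way of expressing the division reserves the same amount of
headroom, which is presumably why the earlier reply notes there is no
real difference between the two approaches.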
