On Thu, Aug 21, 2025 at 08:24:00PM +1000, Balbir Singh wrote:
> On 8/15/25 10:04, Matthew Brost wrote:
> > On Fri, Aug 15, 2025 at 08:51:21AM +1000, Balbir Singh wrote:
> >> On 8/13/25 10:07, Mika Penttilä wrote:
> >>>
> >>> On 8/13/25 02:36, Balbir Singh wrote:
> >>>
> >>>> On 8/12/25 15:35, Mika Penttilä wrote:
> >>>>> Hi,
> >>>>>
> >>>>> On 8/12/25 05:40, Balbir Singh wrote:
> ...
> 
> >> I've not run into this in my testing; let me try with more mTHP sizes
> >> enabled. I'll wait on Matthew to post his test case or any results or
> >> issues seen.
> >>
> > 
> > I’ve hit this. In the code I shared privately, I split THPs in the
> > page-collection path. You omitted that in v2 and v3; I believe you’ll
> > need those changes. The code I'm referring to had the below comment.
> > 
> > 	/*
> > 	 * XXX: No clean way to support higher-order folios that don't
> > 	 * match PMD boundaries for now — split them instead. Once mTHP
> > 	 * support lands, add proper support for this case.
> > 	 *
> > 	 * The test, which exposed this as problematic, remapped (memremap)
> > 	 * a large folio to an unaligned address, resulting in the folio
> > 	 * being found in the middle of the PTEs. The requested number of
> > 	 * pages was less than the folio size. Likely to be handled
> > 	 * gracefully by upper layers eventually, but not yet.
> > 	 */
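> > 
> > For reference, the shape of that split was roughly the below. This is a
> > sketch from memory rather than the exact code: the helper name is made
> > up, the alignment check is simplified (it assumes addr is the address
> > at which the folio's first page is mapped), and error handling is
> > trimmed. split_folio() and the folio helpers are the real kernel APIs.
> > 
> > static int migrate_vma_split_unaligned(struct folio *folio,
> > 				       unsigned long addr)
> > {
> > 	int ret;
> > 
> > 	/* Order-0 pages and PMD-aligned, PMD-sized THPs are fine as is. */
> > 	if (!folio_test_large(folio))
> > 		return 0;
> > 	if (IS_ALIGNED(addr, HPAGE_PMD_SIZE) &&
> > 	    folio_nr_pages(folio) == HPAGE_PMD_NR)
> > 		return 0;
> > 
> > 	/*
> > 	 * The folio does not line up with the PMD being collected --
> > 	 * split it back to order-0 pages and let the normal PTE path
> > 	 * handle the pieces.
> > 	 */
> > 	folio_get(folio);
> > 	if (!folio_trylock(folio)) {
> > 		folio_put(folio);
> > 		return -EBUSY;
> > 	}
> > 	ret = split_folio(folio);
> > 	folio_unlock(folio);
> > 	folio_put(folio);
> > 
> > 	return ret;
> > }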
> > 
> > I triggered it by doing some odd mremap operations which, IIRC, caused
> > the CPU page-fault handler to spin indefinitely. In that case, a large
> > device folio had been moved into the middle of a PMD.
> > 
> > Upstream could see the same problem if the device fault handler enforces
> > a must-migrate-to-device policy and mremap moves a large CPU folio into
> > the middle of a PMD.
> > 
> > I’m in the middle of other work; when I circle back, I’ll try to create
> > a selftest to reproduce this. My current test is a fairly convoluted IGT
> > with a bunch of threads doing remap nonsense, but I’ll try to distill it
> > into a concise selftest.
> > 
> 
> I ran into this while doing some testing as well; I fixed it in a manner
> similar to split_folio() for partial unmaps. I will consolidate the folio
> splits into a single helper and post it with v4.
> 

I created a selftest for this one. I'm going to send these over along with
the fixes I've applied in v3. Please include my selftests in v4.
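
For reference, the setup half of the repro is roughly the below. This is a
trimmed-down sketch rather than the selftest itself: error checking is
omitted, the sizes and offsets are illustrative, whether you actually get a
THP depends on the system's THP settings, and the real test drives the
migration on top of this; only mmap/madvise/mremap are assumed here.

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define SZ_2M	(2UL << 20)

int main(void)
{
	char *buf, *src, *dst;

	/* Carve out room so a 2M-aligned, THP-eligible source fits inside. */
	buf = mmap(NULL, 4 * SZ_2M, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	src = (char *)(((uintptr_t)buf + SZ_2M - 1) & ~(SZ_2M - 1));
	madvise(src, SZ_2M, MADV_HUGEPAGE);
	memset(src, 1, SZ_2M);		/* fault it in as a large folio */

	/* Destination: one page past a PMD boundary. */
	buf = mmap(NULL, 4 * SZ_2M, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	dst = (char *)(((uintptr_t)buf + SZ_2M - 1) & ~(SZ_2M - 1)) + 4096;

	/*
	 * Move the large folio so it now sits in the middle of a PMD.
	 * Migrating this range afterwards makes the collection path see
	 * a large folio that does not start at a PMD boundary.
	 */
	mremap(src, SZ_2M, SZ_2M, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	memset(dst, 2, SZ_2M);

	return 0;
}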

Matt 

> 
> Balbir Singh
