Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Greg Kroah-Hartman
On Tue, Jan 18, 2022 at 07:50:38PM -0600, Rob Herring wrote:
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
> 
> The array of phandles case boils down to needing:
> 
> items:
>   maxItems: 1
> 
> The phandle plus args cases should typically take this form:
> 
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
> 
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.
> 
> Cc: Damien Le Moal 
> Cc: Herbert Xu 
> Cc: "David S. Miller" 
> Cc: Chun-Kuang Hu 
> Cc: Philipp Zabel 
> Cc: Laurent Pinchart 
> Cc: Kieran Bingham 
> Cc: Vinod Koul 
> Cc: Georgi Djakov 
> Cc: Thomas Gleixner 
> Cc: Marc Zyngier 
> Cc: Joerg Roedel 
> Cc: Lee Jones 
> Cc: Daniel Thompson 
> Cc: Jingoo Han 
> Cc: Pavel Machek 
> Cc: Mauro Carvalho Chehab 
> Cc: Krzysztof Kozlowski 
> Cc: Jakub Kicinski 
> Cc: Wolfgang Grandegger 
> Cc: Marc Kleine-Budde 
> Cc: Andrew Lunn 
> Cc: Vivien Didelot 
> Cc: Florian Fainelli 
> Cc: Vladimir Oltean 
> Cc: Kalle Valo 
> Cc: Viresh Kumar 
> Cc: Stephen Boyd 
> Cc: Kishon Vijay Abraham I 
> Cc: Linus Walleij 
> Cc: "Rafael J. Wysocki" 
> Cc: Kevin Hilman 
> Cc: Ulf Hansson 
> Cc: Sebastian Reichel 
> Cc: Mark Brown 
> Cc: Mathieu Poirier 
> Cc: Daniel Lezcano 
> Cc: Zhang Rui 
> Cc: Greg Kroah-Hartman 
> Cc: Thierry Reding 
> Cc: Jonathan Hunter 
> Cc: Sudeep Holla 
> Cc: Geert Uytterhoeven 
> Cc: linux-...@vger.kernel.org
> Cc: linux-cry...@vger.kernel.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: dmaeng...@vger.kernel.org
> Cc: linux...@vger.kernel.org
> Cc: iommu@lists.linux-foundation.org
> Cc: linux-l...@vger.kernel.org
> Cc: linux-me...@vger.kernel.org
> Cc: net...@vger.kernel.org
> Cc: linux-...@vger.kernel.org
> Cc: linux-wirel...@vger.kernel.org
> Cc: linux-...@lists.infradead.org
> Cc: linux-g...@vger.kernel.org
> Cc: linux-ri...@lists.infradead.org
> Cc: linux-remotep...@vger.kernel.org
> Cc: alsa-de...@alsa-project.org
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Rob Herring 

Acked-by: Greg Kroah-Hartman 
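
For illustration, a minimal schema sketch of the two forms described in the
patch above; the property names, cell meanings and example values here are
hypothetical, not taken from any in-tree binding:

properties:
  vendor,links:
    description: A pure array of phandles, one phandle per entry.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    items:
      maxItems: 1

  vendor,syscfg:
    description: A phandle plus two argument cells.
    $ref: /schemas/types.yaml#/definitions/phandle-array
    items:
      - items:
          - description: Phandle to a provider node
          - description: Register offset
          - description: Bit mask

examples:
  - |
    device {
        vendor,links = <&providera>, <&providerb>;
        vendor,syscfg = <&syscon 0x10 0x4>;
    };

With the first form each bracketed entry holds exactly one phandle; with the
second, a single bracketed entry holds the phandle followed by its argument
cells, which is the bracketing the patch aligns the examples to.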


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Geert Uytterhoeven
Hi Rob,

On Wed, Jan 19, 2022 at 2:50 AM Rob Herring  wrote:

> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
>
> The array of phandles case boils down to needing:
>
> items:
>   maxItems: 1
>
> The phandle plus args cases should typically take this form:
>
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
>
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.

> Signed-off-by: Rob Herring 

The Renesas parts look good to me.
Reviewed-by: Geert Uytterhoeven 

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Ulf Hansson
On Wed, 19 Jan 2022 at 02:50, Rob Herring  wrote:
>
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
>
> The array of phandles case boils down to needing:
>
> items:
>   maxItems: 1
>
> The phandle plus args cases should typically take this form:
>
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
>
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.
>
> Cc: Damien Le Moal 
> Cc: Herbert Xu 
> Cc: "David S. Miller" 
> Cc: Chun-Kuang Hu 
> Cc: Philipp Zabel 
> Cc: Laurent Pinchart 
> Cc: Kieran Bingham 
> Cc: Vinod Koul 
> Cc: Georgi Djakov 
> Cc: Thomas Gleixner 
> Cc: Marc Zyngier 
> Cc: Joerg Roedel 
> Cc: Lee Jones 
> Cc: Daniel Thompson 
> Cc: Jingoo Han 
> Cc: Pavel Machek 
> Cc: Mauro Carvalho Chehab 
> Cc: Krzysztof Kozlowski 
> Cc: Jakub Kicinski 
> Cc: Wolfgang Grandegger 
> Cc: Marc Kleine-Budde 
> Cc: Andrew Lunn 
> Cc: Vivien Didelot 
> Cc: Florian Fainelli 
> Cc: Vladimir Oltean 
> Cc: Kalle Valo 
> Cc: Viresh Kumar 
> Cc: Stephen Boyd 
> Cc: Kishon Vijay Abraham I 
> Cc: Linus Walleij 
> Cc: "Rafael J. Wysocki" 
> Cc: Kevin Hilman 
> Cc: Ulf Hansson 
> Cc: Sebastian Reichel 
> Cc: Mark Brown 
> Cc: Mathieu Poirier 
> Cc: Daniel Lezcano 
> Cc: Zhang Rui 
> Cc: Greg Kroah-Hartman 
> Cc: Thierry Reding 
> Cc: Jonathan Hunter 
> Cc: Sudeep Holla 
> Cc: Geert Uytterhoeven 
> Cc: linux-...@vger.kernel.org
> Cc: linux-cry...@vger.kernel.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: dmaeng...@vger.kernel.org
> Cc: linux...@vger.kernel.org
> Cc: iommu@lists.linux-foundation.org
> Cc: linux-l...@vger.kernel.org
> Cc: linux-me...@vger.kernel.org
> Cc: net...@vger.kernel.org
> Cc: linux-...@vger.kernel.org
> Cc: linux-wirel...@vger.kernel.org
> Cc: linux-...@lists.infradead.org
> Cc: linux-g...@vger.kernel.org
> Cc: linux-ri...@lists.infradead.org
> Cc: linux-remotep...@vger.kernel.org
> Cc: alsa-de...@alsa-project.org
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Rob Herring 
> ---

For CPUs and PM domains:

Acked-by: Ulf Hansson 

Kind regards
Uffe


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Georgi Djakov



On 19.01.22 3:50, Rob Herring wrote:

The 'phandle-array' type is a bit ambiguous. It can be either just an
array of phandles or an array of phandles plus args. Many schemas for
phandle-array properties aren't clear in the schema which case applies
though the description usually describes it.

The array of phandles case boils down to needing:

items:
  maxItems: 1

The phandle plus args cases should typically take this form:

items:
  - items:
      - description: A phandle
      - description: 1st arg cell
      - description: 2nd arg cell

With this change, some examples need updating so that the bracketing of
property values matches the schema.


[..]

  .../bindings/interconnect/qcom,rpmh.yaml  |  2 +


Acked-by: Georgi Djakov 


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Rob Herring
On Wed, Jan 19, 2022 at 4:35 AM Vladimir Oltean  wrote:
>
> On Tue, Jan 18, 2022 at 07:50:38PM -0600, Rob Herring wrote:
> > The 'phandle-array' type is a bit ambiguous. It can be either just an
> > array of phandles or an array of phandles plus args. Many schemas for
> > phandle-array properties aren't clear in the schema which case applies
> > though the description usually describes it.
> >
> > The array of phandles case boils down to needing:
> >
> > items:
> >   maxItems: 1
> >
> > The phandle plus args cases should typically take this form:
> >
> > items:
> >   - items:
> >       - description: A phandle
> >       - description: 1st arg cell
> >       - description: 2nd arg cell
> >
> > With this change, some examples need updating so that the bracketing of
> > property values matches the schema.
> > ---
> (...)
> > diff --git a/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml 
> > b/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> > index 702df848a71d..c504feeec6db 100644
> > --- a/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> > +++ b/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> > @@ -34,6 +34,8 @@ properties:
> >full routing information must be given, not just the one hop
> >routes to neighbouring switches
> >  $ref: /schemas/types.yaml#/definitions/phandle-array
> > +items:
> > +  maxItems: 1
> >
> >ethernet:
> >  description:
>
> For better or worse, the mainline cases of this property all take the
> form of:
>
> arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
> link = <&switch1port9 &switch2port9>;
> link = <&switch1port10 &switch0port10>;
> arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
> link = <&switch1port6
> &switch2port9>;
> link = <&switch1port5
> &switch0port5>;
> arch/arm/boot/dts/vf610-zii-scu4-aib.dts
> link = <&switch1port10
> &switch3port10
> &switch2port10>;
> link = <&switch3port10
> &switch2port10>;
> link = <&switch1port9
> &switch0port10>;
>
> So not really an array of phandles.

Either form is an array. The DT yaml encoding maintains the
bracketing, so how the schema is defined matters. To some extent the
tools will process the schema to support both forms of bracketing, but
this has turned out to be fragile and just doesn't work for phandle
arrays. I'm working on further changes that will get rid of the yaml
encoded DT format and validate DTB files directly. These obviously
have no bracketing and needing the DTS source files to change goes
away. However, to be able to construct the internal format for
validation, I do need the schemas to have more information on what
exactly the phandle-array contains.

Rob
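
To make the bracketing point concrete, a sketch only (reusing the labels from
the turris-mox example quoted above) of the schema form the dsa-port hunk adds:

link:
  $ref: /schemas/types.yaml#/definitions/phandle-array
  items:
    maxItems: 1

# Bracketing that matches this schema directly, one phandle per entry:
#   link = <&switch1port9>, <&switch2port9>;
# The existing single-group form in mainline relies on the tools' bracket
# fix-ups, which is the fragile part described above:
#   link = <&switch1port9 &switch2port9>;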


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Rob Herring
On Wed, Jan 19, 2022 at 9:22 AM Arnaud POULIQUEN  wrote:
>
> Hello Rob,
>
> On 1/19/22 2:50 AM, Rob Herring wrote:
> > The 'phandle-array' type is a bit ambiguous. It can be either just an
> > array of phandles or an array of phandles plus args. Many schemas for
> > phandle-array properties aren't clear in the schema which case applies
> > though the description usually describes it.
> >
> > The array of phandles case boils down to needing:
> >
> > items:
> >   maxItems: 1
> >
> > The phandle plus args cases should typically take this form:
> >
> > items:
> >   - items:
> >       - description: A phandle
> >       - description: 1st arg cell
> >       - description: 2nd arg cell
> >
> > With this change, some examples need updating so that the bracketing of
> > property values matches the schema.
> >
> > Cc: Damien Le Moal 
> > Cc: Herbert Xu 
> > Cc: "David S. Miller" 
> > Cc: Chun-Kuang Hu 
> > Cc: Philipp Zabel 
> > Cc: Laurent Pinchart 
> > Cc: Kieran Bingham 
> > Cc: Vinod Koul 
> > Cc: Georgi Djakov 
> > Cc: Thomas Gleixner 
> > Cc: Marc Zyngier 
> > Cc: Joerg Roedel 
> > Cc: Lee Jones 
> > Cc: Daniel Thompson 
> > Cc: Jingoo Han 
> > Cc: Pavel Machek 
> > Cc: Mauro Carvalho Chehab 
> > Cc: Krzysztof Kozlowski 
> > Cc: Jakub Kicinski 
> > Cc: Wolfgang Grandegger 
> > Cc: Marc Kleine-Budde 
> > Cc: Andrew Lunn 
> > Cc: Vivien Didelot 
> > Cc: Florian Fainelli 
> > Cc: Vladimir Oltean 
> > Cc: Kalle Valo 
> > Cc: Viresh Kumar 
> > Cc: Stephen Boyd 
> > Cc: Kishon Vijay Abraham I 
> > Cc: Linus Walleij 
> > Cc: "Rafael J. Wysocki" 
> > Cc: Kevin Hilman 
> > Cc: Ulf Hansson 
> > Cc: Sebastian Reichel 
> > Cc: Mark Brown 
> > Cc: Mathieu Poirier 
> > Cc: Daniel Lezcano 
> > Cc: Zhang Rui 
> > Cc: Greg Kroah-Hartman 
> > Cc: Thierry Reding 
> > Cc: Jonathan Hunter 
> > Cc: Sudeep Holla 
> > Cc: Geert Uytterhoeven 
> > Cc: linux-...@vger.kernel.org
> > Cc: linux-cry...@vger.kernel.org
> > Cc: dri-de...@lists.freedesktop.org
> > Cc: dmaeng...@vger.kernel.org
> > Cc: linux...@vger.kernel.org
> > Cc: iommu@lists.linux-foundation.org
> > Cc: linux-l...@vger.kernel.org
> > Cc: linux-me...@vger.kernel.org
> > Cc: net...@vger.kernel.org
> > Cc: linux-...@vger.kernel.org
> > Cc: linux-wirel...@vger.kernel.org
> > Cc: linux-...@lists.infradead.org
> > Cc: linux-g...@vger.kernel.org
> > Cc: linux-ri...@lists.infradead.org
> > Cc: linux-remotep...@vger.kernel.org
> > Cc: alsa-de...@alsa-project.org
> > Cc: linux-...@vger.kernel.org
> > Signed-off-by: Rob Herring 
> > ---
>
> [...]
>
> >  .../bindings/remoteproc/st,stm32-rproc.yaml   | 33 ++--
>
> [...]
>
> > diff --git 
> > a/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml 
> > b/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> > index b587c97c282b..be3d9b0e876b 100644
> > --- a/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> > +++ b/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> > @@ -29,17 +29,22 @@ properties:
> >
> >st,syscfg-holdboot:
> >  description: remote processor reset hold boot
> > -  - Phandle of syscon block.
> > -  - The offset of the hold boot setting register.
> > -  - The field mask of the hold boot.
> >  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> > -maxItems: 1
> > +items:
> > +  - items:
> > +  - description: Phandle of syscon block
> > +  - description: The offset of the hold boot setting register
> > +  - description: The field mask of the hold boot
> >
> >st,syscfg-tz:
> >  description:
> >Reference to the system configuration which holds the RCC trust zone 
> > mode
> >  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> > -maxItems: 1
> > +items:
> > +  - items:
> > +  - description: Phandle of syscon block
> > +  - description: FIXME
> > +  - description: FIXME
>
>  - description: The offset of the trust zone setting register
>  - description: The field mask of the trust zone state
>
> >
> >interrupts:
> >  description: Should contain the WWDG1 watchdog reset interrupt
> > @@ -93,20 +98,32 @@ properties:
> >  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> >  description: |
> >Reference to the system configuration which holds the remote
> > -maxItems: 1
> > +items:
> > +  - items:
> > +  - description: Phandle of syscon block
> > +  - description: FIXME
> > +  - description: FIXME
>
>  - description: The offset of the power setting register
>  - description: The field mask of the PDDS selection
>
> >
> >st,syscfg-m4-state:
> >  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> >  description: |
> >Reference to the tamp register which exposes the Cortex-M4 state.
> > -maxItems: 1
> > +items:
> > +  - items:
> > +  - description: Phandle of syscon block with the tamp register
> > +

Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Arnaud POULIQUEN
Hello Rob,

On 1/19/22 2:50 AM, Rob Herring wrote:
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
> 
> The array of phandles case boils down to needing:
> 
> items:
>   maxItems: 1
> 
> The phandle plus args cases should typically take this form:
> 
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
> 
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.
> 
> Cc: Damien Le Moal 
> Cc: Herbert Xu 
> Cc: "David S. Miller" 
> Cc: Chun-Kuang Hu 
> Cc: Philipp Zabel 
> Cc: Laurent Pinchart 
> Cc: Kieran Bingham 
> Cc: Vinod Koul 
> Cc: Georgi Djakov 
> Cc: Thomas Gleixner 
> Cc: Marc Zyngier 
> Cc: Joerg Roedel 
> Cc: Lee Jones 
> Cc: Daniel Thompson 
> Cc: Jingoo Han 
> Cc: Pavel Machek 
> Cc: Mauro Carvalho Chehab 
> Cc: Krzysztof Kozlowski 
> Cc: Jakub Kicinski 
> Cc: Wolfgang Grandegger 
> Cc: Marc Kleine-Budde 
> Cc: Andrew Lunn 
> Cc: Vivien Didelot 
> Cc: Florian Fainelli 
> Cc: Vladimir Oltean 
> Cc: Kalle Valo 
> Cc: Viresh Kumar 
> Cc: Stephen Boyd 
> Cc: Kishon Vijay Abraham I 
> Cc: Linus Walleij 
> Cc: "Rafael J. Wysocki" 
> Cc: Kevin Hilman 
> Cc: Ulf Hansson 
> Cc: Sebastian Reichel 
> Cc: Mark Brown 
> Cc: Mathieu Poirier 
> Cc: Daniel Lezcano 
> Cc: Zhang Rui 
> Cc: Greg Kroah-Hartman 
> Cc: Thierry Reding 
> Cc: Jonathan Hunter 
> Cc: Sudeep Holla 
> Cc: Geert Uytterhoeven 
> Cc: linux-...@vger.kernel.org
> Cc: linux-cry...@vger.kernel.org
> Cc: dri-de...@lists.freedesktop.org
> Cc: dmaeng...@vger.kernel.org
> Cc: linux...@vger.kernel.org
> Cc: iommu@lists.linux-foundation.org
> Cc: linux-l...@vger.kernel.org
> Cc: linux-me...@vger.kernel.org
> Cc: net...@vger.kernel.org
> Cc: linux-...@vger.kernel.org
> Cc: linux-wirel...@vger.kernel.org
> Cc: linux-...@lists.infradead.org
> Cc: linux-g...@vger.kernel.org
> Cc: linux-ri...@lists.infradead.org
> Cc: linux-remotep...@vger.kernel.org
> Cc: alsa-de...@alsa-project.org
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Rob Herring 
> ---

[...]

>  .../bindings/remoteproc/st,stm32-rproc.yaml   | 33 ++--

[...]

> diff --git a/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml 
> b/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> index b587c97c282b..be3d9b0e876b 100644
> --- a/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> +++ b/Documentation/devicetree/bindings/remoteproc/st,stm32-rproc.yaml
> @@ -29,17 +29,22 @@ properties:
>  
>st,syscfg-holdboot:
>  description: remote processor reset hold boot
> -  - Phandle of syscon block.
> -  - The offset of the hold boot setting register.
> -  - The field mask of the hold boot.
>  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> -maxItems: 1
> +items:
> +  - items:
> +  - description: Phandle of syscon block
> +  - description: The offset of the hold boot setting register
> +  - description: The field mask of the hold boot
>  
>st,syscfg-tz:
>  description:
>Reference to the system configuration which holds the RCC trust zone 
> mode
>  $ref: "/schemas/types.yaml#/definitions/phandle-array"
> -maxItems: 1
> +items:
> +  - items:
> +  - description: Phandle of syscon block
> +  - description: FIXME
> +  - description: FIXME

 - description: The offset of the trust zone setting register
 - description: The field mask of the trust zone state

>  
>interrupts:
>  description: Should contain the WWDG1 watchdog reset interrupt
> @@ -93,20 +98,32 @@ properties:
>  $ref: "/schemas/types.yaml#/definitions/phandle-array"
>  description: |
>Reference to the system configuration which holds the remote
> -maxItems: 1
> +items:
> +  - items:
> +  - description: Phandle of syscon block
> +  - description: FIXME
> +  - description: FIXME

 - description: The offset of the power setting register
 - description: The field mask of the PDDS selection

>  
>st,syscfg-m4-state:
>  $ref: "/schemas/types.yaml#/definitions/phandle-array"
>  description: |
>Reference to the tamp register which exposes the Cortex-M4 state.
> -maxItems: 1
> +items:
> +  - items:
> +  - description: Phandle of syscon block with the tamp register
> +  - description: FIXME
> +  - description: FIXME

 - description: The offset of the tamp register
 - description: The field mask of the Cortex-M4 state

>  
>st,syscfg-rsc-tbl:
>  $ref: "/schemas/types.yaml#/definitions/phandle-array"
>  description: |
>Reference to the tamp register which references the C
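
Folding the descriptions suggested above into the schema, the st,syscfg-tz
entry from the hunk would then read as follows (a sketch only, combining the
patch hunk with the wording proposed in this review):

  st,syscfg-tz:
    description:
      Reference to the system configuration which holds the RCC trust zone mode
    $ref: "/schemas/types.yaml#/definitions/phandle-array"
    items:
      - items:
          - description: Phandle of syscon block
          - description: The offset of the trust zone setting register
          - description: The field mask of the trust zone state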

Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Mark Brown
On Tue, Jan 18, 2022 at 07:50:38PM -0600, Rob Herring wrote:
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.

Acked-by: Mark Brown 



[PATCH] iommu/arm-smmu-v3: fix event handling soft lockup

2022-01-19 Thread Zhou Guanghui via iommu
During event processing, events are read from the event queue one by one
until the queue is empty. If the master device continuously issues address
accesses at the same time and the SMMU keeps generating events, this
processing loop can run for a long time and soft lockup warnings may be
reported.

arm-smmu-v3 arm-smmu-v3.34.auto: event 0x0a received:
arm-smmu-v3 arm-smmu-v3.34.auto:0x7f22280a
arm-smmu-v3 arm-smmu-v3.34.auto:0x107e
arm-smmu-v3 arm-smmu-v3.34.auto:0x034e8670
watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [irq/268-arm-smm:247]
Call trace:
 _dev_info+0x7c/0xa0
 arm_smmu_evtq_thread+0x1c0/0x230
 irq_thread_fn+0x30/0x80
 irq_thread+0x128/0x210
 kthread+0x134/0x138
 ret_from_fork+0x10/0x1c
Kernel panic - not syncing: softlockup: hung tasks

Fix this by calling cond_resched() after the event information is
printed.

Signed-off-by: Zhou Guanghui 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 6dc6d8b6b368..f60381cdf1c4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1558,6 +1558,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void 
*dev)
dev_info(smmu->dev, "\t0x%016llx\n",
 (unsigned long long)evt[i]);
 
+   cond_resched();
}
 
/*
-- 
2.17.1



Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Vladimir Oltean
On Tue, Jan 18, 2022 at 07:50:38PM -0600, Rob Herring wrote:
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
> 
> The array of phandles case boils down to needing:
> 
> items:
>   maxItems: 1
> 
> The phandle plus args cases should typically take this form:
> 
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
> 
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.
> ---
(...)
> diff --git a/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml 
> b/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> index 702df848a71d..c504feeec6db 100644
> --- a/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> +++ b/Documentation/devicetree/bindings/net/dsa/dsa-port.yaml
> @@ -34,6 +34,8 @@ properties:
>full routing information must be given, not just the one hop
>routes to neighbouring switches
>  $ref: /schemas/types.yaml#/definitions/phandle-array
> +items:
> +  maxItems: 1
>  
>ethernet:
>  description:

For better or worse, the mainline cases of this property all take the
form of:

arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
link = <&switch1port9 &switch2port9>;
link = <&switch1port10 &switch0port10>;
arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
link = <&switch1port6
&switch2port9>;
link = <&switch1port5
&switch0port5>;
arch/arm/boot/dts/vf610-zii-scu4-aib.dts
link = <&switch1port10
&switch3port10
&switch2port10>;
link = <&switch3port10
&switch2port10>;
link = <&switch1port9
&switch0port10>;

So not really an array of phandles.


[PATCH] iommu/vt-d: Do not dump pasid table entries in kdump kernel

2022-01-19 Thread Zelin Deng
In the kdump kernel, PASID translations won't be copied from the previous
kernel even if scalable mode is enabled, so the pages holding the PASID
translations are non-present in the kdump kernel. Attempting to access those
addresses will cause a page fault:

[   13.396476] DMAR: DRHD: handling fault status reg 3
[   13.396478] DMAR: [DMA Read NO_PASID] Request device [81:00.0] fault addr 
0xd000 [fault reason 0x59] SM: Present bit in PA$
[   13.396480] DMAR: Dump dmar5 table entries for IOVA 0xd000
[   13.396481] DMAR: scalable mode root entry: hi 0x, low 
0x460d1001
[   13.396482] DMAR: context entry: hi 0x0008, low 
0x0010c4237401
[   13.396485] BUG: unable to handle page fault for address: ff110010c4237000
[   13.396486] #PF: supervisor read access in kernel mode
[   13.396487] #PF: error_code(0x) - not-present page
[   13.396488] PGD 5d201067 P4D 5d202067 PUD 0
[   13.396490] Oops:  [#1] PREEMPT SMP NOPTI
[   13.396491] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
5.16.0-rc6-next-20211224+ #6
[   13.396493] Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS 
EGSDCRB1.86B.0067.D12.2110190950 10/19/2021
[   13.396494] RIP: 0010:dmar_fault_dump_ptes+0x13b/0x295

Hence skip dumping pasid table entries if in kdump kernel.

Fixes: 914ff7719e8a ("iommu/vt-d: Dump DMAR translation structure when DMA fault occurs")
Signed-off-by: Zelin Deng 
---
 drivers/iommu/intel/iommu.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 92fea3fb..f0134cf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1074,6 +1074,12 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 
source_id,
if (!sm_supported(iommu))
goto pgtable_walk;
 
+   /* PASID translations is not copied, skip dumping pasid table entries
+* otherwise non-present page will be accessed.
+*/
+   if (is_kdump_kernel())
+   goto pgtable_walk;
+
/* get the pointer to pasid directory entry */
dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
if (!dir) {
-- 
1.8.3.1


[PATCH v4 0/7] Use pageblock_order for cma and alloc_contig_range alignment.

2022-01-19 Thread Zi Yan
From: Zi Yan 

Hi all,

This patchset tries to remove the MAX_ORDER-1 alignment requirement for CMA
and alloc_contig_range(). It prepares for my upcoming changes to make
MAX_ORDER adjustable at boot time[1]. It is on top of mmotm-2021-12-29-20-07.

Changelog from RFC
===
1. Dropped two irrelevant patches on non-lru compound page handling, as
   it is not supported upstream.
2. Renamed migratetype_has_fallback() to migratetype_is_mergeable().
3. Always check whether two pageblocks can be merged in
   __free_one_page() when order is >= pageblock_order, as the case of
   non-mergeable pageblocks (isolated, CMA, and HIGHATOMIC) becomes more common.
4. Moving has_unmovable_pages() is now a separate patch.
5. Removed the MAX_ORDER-1 alignment requirement in the comment in virtio_mem code.

Description
===

The MAX_ORDER - 1 alignment requirement comes from the fact that
alloc_contig_range() isolates pageblocks to remove free memory from the buddy
allocator, but isolating only a subset of the pageblocks within a page spanning
multiple pageblocks causes free page accounting issues. An isolated page might
not be put on the right free list, since the code assumes the migratetype of
the first pageblock is the migratetype of the whole free page. This is based on
the discussion at [2].

To remove the requirement, this patchset:
1. still isolates pageblocks at MAX_ORDER - 1 granularity;
2. but saves the pageblock migratetypes outside the specified range of
   alloc_contig_range() and restores them after all pages within the range
   become free after __alloc_contig_migrate_range();
3. only checks unmovable pages within the range instead of the MAX_ORDER - 1
   aligned range during isolation, to avoid alloc_contig_range() failure when
   pageblocks within a MAX_ORDER - 1 aligned range are allocated separately;
4. splits free pages spanning multiple pageblocks at the beginning and the end
   of the range and puts the split pages on the right migratetype free lists
   based on the pageblock migratetypes;
5. returns pages not in the range as it did before.

Isolation needs to be done at MAX_ORDER - 1 granularity, because otherwise
either 1) the size of the to-be-isolated page (free, PageHuge, THP, or other
PageCompound) would need to be detected to make sure all pageblocks belonging
to a single page are isolated together, with the pageblock migratetypes outside
the range restored later, or 2) assuming isolation happens at pageblock
granularity, a free page with pageblocks of multiple migratetypes could be seen
in the free page path and would need to be split and freed at pageblock
granularity.

One optimization might come later:
1. make MIGRATE_ISOLATE a separate bit to avoid saving and restoring existing
   migratetypes before and after isolation respectively.

Feel free to give comments and suggestions. Thanks.

[1] https://lore.kernel.org/linux-mm/20210805190253.2795604-1-zi@sent.com/
[2] https://lore.kernel.org/linux-mm/d19fb078-cb9b-f60f-e310-fdeea1b94...@redhat.com/


Zi Yan (7):
  mm: page_alloc: avoid merging non-fallbackable pageblocks with others.
  mm: page_isolation: move has_unmovable_pages() to mm/page_isolation.c
  mm: page_isolation: check specified range for unmovable pages
  mm: make alloc_contig_range work at pageblock granularity
  mm: cma: use pageblock_order as the single alignment
  drivers: virtio_mem: use pageblock size as the minimum virtio_mem
size.
  arch: powerpc: adjust fadump alignment to be pageblock aligned.

 arch/powerpc/include/asm/fadump-internal.h |   4 +-
 drivers/virtio/virtio_mem.c|   7 +-
 include/linux/mmzone.h |  16 +-
 include/linux/page-isolation.h |   3 +-
 kernel/dma/contiguous.c|   2 +-
 mm/cma.c   |   6 +-
 mm/memory_hotplug.c|  12 +-
 mm/page_alloc.c| 337 +++--
 mm/page_isolation.c| 154 +-
 9 files changed, 352 insertions(+), 189 deletions(-)

-- 
2.34.1



[PATCH v4 1/7] mm: page_alloc: avoid merging non-fallbackable pageblocks with others.

2022-01-19 Thread Zi Yan
From: Zi Yan 

This is done in addition to MIGRATE_ISOLATE pageblock merge avoidance.
It prepares for the upcoming removal of the MAX_ORDER-1 alignment
requirement for CMA and alloc_contig_range().

MIGRATE_HIGHATOMIC should not merge with other migratetypes like
MIGRATE_ISOLATE and MIGRATE_CMA[1], so this commit prevents that too.
Also add MIGRATE_HIGHATOMIC to the fallbacks array for completeness.

[1] https://lore.kernel.org/linux-mm/20211130100853.gp3...@techsingularity.net/

Signed-off-by: Zi Yan 
---
 include/linux/mmzone.h | 11 +++
 mm/page_alloc.c| 37 -
 2 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aed44e9b5d89..71b77aab748d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -83,6 +83,17 @@ static inline bool is_migrate_movable(int mt)
return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
 }
 
+/*
+ * Check whether a migratetype can be merged with another migratetype.
+ *
+ * It is only mergeable when it can fall back to other migratetypes for
+ * allocation. See fallbacks[MIGRATE_TYPES][3] in page_alloc.c.
+ */
+static inline bool migratetype_is_mergeable(int mt)
+{
+   return mt < MIGRATE_PCPTYPES;
+}
+
 #define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
for (type = 0; type < MIGRATE_TYPES; type++)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8dd6399bafb5..15de65215c02 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1117,25 +1117,24 @@ static inline void __free_one_page(struct page *page,
}
if (order < MAX_ORDER - 1) {
/* If we are here, it means order is >= pageblock_order.
-* We want to prevent merge between freepages on isolate
-* pageblock and normal pageblock. Without this, pageblock
-* isolation could cause incorrect freepage or CMA accounting.
+* We want to prevent merge between freepages on pageblock
+* without fallbacks and normal pageblock. Without this,
+* pageblock isolation could cause incorrect freepage or CMA
+* accounting or HIGHATOMIC accounting.
 *
 * We don't want to hit this code for the more frequent
 * low-order merging.
 */
-   if (unlikely(has_isolate_pageblock(zone))) {
-   int buddy_mt;
+   int buddy_mt;
 
-   buddy_pfn = __find_buddy_pfn(pfn, order);
-   buddy = page + (buddy_pfn - pfn);
-   buddy_mt = get_pageblock_migratetype(buddy);
+   buddy_pfn = __find_buddy_pfn(pfn, order);
+   buddy = page + (buddy_pfn - pfn);
+   buddy_mt = get_pageblock_migratetype(buddy);
 
-   if (migratetype != buddy_mt
-   && (is_migrate_isolate(migratetype) ||
-   is_migrate_isolate(buddy_mt)))
-   goto done_merging;
-   }
+   if (migratetype != buddy_mt
+   && (!migratetype_is_mergeable(migratetype) ||
+   !migratetype_is_mergeable(buddy_mt)))
+   goto done_merging;
max_order = order + 1;
goto continue_merging;
}
@@ -2484,6 +2483,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   
MIGRATE_TYPES },
[MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, 
MIGRATE_TYPES },
[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   
MIGRATE_TYPES },
+   [MIGRATE_HIGHATOMIC] = { MIGRATE_TYPES }, /* Never used */
 #ifdef CONFIG_CMA
[MIGRATE_CMA] = { MIGRATE_TYPES }, /* Never used */
 #endif
@@ -2795,8 +2795,8 @@ static void reserve_highatomic_pageblock(struct page 
*page, struct zone *zone,
 
/* Yoink! */
mt = get_pageblock_migratetype(page);
-   if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
-   && !is_migrate_cma(mt)) {
+   /* Only reserve normal pageblocks (i.e., they can merge with others) */
+   if (migratetype_is_mergeable(mt)) {
zone->nr_reserved_highatomic += pageblock_nr_pages;
set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
@@ -3545,8 +3545,11 @@ int __isolate_free_page(struct page *page, unsigned int 
order)
struct page *endpage = page + (1 << order) - 1;
for (; page < endpage; page += pageblock_nr_pages) {
int mt = get_pageblock_migratetype(page);
-   if (!is_migrate_isolate(mt) && !is

[PATCH v4 2/7] mm: page_isolation: move has_unmovable_pages() to mm/page_isolation.c

2022-01-19 Thread Zi Yan
From: Zi Yan 

has_unmovable_pages() is only used in mm/page_isolation.c. Move it from
mm/page_alloc.c and make it static.

Signed-off-by: Zi Yan 
---
 include/linux/page-isolation.h |   2 -
 mm/page_alloc.c| 119 -
 mm/page_isolation.c| 119 +
 3 files changed, 119 insertions(+), 121 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..e14eddf6741a 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -33,8 +33,6 @@ static inline bool is_migrate_isolate(int migratetype)
 #define MEMORY_OFFLINE 0x1
 #define REPORT_FAILURE 0x2
 
-struct page *has_unmovable_pages(struct zone *zone, struct page *page,
-int migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
int migratetype, int *num_movable);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 15de65215c02..1d812268c2a9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8859,125 +8859,6 @@ void *__init alloc_large_system_hash(const char 
*tablename,
return table;
 }
 
-/*
- * This function checks whether pageblock includes unmovable pages or not.
- *
- * PageLRU check without isolation or lru_lock could race so that
- * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
- * check without lock_page also may miss some movable non-lru pages at
- * race condition. So you can't expect this function should be exact.
- *
- * Returns a page without holding a reference. If the caller wants to
- * dereference that page (e.g., dumping), it has to make sure that it
- * cannot get removed (e.g., via memory unplug) concurrently.
- *
- */
-struct page *has_unmovable_pages(struct zone *zone, struct page *page,
-int migratetype, int flags)
-{
-   unsigned long iter = 0;
-   unsigned long pfn = page_to_pfn(page);
-   unsigned long offset = pfn % pageblock_nr_pages;
-
-   if (is_migrate_cma_page(page)) {
-   /*
-* CMA allocations (alloc_contig_range) really need to mark
-* isolate CMA pageblocks even when they are not movable in fact
-* so consider them movable here.
-*/
-   if (is_migrate_cma(migratetype))
-   return NULL;
-
-   return page;
-   }
-
-   for (; iter < pageblock_nr_pages - offset; iter++) {
-   page = pfn_to_page(pfn + iter);
-
-   /*
-* Both, bootmem allocations and memory holes are marked
-* PG_reserved and are unmovable. We can even have unmovable
-* allocations inside ZONE_MOVABLE, for example when
-* specifying "movablecore".
-*/
-   if (PageReserved(page))
-   return page;
-
-   /*
-* If the zone is movable and we have ruled out all reserved
-* pages then it should be reasonably safe to assume the rest
-* is movable.
-*/
-   if (zone_idx(zone) == ZONE_MOVABLE)
-   continue;
-
-   /*
-* Hugepages are not in LRU lists, but they're movable.
-* THPs are on the LRU, but need to be counted as #small pages.
-* We need not scan over tail pages because we don't
-* handle each tail page individually in migration.
-*/
-   if (PageHuge(page) || PageTransCompound(page)) {
-   struct page *head = compound_head(page);
-   unsigned int skip_pages;
-
-   if (PageHuge(page)) {
-   if 
(!hugepage_migration_supported(page_hstate(head)))
-   return page;
-   } else if (!PageLRU(head) && !__PageMovable(head)) {
-   return page;
-   }
-
-   skip_pages = compound_nr(head) - (page - head);
-   iter += skip_pages - 1;
-   continue;
-   }
-
-   /*
-* We can't use page_count without pin a page
-* because another CPU can free compound page.
-* This check already skips compound tails of THP
-* because their page->_refcount is zero at all time.
-*/
-   if (!page_ref_count(page)) {
-   if (PageBuddy(page))
-   iter += (1 << buddy_order(page)) - 1;
-   continue;
-   }
-
-   /*
-* The HWPoisoned page may be not in buddy system, and
-  

[PATCH v4 3/7] mm: page_isolation: check specified range for unmovable pages

2022-01-19 Thread Zi Yan
From: Zi Yan 

Enable set_migratetype_isolate() to check a specified sub-range for
unmovable pages during isolation. Page isolation is done at
max(MAX_ORDER_NR_PAGES, pageblock_nr_pages) granularity, but not all
pages within that granularity are intended to be isolated. For example,
alloc_contig_range(), which uses page isolation, allows ranges without
alignment. This commit makes the unmovable page check only look at the
interesting pages, so that page isolation can succeed for any
non-overlapping ranges.

Signed-off-by: Zi Yan 
---
 include/linux/page-isolation.h |  1 +
 mm/memory_hotplug.c| 12 +++-
 mm/page_alloc.c|  2 +-
 mm/page_isolation.c| 53 +-
 4 files changed, 46 insertions(+), 22 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index e14eddf6741a..a4d2687ed4e6 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -42,6 +42,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
  */
 int
 start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+unsigned long isolate_start, unsigned long isolate_end,
 unsigned migratetype, int flags);
 
 /*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0139b77c51d5..5db84c3fa882 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1901,8 +1901,18 @@ int __ref offline_pages(unsigned long start_pfn, 
unsigned long nr_pages,
zone_pcp_disable(zone);
lru_cache_disable();
 
-   /* set above range as isolated */
+   /*
+* set above range as isolated
+*
+* start_pfn and end_pfn are the same as isolate_start and isolate_end,
+* because start_pfn and end_pfn are already PAGES_PER_SECTION
+* (>= MAX_ORDER_NR_PAGES) aligned; if start_pfn is
+* pageblock_nr_pages aligned in memmap_on_memory case, there is no
+* need to isolate pages before start_pfn, since they are used by
+* memmap thus not user visible.
+*/
ret = start_isolate_page_range(start_pfn, end_pfn,
+  start_pfn, end_pfn,
   MIGRATE_MOVABLE,
   MEMORY_OFFLINE | REPORT_FAILURE);
if (ret) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1d812268c2a9..812cf557b20f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9016,7 +9016,7 @@ int alloc_contig_range(unsigned long start, unsigned long 
end,
 * put back to page allocator so that buddy can use them.
 */
 
-   ret = start_isolate_page_range(pfn_max_align_down(start),
+   ret = start_isolate_page_range(start, end, pfn_max_align_down(start),
   pfn_max_align_up(end), migratetype, 0);
if (ret)
return ret;
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 6c841274bf46..d17ad9a7d4bf 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -16,7 +16,8 @@
 #include 
 
 /*
- * This function checks whether pageblock includes unmovable pages or not.
+ * This function checks whether pageblock within [start_pfn, end_pfn) includes
+ * unmovable pages or not.
  *
  * PageLRU check without isolation or lru_lock could race so that
  * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
@@ -29,11 +30,14 @@
  *
  */
 static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
-int migratetype, int flags)
+int migratetype, int flags,
+unsigned long start_pfn, unsigned long end_pfn)
 {
-   unsigned long iter = 0;
-   unsigned long pfn = page_to_pfn(page);
-   unsigned long offset = pfn % pageblock_nr_pages;
+   unsigned long first_pfn = max(page_to_pfn(page), start_pfn);
+   unsigned long pfn = first_pfn;
+   unsigned long last_pfn = min(ALIGN(pfn + 1, pageblock_nr_pages), 
end_pfn);
+
+   page = pfn_to_page(pfn);
 
if (is_migrate_cma_page(page)) {
/*
@@ -47,8 +51,8 @@ static struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
return page;
}
 
-   for (; iter < pageblock_nr_pages - offset; iter++) {
-   page = pfn_to_page(pfn + iter);
+   for (pfn = first_pfn; pfn < last_pfn; pfn++) {
+   page = pfn_to_page(pfn);
 
/*
 * Both, bootmem allocations and memory holes are marked
@@ -85,7 +89,7 @@ static struct page *has_unmovable_pages(struct zone *zone, 
struct page *page,
}
 
skip_pages = compound_nr(head) - (page - head);
-   iter += skip_pages - 1;
+   pfn += skip_pages - 1;
continue;
}
 
@@ -97,7 +101,7 @@ static s

[PATCH v4 5/7] mm: cma: use pageblock_order as the single alignment

2022-01-19 Thread Zi Yan
From: Zi Yan 

Now alloc_contig_range() works at pageblock granularity. Change CMA
allocation, which uses alloc_contig_range(), to use pageblock_order
alignment.

Signed-off-by: Zi Yan 
---
 include/linux/mmzone.h  | 5 +
 kernel/dma/contiguous.c | 2 +-
 mm/cma.c| 6 ++
 mm/page_alloc.c | 6 +++---
 4 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 71b77aab748d..7bd3694b24b4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -54,10 +54,7 @@ enum migratetype {
 *
 * The way to use it is to change migratetype of a range of
 * pageblocks to MIGRATE_CMA which can be done by
-* __free_pageblock_cma() function.  What is important though
-* is that a range of pageblocks must be aligned to
-* MAX_ORDER_NR_PAGES should biggest page be bigger than
-* a single pageblock.
+* __free_pageblock_cma() function.
 */
MIGRATE_CMA,
 #endif
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..ac35b14b0786 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -399,7 +399,7 @@ static const struct reserved_mem_ops rmem_cma_ops = {
 
 static int __init rmem_cma_setup(struct reserved_mem *rmem)
 {
-   phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+   phys_addr_t align = PAGE_SIZE << pageblock_order;
phys_addr_t mask = align - 1;
unsigned long node = rmem->fdt_node;
bool default_cma = of_get_flat_dt_prop(node, "linux,cma-default", NULL);
diff --git a/mm/cma.c b/mm/cma.c
index bc9ca8f3c487..d171158bd418 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -180,8 +180,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, 
phys_addr_t size,
return -EINVAL;
 
/* ensure minimal alignment required by mm core */
-   alignment = PAGE_SIZE <<
-   max_t(unsigned long, MAX_ORDER - 1, pageblock_order);
+   alignment = PAGE_SIZE << pageblock_order;
 
/* alignment should be aligned with order_per_bit */
if (!IS_ALIGNED(alignment >> PAGE_SHIFT, 1 << order_per_bit))
@@ -268,8 +267,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 * migratetype page by page allocator's buddy algorithm. In the case,
 * you couldn't get a contiguous memory, which is not what we want.
 */
-   alignment = max(alignment,  (phys_addr_t)PAGE_SIZE <<
- max_t(unsigned long, MAX_ORDER - 1, pageblock_order));
+   alignment = max(alignment,  (phys_addr_t)PAGE_SIZE << pageblock_order);
if (fixed && base & (alignment - 1)) {
ret = -EINVAL;
pr_err("Region at %pa must be aligned to %pa bytes\n",
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6ed506234efa..a8ced1a00ce8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9008,8 +9008,8 @@ static inline void split_free_page_into_pageblocks(struct 
page *free_page,
  * be either of the two.
  * @gfp_mask:  GFP mask to use during compaction
  *
- * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
- * aligned.  The PFN range must belong to a single zone.
+ * The PFN range does not have to be pageblock aligned. The PFN range must
+ * belong to a single zone.
  *
  * The first thing this routine does is attempt to MIGRATE_ISOLATE all
  * pageblocks in the range.  Once isolated, the pageblocks should not
@@ -9125,7 +9125,7 @@ int alloc_contig_range(unsigned long start, unsigned long 
end,
ret = 0;
 
/*
-* Pages from [start, end) are within a MAX_ORDER_NR_PAGES
+* Pages from [start, end) are within a pageblock_nr_pages
 * aligned blocks that are marked as MIGRATE_ISOLATE.  What's
 * more, all pages in [start, end) are free in page allocator.
 * What we are going to do is to allocate all pages from
-- 
2.34.1



[PATCH v4 7/7] arch: powerpc: adjust fadump alignment to be pageblock aligned.

2022-01-19 Thread Zi Yan
From: Zi Yan 

CMA only requires pageblock alignment now. Change CMA alignment in
fadump too.

Signed-off-by: Zi Yan 
---
 arch/powerpc/include/asm/fadump-internal.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/fadump-internal.h 
b/arch/powerpc/include/asm/fadump-internal.h
index 52189928ec08..fbfca85b4200 100644
--- a/arch/powerpc/include/asm/fadump-internal.h
+++ b/arch/powerpc/include/asm/fadump-internal.h
@@ -20,9 +20,7 @@
 #define memblock_num_regions(memblock_type)(memblock.memblock_type.cnt)
 
 /* Alignment per CMA requirement. */
-#define FADUMP_CMA_ALIGNMENT   (PAGE_SIZE <<   \
-max_t(unsigned long, MAX_ORDER - 1,\
-pageblock_order))
+#define FADUMP_CMA_ALIGNMENT   (PAGE_SIZE << pageblock_order)
 
 /* FAD commands */
 #define FADUMP_REGISTER1
-- 
2.34.1



[PATCH v4 4/7] mm: make alloc_contig_range work at pageblock granularity

2022-01-19 Thread Zi Yan
From: Zi Yan 

alloc_contig_range() worked at MAX_ORDER-1 granularity to avoid merging
pageblocks with different migratetypes. It might unnecessarily convert
extra pageblocks at the beginning and at the end of the range. Change
alloc_contig_range() to work at pageblock granularity.

This is done by isolating at MAX_ORDER-1 granularity, migrating pages
away at pageblock granularity, and then restoring the pageblock
migratetypes and splitting >pageblock_order free pages. The reason for
this process is that during isolation, some pages, either free or in-use,
might have >pageblock sizes, and isolating only part of them can cause
free page accounting issues. Restoring the migratetypes of the pageblocks
not in the interesting range afterwards is much easier.

Signed-off-by: Zi Yan 
---
 mm/page_alloc.c | 175 ++--
 1 file changed, 155 insertions(+), 20 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 812cf557b20f..6ed506234efa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8862,8 +8862,8 @@ void *__init alloc_large_system_hash(const char 
*tablename,
 #ifdef CONFIG_CONTIG_ALLOC
 static unsigned long pfn_max_align_down(unsigned long pfn)
 {
-   return pfn & ~(max_t(unsigned long, MAX_ORDER_NR_PAGES,
-pageblock_nr_pages) - 1);
+   return ALIGN_DOWN(pfn, max_t(unsigned long, MAX_ORDER_NR_PAGES,
+pageblock_nr_pages));
 }
 
 static unsigned long pfn_max_align_up(unsigned long pfn)
@@ -8952,6 +8952,52 @@ static int __alloc_contig_migrate_range(struct 
compact_control *cc,
return 0;
 }
 
+static inline int save_migratetypes(unsigned char *migratetypes,
+   unsigned long start_pfn, unsigned long end_pfn)
+{
+   unsigned long pfn = start_pfn;
+   int num = 0;
+
+   while (pfn < end_pfn) {
+   migratetypes[num] = get_pageblock_migratetype(pfn_to_page(pfn));
+   num++;
+   pfn += pageblock_nr_pages;
+   }
+   return num;
+}
+
+static inline int restore_migratetypes(unsigned char *migratetypes,
+   unsigned long start_pfn, unsigned long end_pfn)
+{
+   unsigned long pfn = start_pfn;
+   int num = 0;
+
+   while (pfn < end_pfn) {
+   set_pageblock_migratetype(pfn_to_page(pfn), migratetypes[num]);
+   num++;
+   pfn += pageblock_nr_pages;
+   }
+   return num;
+}
+
+static inline void split_free_page_into_pageblocks(struct page *free_page,
+   int order, struct zone *zone)
+{
+   unsigned long pfn;
+
+   spin_lock(&zone->lock);
+   del_page_from_free_list(free_page, zone, order);
+   for (pfn = page_to_pfn(free_page);
+pfn < page_to_pfn(free_page) + (1UL << order);
+pfn += pageblock_nr_pages) {
+   int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
+
+   __free_one_page(pfn_to_page(pfn), pfn, zone, pageblock_order,
+   mt, FPI_NONE);
+   }
+   spin_unlock(&zone->lock);
+}
+
 /**
  * alloc_contig_range() -- tries to allocate given range of pages
  * @start: start PFN to allocate
@@ -8977,8 +9023,15 @@ int alloc_contig_range(unsigned long start, unsigned 
long end,
   unsigned migratetype, gfp_t gfp_mask)
 {
unsigned long outer_start, outer_end;
+   unsigned long isolate_start = pfn_max_align_down(start);
+   unsigned long isolate_end = pfn_max_align_up(end);
+   unsigned long alloc_start = ALIGN_DOWN(start, pageblock_nr_pages);
+   unsigned long alloc_end = ALIGN(end, pageblock_nr_pages);
+   unsigned long num_pageblock_to_save;
unsigned int order;
int ret = 0;
+   unsigned char *saved_mt;
+   int num;
 
struct compact_control cc = {
.nr_migratepages = 0,
@@ -8992,11 +9045,30 @@ int alloc_contig_range(unsigned long start, unsigned 
long end,
};
INIT_LIST_HEAD(&cc.migratepages);
 
+   /*
+* TODO: make MIGRATE_ISOLATE a standalone bit to avoid overwriting
+* the exiting migratetype. Then, we will not need the save and restore
+* process here.
+*/
+
+   /* Save the migratepages of the pageblocks before start and after end */
+   num_pageblock_to_save = (alloc_start - isolate_start) / 
pageblock_nr_pages
+   + (isolate_end - alloc_end) / 
pageblock_nr_pages;
+   saved_mt =
+   kmalloc_array(num_pageblock_to_save,
+ sizeof(unsigned char), GFP_KERNEL);
+   if (!saved_mt)
+   return -ENOMEM;
+
+   num = save_migratetypes(saved_mt, isolate_start, alloc_start);
+
+   num = save_migratetypes(&saved_mt[num], alloc_end, isolate_end);
+
/*
 * What we do here is we mark all pageblocks in range as
 * MIGRATE_ISOLATE.  Because pageblock and max order pages 

[PATCH v4 6/7] drivers: virtio_mem: use pageblock size as the minimum virtio_mem size.

2022-01-19 Thread Zi Yan
From: Zi Yan 

alloc_contig_range() now only needs to be aligned to pageblock_order,
so drop the virtio_mem size requirement that it be at least the max of
pageblock_order and MAX_ORDER.

Signed-off-by: Zi Yan 
---
 drivers/virtio/virtio_mem.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index a6a78685cfbe..eafba2119ae3 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -2476,13 +2476,12 @@ static int virtio_mem_init_hotplug(struct virtio_mem 
*vm)
  VIRTIO_MEM_DEFAULT_OFFLINE_THRESHOLD);
 
/*
-* We want subblocks to span at least MAX_ORDER_NR_PAGES and
-* pageblock_nr_pages pages. This:
+* We want subblocks to span at least pageblock_nr_pages pages.
+* This:
 * - Is required for now for alloc_contig_range() to work reliably -
 *   it doesn't properly handle smaller granularity on ZONE_NORMAL.
 */
-   sb_size = max_t(uint64_t, MAX_ORDER_NR_PAGES,
-   pageblock_nr_pages) * PAGE_SIZE;
+   sb_size = pageblock_nr_pages * PAGE_SIZE;
sb_size = max_t(uint64_t, vm->device_block_size, sb_size);
 
if (sb_size < memory_block_size_bytes() && !force_bbm) {
-- 
2.34.1



Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Stephen Boyd
Quoting Rob Herring (2022-01-18 17:50:38)
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
> 
> The array of phandles case boils down to needing:
> 
> items:
>   maxItems: 1
> 
> The phandle plus args cases should typically take this form:
> 
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
> 
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.
[..]
> Signed-off-by: Rob Herring 
> ---

Acked-by: Stephen Boyd 


Re: [PATCH] iommu/vt-d: Do not dump pasid table entries in kdump kernel

2022-01-19 Thread Lu Baolu

On 1/19/22 5:07 PM, Zelin Deng wrote:

In kdump kernel PASID translations won't be copied from previous kernel
even if scalable-mode is enabled, so pages of PASID translations are


Yes. The copy table support for scalable mode is still in my task list.


non-present in kdump kernel. Attempt to access those address will cause
PF fault:

[   13.396476] DMAR: DRHD: handling fault status reg 3
[   13.396478] DMAR: [DMA Read NO_PASID] Request device [81:00.0] fault addr 
0xd000 [fault reason 0x59] SM: Present bit in PA$
[   13.396480] DMAR: Dump dmar5 table entries for IOVA 0xd000
[   13.396481] DMAR: scalable mode root entry: hi 0x, low 
0x460d1001
[   13.396482] DMAR: context entry: hi 0x0008, low 
0x0010c4237401
[   13.396485] BUG: unable to handle page fault for address: ff110010c4237000
[   13.396486] #PF: supervisor read access in kernel mode
[   13.396487] #PF: error_code(0x) - not-present page
[   13.396488] PGD 5d201067 P4D 5d202067 PUD 0
[   13.396490] Oops:  [#1] PREEMPT SMP NOPTI
[   13.396491] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
5.16.0-rc6-next-20211224+ #6
[   13.396493] Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS 
EGSDCRB1.86B.0067.D12.2110190950 10/19/2021
[   13.396494] RIP: 0010:dmar_fault_dump_ptes+0x13b/0x295

Hence skip dumping pasid table entries if in kdump kernel.


This just asks dmar_fault_dump_ptes() to keep silent. The problem is
that the context entry is mis-configured. Perhaps we should disable
copy table for scalable mode for now. How about below change?

--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3337,10 +3337,11 @@ static int __init init_dmars(void)

init_translation_status(iommu);

-   if (translation_pre_enabled(iommu) && !is_kdump_kernel()) {
+   if (translation_pre_enabled(iommu) &&
+   (!is_kdump_kernel() || sm_supported(iommu))) {
iommu_disable_translation(iommu);
clear_translation_pre_enabled(iommu);
-   pr_warn("Translation was enabled for %s but we 
are not in kdump mode\n",
+   pr_warn("Translation was enabled for %s but we 
are not in kdump mode or copy table not supported\n",

iommu->name);
}



Fixes: 914ff7719e8a ("iommu/vt-d: Dump DMAR translation structure when DMA fault occurs")
Signed-off-by: Zelin Deng 
---
  drivers/iommu/intel/iommu.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 92fea3fb..f0134cf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1074,6 +1074,12 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 
source_id,
if (!sm_supported(iommu))
goto pgtable_walk;
  
+	/* PASID translations is not copied, skip dumping pasid table entries

+* otherwise non-present page will be accessed.
+*/
+   if (is_kdump_kernel())
+   goto pgtable_walk;
+
/* get the pointer to pasid directory entry */
dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
if (!dir) {



Best regards,
baolu

Re: [PATCH] iommu/vt-d: Do not dump pasid table entries in kdump kernel

2022-01-19 Thread zelin deng


On 2022/1/20 10:58 AM, Lu Baolu wrote:

On 1/19/22 5:07 PM, Zelin Deng wrote:

In kdump kernel PASID translations won't be copied from previous kernel
even if scalable-mode is enabled, so pages of PASID translations are


Yes. The copy table support for scalable mode is still in my task list.


non-present in kdump kernel. Attempt to access those address will cause
PF fault:

[   13.396476] DMAR: DRHD: handling fault status reg 3
[   13.396478] DMAR: [DMA Read NO_PASID] Request device [81:00.0] 
fault addr 0xd000 [fault reason 0x59] SM: Present bit in PA$

[   13.396480] DMAR: Dump dmar5 table entries for IOVA 0xd000
[   13.396481] DMAR: scalable mode root entry: hi 0x, 
low 0x460d1001
[   13.396482] DMAR: context entry: hi 0x0008, low 
0x0010c4237401
[   13.396485] BUG: unable to handle page fault for address: 
ff110010c4237000

[   13.396486] #PF: supervisor read access in kernel mode
[   13.396487] #PF: error_code(0x) - not-present page
[   13.396488] PGD 5d201067 P4D 5d202067 PUD 0
[   13.396490] Oops:  [#1] PREEMPT SMP NOPTI
[   13.396491] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
5.16.0-rc6-next-20211224+ #6
[   13.396493] Hardware name: Intel Corporation 
EAGLESTREAM/EAGLESTREAM, BIOS EGSDCRB1.86B.0067.D12.2110190950 
10/19/2021

[   13.396494] RIP: 0010:dmar_fault_dump_ptes+0x13b/0x295

Hence skip dumping pasid table entries if in kdump kernel.


This just asks dmar_fault_dump_ptes() to keep silent. The problem is
that the context entry is mis-configured. Perhaps we should disable
copy table for scalable mode for now. How about below change?


Yep. The change looks good to me.

Actually, I had encountered another issue that blocked the virtio-net
device when scalable mode is enabled in the kdump kernel, so I had made
the same change as yours -- 'disable translation if sm_on in the kdump
kernel' -- in our internal tree.

I have only observed this issue on our dragonfly bare-metal server with a
virtio-net device inside. I didn't send the fix upstream as I am not
sure whether it is reasonable to disable translation in the kdump kernel.





--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3337,10 +3337,11 @@ static int __init init_dmars(void)

    init_translation_status(iommu);

-   if (translation_pre_enabled(iommu) && 
!is_kdump_kernel()) {

+   if (translation_pre_enabled(iommu) &&
+   (!is_kdump_kernel() || sm_supported(iommu))) {
    iommu_disable_translation(iommu);
    clear_translation_pre_enabled(iommu);
-   pr_warn("Translation was enabled for %s but we 
are not in kdump mode\n",
+   pr_warn("Translation was enabled for %s but we 
are not in kdump mode or copy table not supported\n",

    iommu->name);
    }



Fixes: 914ff7719e8a ("iommu/vt-d: Dump DMAR translation structure when DMA fault occurs")

Signed-off-by: Zelin Deng 
---
  drivers/iommu/intel/iommu.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 92fea3fb..f0134cf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1074,6 +1074,12 @@ void dmar_fault_dump_ptes(struct intel_iommu 
*iommu, u16 source_id,

  if (!sm_supported(iommu))
  goto pgtable_walk;
  +    /* PASID translations is not copied, skip dumping pasid table 
entries

+ * otherwise non-present page will be accessed.
+ */
+    if (is_kdump_kernel())
+    goto pgtable_walk;
+
  /* get the pointer to pasid directory entry */
  dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
  if (!dir) {



Best regards,
baolu


Re: [PATCH] dt-bindings: Improve phandle-array schemas

2022-01-19 Thread Vinod Koul
On 18-01-22, 19:50, Rob Herring wrote:
> The 'phandle-array' type is a bit ambiguous. It can be either just an
> array of phandles or an array of phandles plus args. Many schemas for
> phandle-array properties aren't clear in the schema which case applies
> though the description usually describes it.
> 
> The array of phandles case boils down to needing:
> 
> items:
>   maxItems: 1
> 
> The phandle plus args cases should typically take this form:
> 
> items:
>   - items:
>       - description: A phandle
>       - description: 1st arg cell
>       - description: 2nd arg cell
> 
> With this change, some examples need updating so that the bracketing of
> property values matches the schema.

Acked-By: Vinod Koul 

-- 
~Vinod