On 4/4/2023 10:23 pm, Kinsey Moore wrote:
> On Mon, Apr 3, 2023 at 8:00 PM Chris Johns <chr...@rtems.org> wrote:
> 
>     On 31/3/2023 8:13 am, Kinsey Moore wrote:
>     > Xilinx wrote their A53 HAL with the assumption that the CPU did not
>     > support cache invalidation without a flush, so the flush and
>     > invalidation functions were combined and all range invalidations were
>     > promoted to flush-and-invalidate. The implementation written for lwIP
>     > followed the original intent of the function and thus was not flushing
>     > in some cases when it needed to. This resolves that issue, preventing
>     > DMA transmit errors in some cases (see the transmit-path sketch after
>     > the diff).
>     > ---
>     >  rtemslwip/zynqmp/xil_shims.c | 7 ++++++-
>     >  1 file changed, 6 insertions(+), 1 deletion(-)
>     >
>     > diff --git a/rtemslwip/zynqmp/xil_shims.c b/rtemslwip/zynqmp/xil_shims.c
>     > index 2eda0c5..1b1b3cf 100644
>     > --- a/rtemslwip/zynqmp/xil_shims.c
>     > +++ b/rtemslwip/zynqmp/xil_shims.c
>     > @@ -102,7 +102,12 @@ void XScuGic_DisableIntr ( u32 DistBaseAddress, u32 Int_Id )
>     >    rtems_interrupt_vector_disable( Int_Id );
>     >  }
>     > 
>     > +/*
>     > + * The Xilinx code was written such that it assumed there was no invalidate-only
>     > + * functionality on A53 cores. This function must flush and invalidate because
>     > + * of how they mapped things.
>     > + */
>     >  void Xil_DCacheInvalidateRange( INTPTR adr, INTPTR len )
>     >  {
>     > -  rtems_cache_invalidate_multiple_data_lines( (const void *) adr, len );
>     > +  rtems_cache_flush_multiple_data_lines( (const void *) adr, len );
>     >  }
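
A minimal sketch of the transmit-path ordering the commit message describes,
for reference. The emac_dma_post_tx() hook and the buffer handling here are
hypothetical stand-ins, not code from the tree:

#include <string.h>
#include <stdint.h>
#include <stddef.h>
#include <rtems.h>  /* rtems_cache_flush_multiple_data_lines() */

/* Hypothetical stand-in for handing a buffer to the EMAC DMA engine. */
extern void emac_dma_post_tx( const void *buf, size_t len );

void tx_frame( uint8_t *dma_buf, const uint8_t *frame, size_t len )
{
  memcpy( dma_buf, frame, len );  /* frame data is now dirty in the D-cache */

  /*
   * Clean the dirty lines out to memory so the non-coherent DMA engine
   * reads the frame rather than stale RAM. With the shim above,
   * Xil_DCacheInvalidateRange() now performs this clean (plus an
   * invalidate), which is what stops the transmit errors.
   */
  rtems_cache_flush_multiple_data_lines( dma_buf, len );

  emac_dma_post_tx( dma_buf, len );
}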
> 
>     Does the Xilinx code use Xil_DCacheInvalidateRange in any DMA receive
>     paths? If it does, is this change correct, given that the invalidate has
>     been removed?
> 
> 
> It just so happens that, the way the code was written, a flush and invalidate
> works fine for the receive path. The invalidation that occurs in the receive
> path happens before the pointer to the memory is passed to the DMA engine, so
> a flush there doesn't hurt anything (at least for this particular driver). If
> more Xilinx drivers get pulled in, that may have to be reevaluated.
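
To make that ordering concrete, a minimal sketch of the receive path as
described above; emac_dma_post_rx() is a hypothetical stand-in for the real
driver:

#include <stdint.h>
#include <stddef.h>
#include "xil_types.h"  /* INTPTR, assuming the usual Xilinx type header */

extern void Xil_DCacheInvalidateRange( INTPTR adr, INTPTR len );

/* Hypothetical stand-in for posting an empty buffer to the DMA engine. */
extern void emac_dma_post_rx( void *buf, size_t len );

void rx_arm_buffer( uint8_t *dma_buf, size_t len )
{
  /*
   * The lines are cleaned and invalidated *before* the device writes, so
   * the extra write-back added by the shim cannot clobber received data;
   * it only pushes out whatever stale contents the CPU had cached. After
   * the DMA completes, the CPU misses and refetches fresh data from RAM.
   */
  Xil_DCacheInvalidateRange( (INTPTR) dma_buf, (INTPTR) len );

  emac_dma_post_rx( dma_buf, len );
}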

Sure. If you think it is fine and are happy, then that is all that is needed.

Chris