On 11/6/20 9:30 am, Jonathan Brandmeyer wrote:
> We've patched the RTEMS kernel in order to support using the Zynq on-chip
> memory as inner-cacheable memory. The enclosed patch should apply cleanly
> to master.
>
> Background: During normal startup, the ROM bootloader performs
> vendor-specific initialization of core 1, and then sits in a wait-for-event
> loop until a special value has been written to a specific address in OCM.
> In that state, the MMU has not yet been initialized and core 1 is treating
> OCM as Device memory.
>
> By the time the RTEMS boot gets to _CPU_SMP_Start_processor, core 0's MMU
> has already been initialized with the application-defined memory map. I'd
> like to use the on-chip memory as inner cacheable memory in my application.
> In order to ensure that the kick address write actually becomes visible to
> core 1, a cache line flush of the affected line is necessary prior to
> sending the event that wakes up the other core.
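
For context, here is a minimal sketch of the wake-up sequence described
above. It is not the actual patch: the kick address 0xFFFFFFF0, the helper
name zynq_wake_secondary_core, and the exact ordering are assumptions, and
only the RTEMS CP15 helpers that exist upstream are used.

    #include <stdint.h>
    #include <rtems/score/cpu.h>
    #include <libcpu/arm-cp15.h>

    /* Assumption: the Zynq-7000 ROM's wait-for-event loop polls this OCM word. */
    #define ZYNQ_OCM_KICK_ADDRESS 0xfffffff0U

    static void zynq_wake_secondary_core(uint32_t entry_point)
    {
      volatile uint32_t *kick = (volatile uint32_t *) ZYNQ_OCM_KICK_ADDRESS;

      /* With OCM mapped inner-cacheable on core 0, this store can sit in the
         L1D cache while core 1 still treats OCM as Device memory. */
      *kick = entry_point;

      /* Clean the affected L1D line via CP15 so the write reaches the OCM. */
      arm_cp15_data_cache_clean_line((const void *) ZYNQ_OCM_KICK_ADDRESS);

      /* Ensure the maintenance has completed before waking core 1. */
      _ARM_Data_synchronization_barrier();
      _ARM_Send_event();
    }
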
Have the patches been tested with the OCM in the default state?

Chris

> I also added an invalidation prior to the kick-address write out of an
> abundance of caution. It shouldn't be necessary, but I had a hard time
> proving it definitively.
>
> There are a plethora of cache maintenance functions available for the job
> in RTEMS. I picked an inline helper that operates directly on CP15. The
> code's commentary suggests that the L2 hasn't been initialized yet, and the
> higher-level `rtems_cache_*_multiple_data_lines` API affects both L1D and
> L2. Also, I'm using inner-cacheable/outer-shareable memory attributes for
> OCM specifically because of where it sits in the SoC's bus fabric, so it
> turns out that we *never* need to flush L2 for that memory anyway.
>
> --
> Jonathan Brandmeyer
> PlanetiQ

_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
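
For readers comparing the two maintenance flavours Jonathan mentions, a
rough sketch follows. The kick address and the helper names wrapping the
calls are illustrative assumptions; the CP15 helper and the cache-manager
routine are the upstream RTEMS functions being contrasted.

    #include <stdint.h>
    #include <libcpu/arm-cp15.h>
    #include <rtems/rtems/cache.h>

    /* Illustrative address only; see the Zynq TRM for the real kick location. */
    #define KICK_ADDRESS ((const void *) 0xfffffff0U)

    static void invalidate_kick_line_l1_only(void)
    {
      /* CP15 inline helper: L1D only, usable before the L2C-310 is enabled. */
      arm_cp15_data_cache_invalidate_line(KICK_ADDRESS);
    }

    static void invalidate_kick_line_l1_and_l2(void)
    {
      /* Cache-manager API: also performs outer (L2) maintenance. With OCM
         mapped inner-cacheable/outer-shareable its lines never reach the
         outer cache, so the extra L2 work buys nothing for this address. */
      rtems_cache_invalidate_multiple_data_lines(KICK_ADDRESS, sizeof(uint32_t));
    }
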