On 8/26/23 04:22, Marc Zyngier wrote:

Hi Marc!

> The GIC definitely has the NS bit routed to it. Otherwise, the secure
> configuration would just be an utter joke. Just try it.

Thank you for your response. I'd like to revisit my earlier point about the distinction between the NS bit and the AxPROT[1] bits in the context of monitor mode: in monitor mode, the NS bit does not determine the security state of the CPU core (monitor mode is always secure). Even so, the NS bit remains significant for other purposes, such as selecting the bank for accesses to certain CP15 registers -- and, if I understand Chen-Yu correctly, for some GIC registers as well. That would require a dedicated NS signal routed to the GIC so that it can distinguish between "secure, NS=0" and "secure, NS=1" accesses, which is why I asked whether such a thing exists.

I understand that the GIC is designed to be aware of the CPU's security state (via the existing AxPROT[1] signals) so that it can protect its sensitive registers. Unless I misunderstand, this is the point you made here (my interpretation -- correct me if I'm wrong -- is that you are using "NS bit" as a metonym for "security state"). However, my question was meant to draw out further information from Chen-Yu about the possibility that NS is significant when accessing the GIC even in monitor mode. Alternatively, his point might merely be that the GIC permits different kinds of access depending on the CPU's security state, which aligns with the viewpoint you've reiterated.

I apologize if my previous message didn't convey this context clearly enough. My goal was to untangle this nuanced aspect of the NS bit in monitor mode, and to determine whether NS needs to be set/cleared during GIC setup to work around the banking, or whether the value of the NS bit inside psci_arch_init() is truly of no consequence because of monitor mode.

> Well, history is unfortunately against you on that front. Running on
> the secure side definitely was a requirement when this code was
> initially written, as the AW BSP *required* to run on the secure side.
>
> If that requirement is no more, great. But I don't think you can
> decide that unilaterally.

I have no idea when/if this requirement was changed. It might have never happened "formally": perhaps at some point, the SCR.NS=1 code got added after the call to psci_arch_init(), breaking (that version of) the AW BSP, and nobody ever complained or changed it back... so it stayed.

But we're starting to digress from what _this_ patch does. The intent here is only to remove two lines of code that (we are in the process of confirming) have no effect. I'm not touching the code that *actually* determines the world into which monitor mode exits.

Cheers,
Sam
