On 21/03/2019 20:02, Sodagudi Prasad wrote:
> On 2019-03-21 06:34, Julien Thierry wrote:
>> Hi Prasad,
>>
>> On 21/03/2019 02:07, Prasad Sodagudi wrote:
>>> Preserve the bitfields of PMCR_EL0 (AArch64) during PMU reset.
>>> The reset routine should write a 1 only to the PMCR.C and PMCR.P
>>> fields to reset the counters. Other fields should not be changed,
>>> as they could have been set before PMU initialization and their
>>> value must be preserved even after reset.
>>>
>>
>> Are there any particular bits you are concerned about? Apart from the
>> RO ones and the Res0 ones (to which we are already writing 0), I see:
>>
>> DP -> irrelevant for non-secure
>> X -> This one is the only potentially interesting one; however, it
>> resets to an architecturally unknown value, so unless we know for a
>> fact it was set beforehand, we probably want to clear it
>> D -> ignored when we have LC set (and we do)
>> E -> Since this is the function we use to reset the PMU on the current
>> CPU, we probably want to set this bit to 0 regardless of its previous
>> value
>>
>> So, is there any issue this patch is solving?
>
> Hi Julien,
>
> Thanks for taking a look at this patch. Yes, on our Qualcomm
> platforms we observed that the X bit is getting cleared by the
> kernel. This bit is set by firmware for Qualcomm use cases, and
> the non-secure world is clearing it without this patch.
> I think changing this register update to a read-modify-write
> style makes sense.
>
Maybe for the X bit, but for the E bit this seems like the wrong thing
to do. We want to set the E bit to 0 here.
And for platforms that don't have firmware touching the X bit (or
rather the PMCR as a whole), I'd like to understand whether it would be
valid to leave this bit set to an architecturally unknown value and
preserve that value.
Thanks,
>>> Signed-off-by: Prasad Sodagudi <psoda...@codeaurora.org>
>>> ---
>>> arch/arm64/kernel/perf_event.c | 4 ++--
>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/perf_event.c
>>> b/arch/arm64/kernel/perf_event.c
>>> index 4addb38..0c1afdd 100644
>>> --- a/arch/arm64/kernel/perf_event.c
>>> +++ b/arch/arm64/kernel/perf_event.c
>>> @@ -868,8 +868,8 @@ static void armv8pmu_reset(void *info)
>>> * Initialize & Reset PMNC. Request overflow interrupt for
>>> * 64 bit cycle counter but cheat in armv8pmu_write_counter().
>>> */
>>> - armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C |
>>> - ARMV8_PMU_PMCR_LC);
>>> + armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_P |
>>> + ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_LC);
>>> }
>>>
>>> static int __armv8_pmuv3_map_event(struct perf_event *event,
>>>
>
--
Julien Thierry