On 2019/7/4 14:46, Jia He wrote:
>
> On 2019/6/29 10:42, Xiongfeng Wang wrote:
>> We set 'cpu_possible_mask' based on the enabled GICC nodes in MADT. If a
>> GICC node is disabled, we skip initializing the kernel data structures for
>> that CPU.
>>
>> To support CPU hotplug, we need to initialize some CPU-related data
>> structures in advance. This patch marks all the GICC nodes as possible CPUs
>> and only the enabled GICC nodes as present CPUs.
>>
>> Signed-off-by: Xiongfeng Wang <wangxiongfe...@huawei.com>
>> ---
>> arch/arm64/kernel/setup.c | 2 +-
>> arch/arm64/kernel/smp.c | 11 +++++------
>> 2 files changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
>> index 7e541f9..7f4d12a 100644
>> --- a/arch/arm64/kernel/setup.c
>> +++ b/arch/arm64/kernel/setup.c
>> @@ -359,7 +359,7 @@ static int __init topology_init(void)
>> for_each_online_node(i)
>> register_one_node(i);
>> - for_each_possible_cpu(i) {
>> + for_each_online_cpu(i) {
>
> Have you considered the non-ACPI case, and setting "maxcpus=n" in the host
> kernel boot parameters?
Thanks for pointing that out. I haven't considered the non-ACPI mode; I should
add an ACPI check in 'smp_prepare_cpus()'.
'maxcpus' is checked when we actually online the CPU, so I think it is not
affected.
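
A rough, untested sketch of the check I have in mind, assuming we simply guard
the MADT flag test with 'acpi_disabled' so the DT boot path keeps the old
behaviour (names taken from the patch above):

	/* in the smp_prepare_cpus() per-CPU loop */
	if (err)
		continue;

	/*
	 * When booting via ACPI, only GICC entries with ACPI_MADT_ENABLED
	 * become present; with ACPI disabled (DT boot), mark every
	 * enumerated CPU present as before.
	 */
	if (acpi_disabled ||
	    (cpu_madt_gicc[cpu].flags & ACPI_MADT_ENABLED))
		set_cpu_present(cpu, true);

	numa_store_cpu_info(cpu);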
Thanks,
Xiongfeng
>
> ---
> Cheers,
> Justin (Jia He)
>
>
>> struct cpu *cpu = &per_cpu(cpu_data.cpu, i);
>> cpu->hotpluggable = 1;
>> register_cpu(cpu, i);
>> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
>> index 6dcf960..6d9983c 100644
>> --- a/arch/arm64/kernel/smp.c
>> +++ b/arch/arm64/kernel/smp.c
>> @@ -525,16 +525,14 @@ struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
>> {
>> u64 hwid = processor->arm_mpidr;
>> - if (!(processor->flags & ACPI_MADT_ENABLED)) {
>> - pr_debug("skipping disabled CPU entry with 0x%llx MPIDR\n", hwid);
>> - return;
>> - }
>> -
>> if (hwid & ~MPIDR_HWID_BITMASK || hwid == INVALID_HWID) {
>> pr_err("skipping CPU entry with invalid MPIDR 0x%llx\n", hwid);
>> return;
>> }
>> + if (!(processor->flags & ACPI_MADT_ENABLED))
>> + pr_debug("disabled CPU entry with 0x%llx MPIDR\n", hwid);
>> +
>> if (is_mpidr_duplicate(cpu_count, hwid)) {
>> pr_err("duplicate CPU MPIDR 0x%llx in MADT\n", hwid);
>> return;
>> @@ -755,7 +753,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
>> if (err)
>> continue;
>> - set_cpu_present(cpu, true);
>> + if ((cpu_madt_gicc[cpu].flags & ACPI_MADT_ENABLED))
>> + set_cpu_present(cpu, true);
>> numa_store_cpu_info(cpu);
>> }
>> }
>