Thanks Gerd for raising this open question -- how to support more processors
given the HOB size limitation.

It looks like multiple HOBs are the only way, since we have to store each CPU's info?
Sorry, allow me to ask a basic question: why is the HOB data length defined as
UINT16, causing the HOB size limitation? Is there any design background here?
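For reference -- if I read MdePkg/Include/Pi/PiHob.h correctly, the limit comes
from the generic HOB header itself, whose length field is a UINT16:

  typedef struct {
    UINT16    HobType;
    UINT16    HobLength;   // total HOB size in bytes, header included, so <= 0xFFFF
    UINT32    Reserved;
  } EFI_HOB_GENERIC_HEADER;

So a single GUID HOB can never carry more than ~64KB of data, whatever
DataLength we pass to BuildGuidHob().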

For the SMBASE case:
I doubt CpuIndex is really required: we cannot avoid defining another HOB, and
we cannot avoid documenting the CPU range covered by each HOB (0 - 8191,
8192 - 16382, ...). Then what is the meaning of CpuIndex? We don't expect the
HOB producer to create CPU ranges of smaller granularity, where one HOB's
CpuIndex is tied to the previous HOB's NumberOfCpus. With the above in mind, I
prefer to keep the existing patch as is, and only add a statement that
gSmmBaseHobGuid supports at most 8191 processors (which means fixing the CPU
range covered by each HOB)?
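To make the comparison concrete, below is a rough consumer-side sketch,
assuming Ray's proposed SMM_BASE_HOB_DATA layout and the existing HobLib APIs
(CollectSmBase and SmBaseTable are just illustrative names):

  #include <PiPei.h>
  #include <Library/HobLib.h>

  extern EFI_GUID  gSmmBaseHobGuid;   // normally provided via UefiCpuPkg.dec

  #pragma pack(1)
  typedef struct {
    UINT32    CpuIndex;
    UINT32    NumberOfCpus;
    UINT64    SmBase[];
  } SMM_BASE_HOB_DATA;
  #pragma pack()

  VOID
  CollectSmBase (
    IN OUT UINT64  *SmBaseTable,   // caller-allocated, MaxCpus entries
    IN     UINT32  MaxCpus
    )
  {
    VOID               *GuidHob;
    SMM_BASE_HOB_DATA  *Data;
    UINT32             Index;

    //
    // Instance order in the HOB list is random, so every instance has to be
    // visited and its SmBase[] entries copied into the flat table at the
    // offset given by CpuIndex.
    //
    GuidHob = GetFirstGuidHob (&gSmmBaseHobGuid);
    while (GuidHob != NULL) {
      Data = GET_GUID_HOB_DATA (GuidHob);
      for (Index = 0; Index < Data->NumberOfCpus; Index++) {
        if (Data->CpuIndex + Index < MaxCpus) {
          SmBaseTable[Data->CpuIndex + Index] = Data->SmBase[Index];
        }
      }
      GuidHob = GetNextGuidHob (&gSmmBaseHobGuid, GET_NEXT_HOB (GuidHob));
    }
  }

Either way, the consumer ends up rebuilding a flat table, as Laszlo described
below.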

Thanks,
Jiaxin



> -----Original Message-----
> From: Laszlo Ersek <ler...@redhat.com>
> Sent: Friday, January 20, 2023 4:21 PM
> To: Ni, Ray <ray...@intel.com>; Gerd Hoffmann <kra...@redhat.com>; Wu,
> Jiaxin <jiaxin...@intel.com>
> Cc: devel@edk2.groups.io; Dong, Eric <eric.d...@intel.com>; Zeng, Star
> <star.z...@intel.com>; Kumar, Rahul R <rahul.r.ku...@intel.com>; Kinney,
> Michael D <michael.d.kin...@intel.com>; Zimmer, Vincent
> <vincent.zim...@intel.com>
> Subject: Re: [PATCH v3 1/5] UefiCpuPkg/SmmBaseHob.h: Add SMM Base
> HOB Data
> 
> On 1/18/23 16:06, Ni, Ray wrote:
> 
> > #pragma pack(1)
> > typedef struct {
> >   UINT32    CpuIndex;
> >   UINT32    NumberOfCpus;  // align to EFI_SEC_PLATFORM_INFORMATION_RECORD2.NumberOfCpus
> >   UINT64    SmBase[];
> > } SMM_BASE_HOB_DATA;
> > #pragma pack()
> >
> > For systems with fewer than 8K CPUs, one HOB is produced. CpuIndex is set
> > to 0, indicating the HOB describes the CPUs from 0 to NumberOfCpus-1.
> >
> > The HOB list may contain multiple such HOB instances, each describing the
> > information for CPUs from CpuIndex to CpuIndex + NumberOfCpus - 1.
> > The instance order in the HOB list is random, so the consumer cannot
> > assume the CpuIndex of the first instance is 0.
> 
> When using discontiguous blocks that are limited to ~64KB each:
> 
> - The consumer won't be able to access elements of the "conceptual" big
> array in a truly random (= random-access) fashion. There won't be a
> single contiguous domain of valid subscripts. It's "bank switching", and
> switching banks should be avoided IMO. It effectively replaces the
> vector data structure with a linked list. The consequence is that the
> consumer will have to either (a) build a new (temporary, or permanent)
> index table of sorts, for implementing the "conceptual" big array as a
> factual contiguous array, or (b) traverse the HOB list multiple times.
> 
> - If the element size of the array increases (which is otherwise
> possible to do compatibly, e.g. by placing a GUID and/or revision# in
> the HOB), the number of elements that fit in a single HOB decreases. I
> think that's an artifact that needlessly complicates debugging, and
> maybe performance too (it might increase the number of full-list
> traversals).
> 
> - With relatively many elements fitting into a single HOB, on most
> platforms, just one HOB is going to be used. While that may be good for
> performance, it is not good for code coverage (testing). The quirky
> indexing method will not be exercised by most platforms.
> 
> What's wrong with:
> 
> - restricting the creation of all such HOBs after
> "gEfiPeiMemoryDiscoveredPpiGuid"
> 
> - using a page allocation, for representing the array contiguously
> 
> - in the HOB, only storing the address of the page allocation.
> 
> Page allocations performed after gEfiPeiMemoryDiscoveredPpiGuid will not
> be moved, so the address in the HOB would be stable.
> 
> Laszlo


