Reviewed-by: Ray Ni <ray...@intel.com>

Thanks,
Ray

> -----Original Message-----
> From: Wu, Jiaxin <jiaxin...@intel.com>
> Sent: Friday, December 15, 2023 5:55 PM
> To: devel@edk2.groups.io
> Cc: Laszlo Ersek <ler...@redhat.com>; Dong, Eric <eric.d...@intel.com>; Ni, Ray <ray...@intel.com>; Zeng, Star <star.z...@intel.com>; Gerd Hoffmann <kra...@redhat.com>; Kumar, Rahul R <rahul.r.ku...@intel.com>
> Subject: [PATCH v4 8/8] UefiCpuPkg/PiSmmCpuDxeSmm: Consume SmmCpuSyncLib
>
> The SmmCpuSyncLib library class defines the SMM CPU sync flow,
> which is aligned with the existing SMM CPU driver's sync behavior.
> This patch consumes the SmmCpuSyncLib instance directly.
>
> With this change, the SMM CPU sync flow/logic can be customized
> with a different implementation for any purpose, e.g. performance
> tuning or handling specific registers.
>
> Cc: Laszlo Ersek <ler...@redhat.com>
> Cc: Eric Dong <eric.d...@intel.com>
> Cc: Ray Ni <ray...@intel.com>
> Cc: Zeng Star <star.z...@intel.com>
> Cc: Gerd Hoffmann <kra...@redhat.com>
> Cc: Rahul Kumar <rahul1.ku...@intel.com>
> Signed-off-by: Jiaxin Wu <jiaxin...@intel.com>
> ---
>  UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c        | 274 +++++++--------------------
>  UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h   |   6 +-
>  UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf |   1 +
>  3 files changed, 68 insertions(+), 213 deletions(-)
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> index 9b477b6695..4fbb0bba87 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c
> @@ -27,122 +27,10 @@ MM_COMPLETION  mSmmStartupThisApToken;
>  //
>  // Processor specified by mPackageFirstThreadIndex[PackageIndex] will do the package-scope register check.
>  //
>  UINT32  *mPackageFirstThreadIndex = NULL;
>
> -/**
> -  Performs an atomic compare exchange operation to get semaphore.
> -  The compare exchange operation must be performed using
> -  MP safe mechanisms.
> -
> -  @param      Sem        IN:  32-bit unsigned integer
> -                         OUT: original integer - 1
> -  @return     Original integer - 1
> -
> -**/
> -UINT32
> -WaitForSemaphore (
> -  IN OUT  volatile UINT32  *Sem
> -  )
> -{
> -  UINT32  Value;
> -
> -  for ( ; ;) {
> -    Value = *Sem;
> -    if ((Value != 0) &&
> -        (InterlockedCompareExchange32 (
> -           (UINT32 *)Sem,
> -           Value,
> -           Value - 1
> -           ) == Value))
> -    {
> -      break;
> -    }
> -
> -    CpuPause ();
> -  }
> -
> -  return Value - 1;
> -}
> -
> -/**
> -  Performs an atomic compare exchange operation to release semaphore.
> -  The compare exchange operation must be performed using
> -  MP safe mechanisms.
> -
> -  @param      Sem        IN:  32-bit unsigned integer
> -                         OUT: original integer + 1
> -  @return     Original integer + 1
> -
> -**/
> -UINT32
> -ReleaseSemaphore (
> -  IN OUT  volatile UINT32  *Sem
> -  )
> -{
> -  UINT32  Value;
> -
> -  do {
> -    Value = *Sem;
> -  } while (Value + 1 != 0 &&
> -           InterlockedCompareExchange32 (
> -             (UINT32 *)Sem,
> -             Value,
> -             Value + 1
> -             ) != Value);
> -
> -  return Value + 1;
> -}
> -
> -/**
> -  Performs an atomic compare exchange operation to lock semaphore.
> -  The compare exchange operation must be performed using
> -  MP safe mechanisms.
> -
> -  @param      Sem        IN:  32-bit unsigned integer
> -                         OUT: -1
> -  @return     Original integer
> -
> -**/
> -UINT32
> -LockdownSemaphore (
> -  IN OUT  volatile UINT32  *Sem
> -  )
> -{
> -  UINT32  Value;
> -
> -  do {
> -    Value = *Sem;
> -  } while (InterlockedCompareExchange32 (
> -             (UINT32 *)Sem,
> -             Value,
> -             (UINT32)-1
> -             ) != Value);
> -
> -  return Value;
> -}
> -
> -/**
> -  Used for BSP to wait all APs.
> -  Wait all APs to performs an atomic compare exchange operation to release semaphore.
> -
> -  @param      NumberOfAPs      AP number
> -
> -**/
> -VOID
> -WaitForAllAPs (
> -  IN UINTN  NumberOfAPs
> -  )
> -{
> -  UINTN  BspIndex;
> -
> -  BspIndex = mSmmMpSyncData->BspIndex;
> -  while (NumberOfAPs-- > 0) {
> -    WaitForSemaphore (mSmmMpSyncData->CpuData[BspIndex].Run);
> -  }
> -}
> -
>  /**
>    Used for BSP to release all APs.
>    Performs an atomic compare exchange operation to release semaphore
>    for each AP.
>
> @@ -154,57 +42,15 @@ ReleaseAllAPs (
>  {
>    UINTN  Index;
>
>    for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
>      if (IsPresentAp (Index)) {
> -      ReleaseSemaphore (mSmmMpSyncData->CpuData[Index].Run);
> +      SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, Index, gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu);
>      }
>    }
>  }
>
> -/**
> -  Used for BSP to release one AP.
> -
> -  @param      ApSem     IN:  32-bit unsigned integer
> -                        OUT: original integer + 1
> -**/
> -VOID
> -ReleaseOneAp (
> -  IN OUT  volatile UINT32  *ApSem
> -  )
> -{
> -  ReleaseSemaphore (ApSem);
> -}
> -
> -/**
> -  Used for AP to wait BSP.
> -
> -  @param      ApSem     IN:  32-bit unsigned integer
> -                        OUT: original integer - 1
> -**/
> -VOID
> -WaitForBsp (
> -  IN OUT  volatile UINT32  *ApSem
> -  )
> -{
> -  WaitForSemaphore (ApSem);
> -}
> -
> -/**
> -  Used for AP to release BSP.
> -
> -  @param      BspSem    IN:  32-bit unsigned integer
> -                        OUT: original integer + 1
> -**/
> -VOID
> -ReleaseBsp (
> -  IN OUT  volatile UINT32  *BspSem
> -  )
> -{
> -  ReleaseSemaphore (BspSem);
> -}
> -
>  /**
>    Check whether the index of CPU perform the package level register
>    programming during System Management Mode initialization.
>
>    The index of Processor specified by mPackageFirstThreadIndex[PackageIndex]
> @@ -292,35 +138,35 @@ AllCpusInSmmExceptBlockedDisabled (
>
>    BlockedCount  = 0;
>    DisabledCount = 0;
>
>    //
> -  // Check to make sure mSmmMpSyncData->Counter is valid and not locked.
> +  // Check to make sure the CPU arrival count is valid and not locked.
>    //
> -  ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
> +  ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
>
>    //
>    // Check whether all CPUs in SMM.
>    //
> -  if (*mSmmMpSyncData->Counter == mNumberOfCpus) {
> +  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) == mNumberOfCpus) {
>      return TRUE;
>    }
>
>    //
>    // Check for the Blocked & Disabled Exceptions Case.
>    //
>    GetSmmDelayedBlockedDisabledCount (NULL, &BlockedCount, &DisabledCount);
>
>    //
> -  // *mSmmMpSyncData->Counter might be updated by all APs concurrently. The value
> +  // The CPU arrival count might be updated by all APs concurrently. The value
>    // can be dynamic changed. If some Aps enter the SMI after the BlockedCount &
> -  // DisabledCount check, then the *mSmmMpSyncData->Counter will be increased, thus
> -  // leading the *mSmmMpSyncData->Counter + BlockedCount + DisabledCount > mNumberOfCpus.
> +  // DisabledCount check, then the CPU arrival count will be increased, thus
> +  // leading the retrieved CPU arrival count + BlockedCount + DisabledCount > mNumberOfCpus.
>    // since the BlockedCount & DisabledCount are local variable, it's ok here only for
>    // the checking of all CPUs In Smm.
>    //
> -  if (*mSmmMpSyncData->Counter + BlockedCount + DisabledCount >= mNumberOfCpus) {
> +  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) + BlockedCount + DisabledCount >= mNumberOfCpus) {
>      return TRUE;
>    }
>
>    return FALSE;
>  }
> @@ -396,11 +242,11 @@ SmmWaitForApArrival (
>    PERF_FUNCTION_BEGIN ();
>
>    DelayedCount = 0;
>    BlockedCount = 0;
>
> -  ASSERT (*mSmmMpSyncData->Counter <= mNumberOfCpus);
> +  ASSERT (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) <= mNumberOfCpus);
>
>    LmceEn     = FALSE;
>    LmceSignal = FALSE;
>    if (mMachineCheckSupported) {
>      LmceEn     = IsLmceOsEnabled ();
> @@ -447,11 +293,11 @@ SmmWaitForApArrival (
>    //    d) We don't add code to check SMI disabling status to skip sending IPI to SMI disabled APs, because:
>    //       - In traditional flow, SMI disabling is discouraged.
>    //       - In relaxed flow, CheckApArrival() will check SMI disabling status before calling this function.
>    //       In both cases, adding SMI-disabling checking code increases overhead.
>    //
> -  if (*mSmmMpSyncData->Counter < mNumberOfCpus) {
> +  if (SmmCpuSyncGetArrivedCpuCount (mSmmMpSyncData->SyncContext) < mNumberOfCpus) {
>      //
>      // Send SMI IPIs to bring outside processors in
>      //
>      for (Index = 0; Index < mMaxNumberOfCpus; Index++) {
>        if (!(*(mSmmMpSyncData->CpuData[Index].Present)) && (gSmmCpuPrivate->ProcessorInfo[Index].ProcessorId != INVALID_APIC_ID)) {
> @@ -610,18 +456,20 @@ VOID
>  BSPHandler (
>    IN UINTN              CpuIndex,
>    IN SMM_CPU_SYNC_MODE  SyncMode
>    )
>  {
> +  UINTN          CpuCount;
>    UINTN          Index;
>    MTRR_SETTINGS  Mtrrs;
>    UINTN          ApCount;
>    BOOLEAN        ClearTopLevelSmiResult;
>    UINTN          PresentCount;
>
>    ASSERT (CpuIndex == mSmmMpSyncData->BspIndex);
> -  ApCount = 0;
> +  CpuCount = 0;
> +  ApCount  = 0;
>
>    PERF_FUNCTION_BEGIN ();
>
>    //
>    // Flag BSP's presence
> @@ -659,28 +507,31 @@ BSPHandler (
>      // Wait for APs to arrive
>      //
>      SmmWaitForApArrival ();
>
>      //
> -    // Lock the counter down and retrieve the number of APs
> +    // Lock door for late coming CPU checkin and retrieve the Arrived number of APs
>      //
>      *mSmmMpSyncData->AllCpusInSync = TRUE;
> -    ApCount                        = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
> +
> +    SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex, &CpuCount);
> +
> +    ApCount = CpuCount - 1;
>
>      //
>      // Wait for all APs to get ready for programming MTRRs
>      //
> -    WaitForAllAPs (ApCount);
> +    SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>
>      if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
>        //
>        // Signal all APs it's time for backup MTRRs
>        //
>        ReleaseAllAPs ();
>
>        //
> -      // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
> +      // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to enter SMM at
>        // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
>        // to a large enough value to avoid this situation.
>        // Note: For HT capable CPUs, threads within a core share the same set of MTRRs.
>        // We do the backup first and then set MTRR to avoid race condition for threads
>        // in the same core.
> @@ -688,28 +539,28 @@ BSPHandler (
>        MtrrGetAllMtrrs (&Mtrrs);
>
>        //
>        // Wait for all APs to complete their MTRR saving
>        //
> -      WaitForAllAPs (ApCount);
> +      SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>
>        //
>        // Let all processors program SMM MTRRs together
>        //
>        ReleaseAllAPs ();
>
>        //
> -      // WaitForAllAPs() may wait for ever if an AP happens to enter SMM at
> +      // SmmCpuSyncWaitForAPs() may wait for ever if an AP happens to enter SMM at
>        // exactly this point. Please make sure PcdCpuSmmMaxSyncLoops has been set
>        // to a large enough value to avoid this situation.
>        //
>        ReplaceOSMtrrs (CpuIndex);
>
>        //
>        // Wait for all APs to complete their MTRR programming
>        //
> -      WaitForAllAPs (ApCount);
> +      SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>      }
>    }
>
>    //
>    // The BUSY lock is initialized to Acquired state
> @@ -741,14 +592,18 @@ BSPHandler (
>    // make those APs to exit SMI synchronously. APs which arrive later will be excluded and
>    // will run through freely.
>    //
>    if ((SyncMode != SmmCpuSyncModeTradition) && !SmmCpuFeaturesNeedConfigureMtrrs ()) {
>      //
> -    // Lock the counter down and retrieve the number of APs
> +    // Lock door for late coming CPU checkin and retrieve the Arrived number of APs
>      //
>      *mSmmMpSyncData->AllCpusInSync = TRUE;
> -    ApCount                        = LockdownSemaphore (mSmmMpSyncData->Counter) - 1;
> +
> +    SmmCpuSyncLockDoor (mSmmMpSyncData->SyncContext, CpuIndex, &CpuCount);
> +
> +    ApCount = CpuCount - 1;
> +
>      //
>      // Make sure all APs have their Present flag set
>      //
>      while (TRUE) {
>        PresentCount = 0;
> @@ -771,11 +626,11 @@ BSPHandler (
>    ReleaseAllAPs ();
>
>    //
>    // Wait for all APs to complete their pending tasks
>    //
> -  WaitForAllAPs (ApCount);
> +  SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>
>    if (SmmCpuFeaturesNeedConfigureMtrrs ()) {
>      //
>      // Signal APs to restore MTRRs
>      //
> @@ -788,11 +643,11 @@ BSPHandler (
>      MtrrSetAllMtrrs (&Mtrrs);
>
>      //
>      // Wait for all APs to complete MTRR programming
>      //
> -    WaitForAllAPs (ApCount);
> +    SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>    }
>
>    //
>    // Stop source level debug in BSP handler, the code below will not be
>    // debugged.
> @@ -816,11 +671,11 @@ BSPHandler (
>
>    //
>    // Gather APs to exit SMM synchronously. Note the Present flag is cleared by now but
>    // WaitForAllAps does not depend on the Present flag.
>    //
> -  WaitForAllAPs (ApCount);
> +  SmmCpuSyncWaitForAPs (mSmmMpSyncData->SyncContext, ApCount, CpuIndex);
>
>    //
>    // At this point, all APs should have exited from APHandler().
>    // Migrate the SMM MP performance logging to standard SMM performance logging.
>    // Any SMM MP performance logging after this point will be migrated in next SMI.
> @@ -842,11 +697,11 @@ BSPHandler ( > } > > // > // Allow APs to check in from this point on > // > - *mSmmMpSyncData->Counter = 0; > + SmmCpuSyncContextReset (mSmmMpSyncData->SyncContext); > *mSmmMpSyncData->AllCpusInSync = FALSE; > mSmmMpSyncData->AllApArrivedWithException = FALSE; > > PERF_FUNCTION_END (); > } > @@ -912,21 +767,21 @@ APHandler ( > > if (!(*mSmmMpSyncData->InsideSmm)) { > // > // Give up since BSP is unable to enter SMM > // and signal the completion of this AP > - // Reduce the mSmmMpSyncData->Counter! > + // Reduce the CPU arrival count! > // > - WaitForSemaphore (mSmmMpSyncData->Counter); > + SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext, > CpuIndex); > return; > } > } else { > // > // Don't know BSP index. Give up without sending IPI to BSP. > - // Reduce the mSmmMpSyncData->Counter! > + // Reduce the CPU arrival count! > // > - WaitForSemaphore (mSmmMpSyncData->Counter); > + SmmCpuSyncCheckOutCpu (mSmmMpSyncData->SyncContext, > CpuIndex); > return; > } > } > > // > @@ -942,50 +797,50 @@ APHandler ( > > if ((SyncMode == SmmCpuSyncModeTradition) || > SmmCpuFeaturesNeedConfigureMtrrs ()) { > // > // Notify BSP of arrival at this point > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > } > > if (SmmCpuFeaturesNeedConfigureMtrrs ()) { > // > // Wait for the signal from BSP to backup MTRRs > // > - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Backup OS MTRRs > // > MtrrGetAllMtrrs (&Mtrrs); > > // > // Signal BSP the completion of this AP > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Wait for BSP's signal to program MTRRs > // > - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, > 
BspIndex); > > // > // Replace OS MTRRs with SMI MTRRs > // > ReplaceOSMtrrs (CpuIndex); > > // > // Signal BSP the completion of this AP > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > } > > while (TRUE) { > // > // Wait for something to happen > // > - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Check if BSP wants to exit SMM > // > if (!(*mSmmMpSyncData->InsideSmm)) { > @@ -1021,16 +876,16 @@ APHandler ( > > if (SmmCpuFeaturesNeedConfigureMtrrs ()) { > // > // Notify BSP the readiness of this AP to program MTRRs > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Wait for the signal from BSP to program MTRRs > // > - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Restore OS MTRRs > // > SmmCpuFeaturesReenableSmrr (); > @@ -1038,26 +893,26 @@ APHandler ( > } > > // > // Notify BSP the readiness of this AP to Reset states/semaphore for this > processor > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Wait for the signal from BSP to Reset states/semaphore for this > processor > // > - WaitForBsp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncWaitForBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > > // > // Reset states/semaphore for this processor > // > *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE; > > // > // Notify BSP the readiness of this AP to exit SMM > // > - ReleaseBsp (mSmmMpSyncData->CpuData[BspIndex].Run); > + SmmCpuSyncReleaseBsp (mSmmMpSyncData->SyncContext, CpuIndex, > BspIndex); > } > > /** > Checks whether the input token is the current used 
token. > > @@ -1321,11 +1176,11 @@ InternalSmmStartupThisAp ( > mSmmMpSyncData->CpuData[CpuIndex].Status = CpuStatus; > if (mSmmMpSyncData->CpuData[CpuIndex].Status != NULL) { > *mSmmMpSyncData->CpuData[CpuIndex].Status = EFI_NOT_READY; > } > > - ReleaseOneAp (mSmmMpSyncData->CpuData[CpuIndex].Run); > + SmmCpuSyncReleaseOneAp (mSmmMpSyncData->SyncContext, CpuIndex, > gSmmCpuPrivate->SmmCoreEntryContext.CurrentlyExecutingCpu); > > if (Token == NULL) { > AcquireSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy); > ReleaseSpinLock (mSmmMpSyncData->CpuData[CpuIndex].Busy); > } > @@ -1725,14 +1580,15 @@ SmiRendezvous ( > // > goto Exit; > } else { > // > // Signal presence of this processor > - // mSmmMpSyncData->Counter is increased here! > - // "ReleaseSemaphore (mSmmMpSyncData->Counter) == 0" means BSP > has already ended the synchronization. > + // CPU check in here! > + // "SmmCpuSyncCheckInCpu (mSmmMpSyncData->SyncContext, > CpuIndex)" return error means failed > + // to check in CPU. BSP has already ended the synchronization. > // > - if (ReleaseSemaphore (mSmmMpSyncData->Counter) == 0) { > + if (RETURN_ERROR (SmmCpuSyncCheckInCpu > (mSmmMpSyncData->SyncContext, CpuIndex))) { > // > // BSP has already ended the synchronization, so QUIT!!! > // Existing AP is too late now to enter SMI since BSP has already > ended the synchronization!!! 
>        //
>
> @@ -1824,12 +1680,10 @@ SmiRendezvous (
>      } else {
>        APHandler (CpuIndex, ValidSmi, mSmmMpSyncData->EffectiveSyncMode);
>      }
>    }
>
> -  ASSERT (*mSmmMpSyncData->CpuData[CpuIndex].Run == 0);
> -
>    //
>    // Wait for BSP's signal to exit SMI
>    //
>    while (*mSmmMpSyncData->AllCpusInSync) {
>      CpuPause ();
> @@ -1945,12 +1799,10 @@ InitializeSmmCpuSemaphores (
>    SemaphoreBlock = AllocatePages (Pages);
>    ASSERT (SemaphoreBlock != NULL);
>    ZeroMem (SemaphoreBlock, TotalSize);
>
>    SemaphoreAddr                                   = (UINTN)SemaphoreBlock;
> -  mSmmCpuSemaphores.SemaphoreGlobal.Counter       = (UINT32 *)SemaphoreAddr;
> -  SemaphoreAddr                                  += SemaphoreSize;
>    mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm     = (BOOLEAN *)SemaphoreAddr;
>    SemaphoreAddr                                  += SemaphoreSize;
>    mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync = (BOOLEAN *)SemaphoreAddr;
>    SemaphoreAddr                                  += SemaphoreSize;
>    mSmmCpuSemaphores.SemaphoreGlobal.PFLock        = (SPIN_LOCK *)SemaphoreAddr;
> @@ -1960,12 +1812,10 @@ InitializeSmmCpuSemaphores (
>    SemaphoreAddr += SemaphoreSize;
>
>    SemaphoreAddr                          = (UINTN)SemaphoreBlock + GlobalSemaphoresSize;
>    mSmmCpuSemaphores.SemaphoreCpu.Busy    = (SPIN_LOCK *)SemaphoreAddr;
>    SemaphoreAddr                         += ProcessorCount * SemaphoreSize;
> -  mSmmCpuSemaphores.SemaphoreCpu.Run     = (UINT32 *)SemaphoreAddr;
> -  SemaphoreAddr                         += ProcessorCount * SemaphoreSize;
>    mSmmCpuSemaphores.SemaphoreCpu.Present = (BOOLEAN *)SemaphoreAddr;
>
>    mPFLock                       = mSmmCpuSemaphores.SemaphoreGlobal.PFLock;
>    mConfigSmmCodeAccessCheckLock = mSmmCpuSemaphores.SemaphoreGlobal.CodeAccessCheckLock;
>
> @@ -1980,10 +1830,12 @@ VOID
>  EFIAPI
>  InitializeMpSyncData (
>    VOID
>    )
>  {
> +  RETURN_STATUS  Status;
> +
>    UINTN  CpuIndex;
>
>    if (mSmmMpSyncData != NULL) {
>      //
>      // mSmmMpSyncDataSize includes one structure of SMM_DISPATCHER_MP_SYNC_DATA, one
> @@ -2009,32 +1861,36 @@ InitializeMpSyncData (
>      }
>    }
>
>    mSmmMpSyncData->EffectiveSyncMode = mCpuSmmSyncMode;
>
> -  mSmmMpSyncData->Counter = mSmmCpuSemaphores.SemaphoreGlobal.Counter;
> +  Status = SmmCpuSyncContextInit (gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus, &mSmmMpSyncData->SyncContext);
> +  if (EFI_ERROR (Status)) {
> +    DEBUG ((DEBUG_ERROR, "InitializeMpSyncData: SmmCpuSyncContextInit return error %r!\n", Status));
> +    CpuDeadLoop ();
> +    return;
> +  }
> +
> +  ASSERT (mSmmMpSyncData->SyncContext != NULL);
> +
>    mSmmMpSyncData->InsideSmm     = mSmmCpuSemaphores.SemaphoreGlobal.InsideSmm;
>    mSmmMpSyncData->AllCpusInSync = mSmmCpuSemaphores.SemaphoreGlobal.AllCpusInSync;
>    ASSERT (
> -    mSmmMpSyncData->Counter != NULL && mSmmMpSyncData->InsideSmm != NULL &&
> +    mSmmMpSyncData->InsideSmm != NULL &&
>      mSmmMpSyncData->AllCpusInSync != NULL
>      );
> -  *mSmmMpSyncData->Counter       = 0;
>    *mSmmMpSyncData->InsideSmm     = FALSE;
>    *mSmmMpSyncData->AllCpusInSync = FALSE;
>
>    mSmmMpSyncData->AllApArrivedWithException = FALSE;
>
>    for (CpuIndex = 0; CpuIndex < gSmmCpuPrivate->SmmCoreEntryContext.NumberOfCpus; CpuIndex++) {
>      mSmmMpSyncData->CpuData[CpuIndex].Busy =
>        (SPIN_LOCK *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Busy + mSemaphoreSize * CpuIndex);
> -    mSmmMpSyncData->CpuData[CpuIndex].Run =
> -      (UINT32 *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Run + mSemaphoreSize * CpuIndex);
>      mSmmMpSyncData->CpuData[CpuIndex].Present =
>        (BOOLEAN *)((UINTN)mSmmCpuSemaphores.SemaphoreCpu.Present + mSemaphoreSize * CpuIndex);
>      *(mSmmMpSyncData->CpuData[CpuIndex].Busy)    = 0;
> -    *(mSmmMpSyncData->CpuData[CpuIndex].Run)     = 0;
>      *(mSmmMpSyncData->CpuData[CpuIndex].Present) = FALSE;
>    }
>  }
>
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> index f18345881b..a2fa4f6734 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.h
> @@ -52,10 +52,11 @@ SPDX-License-Identifier: BSD-2-Clause-Patent
>  #include <Library/PeCoffGetEntryPointLib.h>
>  #include <Library/RegisterCpuFeaturesLib.h>
>  #include <Library/PerformanceLib.h>
>  #include <Library/CpuPageTableLib.h>
>  #include <Library/MmSaveStateLib.h>
> +#include <Library/SmmCpuSyncLib.h>
>
>  #include <AcpiCpuData.h>
>  #include <CpuHotPlugData.h>
>
>  #include <Register/Intel/Cpuid.h>
> @@ -403,11 +404,10 @@ SmmRelocationSemaphoreComplete (
>  ///
>  typedef struct {
>    SPIN_LOCK                   *Busy;
>    volatile EFI_AP_PROCEDURE2  Procedure;
>    volatile VOID               *Parameter;
> -  volatile UINT32             *Run;
>    volatile BOOLEAN            *Present;
>    PROCEDURE_TOKEN             *Token;
>    EFI_STATUS                  *Status;
>  } SMM_CPU_DATA_BLOCK;
>
> @@ -421,29 +421,28 @@ typedef struct {
>    //
>    // Pointer to an array. The array should be located immediately after this structure
>    // so that UC cache-ability can be set together.
>    //
>    SMM_CPU_DATA_BLOCK          *CpuData;
> -  volatile UINT32             *Counter;
>    volatile UINT32             BspIndex;
>    volatile BOOLEAN            *InsideSmm;
>    volatile BOOLEAN            *AllCpusInSync;
>    volatile SMM_CPU_SYNC_MODE  EffectiveSyncMode;
>    volatile BOOLEAN            SwitchBsp;
>    volatile BOOLEAN            *CandidateBsp;
>    volatile BOOLEAN            AllApArrivedWithException;
>    EFI_AP_PROCEDURE            StartupProcedure;
>    VOID                        *StartupProcArgs;
> +  SMM_CPU_SYNC_CONTEXT        *SyncContext;
>  } SMM_DISPATCHER_MP_SYNC_DATA;
>
>  #define SMM_PSD_OFFSET  0xfb00
>
>  ///
>  /// All global semaphores' pointer
>  ///
>  typedef struct {
> -  volatile UINT32   *Counter;
>    volatile BOOLEAN  *InsideSmm;
>    volatile BOOLEAN  *AllCpusInSync;
>    SPIN_LOCK         *PFLock;
>    SPIN_LOCK         *CodeAccessCheckLock;
>  } SMM_CPU_SEMAPHORE_GLOBAL;
> @@ -451,11 +450,10 @@ typedef struct {
>  ///
>  /// All semaphores for each processor
>  ///
>  typedef struct {
>    SPIN_LOCK         *Busy;
> -  volatile UINT32   *Run;
>    volatile BOOLEAN  *Present;
>    SPIN_LOCK         *Token;
>  } SMM_CPU_SEMAPHORE_CPU;
>
>  ///
> diff --git a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> index 372596f24c..793220aba3 100644
> --- a/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> +++ b/UefiCpuPkg/PiSmmCpuDxeSmm/PiSmmCpuDxeSmm.inf
> @@ -101,10 +101,11 @@
>    SmmCpuFeaturesLib
>    PeCoffGetEntryPointLib
>    PerformanceLib
>    CpuPageTableLib
>    MmSaveStateLib
> +  SmmCpuSyncLib
>
>  [Protocols]
>    gEfiSmmAccess2ProtocolGuid         ## CONSUMES
>    gEfiSmmConfigurationProtocolGuid   ## PRODUCES
>    gEfiSmmCpuProtocolGuid             ## PRODUCES
> --
> 2.16.2.windows.1
-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links:
View/Reply Online (#112677): https://edk2.groups.io/g/devel/message/112677
Mute This Topic: https://groups.io/mt/103187898/21656
-=-=-=-=-=-=-=-=-=-=-=-