Star,

Regards,
Jian


> -----Original Message-----
> From: Zeng, Star
> Sent: Monday, October 22, 2018 4:24 PM
> To: Wang, Jian J <jian.j.w...@intel.com>; edk2-devel@lists.01.org
> Cc: Kinney, Michael D <michael.d.kin...@intel.com>; Ni, Ruiyu
> <ruiyu...@intel.com>; Yao, Jiewen <jiewen....@intel.com>; Laszlo Ersek
> <ler...@redhat.com>; Zeng, Star <star.z...@intel.com>
> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
> memory detection
> 
> On 2018/10/22 15:12, Wang, Jian J wrote:
> > Star,
> >
> > Thanks for the comment. Please see my opinions below.
> >
> > Regards,
> > Jian
> >
> >
> >> -----Original Message-----
> >> From: Zeng, Star
> >> Sent: Monday, October 22, 2018 10:53 AM
> >> To: Wang, Jian J <jian.j.w...@intel.com>; edk2-devel@lists.01.org
> >> Cc: Kinney, Michael D <michael.d.kin...@intel.com>; Ni, Ruiyu
> >> <ruiyu...@intel.com>; Yao, Jiewen <jiewen....@intel.com>; Laszlo Ersek
> >> <ler...@redhat.com>; Zeng, Star <star.z...@intel.com>
> >> Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
> >> memory detection
> >>
> >> On 2018/10/19 9:50, Jian J Wang wrote:
> >>> UAF (Use-After-Free) memory detection is a new feature introduced to
> >>> detect illegal access to memory which has been freed. The principle
> >>> behind it is similar to the heap guard feature: the core turns all pool
> >>> memory allocations into page allocations and marks the pages as
> >>> not-present once they are freed.
> >>>
> >>> This also implies that, once a page is allocated and freed, it cannot
> >>> be re-allocated. This brings another issue: there is a risk that the
> >>> memory space will be exhausted. To address it, the memory service adds
> >>> logic to put part (at most 64 pages at a time) of the freed pages back
> >>> into the page pool, so that the memory service still has memory to
> >>> allocate even after all memory space has been allocated once. This is
> >>> called memory promotion. The promoted pages are always taken from the
> >>> oldest freed pages.
> >>>
> >>> To use this feature, one can simply set the following PCD to 1:
> >>>
> >>>   gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
> >>>
> >>> Please note this feature cannot be used together with the heap guard
> >>> feature controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
> >>>
> >>> Cc: Star Zeng <star.z...@intel.com>
> >>> Cc: Michael D Kinney <michael.d.kin...@intel.com>
> >>> Cc: Jiewen Yao <jiewen....@intel.com>
> >>> Cc: Ruiyu Ni <ruiyu...@intel.com>
> >>> Cc: Laszlo Ersek <ler...@redhat.com>
> >>> Contributed-under: TianoCore Contribution Agreement 1.1
> >>> Signed-off-by: Jian J Wang <jian.j.w...@intel.com>
> >>
> >> I did not review the whole patch thoroughly, but I suggest putting the
> >> code for UAF in a separate file if that is feasible. I am aware that
> >> UAF will share much code with HeapGuard, so could we have Uaf.c,
> >> HeapGuard.c and GuardPage.c (these file names are just examples)?
> >>
> >
> > I think we can take UAF as a special kind of heap guard: guarding the
> > freed heap memory instead of guarding the allocated heap memory. So
> > UAF is a complement of Heap Guard, or part of Heap Guard. From this
> > perspective, it's not necessary to put it in a separate file.
> 
> Very good point. From this perspective, I agree.
> Then I would like to suggest not introducing a new PCD
> PcdUseAfterFreeDetectionPropertyMask, but using one bit of
> PcdHeapGuardPropertyMask for it, since we think it is part of Heap Guard.
> The benefits are:
> 1. No confusion between Heap Guard and Use-After-Free.
> 2. No need to introduce a new PCD.
> 3. Can reuse BIT6 - Enable non-stop mode.
> 4. No need to update DxeIplPeim to enable IA32 PAE paging.
> 

Good idea. I'll try it. Thanks.
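
For example, the consumer-side check could then become something like the
sketch below. (Just a rough sketch: BIT4 is only a placeholder, since the
actual bit assignment within PcdHeapGuardPropertyMask is still to be
decided.)

  //
  // Sketch only. BIT4 is a hypothetical bit of PcdHeapGuardPropertyMask
  // reserved for guarding freed memory (UAF detection).
  //
  BOOLEAN
  IsGuardFreedMemoryEnabled (
    VOID
    )
  {
    return ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT4) != 0);
  }

This would replace IsUafEnabled() in the patch, and non-stop mode could
keep using BIT6 as you said.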

> >
> >>> ---
> >>>    MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
> >>>    MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
> >>>    MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
> >>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
> >>>    MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
> >>>    MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
> >>>    MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
> >>>    MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
> >>>    8 files changed, 519 insertions(+), 26 deletions(-)
> >>>
> >>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h b/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> index 2dec9da5e3..ae75cc5b25 100644
> >>> --- a/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.h
> >>> @@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >>>    #include <Library/DxeServicesLib.h>
> >>>    #include <Library/DebugAgentLib.h>
> >>>    #include <Library/CpuExceptionHandlerLib.h>
> >>> +#include <Library/DebugPrintErrorLevelLib.h>
> >>>
> >>>
> >>>    //
> >>> diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf b/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> index 10375443c0..d91258c087 100644
> >>> --- a/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> +++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
> >>> @@ -198,6 +198,7 @@
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                  ## CONSUMES
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask              ## CONSUMES
> >>>      gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                      ## CONSUMES
> >>> +  gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask   ## CONSUMES
> >>>
> >>>    # [Hob]
> >>>    # RESOURCE_DESCRIPTOR   ## CONSUMES
> >>> diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> index d9c65a8045..e43ec74010 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> >>> @@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
> >>>        EFI_GCD_MEMORY_SPACE_DESCRIPTOR  *MemorySpaceMap;
> >>>        UINTN                            Index;
> >>>
> >>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> >>> +      return;
> >>> +    }
> >>
> >> I am aware the purpose of this change is to optimize some code when
> >> DEBUG_GCD is not set.
> >> But I do not suggest we newly introduce a dependency on
> >> DebugPrintErrorLevelLib; I think this check should be hidden in the
> >> DebugLib instance. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
> >> macro name is just an example) could be introduced if we really want
> >> to do that.
> >>
> >
> > There are two purposes here: one is optimization, but the other is to
> > fix a real issue: GCD memory lock reentrance (see below for details)
> >
> >     CoreDumpGcdMemorySpaceMap()
> > => CoreGetMemorySpaceMap()
> > => CoreAcquireGcdMemoryLock () *
> >     AllocatePool()
> > => InternalAllocatePool()
> > => CoreAllocatePool()
> > => CoreAllocatePoolI()
> > => CoreAllocatePoolPagesI()
> > => CoreAllocatePoolPages()
> > => FindFreePages()
> > => PromoteMemoryResource()
> > => CoreAcquireGcdMemoryLock() **
> >
> > If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix
> > the above issue in a different way. For example, change
> > CoreGetMemorySpaceMap() to avoid the lock conflict, if possible. In
> > addition, we could set PcdFixedDebugPrintErrorLevel to control the
> > DEBUG_GCD print.
> 
> Got it, thanks.
> I still do not suggest using GetDebugPrintErrorLevel() with the new
> DebugPrintErrorLevelLib dependency.
> Seemingly, the problem could not even be resolved thoroughly that way,
> because it may still happen when DEBUG_GCD is set, right?
> 
> A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
> PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
> updating the ASSERT in CoreDumpGcdMemorySpaceMap with an error check.
> Maybe there is another good idea.
> 

The actual issue is that, in CoreGetMemorySpaceMap(), AllocatePool() is
called inside the GCD lock scope. Maybe we could move it out of the lock.
Do you know if AllocatePool() can change the GCD memory map? I can't see
that from the code, but maybe I'm missing something.

  CoreAcquireGcdMemoryLock ();
  ...
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
  ...
Done:
  CoreReleaseGcdMemoryLock ();

=======>

  CoreAcquireGcdMemoryLock ();
  ...
  CoreReleaseGcdMemoryLock ();
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
  CoreAcquireGcdMemoryLock ();
  ...
Done:
  CoreReleaseGcdMemoryLock ();
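
Or, if we take your CoreAcquireGcdMemoryLockOrFail idea, I'd expect it to
be just a thin non-blocking wrapper around the existing GCD lock, roughly
like below (a sketch only, assuming the existing CoreAcquireLockOrFail()
helper and the mGcdMemorySpaceLock used by the other GCD lock routines):

  //
  // Sketch: try to acquire the GCD memory lock without blocking, so that
  // PromoteMemoryResource() can back off instead of re-entering the lock.
  //
  EFI_STATUS
  CoreAcquireGcdMemoryLockOrFail (
    VOID
    )
  {
    return CoreAcquireLockOrFail (&mGcdMemorySpaceLock);
  }

PromoteMemoryResource() would then skip the promotion attempt when the
lock is already owned, and CoreDumpGcdMemorySpaceMap() would check the
returned status instead of ASSERTing, as you suggested.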

> 
> Thanks,
> Star
> 
> >
> >>
> >> Thanks,
> >> Star
> >>
> >>> +
> >>>        Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
> >>>        ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
> >>>
> >>> @@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
> >>>        EFI_GCD_IO_SPACE_DESCRIPTOR  *IoSpaceMap;
> >>>        UINTN                        Index;
> >>>
> >>> +    if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
> >>> +      return;
> >>> +    }
> >>> +
> >>>        Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
> >>>        ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
> >>>
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> index 663f969c0d..b120c04f8f 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> >>> @@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >>>    GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
> >>>                                        = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
> >>>
> >>> +//
> >>> +// Used for Use-After-Free memory detection to promote freed but not used pages.
> >>> +//
> >>> +GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
> >>> +
> >>>    /**
> >>>      Set corresponding bits in bitmap table to 1 according to the address.
> >>>
> >>> @@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
> >>>
> >>>      @return An integer containing the guarded memory bitmap.
> >>>    **/
> >>> -UINTN
> >>> +UINT64
> >>>    GetGuardedMemoryBits (
> >>>      IN EFI_PHYSICAL_ADDRESS    Address,
> >>>      IN UINTN                   NumberOfPages
> >>> @@ -387,7 +392,7 @@ GetGuardedMemoryBits (
> >>>    {
> >>>      UINT64            *BitMap;
> >>>      UINTN             Bits;
> >>> -  UINTN             Result;
> >>> +  UINT64            Result;
> >>>      UINTN             Shift;
> >>>      UINTN             BitsToUnitEnd;
> >>>
> >>> @@ -1203,6 +1208,373 @@ SetAllGuardPages (
> >>>      }
> >>>    }
> >>>
> >>> +/**
> >>> +  Check to see if the Use-After-Free (UAF) feature is enabled or not.
> >>> +
> >>> +  @return TRUE/FALSE.
> >>> +**/
> >>> +BOOLEAN
> >>> +IsUafEnabled (
> >>> +  VOID
> >>> +  )
> >>> +{
> >>> +  ASSERT (!IsHeapGuardEnabled());
> >>> +  return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
> >>> +}
> >>> +
> >>> +/**
> >>> +  Find the address of top-most guarded free page.
> >>> +
> >>> +  @param[out]  Address    Start address of top-most guarded free page.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +GetLastGuardedFreePageAddress (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *Address
> >>> +  )
> >>> +{
> >>> +  EFI_PHYSICAL_ADDRESS    AddressGranularity;
> >>> +  EFI_PHYSICAL_ADDRESS    BaseAddress;
> >>> +  UINTN                   Level;
> >>> +  UINT64                  Map;
> >>> +  INTN                    Index;
> >>> +
> >>> +  ASSERT (mMapLevel >= 1);
> >>> +
> >>> +  BaseAddress = 0;
> >>> +  Map = mGuardedMemoryMap;
> >>> +  for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> >>> +       Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
> >>> +       ++Level) {
> >>> +    AddressGranularity = LShiftU64 (1, mLevelShift[Level]);
> >>> +
> >>> +    //
> >>> +    // Find the non-NULL entry at largest index.
> >>> +    //
> >>> +    for (Index = (INTN)mLevelMask[Level]; Index >= 0 ; --Index) {
> >>> +      if (((UINT64 *)(UINTN)Map)[Index] != 0) {
> >>> +        BaseAddress += MultU64x32 (AddressGranularity, (UINT32)Index);
> >>> +        Map = ((UINT64 *)(UINTN)Map)[Index];
> >>> +        break;
> >>> +      }
> >>> +    }
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Find the non-zero MSB then get the page address.
> >>> +  //
> >>> +  while (Map != 0) {
> >>> +    Map = RShiftU64 (Map, 1);
> >>> +    BaseAddress += EFI_PAGES_TO_SIZE (1);
> >>> +  }
> >>> +
> >>> +  *Address = BaseAddress;
> >>> +}
> >>> +
> >>> +/**
> >>> +  Record freed pages.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +MarkFreedPages (
> >>> +  IN EFI_PHYSICAL_ADDRESS     BaseAddress,
> >>> +  IN UINTN                    Pages
> >>> +  )
> >>> +{
> >>> +  SetGuardedMemoryBits (BaseAddress, Pages);
> >>> +}
> >>> +
> >>> +/**
> >>> +  Record freed pages as well as mark them as not-present, if possible.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +EFIAPI
> >>> +GuardFreedPages (
> >>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> >>> +  IN  UINTN                   Pages
> >>> +  )
> >>> +{
> >>> +  EFI_STATUS      Status;
> >>> +
> >>> +  //
> >>> +  // Do nothing if
> >>> +  // - Use-After-Free detection is not enabled
> >>> +  // - Freed memory is legacy memory lower than 1MB
> >>> +  //
> >>> +  if (!IsUafEnabled () || BaseAddress < BASE_1MB) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  MarkFreedPages (BaseAddress, Pages);
> >>> +  if (gCpu != NULL) {
> >>> +    //
> >>> +    // Set flag to make sure allocating memory without GUARD for page table
> >>> +    // operation; otherwise infinite loops could be caused.
> >>> +    //
> >>> +    mOnGuarding = TRUE;
> >>> +    //
> >>> +    // Note: This might overwrite other attributes needed by other features,
> >>> +    // such as NX memory protection.
> >>> +    //
> >>> +    Status = gCpu->SetMemoryAttributes (
> >>> +                     gCpu,
> >>> +                     BaseAddress,
> >>> +                     EFI_PAGES_TO_SIZE(Pages),
> >>> +                     EFI_MEMORY_RP
> >>> +                     );
> >>> +    //
> >>> +    // Normally we should check the returned Status. But there might be memory
> >>> +    // alloc/free involved in SetMemoryAttributes(), which would fail this
> >>> +    // calling. It's rare case so it's OK to let a few tiny holes be not-guarded.
> >>> +    //
> >>> +    if (EFI_ERROR (Status)) {
> >>> +      DEBUG ((DEBUG_WARN, "Failed to guard freed pages: %p (%d)\n", BaseAddress, Pages));
> >>> +    }
> >>> +    mOnGuarding = FALSE;
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  Mark all pages freed before CPU Arch Protocol as not-present.
> >>> +
> >>> +**/
> >>> +VOID
> >>> +GuardAllFreedPages (
> >>> +  VOID
> >>> +  )
> >>> +{
> >>> +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> >>> +  UINT64    TableEntry;
> >>> +  UINT64    Address;
> >>> +  UINT64    GuardPage;
> >>> +  INTN      Level;
> >>> +  UINTN     BitIndex;
> >>> +  UINTN     GuardPageNumber;
> >>> +
> >>> +  if (mGuardedMemoryMap == 0 ||
> >>> +      mMapLevel == 0 ||
> >>> +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> >>> +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> >>> +
> >>> +  SetMem (Tables, sizeof(Tables), 0);
> >>> +  SetMem (Addresses, sizeof(Addresses), 0);
> >>> +  SetMem (Indices, sizeof(Indices), 0);
> >>> +
> >>> +  Level           = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> >>> +  Tables[Level]   = mGuardedMemoryMap;
> >>> +  Address         = 0;
> >>> +  GuardPage       = (UINT64)-1;
> >>> +  GuardPageNumber = 0;
> >>> +
> >>> +  while (TRUE) {
> >>> +    if (Indices[Level] > Entries[Level]) {
> >>> +      Tables[Level] = 0;
> >>> +      Level        -= 1;
> >>> +    } else {
> >>> +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> >>> +      Address     = Addresses[Level];
> >>> +
> >>> +      if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> >>> +        Level            += 1;
> >>> +        Tables[Level]     = TableEntry;
> >>> +        Addresses[Level]  = Address;
> >>> +        Indices[Level]    = 0;
> >>> +
> >>> +        continue;
> >>> +      } else {
> >>> +        BitIndex = 1;
> >>> +        while (BitIndex != 0) {
> >>> +          if ((TableEntry & BitIndex) != 0) {
> >>> +            if (GuardPage == (UINT64)-1) {
> >>> +              GuardPage = Address;
> >>> +            }
> >>> +            ++GuardPageNumber;
> >>> +          } else if (GuardPageNumber > 0) {
> >>> +            GuardFreedPages (GuardPage, GuardPageNumber);
> >>> +            GuardPageNumber = 0;
> >>> +            GuardPage       = (UINT64)-1;
> >>> +          }
> >>> +
> >>> +          if (TableEntry == 0) {
> >>> +            break;
> >>> +          }
> >>> +
> >>> +          Address += EFI_PAGES_TO_SIZE (1);
> >>> +          BitIndex = LShiftU64(BitIndex, 1);
> >>> +        }
> >>> +      }
> >>> +    }
> >>> +
> >>> +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> >>> +      break;
> >>> +    }
> >>> +
> >>> +    Indices[Level] += 1;
> >>> +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> >>> +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> >>> +
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Update the maximum address of freed page which can be used for memory
> >>> +  // promotion upon out-of-memory-space.
> >>> +  //
> >>> +  GetLastGuardedFreePageAddress (&Address);
> >>> +  if (Address != 0) {
> >>> +    mLastPromotedPage = Address;
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  This function checks to see if the given memory map descriptor in a memory map
> >>> +  can be merged with any guarded free pages.
> >>> +
> >>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> >>> +  @param  MaxAddress        Maximum address to stop the merge.
> >>> +
> >>> +  @return VOID
> >>> +
> >>> +**/
> >>> +VOID
> >>> +MergeGuardPages (
> >>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> >>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> >>> +  )
> >>> +{
> >>> +  EFI_PHYSICAL_ADDRESS        EndAddress;
> >>> +  UINT64                      Bitmap;
> >>> +  INTN                        Pages;
> >>> +
> >>> +  if (!IsUafEnabled () ||
> >>> +      MemoryMapEntry->Type >= EfiMemoryMappedIO) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  Bitmap = 0;
> >>> +  Pages  = EFI_SIZE_TO_PAGES (MaxAddress - MemoryMapEntry->PhysicalStart);
> >>> +  Pages -= MemoryMapEntry->NumberOfPages;
> >>> +  while (Pages > 0) {
> >>> +    if (Bitmap == 0) {
> >>> +      EndAddress = MemoryMapEntry->PhysicalStart +
> >>> +                   EFI_PAGES_TO_SIZE (MemoryMapEntry->NumberOfPages);
> >>> +      Bitmap = GetGuardedMemoryBits(EndAddress, GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    }
> >>> +
> >>> +    if ((Bitmap & 1) == 0) {
> >>> +      break;
> >>> +    }
> >>> +
> >>> +    Pages--;
> >>> +    MemoryMapEntry->NumberOfPages++;
> >>> +    Bitmap = RShiftU64 (Bitmap, 1);
> >>> +  }
> >>> +}
> >>> +
> >>> +/**
> >>> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> >>> +
> >>> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> >>> +  any illegal access to freed memory. The thrown-away memory will be marked as
> >>> +  not-present so that any access to those memory (after free) will trigger
> >>> +  page-fault.
> >>> +
> >>> +  The problem is that this will consume lots of memory space. Once no memory
> >>> +  left in pool to allocate, we have to restore part of the freed pages to their
> >>> +  normal function. Otherwise the whole system will stop functioning.
> >>> +
> >>> +  @param  StartAddress    Start address of promoted memory.
> >>> +  @param  EndAddress      End address of promoted memory.
> >>> +
> >>> +  @return TRUE    Succeeded to promote memory.
> >>> +  @return FALSE   No free memory found.
> >>> +
> >>> +**/
> >>> +BOOLEAN
> >>> +PromoteGuardedFreePages (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> >>> +  )
> >>> +{
> >>> +  EFI_STATUS              Status;
> >>> +  UINTN                   AvailablePages;
> >>> +  UINT64                  Bitmap;
> >>> +  EFI_PHYSICAL_ADDRESS    Start;
> >>> +
> >>> +  if (!IsUafEnabled ()) {
> >>> +    return FALSE;
> >>> +  }
> >>> +
> >>> +  //
> >>> +  // Similar to memory allocation service, always search the freed pages in
> >>> +  // descending direction.
> >>> +  //
> >>> +  Start           = mLastPromotedPage;
> >>> +  AvailablePages  = 0;
> >>> +  while (AvailablePages == 0) {
> >>> +    Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    //
> >>> +    // If the address wraps around, try the really freed pages at top.
> >>> +    //
> >>> +    if (Start > mLastPromotedPage) {
> >>> +      GetLastGuardedFreePageAddress (&Start);
> >>> +      ASSERT (Start != 0);
> >>> +      Start -= EFI_PAGES_TO_SIZE(GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    }
> >>> +
> >>> +    Bitmap = GetGuardedMemoryBits(Start, GUARDED_HEAP_MAP_ENTRY_BITS);
> >>> +    while (Bitmap > 0) {
> >>> +      if ((Bitmap & 1) != 0) {
> >>> +        ++AvailablePages;
> >>> +      } else if (AvailablePages == 0) {
> >>> +        Start += EFI_PAGES_TO_SIZE (1);
> >>> +      } else {
> >>> +        break;
> >>> +      }
> >>> +
> >>> +      Bitmap = RShiftU64(Bitmap, 1);
> >>> +    }
> >>> +  }
> >>> +
> >>> +  if (AvailablePages) {
> >>> +    DEBUG((DEBUG_INFO, "Promoted pages: %lX (%X)\r\n", Start, AvailablePages));
> >>> +    if (gCpu != NULL) {
> >>> +      //
> >>> +      // Set flag to make sure allocating memory without GUARD for page table
> >>> +      // operation; otherwise infinite loops could be caused.
> >>> +      //
> >>> +      mOnGuarding = TRUE;
> >>> +      Status = gCpu->SetMemoryAttributes (gCpu, Start, EFI_PAGES_TO_SIZE(AvailablePages), 0);
> >>> +      ASSERT_EFI_ERROR (Status);
> >>> +      mOnGuarding = FALSE;
> >>> +    }
> >>> +
> >>> +    mLastPromotedPage = Start;
> >>> +    *StartAddress     = Start;
> >>> +    *EndAddress       = Start + EFI_PAGES_TO_SIZE (AvailablePages) - 1;
> >>> +    return TRUE;
> >>> +  }
> >>> +
> >>> +  return FALSE;
> >>> +}
> >>> +
> >>>    /**
> >>>      Notify function used to set all Guard pages before CPU Arch Protocol installed.
> >>>    **/
> >>> @@ -1212,7 +1584,14 @@ HeapGuardCpuArchProtocolNotify (
> >>>      )
> >>>    {
> >>>      ASSERT (gCpu != NULL);
> >>> -  SetAllGuardPages ();
> >>> +
> >>> +  if ((PcdGet8 (PcdHeapGuardPropertyMask) & 0x0F) != 0) {
> >>> +    SetAllGuardPages ();
> >>> +  }
> >>> +
> >>> +  if (IsUafEnabled ()) {
> >>> +    GuardAllFreedPages ();
> >>> +  }
> >>>    }
> >>>
> >>>    /**
> >>> @@ -1264,6 +1643,14 @@ DumpGuardedMemoryBitmap (
> >>>      CHAR8     *Ruler1;
> >>>      CHAR8     *Ruler2;
> >>>
> >>> +  if (!IsHeapGuardEnabled () && !IsUafEnabled ()) {
> >>> +    return;
> >>> +  }
> >>> +
> >>> +  if ((GetDebugPrintErrorLevel () & HEAP_GUARD_DEBUG_LEVEL) == 0) {
> >>> +    return;
> >>> +  }
> >>> +
> >>>      if (mGuardedMemoryMap == 0 ||
> >>>          mMapLevel == 0 ||
> >>>          mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> index 8c34692439..3f2a5283c2 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> >>> @@ -407,6 +407,72 @@ HeapGuardCpuArchProtocolNotify (
> >>>      VOID
> >>>      );
> >>>
> >>> +/**
> >>> +  This function checks to see if the given memory map descriptor in a memory map
> >>> +  can be merged with any guarded free pages.
> >>> +
> >>> +  @param  MemoryMapEntry    A pointer to a descriptor in MemoryMap.
> >>> +  @param  MaxAddress        Maximum address to stop the merge.
> >>> +
> >>> +  @return VOID
> >>> +
> >>> +**/
> >>> +VOID
> >>> +MergeGuardPages (
> >>> +  IN EFI_MEMORY_DESCRIPTOR      *MemoryMapEntry,
> >>> +  IN EFI_PHYSICAL_ADDRESS       MaxAddress
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Record freed pages as well as mark them as not-present, if possible.
> >>> +
> >>> +  @param[in]  BaseAddress   Base address of just freed pages.
> >>> +  @param[in]  Pages         Number of freed pages.
> >>> +
> >>> +  @return VOID.
> >>> +**/
> >>> +VOID
> >>> +EFIAPI
> >>> +GuardFreedPages (
> >>> +  IN  EFI_PHYSICAL_ADDRESS    BaseAddress,
> >>> +  IN  UINTN                   Pages
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Check to see if the use-after-free (UAF) feature is enabled or not.
> >>> +
> >>> +  @return TRUE/FALSE.
> >>> +**/
> >>> +BOOLEAN
> >>> +IsUafEnabled (
> >>> +  VOID
> >>> +  );
> >>> +
> >>> +/**
> >>> +  Put part (at most 64 pages a time) guarded free pages back to free page pool.
> >>> +
> >>> +  Use-After-Free detection makes use of 'Used then throw away' way to detect
> >>> +  any illegal access to freed memory. The thrown-away memory will be marked as
> >>> +  not-present so that any access to those memory (after free) will trigger
> >>> +  page-fault.
> >>> +
> >>> +  The problem is that this will consume lots of memory space. Once no memory
> >>> +  left in pool to allocate, we have to restore part of the freed pages to their
> >>> +  normal function. Otherwise the whole system will stop functioning.
> >>> +
> >>> +  @param  StartAddress    Start address of promoted memory.
> >>> +  @param  EndAddress      End address of promoted memory.
> >>> +
> >>> +  @return TRUE    Succeeded to promote memory.
> >>> +  @return FALSE   No free memory found.
> >>> +
> >>> +**/
> >>> +BOOLEAN
> >>> +PromoteGuardedFreePages (
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *StartAddress,
> >>> +  OUT EFI_PHYSICAL_ADDRESS      *EndAddress
> >>> +  );
> >>> +
> >>>    extern BOOLEAN mOnGuarding;
> >>>
> >>>    #endif
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Page.c b/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> index 3b4cc08e7c..9f4419fa5b 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/Page.c
> >>> @@ -401,9 +401,11 @@ PromoteMemoryResource (
> >>>      VOID
> >>>      )
> >>>    {
> >>> -  LIST_ENTRY         *Link;
> >>> -  EFI_GCD_MAP_ENTRY  *Entry;
> >>> -  BOOLEAN            Promoted;
> >>> +  LIST_ENTRY            *Link;
> >>> +  EFI_GCD_MAP_ENTRY     *Entry;
> >>> +  BOOLEAN               Promoted;
> >>> +  EFI_PHYSICAL_ADDRESS  StartAddress;
> >>> +  EFI_PHYSICAL_ADDRESS  EndAddress;
> >>>
> >>>      DEBUG ((DEBUG_PAGE, "Promote the memory resource\n"));
> >>>
> >>> @@ -451,6 +453,24 @@ PromoteMemoryResource (
> >>>
> >>>      CoreReleaseGcdMemoryLock ();
> >>>
> >>> +  if (!Promoted) {
> >>> +    //
> >>> +    // If Use-After-Free detection is enabled, we might promote pages from
> >>> +    // guarded free pages.
> >>> +    //
> >>> +    Promoted = PromoteGuardedFreePages (&StartAddress, &EndAddress);
> >>> +    if (Promoted) {
> >>> +      CoreAcquireGcdMemoryLock ();
> >>> +      CoreAddRange (
> >>> +        EfiConventionalMemory,
> >>> +        StartAddress,
> >>> +        EndAddress,
> >>> +        EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | EFI_MEMORY_WB
> >>> +        );
> >>> +      CoreReleaseGcdMemoryLock ();
> >>> +    }
> >>> +  }
> >>> +
> >>>      return Promoted;
> >>>    }
> >>>    /**
> >>> @@ -896,9 +916,13 @@ CoreConvertPagesEx (
> >>>        }
> >>>
> >>>        //
> >>> -    // Add our new range in
> >>> +    // Add our new range in. Don't do this for freed pages if Use-After-Free
> >>> +    // detection is enabled.
> >>>        //
> >>> -    CoreAddRange (MemType, Start, RangeEnd, Attribute);
> >>> +    if (!IsUafEnabled () || !ChangingType || MemType != EfiConventionalMemory) {
> >>> +      CoreAddRange (MemType, Start, RangeEnd, Attribute);
> >>> +    }
> >>> +
> >>>        if (ChangingType && (MemType == EfiConventionalMemory)) {
> >>>          //
> >>>          // Avoid calling DEBUG_CLEAR_MEMORY() for an address of 0 because this
> >>> @@ -1514,6 +1538,7 @@ CoreFreePages (
> >>>
> >>>      Status = CoreInternalFreePages (Memory, NumberOfPages, &MemoryType);
> >>>      if (!EFI_ERROR (Status)) {
> >>> +    GuardFreedPages (Memory, NumberOfPages);
> >>>        CoreUpdateProfile (
> >>>          (EFI_PHYSICAL_ADDRESS) (UINTN) RETURN_ADDRESS (0),
> >>>          MemoryProfileActionFreePages,
> >>> @@ -1908,9 +1933,7 @@ Done:
> >>>      *MemoryMapSize = BufferSize;
> >>>
> >>>      DEBUG_CODE (
> >>> -    if (PcdGet8 (PcdHeapGuardPropertyMask) & (BIT1|BIT0)) {
> >>> -      DumpGuardedMemoryBitmap ();
> >>> -    }
> >>> +    DumpGuardedMemoryBitmap ();
> >>>      );
> >>>
> >>>      return Status;
> >>> diff --git a/MdeModulePkg/Core/Dxe/Mem/Pool.c b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> index 1ff2061f7f..0cd74267b3 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Mem/Pool.c
> >>> @@ -26,7 +26,8 @@ typedef struct {
> >>>    } POOL_FREE;
> >>>
> >>>
> >>> -#define POOL_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','0')
> >>> +#define POOL_HEAD_SIGNATURE       SIGNATURE_32('p','h','d','0')
> >>> +#define POOLPAGE_HEAD_SIGNATURE   SIGNATURE_32('p','h','d','1')
> >>>    typedef struct {
> >>>      UINT32          Signature;
> >>>      UINT32          Reserved;
> >>> @@ -367,6 +368,7 @@ CoreAllocatePoolI (
> >>>      UINTN       NoPages;
> >>>      UINTN       Granularity;
> >>>      BOOLEAN     HasPoolTail;
> >>> +  BOOLEAN     PageAsPool;
> >>>
> >>>      ASSERT_LOCKED (&mPoolMemoryLock);
> >>>
> >>> @@ -386,6 +388,7 @@ CoreAllocatePoolI (
> >>>
> >>>      HasPoolTail  = !(NeedGuard &&
> >>>                       ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> >>> +  PageAsPool = (IsUafEnabled () && !mOnGuarding);
> >>>
> >>>      //
> >>>      // Adjusting the Size to be of proper alignment so that
> >>> @@ -406,7 +409,7 @@ CoreAllocatePoolI (
> >>>      // If allocation is over max size, just allocate pages for the 
> >>> request
> >>>      // (slow)
> >>>      //
> >>> -  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard) {
> >>> +  if (Index >= SIZE_TO_LIST (Granularity) || NeedGuard || PageAsPool) {
> >>>        if (!HasPoolTail) {
> >>>          Size -= sizeof (POOL_TAIL);
> >>>        }
> >>> @@ -498,7 +501,7 @@ Done:
> >>>        //
> >>>        // If we have a pool buffer, fill in the header & tail info
> >>>        //
> >>> -    Head->Signature = POOL_HEAD_SIGNATURE;
> >>> +    Head->Signature = (PageAsPool) ? POOLPAGE_HEAD_SIGNATURE : POOL_HEAD_SIGNATURE;
> >>>        Head->Size      = Size;
> >>>        Head->Type      = (EFI_MEMORY_TYPE) PoolType;
> >>>        Buffer          = Head->Data;
> >>> @@ -615,6 +618,7 @@ CoreFreePoolPagesI (
> >>>      CoreFreePoolPages (Memory, NoPages);
> >>>      CoreReleaseMemoryLock ();
> >>>
> >>> +  GuardFreedPages (Memory, NoPages);
> >>>      ApplyMemoryProtectionPolicy (PoolType, EfiConventionalMemory,
> >>>        (EFI_PHYSICAL_ADDRESS)(UINTN)Memory, EFI_PAGES_TO_SIZE (NoPages));
> >>>    }
> >>> @@ -685,15 +689,19 @@ CoreFreePoolI (
> >>>      UINTN       Granularity;
> >>>      BOOLEAN     IsGuarded;
> >>>      BOOLEAN     HasPoolTail;
> >>> +  BOOLEAN     PageAsPool;
> >>>
> >>>      ASSERT(Buffer != NULL);
> >>>      //
> >>>      // Get the head & tail of the pool entry
> >>>      //
> >>> -  Head = CR (Buffer, POOL_HEAD, Data, POOL_HEAD_SIGNATURE);
> >>> +  Head = BASE_CR (Buffer, POOL_HEAD, Data);
> >>>      ASSERT(Head != NULL);
> >>>
> >>> -  if (Head->Signature != POOL_HEAD_SIGNATURE) {
> >>> +  if (Head->Signature != POOL_HEAD_SIGNATURE &&
> >>> +      Head->Signature != POOLPAGE_HEAD_SIGNATURE) {
> >>> +    ASSERT (Head->Signature == POOL_HEAD_SIGNATURE ||
> >>> +            Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >>>        return EFI_INVALID_PARAMETER;
> >>>      }
> >>>
> >>> @@ -701,6 +709,7 @@ CoreFreePoolI (
> >>>                    IsMemoryGuarded ((EFI_PHYSICAL_ADDRESS)(UINTN)Head);
> >>>      HasPoolTail = !(IsGuarded &&
> >>>                      ((PcdGet8 (PcdHeapGuardPropertyMask) & BIT7) == 0));
> >>> +  PageAsPool = (Head->Signature == POOLPAGE_HEAD_SIGNATURE);
> >>>
> >>>      if (HasPoolTail) {
> >>>        Tail = HEAD_TO_TAIL (Head);
> >>> @@ -757,7 +766,7 @@ CoreFreePoolI (
> >>>      //
> >>>      // If it's not on the list, it must be pool pages
> >>>      //
> >>> -  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded) {
> >>> +  if (Index >= SIZE_TO_LIST (Granularity) || IsGuarded || PageAsPool) {
> >>>
> >>>        //
> >>>        // Return the memory pages back to free memory
> >>> diff --git a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> index 05eb4f422b..f6595c90ed 100644
> >>> --- a/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> +++ b/MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c
> >>> @@ -32,6 +32,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >>>    #include <Guid/PropertiesTable.h>
> >>>
> >>>    #include "DxeMain.h"
> >>> +#include "HeapGuard.h"
> >>>
> >>>    #define PREVIOUS_MEMORY_DESCRIPTOR(MemoryDescriptor, Size) \
> >>>      ((EFI_MEMORY_DESCRIPTOR *)((UINT8 *)(MemoryDescriptor) - (Size)))
> >>> @@ -205,16 +206,13 @@ MergeMemoryMap (
> >>>        NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (MemoryMapEntry, DescriptorSize);
> >>>
> >>>        do {
> >>> -      MemoryBlockLength = (UINT64) (EfiPagesToSize (MemoryMapEntry->NumberOfPages));
> >>> +      MergeGuardPages (NewMemoryMapEntry, NextMemoryMapEntry->PhysicalStart);
> >>> +      MemoryBlockLength = (UINT64) (EfiPagesToSize (NewMemoryMapEntry->NumberOfPages));
> >>>          if (((UINTN)NextMemoryMapEntry < (UINTN)MemoryMapEnd) &&
> >>> -          (MemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> >>> -          (MemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> >>> -          ((MemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> >>> -        MemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>> -        if (NewMemoryMapEntry != MemoryMapEntry) {
> >>> -          NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>> -        }
> >>> -
> >>> +          (NewMemoryMapEntry->Type == NextMemoryMapEntry->Type) &&
> >>> +          (NewMemoryMapEntry->Attribute == NextMemoryMapEntry->Attribute) &&
> >>> +          ((NewMemoryMapEntry->PhysicalStart + MemoryBlockLength) == NextMemoryMapEntry->PhysicalStart)) {
> >>> +        NewMemoryMapEntry->NumberOfPages += NextMemoryMapEntry->NumberOfPages;
> >>>            NextMemoryMapEntry = NEXT_MEMORY_DESCRIPTOR (NextMemoryMapEntry, DescriptorSize);
> >>>            continue;
> >>>          } else {
> >>>            continue;
> >>>          } else {
> >>>
