Thanks for catching this. A new patch has been sent out.

Regards,
Jian


> -----Original Message-----
> From: Ni, Ruiyu
> Sent: Thursday, March 15, 2018 11:47 AM
> To: Wang, Jian J <jian.j.w...@intel.com>; edk2-devel@lists.01.org
> Cc: Yao, Jiewen <jiewen....@intel.com>; Dong, Eric <eric.d...@intel.com>;
> Zeng, Star <star.z...@intel.com>
> Subject: Re: [edk2] [PATCH] MdeModulePkg/Core: allow HeapGuard even before CpuArchProtocol installed
> 
> On 3/15/2018 10:27 AM, Jian J Wang wrote:
> >> v2 changes:
> >>     Fix a logic hole in the bit operations on an address on a 64K
> >>     boundary with just a 64-bit length (SetBits(), ClearBits(), GetBits()).
> >
> > Because HeapGuard needs CpuArchProtocol to update page attributes, the
> > feature is normally enabled only after CpuArchProtocol is installed. Since
> > some drivers are loaded before CpuArchProtocol, they cannot make use of
> > the HeapGuard feature to detect potential issues.
> >
> > This patch fixes the above situation by updating the DXE core to skip the
> > NULL check against the global gCpu in IsMemoryTypeToGuard(), and by adding
> > a NULL check against gCpu in SetGuardPage() and UnsetGuardPage() so that
> > they can be called safely but do nothing. This allows HeapGuard to record
> > all guarded memory without setting the related Guard pages to not-present.
> >
> > Once the CpuArchProtocol is installed, a protocol notify will be called
> > to complete the work of setting Guard pages to not-present.
> >
> > Please note that the above changes would cause a #PF in GCD code during
> > cleanup of map entries, which is initiated by the CpuDxe driver to sync the
> > real MTRR and paging attributes back to GCD. During that time, CpuDxe does
> > not allow GCD to update memory attributes, so no Guard page can be unset.
> > As a result, Guarded memory could not be freed during memory map cleanup.
> >
> > The solution is to avoid allocating guarded memory for memory map entries
> > in GCD code. This is done by setting the global mOnGuarding to TRUE before
> > each memory allocation and setting it back to FALSE afterwards in the GCD
> > function CoreAllocateGcdMapEntry().
> >
> > Cc: Star Zeng <star.z...@intel.com>
> > Cc: Eric Dong <eric.d...@intel.com>
> > Cc: Jiewen Yao <jiewen....@intel.com>
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Jian J Wang <jian.j.w...@intel.com>
> > ---
> >   MdeModulePkg/Core/Dxe/Gcd/Gcd.c               |  10 ++
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.c         | 148 ++++++++++++++++++++++++--
> >   MdeModulePkg/Core/Dxe/Mem/HeapGuard.h         |   8 ++
> >   MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c |   5 +
> >   MdeModulePkg/Core/PiSmmCore/HeapGuard.c       |  16 +--
> >   5 files changed, 174 insertions(+), 13 deletions(-)
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > index 8fbc3d282c..77f4adb4bc 100644
> > --- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > +++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
> > @@ -16,6 +16,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> >
> >   #include "DxeMain.h"
> >   #include "Gcd.h"
> > +#include "Mem/HeapGuard.h"
> >
> >   #define MINIMUM_INITIAL_MEMORY_SIZE 0x10000
> >
> > @@ -391,12 +392,21 @@ CoreAllocateGcdMapEntry (
> >     IN OUT EFI_GCD_MAP_ENTRY  **BottomEntry
> >     )
> >   {
> > +  //
> > +  // Set mOnGuarding to TRUE before memory allocation. This will make sure
> > +  // that the entry memory is not "guarded" by HeapGuard. Otherwise it might
> > +  // cause problems when it's freed (if HeapGuard is enabled).
> > +  //
> > +  mOnGuarding = TRUE;
> >     *TopEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> > +  mOnGuarding = FALSE;
> >     if (*TopEntry == NULL) {
> >       return EFI_OUT_OF_RESOURCES;
> >     }
> >
> > +  mOnGuarding = TRUE;
> >     *BottomEntry = AllocateZeroPool (sizeof (EFI_GCD_MAP_ENTRY));
> > +  mOnGuarding = FALSE;
> >     if (*BottomEntry == NULL) {
> >       CoreFreePool (*TopEntry);
> >       return EFI_OUT_OF_RESOURCES;
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > index ac043b5d9b..fd6aeee8da 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
> > @@ -70,7 +70,7 @@ SetBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -123,7 +123,7 @@ ClearBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -188,10 +188,14 @@ GetBits (
> >       Lsbs = 0;
> >     }
> >
> > -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> > -  if (Lsbs > 0) {
> > -    BitMap  += 1;
> > -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +    Result = *BitMap;
> > +  } else {
> > +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> > +    if (Lsbs > 0) {
> > +      BitMap  += 1;
> > +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +    }
> >     }
> >
> >     return Result;
> > @@ -576,6 +580,10 @@ SetGuardPage (
> >     IN  EFI_PHYSICAL_ADDRESS      BaseAddress
> >     )
> >   {
> > +  if (gCpu == NULL) {
> > +    return;
> > +  }
> > +
> >     //
> >     // Set flag to make sure allocating memory without GUARD for page table
> >     // operation; otherwise infinite loops could be caused.
> > @@ -606,6 +614,10 @@ UnsetGuardPage (
> >   {
> >     UINT64          Attributes;
> >
> > +  if (gCpu == NULL) {
> > +    return;
> > +  }
> > +
> >     //
> >     // Once the Guard page is unset, it will be freed back to memory pool. NX
> >     // memory protection must be restored for this page if NX is enabled for free
> > @@ -652,7 +664,7 @@ IsMemoryTypeToGuard (
> >     UINT64 ConfigBit;
> >     BOOLEAN     InSmm;
> >
> > -  if (gCpu == NULL || AllocateType == AllocateAddress) {
> > +  if (AllocateType == AllocateAddress) {
> >       return FALSE;
> >     }
> >
> > @@ -1164,6 +1176,128 @@ CoreConvertPagesWithGuard (
> >     return CoreConvertPages (Start, NumberOfPages, NewType);
> >   }
> >
> > +/**
> > +  Set all Guard pages which could not be set before CPU Arch Protocol was installed.
> > +**/
> > +VOID
> > +SetAllGuardPages (
> > +  VOID
> > +  )
> > +{
> > +  UINTN     Entries[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Shifts[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINTN     Indices[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Tables[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    Addresses[GUARDED_HEAP_MAP_TABLE_DEPTH];
> > +  UINT64    TableEntry;
> > +  UINT64    Address;
> > +  UINT64    GuardPage;
> > +  INTN      Level;
> > +  UINTN     Index;
> > +  BOOLEAN   OnGuarding;
> > +
> > +  if (mGuardedMemoryMap == 0 ||
> > +      mMapLevel == 0 ||
> > +      mMapLevel > GUARDED_HEAP_MAP_TABLE_DEPTH) {
> > +    return;
> > +  }
> > +
> > +  CopyMem (Entries, mLevelMask, sizeof (Entries));
> > +  CopyMem (Shifts, mLevelShift, sizeof (Shifts));
> > +
> > +  SetMem (Tables, sizeof(Tables), 0);
> > +  SetMem (Addresses, sizeof(Addresses), 0);
> > +  SetMem (Indices, sizeof(Indices), 0);
> > +
> > +  Level         = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
> > +  Tables[Level] = mGuardedMemoryMap;
> > +  Address       = 0;
> > +  OnGuarding    = FALSE;
> > +
> > +  DEBUG_CODE (
> > +    DumpGuardedMemoryBitmap ();
> > +  );
> > +
> > +  while (TRUE) {
> > +    if (Indices[Level] > Entries[Level]) {
> > +      Tables[Level] = 0;
> > +      Level        -= 1;
> > +    } else {
> > +
> > +      TableEntry  = ((UINT64 *)(UINTN)(Tables[Level]))[Indices[Level]];
> > +      Address     = Addresses[Level];
> > +
> > +      if (TableEntry == 0) {
> > +
> > +        OnGuarding = FALSE;
> > +
> > +      } else if (Level < GUARDED_HEAP_MAP_TABLE_DEPTH - 1) {
> > +
> > +        Level            += 1;
> > +        Tables[Level]     = TableEntry;
> > +        Addresses[Level]  = Address;
> > +        Indices[Level]    = 0;
> > +
> > +        continue;
> > +
> > +      } else {
> > +
> > +        Index = 0;
> > +        while (Index < GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +          if ((TableEntry & 1) == 1) {
> > +            if (OnGuarding) {
> > +              GuardPage = 0;
> > +            } else {
> > +              GuardPage = Address - EFI_PAGE_SIZE;
> > +            }
> > +            OnGuarding = TRUE;
> > +          } else {
> > +            if (OnGuarding) {
> > +              GuardPage = Address;
> > +            } else {
> > +              GuardPage = 0;
> > +            }
> > +            OnGuarding = FALSE;
> > +          }
> > +
> > +          if (GuardPage != 0) {
> > +            SetGuardPage (GuardPage);
> > +          }
> > +
> > +          if (TableEntry == 0) {
> > +            break;
> > +          }
> > +
> > +          TableEntry = RShiftU64 (TableEntry, 1);
> > +          Address   += EFI_PAGE_SIZE;
> > +          Index     += 1;
> > +        }
> > +      }
> > +    }
> > +
> > +    if (Level < (GUARDED_HEAP_MAP_TABLE_DEPTH - (INTN)mMapLevel)) {
> > +      break;
> > +    }
> > +
> > +    Indices[Level] += 1;
> > +    Address = (Level == 0) ? 0 : Addresses[Level - 1];
> > +    Addresses[Level] = Address | LShiftU64(Indices[Level], Shifts[Level]);
> > +
> > +  }
> > +}
> > +
> > +/**
> > +  Notify function used to set all Guard pages after CPU Arch Protocol is installed.
> > +**/
> > +VOID
> > +HeapGuardCpuArchProtocolNotify (
> > +  VOID
> > +  )
> > +{
> > +  ASSERT (gCpu != NULL);
> > +  SetAllGuardPages ();
> > +}
> > +
> >   /**
> >     Helper function to convert a UINT64 value in binary to a string.
> >
> > diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > index 7208ab1437..8c34692439 100644
> > --- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > +++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.h
> > @@ -399,6 +399,14 @@ IsHeapGuardEnabled (
> >     VOID
> >     );
> >
> > +/**
> > +  Notify function used to set all Guard pages after CPU Arch Protocol is installed.
> > +**/
> > +VOID
> > +HeapGuardCpuArchProtocolNotify (
> > +  VOID
> > +  );
> > +
> >   extern BOOLEAN mOnGuarding;
> >
> >   #endif
> > diff --git a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > index 407aece807..2f7e490af1 100644
> > --- a/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > +++ b/MdeModulePkg/Core/Dxe/Misc/MemoryProtection.c
> > @@ -1001,6 +1001,11 @@ MemoryProtectionCpuArchProtocolNotify (
> >       InitializeDxeNxMemoryProtectionPolicy ();
> >     }
> >
> > +  //
> > +  // Call notify function meant for Heap Guard.
> > +  //
> > +  HeapGuardCpuArchProtocolNotify ();
> > +
> >     if (mImageProtectionPolicy == 0) {
> >       return;
> >     }
> > diff --git a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > index 923af93de2..f9657f9baa 100644
> > --- a/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > +++ b/MdeModulePkg/Core/PiSmmCore/HeapGuard.c
> > @@ -73,7 +73,7 @@ SetBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -126,7 +126,7 @@ ClearBits (
> >     StartBit  = (UINTN)GUARDED_HEAP_MAP_ENTRY_BIT_INDEX (Address);
> >     EndBit    = (StartBit + BitNumber - 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> >
> > -  if ((StartBit + BitNumber) > GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +  if ((StartBit + BitNumber) >= GUARDED_HEAP_MAP_ENTRY_BITS) {
> >       Msbs    = (GUARDED_HEAP_MAP_ENTRY_BITS - StartBit) %
> >                 GUARDED_HEAP_MAP_ENTRY_BITS;
> >       Lsbs    = (EndBit + 1) % GUARDED_HEAP_MAP_ENTRY_BITS;
> > @@ -191,10 +191,14 @@ GetBits (
> >       Lsbs = 0;
> >     }
> >
> > -  Result    = RShiftU64 ((*BitMap), StartBit) & (LShiftU64 (1, Msbs) - 1);
> > -  if (Lsbs > 0) {
> > -    BitMap  += 1;
> > -    Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +  if (StartBit == 0 && BitNumber == GUARDED_HEAP_MAP_ENTRY_BITS) {
> > +    Result = *BitMap;
> > +  } else {
> > +    Result    = RShiftU64((*BitMap), StartBit) & (LShiftU64(1, Msbs) - 1);
> > +    if (Lsbs > 0) {
> > +      BitMap  += 1;
> > +      Result  |= LShiftU64 ((*BitMap) & (LShiftU64 (1, Lsbs) - 1), Msbs);
> > +    }
> >     }
> >
> >     return Result;
> >
> Jian,
> I think the single patch should be split into at least three patches:
> two that fix the ">=" issue in DxeCore and PiSmmCore, and one that
> fixes the HeapGuard gap.
> 
> --
> Thanks,
> Ray
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
