Star,
Regards,
Jian
-----Original Message-----
From: Zeng, Star
Sent: Monday, October 22, 2018 4:24 PM
To: Wang, Jian J <jian.j.w...@intel.com>; edk2-devel@lists.01.org
Cc: Kinney, Michael D <michael.d.kin...@intel.com>; Ni, Ruiyu
<ruiyu...@intel.com>; Yao, Jiewen <jiewen....@intel.com>; Laszlo Ersek
<ler...@redhat.com>; Zeng, Star <star.z...@intel.com>
Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
memory detection
On 2018/10/22 15:12, Wang, Jian J wrote:
Star,
Thanks for the comment. Please see my opinions below.
Regards,
Jian
-----Original Message-----
From: Zeng, Star
Sent: Monday, October 22, 2018 10:53 AM
To: Wang, Jian J <jian.j.w...@intel.com>; edk2-devel@lists.01.org
Cc: Kinney, Michael D <michael.d.kin...@intel.com>; Ni, Ruiyu
<ruiyu...@intel.com>; Yao, Jiewen <jiewen....@intel.com>; Laszlo Ersek
<ler...@redhat.com>; Zeng, Star <star.z...@intel.com>
Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free
memory detection
On 2018/10/19 9:50, Jian J Wang wrote:
UAF (Use-After-Free) memory detection is a new feature introduced to
detect illegal access to memory which has been freed. The principle
behind it is similar to the heap guard feature: the core turns all pool
memory allocations into page allocations and marks the pages as
not-present once they are freed.
This also implies that, once a page is allocated and freed, it cannot
be re-allocated. This brings another issue: there is a risk that memory
space will run out. To address it, the memory service adds logic to put
part (at most 64 pages at a time) of the freed pages back into the page
pool, so that the memory service still has memory to allocate after all
memory space has been allocated once. This is called memory promotion.
The promoted pages are always the oldest pages that have been freed.
To use this feature, simply set the following PCD to 1:
gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask
Please note this feature cannot be used together with the heap guard feature
controlled by PCD gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask.
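The promotion scheme described above can be sketched as a small host-side
model. This is a simplified illustration, not the DXE core implementation:
all names (FreePage, PromoteFreedPages, AllocatePage) and the fixed-size
arrays are hypothetical; the real code works on the guarded-memory bitmap
and EFI memory map. The essential behavior shown is that freed pages are
quarantined in FIFO order and at most 64 of the oldest ones are promoted
back only when allocation would otherwise fail.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PAGES          1024
#define PROMOTE_BATCH_SIZE 64   /* at most 64 pages promoted per attempt */

/* FIFO of freed (quarantined, conceptually not-present) pages; oldest first.
   For simplicity this model assumes fewer than MAX_PAGES total frees. */
static size_t mFreedQueue[MAX_PAGES];
static size_t mFreedHead = 0, mFreedTail = 0;

/* Pool of pages currently available for allocation. */
static size_t mFreePool[MAX_PAGES];
static size_t mFreeCount = 0;

void FreePage (size_t Page) {
  /* Under UAF detection a freed page is not immediately reusable; it is
     queued and kept not-present so stale accesses trap. */
  mFreedQueue[mFreedTail++] = Page;
}

/* Promote up to PROMOTE_BATCH_SIZE of the oldest freed pages back into
   the allocatable pool. Returns the number of pages promoted. */
size_t PromoteFreedPages (void) {
  size_t Promoted = 0;
  while (Promoted < PROMOTE_BATCH_SIZE && mFreedHead < mFreedTail) {
    mFreePool[mFreeCount++] = mFreedQueue[mFreedHead++];
    Promoted++;
  }
  return Promoted;
}

/* Allocate one page; when the pool is exhausted, fall back to promotion
   (mirroring FindFreePages -> PromoteMemoryResource). Returns (size_t)-1
   on failure. */
size_t AllocatePage (void) {
  if (mFreeCount == 0 && PromoteFreedPages () == 0) {
    return (size_t)-1;
  }
  return mFreePool[--mFreeCount];
}
```

In this model, an allocate/free cycle over the whole pool leaves nothing
allocatable until promotion kicks in, which is exactly the "memory space
used out" scenario the commit message addresses.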
Cc: Star Zeng <star.z...@intel.com>
Cc: Michael D Kinney <michael.d.kin...@intel.com>
Cc: Jiewen Yao <jiewen....@intel.com>
Cc: Ruiyu Ni <ruiyu...@intel.com>
Cc: Laszlo Ersek <ler...@redhat.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Jian J Wang <jian.j.w...@intel.com>
I did not review the whole patch thoroughly. But I suggest putting
the code for UAF in a separate file if it is feasible. I am aware that
UAF will share much code with HeapGuard, so could we have Uaf.c,
HeapGuard.c and GuardPage.c (this file name is just an example)?
I think we can take UAF as a special kind of heap guard: guarding the
freed heap memory instead of guarding the allocated heap memory. So
UAF is a complement to Heap Guard, or part of Heap Guard. From this
perspective, it's not necessary to put it in a separate file.
Very good point. From this perspective, I agree.
Then I would like to suggest not introducing a new PCD
PcdUseAfterFreeDetectionPropertyMask, but using one bit of
PcdHeapGuardPropertyMask for it, since we think it is part of Heap Guard.
The benefits are:
1. No confusion between Heap Guard and Use After Free.
2. No need to introduce a new PCD.
3. Can reuse BIT6 - Enable non-stop mode.
4. No need to update DxeIplPeim to enable IA32 PAE paging.
Good idea. I'll try it. Thanks.
---
MdeModulePkg/Core/Dxe/DxeMain.h              |   1 +
MdeModulePkg/Core/Dxe/DxeMain.inf            |   1 +
MdeModulePkg/Core/Dxe/Gcd/Gcd.c              |   8 +
MdeModulePkg/Core/Dxe/Mem/HeapGuard.c        | 393 ++++++++++++++++++++++++++-
MdeModulePkg/Core/Dxe/Mem/HeapGuard.h        |  66 +++++
MdeModulePkg/Core/Dxe/Mem/Page.c             |  39 ++-
MdeModulePkg/Core/Dxe/Mem/Pool.c             |  21 +-
MdeModulePkg/Core/Dxe/Misc/PropertiesTable.c |  16 +-
8 files changed, 519 insertions(+), 26 deletions(-)
diff --git a/MdeModulePkg/Core/Dxe/DxeMain.h
b/MdeModulePkg/Core/Dxe/DxeMain.h
index 2dec9da5e3..ae75cc5b25 100644
--- a/MdeModulePkg/Core/Dxe/DxeMain.h
+++ b/MdeModulePkg/Core/Dxe/DxeMain.h
@@ -92,6 +92,7 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
#include <Library/DxeServicesLib.h>
#include <Library/DebugAgentLib.h>
#include <Library/CpuExceptionHandlerLib.h>
+#include <Library/DebugPrintErrorLevelLib.h>
//
diff --git a/MdeModulePkg/Core/Dxe/DxeMain.inf
b/MdeModulePkg/Core/Dxe/DxeMain.inf
index 10375443c0..d91258c087 100644
--- a/MdeModulePkg/Core/Dxe/DxeMain.inf
+++ b/MdeModulePkg/Core/Dxe/DxeMain.inf
@@ -198,6 +198,7 @@
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPoolType                   ## CONSUMES
  gEfiMdeModulePkgTokenSpaceGuid.PcdHeapGuardPropertyMask               ## CONSUMES
  gEfiMdeModulePkgTokenSpaceGuid.PcdCpuStackGuard                       ## CONSUMES
+ gEfiMdeModulePkgTokenSpaceGuid.PcdUseAfterFreeDetectionPropertyMask   ## CONSUMES
# [Hob]
# RESOURCE_DESCRIPTOR ## CONSUMES
diff --git a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
index d9c65a8045..e43ec74010 100644
--- a/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
+++ b/MdeModulePkg/Core/Dxe/Gcd/Gcd.c
@@ -160,6 +160,10 @@ CoreDumpGcdMemorySpaceMap (
EFI_GCD_MEMORY_SPACE_DESCRIPTOR *MemorySpaceMap;
UINTN Index;
+ if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
+ return;
+ }
I am aware the purpose of this change is to optimize some code when
DEBUG_GCD is not set.
But I do not suggest newly introducing a dependency on
DebugPrintErrorLevelLib; I think this check should be hidden in the
instance of DebugLib. Maybe a new macro DEBUG_CODE_ERROR_LEVEL (this
macro name is just an example) could be introduced if we really want to
do that.
There are two purposes here: one is optimization, but the other is to fix a
real issue: GCD memory lock reentrance (see below for details).
CoreDumpGcdMemorySpaceMap()
=> CoreGetMemorySpaceMap()
=> CoreAcquireGcdMemoryLock () *
AllocatePool()
=> InternalAllocatePool()
=> CoreAllocatePool()
=> CoreAllocatePoolI()
=> CoreAllocatePoolPagesI()
=> CoreAllocatePoolPages()
=> FindFreePages()
=> PromoteMemoryResource()
=> CoreAcquireGcdMemoryLock() **
If GetDebugPrintErrorLevel() is not recommended, maybe we need to fix the
above issue in a different way. For example, change CoreGetMemorySpaceMap()
to avoid the lock conflict, if possible. In addition, we could set
PcdFixedDebugPrintErrorLevel to control the DEBUG_GCD print.
Got it, thanks.
I still do not suggest using GetDebugPrintErrorLevel with a new
DebugPrintErrorLevelLib dependency.
Seemingly, the problem could not even be resolved thoroughly, because it
may still happen when DEBUG_GCD is set, right?
A possible solution is adding CoreAcquireGcdMemoryLockOrFail, updating
PromoteMemoryResource to use CoreAcquireGcdMemoryLockOrFail, and
updating the ASSERT in CoreDumpGcdMemorySpaceMap with an error check.
Maybe there is another good idea.
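The OrFail idea can be illustrated with a minimal non-reentrant lock model.
This is a sketch only: the real CoreAcquireGcdMemoryLock is built on the DXE
core's lock services, and the PromoteMemoryResource body here is a stand-in.
The point demonstrated is that a try-acquire variant lets the promotion path
skip its GCD bookkeeping instead of asserting when the lock is already held.

```c
#include <assert.h>
#include <stdbool.h>

static bool mGcdMemoryLock = false;

void CoreAcquireGcdMemoryLock (void) {
  assert (!mGcdMemoryLock);   /* reentrance would trip this assert */
  mGcdMemoryLock = true;
}

/* Try-acquire variant: returns false instead of asserting when the lock
   is already owned, letting the caller defer its work. */
bool CoreAcquireGcdMemoryLockOrFail (void) {
  if (mGcdMemoryLock) {
    return false;
  }
  mGcdMemoryLock = true;
  return true;
}

void CoreReleaseGcdMemoryLock (void) {
  assert (mGcdMemoryLock);
  mGcdMemoryLock = false;
}

/* Models PromoteMemoryResource() being reached while an outer caller
   (e.g. CoreDumpGcdMemorySpaceMap via CoreGetMemorySpaceMap) already
   owns the lock: it must neither deadlock nor assert. */
bool PromoteMemoryResource (void) {
  if (!CoreAcquireGcdMemoryLockOrFail ()) {
    return false;            /* GCD map busy; skip promotion for now */
  }
  /* ... convert freed capacity back to allocatable here ... */
  CoreReleaseGcdMemoryLock ();
  return true;
}
```

The caller of PromoteMemoryResource then has to tolerate a false return,
which is why the ASSERT in CoreDumpGcdMemorySpaceMap would need to become
an error check as suggested above.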
The actual issue is that, in the code of CoreGetMemorySpaceMap(), AllocatePool()
is inside the GCD lock scope. Maybe we could move it out of the lock. Do you
know whether AllocatePool can change the GCD memory map? I can't see it from
the code, but maybe I am missing something.
  CoreAcquireGcdMemoryLock ();
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
Done:
  CoreReleaseGcdMemoryLock ();

=======>

  CoreAcquireGcdMemoryLock ();
  ...
  CoreReleaseGcdMemoryLock ();
  ...
  *MemorySpaceMap = AllocatePool (*NumberOfDescriptors * sizeof (EFI_GCD_MEMORY_SPACE_DESCRIPTOR));
  ...
  CoreAcquireGcdMemoryLock ();
  ...
Done:
  CoreReleaseGcdMemoryLock ();
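Moving the allocation out of the lock scope is the classic two-pass snapshot
pattern. One subtlety worth noting: the descriptor count can change between
dropping and re-taking the lock, so the copy pass should re-check the count
and retry if the buffer is too small. The sketch below is a host-side model
with illustrative names (DESCRIPTOR, GetMemorySpaceMap, Acquire/Release),
not the actual Gcd.c code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct { unsigned long BaseAddress; unsigned long Length; } DESCRIPTOR;

static DESCRIPTOR mMap[8];           /* the lock-protected map */
static size_t     mMapCount = 3;
static int        mLock = 0;

static void Acquire (void) { assert (!mLock); mLock = 1; }
static void Release (void) { assert (mLock);  mLock = 0; }

/* Two-pass snapshot: size the buffer under the lock, allocate with the
   lock dropped (so the allocator may take it), then copy under the lock
   again, retrying if the map grew in between. Returns NULL on failure. */
DESCRIPTOR *GetMemorySpaceMap (size_t *Count) {
  DESCRIPTOR *Snapshot = NULL;
  size_t      Needed   = 0;

  do {
    Acquire ();
    if (Snapshot != NULL && Needed >= mMapCount) {
      *Count = mMapCount;
      memcpy (Snapshot, mMap, mMapCount * sizeof (DESCRIPTOR));
      Release ();
      return Snapshot;
    }
    Needed = mMapCount;
    Release ();                      /* allocate with no lock held */
    free (Snapshot);
    Snapshot = malloc (Needed * sizeof (DESCRIPTOR));
  } while (Snapshot != NULL);

  *Count = 0;
  return NULL;
}
```

This only works if, as questioned above, the allocator cannot shrink the map
between the two passes in a way that invalidates the snapshot semantics.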
Thanks,
Star
Thanks,
Star
+
Status = CoreGetMemorySpaceMap (&NumberOfDescriptors, &MemorySpaceMap);
ASSERT (Status == EFI_SUCCESS && MemorySpaceMap != NULL);
@@ -202,6 +206,10 @@ CoreDumpGcdIoSpaceMap (
EFI_GCD_IO_SPACE_DESCRIPTOR *IoSpaceMap;
UINTN Index;
+ if ((GetDebugPrintErrorLevel () & DEBUG_GCD) == 0) {
+ return;
+ }
+
Status = CoreGetIoSpaceMap (&NumberOfDescriptors, &IoSpaceMap);
ASSERT (Status == EFI_SUCCESS && IoSpaceMap != NULL);
diff --git a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
index 663f969c0d..b120c04f8f 100644
--- a/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
+++ b/MdeModulePkg/Core/Dxe/Mem/HeapGuard.c
@@ -44,6 +44,11 @@ GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelShift[GUARDED_HEAP_MAP_TABLE_DEPTH]
 GLOBAL_REMOVE_IF_UNREFERENCED UINTN mLevelMask[GUARDED_HEAP_MAP_TABLE_DEPTH]
   = GUARDED_HEAP_MAP_TABLE_DEPTH_MASKS;
+//
+// Used for Use-After-Free memory detection to promote freed but not used pages.
+//
+GLOBAL_REMOVE_IF_UNREFERENCED EFI_PHYSICAL_ADDRESS mLastPromotedPage = BASE_4GB;
+
/**
Set corresponding bits in bitmap table to 1 according to the address.
@@ -379,7 +384,7 @@ ClearGuardedMemoryBits (
@return An integer containing the guarded memory bitmap.
**/
-UINTN
+UINT64
GetGuardedMemoryBits (
IN EFI_PHYSICAL_ADDRESS Address,
IN UINTN NumberOfPages
@@ -387,7 +392,7 @@ GetGuardedMemoryBits (
{
UINT64 *BitMap;
UINTN Bits;
- UINTN Result;
+ UINT64 Result;
UINTN Shift;
UINTN BitsToUnitEnd;
@@ -1203,6 +1208,373 @@ SetAllGuardPages (
}
}
+/**
+ Check to see if the Use-After-Free (UAF) feature is enabled or not.
+
+ @return TRUE/FALSE.
+**/
+BOOLEAN
+IsUafEnabled (
+ VOID
+ )
+{
+ ASSERT (!IsHeapGuardEnabled());
+ return ((PcdGet8 (PcdUseAfterFreeDetectionPropertyMask) & BIT0) != 0);
+}
+
+/**
+ Find the address of top-most guarded free page.
+
+ @param[out] Address Start address of top-most guarded free page.
+
+ @return VOID.
+**/
+VOID
+GetLastGuardedFreePageAddress (
+ OUT EFI_PHYSICAL_ADDRESS *Address
+ )
+{
+ EFI_PHYSICAL_ADDRESS AddressGranularity;
+ EFI_PHYSICAL_ADDRESS BaseAddress;
+ UINTN Level;
+ UINT64 Map;
+ INTN Index;
+
+ ASSERT (mMapLevel >= 1);
+
+ BaseAddress = 0;
+ Map = mGuardedMemoryMap;
+ for (Level = GUARDED_HEAP_MAP_TABLE_DEPTH - mMapLevel;
+ Level < GUARDED_HEAP_MAP_TABLE_DEPTH;
+ ++Level) {
From: "Wang, Jian J" <jian.j.w...@intel.com>
Newsgroups: gmane.comp.bios.edk2.devel
Subject: Re: [edk2] [PATCH 3/3] MdeModulePkg/Core: add use-after-free memory detection
Date: Tue, 23 Oct 2018 01:24:59 +0000
To: "Zeng, Star" <star.z...@intel.com>, "edk2-devel@lists.01.org" <edk2-devel@lists.01.org>
Cc: "Kinney, Michael D" <michael.d.kin...@intel.com>, "Ni, Ruiyu" <ruiyu...@intel.com>, "Yao, Jiewen" <jiewen....@intel.com>, Laszlo Ersek <ler...@redhat.com>