On 2024/7/20 0:04, Dave Jiang wrote:


On 7/1/24 7:12 PM, Shiyang Ruan wrote:


On 2024/6/25 21:56, Shiyang Ruan wrote:


On 2024/6/22 1:51, Dan Williams wrote:
Shiyang Ruan wrote:
Background:
Since a CXL device is a memory device, when the CPU consumes a poisoned
page on it, an MCE is always triggered via interrupt (INT18), no matter
which "-First" path is configured.  This is the first report.  Then,
currently, in the FW-First path the poison event is also delivered along
the following route: CXL device -> firmware -> OS: ACPI -> APEI -> GHES
-> CPER -> trace report.  This is the second one.  These two reports
indicate the same poisoned page, which is the so-called "duplicate
report"[1].  And the memory_failure() handling I'm trying to add in the
OS-First path could be yet another duplicate report.

Hope the flow below could make it easier to understand:
CPU accesses bad memory on CXL device, then
   -> MCE (INT18), *always* report (1)
   -> * FW-First (implemented now)
        -> CXL device -> FW
           -> OS:ACPI->APEI->GHES->CPER -> trace report (2.a)
      * OS-First (not implemented yet, I'm working on it)
        -> CXL device -> MSI
           -> OS:CXL driver -> memory_failure() (2.b)
So (1) and (2.a/b) are duplicates.

(I didn't get a response to my reply in [1], but I still have to make a
patch to solve this problem, so please correct me if my understanding is wrong.)

The CPU MCE may not be in the loop. Consider the case of patrol scrub,
or device-DMA accessing poison. In that case the device will signal a
component event and the CPU may never issue the MCE.

My other patch, "cxl/core: add poison creation event handler", calls
memory_failure() when an event is received from the device.  It checks the
poison record (and inserts one if it doesn't exist) to make sure the poison
is reported/handled, but not twice, no matter whether the CPU issues the MCE
earlier or later.  And the lock on the poison record makes sure this holds
even under race conditions.


What is missing for me from this description is *why* does the duplicate
report matter in practice? If all that happens is that the kernel
repeats the lookup to offline the page and set the HWPoison bit, is that
duplicated work worth adding more tracking?

Besides setting the HWPoison bit for the poisoned page, memory_failure() also
finds and notifies the processes that are accessing the poisoned page, and
tries to recover the page.  And there seems to be no lock preventing a second
memory_failure() from racing with a clear-poison operation, in which case the
final state of the HWPoison bit is uncertain.  I think that's the problem.
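
(To make the intended serialization concrete: a minimal sketch under assumed
names -- cxl_poison_find()/cxl_poison_insert() are hypothetical helpers, not
code from this series:)

#include <linux/mm.h>		/* memory_failure() */
#include <linux/mutex.h>

static DEFINE_MUTEX(cxl_poison_lock);

/* Report a poisoned PFN exactly once; record check and report are atomic */
static int cxl_report_poison(unsigned long pfn)
{
	int rc = 0;

	mutex_lock(&cxl_poison_lock);
	if (!cxl_poison_find(pfn)) {		/* hypothetical lookup helper */
		cxl_poison_insert(pfn);		/* hypothetical insert helper */
		/* flags == 0: "action optional", for device-signaled events */
		rc = memory_failure(pfn, 0);
	}
	mutex_unlock(&cxl_poison_lock);
	return rc;
}

/*
 * cxl_clear_poison() would take the same lock before dropping the record,
 * so a clear cannot race with a second report.
 */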

  > So, I think all CXL poison notification events should trigger an
  > action optional memory_failure(). I expect this needs to make sure
  > that duplicates are not a problem. I.e. in the case of CPU consumption
  > of CXL poison, that causes a synchronous MF_ACTION_REQUIRED event via
  > the MCE path *and* it may trigger the device to send an error record
  > for the same page. As far as I can see, duplicate reports (MCE + CXL
  > device) are unavoidable.

As you mentioned in my other patch, this problem should be solved first.  My
patch adds the poison record (the tracking, which should not grow very large)
to prevent duplicate reports.

Ping~

And I ran into some problems when using xarray; please see below.



This patch adds a new notifier_block and MCE_PRIO_CXL to
`x86_mce_decoder_chain`, so that the CXL memdev can check whether the
current poisoned page has already been reported (if yes, stop the notifier
chain so the following memory_failure() won't report it again).  In this
way, if the poisoned page was already handled (recorded and reported) in (1)
or (2), the other path won't duplicate the report.  The record can be
cleared when cxl_clear_poison() is called.
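
(Not part of the patch itself -- a hedged sketch of how such a decode-chain
notifier is typically wired up; the callback body is illustrative and assumes
the cxl_mce_recorded() helper this series adds:)

#include <linux/notifier.h>
#include <asm/mce.h>

static int cxl_mce_notify(struct notifier_block *nb, unsigned long val,
			  void *data)
{
	struct mce *mce = data;

	if (!mce || !mce_usable_address(mce))
		return NOTIFY_DONE;

	/* Already recorded by the FW-/OS-First path: suppress this report */
	if (cxl_mce_recorded(mce->addr))
		return NOTIFY_STOP;

	return NOTIFY_DONE;
}

static struct notifier_block cxl_mce_nb = {
	.notifier_call	= cxl_mce_notify,
	.priority	= MCE_PRIO_CXL,
};

/* at init time: */
mce_register_decode_chain(&cxl_mce_nb);

Since MCE_PRIO_CXL sits above MCE_PRIO_UC, this callback runs before the UC
handler that would otherwise call memory_failure(), so returning NOTIFY_STOP
is enough to suppress the duplicate.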

[1] https://lore.kernel.org/linux-cxl/664d948fb86f0_e8be29...@dwillia2-mobl3.amr.corp.intel.com.notmuch/

Signed-off-by: Shiyang Ruan <ruansy.f...@fujitsu.com>
---
 arch/x86/include/asm/mce.h |   1 +
 drivers/cxl/core/mbox.c    | 130 +++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/memdev.c  |   6 +-
 drivers/cxl/cxlmem.h       |   3 +
 4 files changed, 139 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index dfd2e9699bd7..d8109c48e7d9 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -182,6 +182,7 @@ enum mce_notifier_prios {
 	MCE_PRIO_NFIT,
 	MCE_PRIO_EXTLOG,
 	MCE_PRIO_UC,
+	MCE_PRIO_CXL,
 	MCE_PRIO_EARLY,
 	MCE_PRIO_CEC,
 	MCE_PRIO_HIGHEST = MCE_PRIO_CEC
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 2626f3fff201..0eb3c5401e81 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -4,6 +4,8 @@
 #include <linux/debugfs.h>
 #include <linux/ktime.h>
 #include <linux/mutex.h>
+#include <linux/notifier.h>
+#include <asm/mce.h>
 #include <asm/unaligned.h>
 #include <cxlpci.h>
 #include <cxlmem.h>
@@ -880,6 +882,9 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 		if (cxlr)
 			hpa = cxl_trace_hpa(cxlr, cxlmd, dpa);
 
+		if (hpa != ULLONG_MAX && cxl_mce_recorded(hpa))
+			return;
+
 		if (event_type == CXL_CPER_EVENT_GEN_MEDIA)
 			trace_cxl_general_media(cxlmd, type, cxlr, hpa,
 						&evt->gen_media);
@@ -1408,6 +1413,127 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL);
 
+struct cxl_mce_record {
+	struct list_head node;
+	u64 hpa;
+};
+LIST_HEAD(cxl_mce_records);
+DEFINE_MUTEX(cxl_mce_mutex);

I would recommend an xarray for this use case as that already has its
own internal locking and efficient memory allocation for new nodes.

However, the "why" question needs to be answered first.

An xarray does look better.  I didn't think of it before.  Thanks for the suggestion.

I'm trying to use an xarray, but I'm running into two problems.

The first one is: an xarray stores an entry at a specified index, but here we
only need to store the PFN itself.  I'm not sure how to pick an index for a
PFN, so I think an xarray might not be suitable for my case.
The second one: because we only need to store/search PFNs, a list node only
needs a list_head and a PFN, but an xarray seems to require more memory for
each node, which causes more overhead.

Use PFN as the index of the xarray since it's unique. And that makes finding 
the entry easy with xarray because you just pass in the PFN as the key. You can 
store an xarray entry with no data.

I didn't know it could store an index with no data.  I'll try it that way.

At this point you are just detecting whether the entry has been stored (valid) 
or not. Hope that helps.
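
(For illustration, a minimal sketch of that presence-only pattern; the
cxl_poison_pfns name and helpers are assumptions.  xa_mk_value() keeps the
stored entry non-NULL even though no real data is attached, and xa_insert()
returning -EBUSY doubles as duplicate detection:)

#include <linux/xarray.h>

static DEFINE_XARRAY(cxl_poison_pfns);

/* Returns 0 on first insert, -EBUSY if the PFN was already recorded */
static int cxl_record_poison(unsigned long pfn)
{
	return xa_insert(&cxl_poison_pfns, pfn, xa_mk_value(1), GFP_KERNEL);
}

static bool cxl_poison_recorded(unsigned long pfn)
{
	return xa_load(&cxl_poison_pfns, pfn) != NULL;
}

static void cxl_forget_poison(unsigned long pfn)
{
	xa_erase(&cxl_poison_pfns, pfn);	/* e.g. from cxl_clear_poison() */
}

As Dan noted, the xarray also does its own fine-grained locking internally,
which would replace the DEFINE_MUTEX in the patch above.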

Thank you very much!


--
Ruan.



Maybe I'm overthinking this?


--
Thanks
Ruan.


