Davidlohr Bueso wrote:
> On Wed, 30 Nov 2022, Dave Jiang wrote:
>
> >Bypass cpu_cache_invalidate_memregion() and checks when doing testing
> >using CONFIG_NVDIMM_SECURITY_TEST flag. The bypass allows testing on
> >QEMU where cpu_cache_has_invalidate_memregion() fails. Usage of
> >cpu_cache_invalidate_memregion() is not needed for cxl_test security
> >testing.
>
> We'll also want something similar for the non-pmem specific security
> bits
Wait, you expect someone to build a device *with* security commands
but *without* pmem? In the volatile case the device can just
secure-erase itself every time power is removed; no explicit user
action is needed to trigger that. So the data-at-rest security
argument goes away with a pure volatile device, no?

> think the current naming is very generic but the functionality is
> too tied to pmem. So I would either rename these to 'cxl_pmem...'
> or make them more generic by placing them in cxlmem.h and taking the
> dev pointer directly as well as the iores.

This does remind me that security is just one use case CXL has for
cpu_cache_has_invalidate_memregion(). It also needs to be used after
any HDM decoder change where the HPA-to-DPA translation has changed. I
think this means that any user-created region needs to flush CPU
caches before that region goes into service. So I like the idea of
separate cxl_pmem_invalidate_memregion() and
cxl_region_invalidate_memregion() with something like:

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 1e61d1bafc0c..430e8e5ba7d9 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1403,6 +1403,8 @@ static int attach_target(struct cxl_region *cxlr, const char *decoder, int pos)
 		goto out;
 	down_read(&cxl_dpa_rwsem);
 	rc = cxl_region_attach(cxlr, to_cxl_endpoint_decoder(dev), pos);
+	if (rc == 0)
+		set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
 	up_read(&cxl_dpa_rwsem);
 	up_write(&cxl_region_rwsem);
 out:
@@ -1958,6 +1960,33 @@ static int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
 	return rc;
 }
 
+static bool cxl_region_has_invalidate_memregion(struct cxl_region *cxlr)
+{
+	if (!cpu_cache_has_invalidate_memregion()) {
+		if (IS_ENABLED(CONFIG_CXL_REGION_TEST)) {
+			dev_warn_once(
+				&cxlr->dev,
+				"Bypassing cpu_cache_has_invalidate_memregion() check for testing!\n");
+			return true;
+		}
+		return false;
+	}
+
+	return true;
+}
+
+static void cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+{
+	if (IS_ENABLED(CONFIG_CXL_REGION_TEST)) {
+		dev_warn_once(
+			&cxlr->dev,
+			"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
+		return;
+	}
+
+	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
+}
+
 static int cxl_region_probe(struct device *dev)
 {
 	struct cxl_region *cxlr = to_cxl_region(dev);
@@ -1975,12 +2004,22 @@ static int cxl_region_probe(struct device *dev)
 		rc = -ENXIO;
 	}
 
+	if (test_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags) &&
+	    !cxl_region_has_invalidate_memregion(cxlr)) {
+		dev_err(&cxlr->dev, "Failed to synchronize CPU cache state\n");
+		rc = -ENXIO;
+	} else if (test_and_clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags))
+		cxl_region_invalidate_memregion(cxlr);
+
 	/*
 	 * From this point on any path that changes the region's state away from
 	 * CXL_CONFIG_COMMIT is also responsible for releasing the driver.
 	 */
 	up_read(&cxl_region_rwsem);
 
+	if (rc)
+		return rc;
+
 	switch (cxlr->mode) {
 	case CXL_DECODER_PMEM:
 		return devm_cxl_add_pmem_region(cxlr);
@@ -2008,4 +2047,5 @@ void cxl_region_exit(void)
 }
 
 MODULE_IMPORT_NS(CXL);
+MODULE_IMPORT_NS(DEVMEM);
 MODULE_ALIAS_CXL(CXL_DEVICE_REGION);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 9a212ab3cae4..827b1ad6cae4 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -388,12 +388,19 @@ struct cxl_region_params {
 	int nr_targets;
 };
 
+/*
+ * Flag whether this region needs to have its HPA span synchronized with
+ * CPU cache state at region activation time.
+ */
+#define CXL_REGION_F_INCOHERENT BIT(0)
+
 /**
  * struct cxl_region - CXL region
  * @dev: This region's device
  * @id: This region's id. Id is globally unique across all regions
  * @mode: Endpoint decoder allocation / access mode
  * @type: Endpoint decoder target type
+ * @flags: Region state flags
  * @cxl_nvb: nvdimm bridge for coordinating @cxlr_pmem setup / shutdown
  * @cxlr_pmem: (for pmem regions) cached copy of the nvdimm bridge
  * @params: active + config params for the region
@@ -403,6 +410,7 @@ struct cxl_region {
 	int id;
 	enum cxl_decoder_mode mode;
 	enum cxl_decoder_type type;
+	unsigned long flags;
 	struct cxl_nvdimm_bridge *cxl_nvb;
 	struct cxl_pmem_region *cxlr_pmem;
 	struct cxl_region_params params;
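
For the pmem side, the counterpart could be a minimal sketch along
these lines, keeping the CONFIG_NVDIMM_SECURITY_TEST gate from Dave's
patch (the cxl_nvdimm parameter and the IORES_DESC_PERSISTENT_MEMORY
descriptor are my assumptions here, not settled details):

static void cxl_pmem_invalidate_memregion(struct cxl_nvdimm *cxl_nvd)
{
	/* assumed test bypass, mirroring cxl_region_invalidate_memregion() */
	if (IS_ENABLED(CONFIG_NVDIMM_SECURITY_TEST)) {
		dev_warn_once(&cxl_nvd->dev,
			      "Bypassing cpu_cache_invalidate_memregion() for testing!\n");
		return;
	}

	cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY);
}

That would keep the security-test bypass local to the pmem path while
the CXL_REGION_F_INCOHERENT flag above covers the decoder-programming
case.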