On Jun 19 10:30, Andrzej Jakowski wrote:
> On 6/18/20 2:25 AM, Klaus Jensen wrote:
> > On Jun 16 22:18, Andrzej Jakowski wrote:
> >> So far it was not possible to have CMB and PMR emulated on the same
> >> device, because BAR2 was used exclusively either of PMR or CMB. This
> >> patch places CMB at BAR4 offset so it not conflicts with MSI-X vectors.
> >>
> >> Signed-off-by: Andrzej Jakowski <andrzej.jakow...@linux.intel.com>
> >> ---
> >>  hw/block/nvme.c      | 122 ++++++++++++++++++++++++++++---------------
> >>  hw/block/nvme.h      |   5 +-
> >>  include/block/nvme.h |   4 +-
> >>  3 files changed, 86 insertions(+), 45 deletions(-)
> >>
> >> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> >> index 9f11f3e9da..62681966b9 100644
> >> --- a/hw/block/nvme.c
> >> +++ b/hw/block/nvme.c
> >> @@ -22,12 +22,12 @@
> >>   *  [pmrdev=<mem_backend_file_id>,] \
> >>   *  max_ioqpairs=<N[optional]>
> >>   *
> >> - * Note cmb_size_mb denotes size of CMB in MB. CMB is assumed to be at
> >> - * offset 0 in BAR2 and supports only WDS, RDS and SQS for now.
> >> + * Note cmb_size_mb denotes size of CMB in MB. CMB when configured is assumed
> >> + * to be resident in BAR4 at certain offset - this is because BAR4 is also
> >> + * used for storing MSI-X table that is available at offset 0 in BAR4.
> >
> > I would still really like an R-b by a more PCI-competent reviewer to
> > ensure that it is sane to have the MSI-X table here in prefetchable
> > 64-bit address space.
>
> Having a Reviewed-by for that makes sense to me. Let me offer some
> evidence of real devices having MSI-X in a prefetchable region and in a
> non-prefetchable region. Based on those examples I don't think it matters
> where you place your MSI-X table.
>
> Why do you think it may not be sane to place the MSI-X table in a
> prefetchable region?
>
> Device with MSI-X in non-prefetchable region:
>         Region 0: Memory at fb42c000 (64-bit, non-prefetchable) [size=16K]
>         Capabilities: [80] MSI-X: Enable+ Count=1 Masked-
>                 Vector table: BAR=0 offset=00002000
>                 PBA: BAR=0 offset=00003000
>
> Device with MSI-X in prefetchable region:
>         Region 0: Memory at fbc00000 (64-bit, prefetchable) [size=2M]
>         Region 2: I/O ports at e020 [size=32]
>         Region 4: Memory at fbe04000 (64-bit, prefetchable) [size=16K]
>         Capabilities: [40] Power Management version 3
>                 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA
>                         PME(D0+,D1-,D2-,D3hot+,D3cold+)
>                 Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>                 Address: 0000000000000000  Data: 0000
>                 Masking: 00000000  Pending: 00000000
>         Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
>                 Vector table: BAR=4 offset=00000000
>                 PBA: BAR=4 offset=00002000
>
That's good enough for me. I rest my case!

> >>   *
> >> - * cmb_size_mb= and pmrdev= options are mutually exclusive due to limitation
> >> - * in available BAR's. cmb_size_mb= will take precedence over pmrdev= when
> >> - * both provided.
> >> + * pmrdev is assumed to be resident in BAR2/BAR3. When configured it consumes
> >> + * whole BAR2/BAR3 exclusively.
> >>   * Enabling pmr emulation can be achieved by pointing to memory-backend-file.
> >>   * For example:
> >>   * -object memory-backend-file,id=<mem_id>,share=on,mem-path=<file_path>, \
> >> @@ -69,18 +69,19 @@
> >>
> >>  static void nvme_process_sq(void *opaque);
> >>
> >> -static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
> >> +static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr, int size)
> >>  {
> >> -    hwaddr low = n->ctrl_mem.addr;
> >> -    hwaddr hi  = n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size);
> >> +    hwaddr low = n->bar4.addr + n->cmb_offset;
> >> +    hwaddr hi = low + NVME_CMBSZ_GETSIZE(n->bar.cmbsz);
> >
> > Isn't it better to use n->bar4.size?
>
> My understanding is that the CMB doesn't necessarily need to occupy the
> whole BAR, which is required to be power-of-two in size.

You are completely right of course.
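For readers following along, the bounds check under discussion can be illustrated with a standalone sketch. The names here are simplified stand-ins, not the QEMU source: `cmb_addr` and `cmb_size` play the roles of `n->bar4.addr + n->cmb_offset` and `NVME_CMBSZ_GETSIZE(n->bar.cmbsz)`; using the CMB size rather than the (power-of-two) BAR size keeps accesses to the unused tail of BAR4 out of the CMB.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for QEMU's hwaddr type. */
typedef uint64_t hwaddr;

/*
 * An access of `size` bytes at `addr` lies inside the CMB only if it
 * starts at or after the CMB base AND ends at or before the CMB end.
 * Bounding by the CMB size (not the BAR size) matters because the CMB
 * need not fill the whole power-of-two BAR; checking only the start
 * address would also wrongly accept an access that begins inside the
 * CMB but runs past its end.
 */
static bool addr_range_in_cmb(hwaddr cmb_addr, uint64_t cmb_size,
                              hwaddr addr, int size)
{
    hwaddr low = cmb_addr;
    hwaddr hi = cmb_addr + cmb_size;

    return addr >= low && addr + size <= hi;
}
```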