On Tue, Mar 6, 2018 at 12:14 PM, Logan Gunthorpe wrote:
>
> On 05/03/18 05:49 PM, Oliver wrote:
>>
>> It's in arch/powerpc/kernel/io.c as _memcpy_toio() and it has two full
>> barriers!
>>
>> Awesome!
>>
>> Our io.h indicates that our iomem accessors are designed to provide
>> x86ish strong ordering of accesses to MMIO space. The git log indicates
>> arch/powerpc/kernel/io.c [...]
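For reference, the routine Oliver found has roughly this shape (a hedged
sketch, not the verbatim powerpc source, which copies in larger units;
the point is the full "sync" on each side of the copy loop):

	/* Sketch only: byte-at-a-time stand-in for the powerpc
	 * _memcpy_toio(). The two "sync" instructions are the "two full
	 * barriers" above; they order the copy against surrounding
	 * accesses, MMIO included. Assumes <linux/io.h> kernel context;
	 * the sketch_ name is made up. */
	void sketch_memcpy_toio(volatile void __iomem *dest, const void *src,
				unsigned long n)
	{
		volatile u8 __iomem *d = dest;
		const u8 *s = src;

		__asm__ __volatile__("sync" : : : "memory");	/* barrier #1 */
		while (n--)
			__raw_writeb(*s++, d++);
		__asm__ __volatile__("sync" : : : "memory");	/* barrier #2 */
	}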
On Mon, Mar 05, 2018 at 01:10:53PM -0700, Jason Gunthorpe wrote:
> So when reading the above mlx code, we see the first wmb() being used
> to ensure that CPU stores to cachable memory are visible to the DMA
> triggered by the doorbell ring.
IIUC, we don't need a similar barrier for NVMe to ensure [...]
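For readers without the patch in front of them, the submission path under
discussion looks roughly like this (paraphrased from __nvme_submit_cmd()
in drivers/nvme/host/pci.c of that era, not verbatim; the doorbell
writel() carries the implied ordering Keith is referring to):

	/* Paraphrase: the SQE copy is a normal store to the submission
	 * queue; the doorbell is an MMIO store. writel() is documented to
	 * order prior normal-memory stores ahead of the MMIO write, so no
	 * explicit wmb() appears here. */
	static void sketch_submit_cmd(struct nvme_queue *nvmeq,
				      struct nvme_command *cmd)
	{
		u16 tail = nvmeq->sq_tail;

		memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
		if (++tail == nvmeq->q_depth)
			tail = 0;
		writel(tail, nvmeq->q_db);	/* ring the doorbell */
		nvmeq->sq_tail = tail;
	}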
On 05/03/18 01:10 PM, Jason Gunthorpe wrote:
> So when reading the above mlx code, we see the first wmb() being used
> to ensure that CPU stores to cachable memory are visible to the DMA
> triggered by the doorbell ring.

Oh, yes, that makes sense. Disregard my previous email as I was wrong.

Logan
On 05/03/18 12:57 PM, Sagi Grimberg wrote:
> Keith, while we're on this, regardless of cmb, is SQE memcopy and DB
> update ordering always guaranteed?
>
> If you look at mlx4 (rdma device driver) that works exactly the same as
> nvme you will find:
> --
>         qp->sq.head += nreq;
> [...]
>
>> -	if (nvmeq->sq_cmds_io)
>> -		memcpy_toio(&nvmeq->sq_cmds_io[tail], cmd, sizeof(*cmd));
>> -	else
>> -		memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
>> +	memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
>
> Hmm, how safe is replacing memcpy_toio() with regular memcpy()? On PPC
> [...]
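The mlx4 paste above is cut off by the archive; from memory of
mlx4_ib_post_send() in drivers/infiniband/hw/mlx4/qp.c it continues
roughly as below (treat this as a paraphrase, not the elided text):

	qp->sq.head += nreq;

	/*
	 * Make sure that descriptors are written before
	 * doorbell record.
	 */
	wmb();

	writel(qp->doorbell_qpn,
	       to_mdev(ibqp->device)->uar_map + MLX4_SEND_DOORBELL);

	/*
	 * Make sure doorbells don't leak out of SQ spinlock
	 * and reach the HCA out of order.
	 */
	mmiowb();

This is the wmb()-before-doorbell pattern Jason explains elsewhere in
the thread.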
On 05/03/18 11:02 AM, Sinan Kaya wrote:
> writel has a barrier inside on ARM64.
>
> https://elixir.bootlin.com/linux/latest/source/arch/arm64/include/asm/io.h#L143

Yes, and there is no barrier inside memcpy_toio, as it uses the
__raw_write*() accessors. This should be sufficient as we are only
accessing addresses that [...]
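Concretely, the linked arm64 io.h boils down to the following (a
paraphrase of the kernel's definitions, not a verbatim quote):

	/* The ordered accessor is the relaxed MMIO store preceded by a
	 * write barrier -- the "barrier inside" Sinan mentions. The
	 * __raw_write*() loop used by memcpy_toio() has no such barrier. */
	#define __iowmb()	wmb()
	#define writel(v, c)	({ __iowmb(); writel_relaxed((v), (c)); })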
On Mon, Mar 05, 2018 at 12:33:29PM +1100, Oliver wrote:
> On Thu, Mar 1, 2018 at 10:40 AM, Logan Gunthorpe wrote:
> > @@ -429,10 +429,7 @@ static void __nvme_submit_cmd(struct nvme_queue *nvmeq,
> >  {
> >  	u16 tail = nvmeq->sq_tail;
> >
> > -	if (nvmeq->sq_cmds_io)
> > -		memcpy_toio(&nvmeq->sq_cmds_io[tail], cmd, sizeof(*cmd));
> > -	else
> > -		memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
> > +	memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
[...]
Register the CMB buffer as p2pmem and use the appropriate allocation
functions to create and destroy the IO SQ.

If the CMB supports WDS and RDS, publish it for use as p2p memory
by other devices.

Signed-off-by: Logan Gunthorpe
---
 drivers/nvme/host/pci.c | 75 [...]
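A hedged sketch of what "use the appropriate allocation functions" means
in practice: the IO SQ comes out of the pci_p2pdma allocator instead of
ordinary DMA memory. pci_alloc_p2pmem(), dma_alloc_coherent() and
SQ_SIZE() are real names; the function below and its fallback logic are
illustrative, not the patch itself:

	/* Illustrative only: allocate the SQ from the device's p2p memory
	 * (the CMB), falling back to plain coherent DMA memory. */
	static int sketch_alloc_sq_cmds(struct nvme_dev *dev,
					struct nvme_queue *nvmeq, int depth)
	{
		struct pci_dev *pdev = to_pci_dev(dev->dev);

		nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth));
		if (nvmeq->sq_cmds)
			return 0;

		nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
						    &nvmeq->sq_dma_addr,
						    GFP_KERNEL);
		return nvmeq->sq_cmds ? 0 : -ENOMEM;
	}

The "publish it" step in the commit message corresponds to a
pci_p2pmem_publish(pdev, true) call once the CMB BAR has been registered
with the p2pdma subsystem.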