Re: [PATCH V2 6/6] nvme-pci: remove .init_request callback

2017-12-24 Thread Sagi Grimberg



Please prepare a formal one (at least tested in the normal case); either I
or Zhang Yi can test/verify it.


OK.


@@ -1387,10 +1385,7 @@ static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
 static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
		int depth, int node)
 {
-	struct nvme_queue *nvmeq = kzalloc_node(sizeof(*nvmeq), GFP_KERNEL,
-			node);
-	if (!nvmeq)
-		return NULL;
+	struct nvme_queue *nvmeq = &dev->queues[qid];


Maybe you need to zero *nvmeq again, since the current code does that
(kzalloc_node returns zeroed memory).


Relying on that is not a good idea, so I think we're better off without
it: if anything depends on the queue being zeroed here, I want to know
about it and fix it.


@@ -2470,8 +2465,9 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
 	if (!dev)
 		return -ENOMEM;
-	dev->queues = kzalloc_node((num_possible_cpus() + 1) * sizeof(void *),
-			GFP_KERNEL, node);
+
+	alloc_size = (num_possible_cpus() + 1) * sizeof(struct nvme_queue *);


The element size should be 'sizeof(struct nvme_queue)'.


Right.


[RFC] distinguish foreground and background IOs in block throttle

2017-12-24 Thread xuejiufei
Hi all,

Cgroup writeback has been supported since v4.2. I found a problem in the
following case.

A cgroup may issue both buffered and direct/sync IOs. The foreground
thread stalls when periodic writeback IOs are flushed, because the
service queue already holds plenty of writeback IOs and the foreground
IOs are enqueued behind them under the FIFO policy.

I wonder if we can distinguish foreground and background IOs in block
throttle to fix the above problem.

Any suggestions are always appreciated.


Thanks,
Jiufei