On 02/22/2018 10:52 AM, Javier Gonzalez wrote:
On 22 Feb 2018, at 10.45, Matias Bjørling <m...@lightnvm.io> wrote:

On 02/22/2018 08:55 AM, Javier Gonzalez wrote:
On 22 Feb 2018, at 08.45, Matias Bjørling <m...@lightnvm.io> wrote:

On 02/21/2018 10:26 AM, Javier González wrote:
Complete the generic geometry structure with the maxoc and maxocpu
fields, present in the 2.0 spec.
Signed-off-by: Javier González <jav...@cnexlabs.com>
---
  drivers/nvme/host/lightnvm.c | 4 ++++
  include/linux/lightnvm.h     | 2 ++
  2 files changed, 6 insertions(+)
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index cca32da05316..9c1f8225c4e1 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -318,6 +318,8 @@ static int nvme_nvm_setup_12(struct nvme_nvm_id12 *id,
        dev_geo->c.ws_min = sec_per_pg;
        dev_geo->c.ws_opt = sec_per_pg;
        dev_geo->c.mw_cunits = 8;            /* default to MLC safe values */
+       dev_geo->c.maxoc = dev_geo->all_luns;     /* default to 1 chunk per LUN */
+       dev_geo->c.maxocpu = 1;                   /* default to 1 chunk per LUN */

One can't assume that it is 1 open chunk per lun. If you need this for specific 
hardware, make a quirk for it.
Which default do you want for 1.2 if not specified, then? I use 1 because it
has been the implicit default until now.

INT_MAX, since it then allows the maximum number of open chunks. It cannot be
assumed that other 1.2 devices are limited to a single open chunk.
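For reference, a minimal sketch of what such a permissive 1.2 default could look like, reusing the dev_geo->c fields from the patch above. This only illustrates the INT_MAX suggestion at this point in the discussion, not the final outcome:

       /* 1.2 does not report open-chunk limits; assume no constraint */
       dev_geo->c.maxoc = INT_MAX;          /* open chunks per device */
       dev_geo->c.maxocpu = INT_MAX;        /* open chunks per LUN */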

So you want the default to be that all blocks on the device can be
opened at the same time. Interesting... I guess that such an SSD will
have an AA battery attached to it, but fine by me if that's how you want
it.

I feel you're a bit sarcastic here. One may think of SLC and other memories that do one-shot programming. In that case no caching is needed, and therefore power caps can be limited on the hardware.


Assuming this, can we instead set it to the reported number of chunks,
since this is the hard limit anyway?


Works for me.
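A sketch of what the agreed 1.2 defaults in nvme_nvm_setup_12() could then look like; dev_geo->c.num_chk is assumed here to be the per-LUN chunk count exposed by this series, so the exact field name may differ:

       /* default to the reported geometry: all chunks may be open at once */
       dev_geo->c.maxoc = dev_geo->all_luns * dev_geo->c.num_chk;
       dev_geo->c.maxocpu = dev_geo->c.num_chk;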
