As a start, add a new submit_io method to the nvm_dev_ops, and add
an implementation similar to pscsi_execute_cmd in
drivers/target/target_core_pscsi.c for nvme, and a trivial no-op
for a null-nvm driver replacing the null-blk additions.  This
will give you very similar behavior to your current code, while
allowing you to drop all the hacks in the block code.  Note that
simple plugging will work just fine, which should be all you'll need.
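To make that concrete, a rough sketch: nvm_dev_ops is the structure
from the series, but nvm_rq, the other ops, and the completion
details here are placeholders, not the actual patch interface:

struct nvm_dev;
struct nvm_rq;			/* placeholder I/O descriptor */

typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);

struct nvm_dev_ops {
	/* existing ops (identity, bad-block tables, ...) elided */
	nvm_submit_io_fn	*submit_io;
};

/* Trivial no-op for a null-nvm driver, replacing the null-blk bits. */
static int null_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
{
	/*
	 * Nothing to do against real media; a real driver (e.g. nvme)
	 * would build and queue a command here, much like
	 * pscsi_execute_cmd does for SCSI.
	 */
	return 0;
}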


A quick question. The flow is falling into place and it is looking good.

However, the code path is still left with a per-device flash block management core data structure in gendisk->nvm. ->nvm holds the device configuration (number of flash chips, channels, flash page sizes, etc.), free/used blocks on the media, and other small structures. Basically, it keeps track of the state of the blocks on the media.
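Roughly, it has this shape (the field names here are illustrative,
not the exact layout in the patches):

#include <linux/list.h>
#include <linux/spinlock.h>

struct nvm_bm {
	/* device configuration */
	int nr_chips;			/* flash chips */
	int nr_channels;
	int flash_page_size;

	/* block state on the media */
	struct list_head free_blocks;
	struct list_head used_blocks;
	spinlock_t lock;		/* protects the block lists */
};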

It is nice to have it associated with gendisk, as it can then easily be accessed from the lightnvm code without knowing which device driver is underneath.

If it is moved outside gendisk, one approach would be to create a separate block device for each open-channel SSD initialized. E.g. /dev/nvme0n1 has its block management information exposed through /dev/lnvm/nvme0n1_bm. For each *_bm, the private field holds a map between the request_queue and the bm, effectively using a gendisk as a link between the real device and any FTL target. This seems just as hacky as the gendisk approach.
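For illustration, the *_bm disk's private data could look something
like the below; every name here is hypothetical:

#include <linux/genhd.h>
#include <linux/slab.h>

struct lnvm_bm_link {
	struct request_queue *target_q;	/* queue of the real device */
	struct nvm_bm *bm;		/* its block management state */
};

static struct gendisk *lnvm_create_bm_disk(struct request_queue *q,
					   struct nvm_bm *bm)
{
	struct gendisk *disk = alloc_disk(0);
	struct lnvm_bm_link *link;

	if (!disk)
		return NULL;

	link = kzalloc(sizeof(*link), GFP_KERNEL);
	if (!link) {
		put_disk(disk);
		return NULL;
	}
	link->target_q = q;
	link->bm = bm;
	disk->private_data = link;

	/* naming ("nvme0n1_bm"), fops and add_disk() omitted */
	return disk;
}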

Are there any other approaches, or is gendisk good enough for now?

Thanks, Matias



