On 07/16/2015 02:46 PM, Christoph Hellwig wrote:
> Hi Matias,
> the underlying lightnvm driver (nvme or NULL) shouldn't register
> a gendisk - the only gendisk you'll need is that for the block
> device that sits on top of lightnvm.

That could work as well. I'll refactor the nvme/null drivers to allow this.
As a start add a new submit_io method to the nvm_dev_ops, and add
an implementation similar to pscsi_execute_cmd in
drivers/target/target_core_pscsi.c for nvme, and a trivial no op
for a null-nvm driver replacing the null-blk additions. This
will give you very similar behavior to your current code.
I don't think the current abuses of the block API are acceptable though.
The crazy deep merging shouldn't be too relevant for SSD-type devices
so I think you'd do better than trying to reuse the TYPE_FS level
blk-mq merging code. If you want to reuse the request
allocation/submission code that's
On Sat, Jun 13, 2015 at 06:17:11PM +0200, Matias Bjorling wrote:
> > Note that for NVMe it might still make sense to implement this using
> > blk-mq and a struct request, but those should be internal similar to
> > how NVMe implements admin commands.
>
> How about handling I/O merges? In the case
On Wed, Jun 10, 2015 at 08:11:42PM +0200, Matias Bjorling wrote:
> 1. A get/put flash block API, that user-space applications can use.
> That will enable application-driven FTLs. E.g. RocksDB can be integrated
> tightly with the SSD. Allowing data placement and garbage collection to
> be strictly controlled by the application.
On 06/09/2015 09:46 AM, Christoph Hellwig wrote:
Hi Matias,
I've been looking over this and I really think it needs a fundamental
rearchitecture still. The design of using a separate stacking
block device and all kinds of private hooks does not look very
maintainable.
Here is my counter suggestion:
- the stacking block device goes away
- th
Subject: [PATCH v4 0/8] Support for Open-Channel SSDs
Hi,
This is an updated version based on the feedback from Christoph.
Patch 1-2 are fixes and preparation for the nvme driver. The first fixes
a flag bug. The second allows rq->special in nvme_submit_sync_cmd to
be set and used.
Patch 3 fixes capacity reporting in null_blk.
Patch 4-8 introduces LightNVM.