On Tue, Aug 08, 2017 at 07:31:22PM -0700, Matthew Wilcox wrote:
> On Wed, Aug 09, 2017 at 10:51:13AM +0900, Minchan Kim wrote:
> > On Tue, Aug 08, 2017 at 06:29:04AM -0700, Matthew Wilcox wrote:
> > > On Tue, Aug 08, 2017 at 05:49:59AM -0700, Matthew Wilcox wrote:
> > > > + struct bio sbio;
>
On Wed, Aug 09, 2017 at 10:51:13AM +0900, Minchan Kim wrote:
> On Tue, Aug 08, 2017 at 06:29:04AM -0700, Matthew Wilcox wrote:
> > On Tue, Aug 08, 2017 at 05:49:59AM -0700, Matthew Wilcox wrote:
> > > + struct bio sbio;
> > > + struct bio_vec sbvec;
> >
> > ... this needs to be sbvec[nr_pages], of course.
On Tue, Aug 08, 2017 at 06:29:04AM -0700, Matthew Wilcox wrote:
> On Tue, Aug 08, 2017 at 05:49:59AM -0700, Matthew Wilcox wrote:
> > + struct bio sbio;
> > + struct bio_vec sbvec;
>
> ... this needs to be sbvec[nr_pages], of course.
>
> > > - bio = mpage_alloc(bdev, blocks[0] << (blkbits - 9),
Hi Matthew,
On Tue, Aug 08, 2017 at 05:49:59AM -0700, Matthew Wilcox wrote:
> On Tue, Aug 08, 2017 at 03:50:20PM +0900, Minchan Kim wrote:
> > There is no need to use dynamic bio allocation for BDI_CAP_SYNC
> > devices. They can use an on-stack bio without concern about waiting
> > for bio allocation from mempool under heavy memory pressure.
The IO context conversion for rw_bytes missed a case in the BTT write
path (btt_map_write) which should've been marked as atomic.
In reality this should not cause a problem, because map writes are too
small for nsio_rw_bytes to attempt error clearing, but it should be
fixed for posterity.
Add a mi
In preparation for BTT error clearing, refactor the initial offset
calculations. Until now, all callers of arena_{read,write}_bytes assumed
a relative offset to the arena, and it was later adjusted for the
initial offset. Every time we calculated a relative offset,
we passed it to these
Add helpers for converting a raw map entry to just the block number, or
either of the 'e' or 'z' flags in preparation for actually using the
error flag to mark blocks with media errors.
Signed-off-by: Vishal Verma
---
drivers/nvdimm/btt.c | 8
drivers/nvdimm/btt.h | 4
2 files chan
In preparation for the error clearing rework, add sector_size in the
arena_info struct.
Signed-off-by: Vishal Verma
---
drivers/nvdimm/btt.c | 1 +
drivers/nvdimm/btt.h | 2 ++
2 files changed, 3 insertions(+)
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 374ae62..e9dd651 100644
With the ACPI NFIT 'DSM' methods, ACPI can be called from IO paths.
Specifically, the DSM to clear media errors is called during writes, so
that we can provide a writes-fix-errors model.
However it is easy to imagine a scenario like:
-> write through the nvdimm driver
-> acpi allocation
-
In btt_map_read, we read the map twice to make sure that the map entry
didn't change after we added it to the read tracking table. In
anticipation of expanding the use of the error bit, also make sure that
the error and zero flags are constant across the two map reads.
Signed-off-by: Vishal Verma
Clearing errors or badblocks during a BTT write requires sending an ACPI
DSM, which means potentially sleeping. Since a BTT IO happens in atomic
context (preemption disabled, spinlocks may be held), we cannot perform
error clearing in the course of an IO. Due to this error clearing for
BTT IOs has
changes in v5:
- Add patch 6 that refactors initial_offset calculations, and fix a bug
that caused error clearing to be missed in some cases (Toshi)
(I have a unit test for this that is mostly ready, but it depends
on a better error injection capability in nfit_test, so I will send
it
Finally circling back on this...
On Thu, Jun 15, 2017 at 3:42 PM, Dave Jiang wrote:
> The daxctl io option allows I/Os to be performed between block/file to
> and from device dax files. It also provides a way to zero a device dax
> device.
>
> i.e. daxctl io --input=/home/myfile --output=/dev/dax
On 08/08/2017 06:16 AM, Sinan Kaya wrote:
> Hi Dave,
>
> On 8/7/2017 12:39 PM, Dave Jiang wrote:
>> Adding a dmaengine transaction operation that allows copy to/from a
>> scatterlist and a flat buffer.
>>
>> Signed-off-by: Dave Jiang
>> ---
>> Documentation/dmaengine/provider.txt | 3 +++
>>
On Tue, Aug 08, 2017 at 05:23:50PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (08/08/17 17:13), Minchan Kim wrote:
> > Hi Sergey,
> >
> > On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> > > On (08/08/17 15:50), Minchan Kim wrote:
> > > > With on-stack-bio, rw_page interface doesn't provide a clear performance
> > > > benefit for zram and surely has a maintenance burden, so remove the
> > > > last user to remove rw_page completely.
On Tue, Aug 08, 2017 at 05:49:59AM -0700, Matthew Wilcox wrote:
> + struct bio sbio;
> + struct bio_vec sbvec;
... this needs to be sbvec[nr_pages], of course.
> - bio = mpage_alloc(bdev, blocks[0] << (blkbits - 9),
> + if (bdi_cap_synchronous_io(inode_to_bdi(inode
Hi Dave,
On 8/7/2017 12:39 PM, Dave Jiang wrote:
> Adding a dmaengine transaction operation that allows copy to/from a
> scatterlist and a flat buffer.
>
> Signed-off-by: Dave Jiang
> ---
> Documentation/dmaengine/provider.txt | 3 +++
> drivers/dma/dmaengine.c | 2 ++
> incl
On Tue, Aug 08, 2017 at 03:50:20PM +0900, Minchan Kim wrote:
> There is no need to use dynamic bio allocation for BDI_CAP_SYNC
> devices. They can use an on-stack bio without concern about waiting
> for bio allocation from mempool under heavy memory pressure.
This seems ... more complex than necessary?
Hello Minchan,
On (08/08/17 17:13), Minchan Kim wrote:
> Hi Sergey,
>
> On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> > On (08/08/17 15:50), Minchan Kim wrote:
> > > With on-stack-bio, rw_page interface doesn't provide a clear performance
> > > benefit for zram and surely has a maintenance burden, so remove the
> > > last user to remove rw_page completely.
Hi Sergey,
On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> On (08/08/17 15:50), Minchan Kim wrote:
> > With on-stack-bio, rw_page interface doesn't provide a clear performance
> > benefit for zram and surely has a maintenance burden, so remove the
> > last user to remove rw_page completely.
On (08/08/17 15:50), Minchan Kim wrote:
> With on-stack-bio, rw_page interface doesn't provide a clear performance
> benefit for zram and surely has a maintenance burden, so remove the
> last user to remove rw_page completely.
OK, never really liked it, I think we had that conversation before.