On Oct 19 11:50, Klaus Jensen wrote:
> On Oct 19 11:17, Dmitry Fomichev wrote:
> > +static bool nvme_finalize_zoned_write(NvmeNamespace *ns, NvmeRequest *req,
> > +                                      bool failed)
> > +{
> > +    NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
> > +    NvmeZone *zone;
> > +    uint64_t slba, start_wp = req->cqe.result64;
> > +    uint32_t nlb;
> > +
> > +    if (rw->opcode != NVME_CMD_WRITE &&
> > +        rw->opcode != NVME_CMD_ZONE_APPEND &&
> > +        rw->opcode != NVME_CMD_WRITE_ZEROES) {
> > +        return false;
> > +    }
> > +
> > +    slba = le64_to_cpu(rw->slba);
> > +    nlb = le16_to_cpu(rw->nlb) + 1;
> > +    zone = nvme_get_zone_by_slba(ns, slba);
> > +
> > +    if (!failed && zone->w_ptr < start_wp + nlb) {
> > +        /*
> > +         * A preceding queued write to the zone has failed;
> > +         * this write is now not at the WP, so fail it too.
> > +         */
> > +        failed = true;
> > +    }
> > +
> > +    if (failed) {
> > +        if (zone->w_ptr > start_wp) {
> > +            zone->w_ptr = start_wp;
> > +            zone->d.wp = start_wp;
> > +        }
> 
> This doesn't fix the data corruption. The example from my last review
> still applies.
> 

An easy fix is to just advance the write pointer unconditionally,
whether the write succeeds or fails.
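
Roughly like this in place of the rewind above (untested sketch
against the quoted hunk; nvme_zone_wr_boundary() and
nvme_assign_zone_state() are assumed from the rest of the series):

    /*
     * Sketch: advance the write pointer regardless of the command
     * outcome. Writes queued behind a failed one were submitted at
     * offsets computed from the already-advanced pointer, so rewinding
     * it here would make their data land at the wrong LBAs.
     */
    zone->d.wp += nlb;

    if (zone->d.wp == nvme_zone_wr_boundary(zone)) {
        nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_FULL);
    }

With the pointer never rewound, subsequent queued writes still line up
with the WP, so the zone->w_ptr < start_wp + nlb check (and start_wp
itself) should then be unnecessary.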
