On Fri, 2015-12-04 at 23:58 +0000, Verma, Vishal L wrote:
> On Fri, 2015-12-04 at 15:30 -0800, James Bottomley wrote:
> [...]
> > > + * We return
> > > + *  0 if there are no known bad blocks in the range
> > > + *  1 if there are known bad blocks which are all acknowledged
> > > + * -1 if there are bad blocks which have not yet been acknowledged in metadata.
> > > + * plus the start/length of the first bad section we overlap.
> > > + */
> > 
> > This comment should be docbook.
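For reference, a kernel-doc version of the header quoted above might read
roughly like this (a sketch only; the parameter descriptions are guessed from
the badblocks_check() signature quoted further down):

/**
 * badblocks_check() - check a given range for bad sectors
 * @bb:          the badblocks structure holding the bad block table
 * @s:           sector (start) at which to check for bad blocks
 * @sectors:     number of sectors to check
 * @first_bad:   where to store the start of the first bad range we overlap
 * @bad_sectors: where to store the length of that bad range
 *
 * Return:
 *  0: there are no known bad blocks in the range
 *  1: there are known bad blocks which are all acknowledged
 * -1: there are bad blocks which have not yet been acknowledged in metadata,
 *     plus the start/length of the first bad section we overlap
 */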
> 
> This applies to all your comments (and they are all valid) - I simply
> copied all of this over from md. I'm happy to make the changes to the
> comments, and the other two things (see below), if that's the right thing
> to do -- I just tried to keep my own changes to the original md badblocks
> code minimal.
> Would it be better (for reviewability) if I made these changes in a new
> patch on top of this, or should I just squash them into this one?

If you were moving it, that might be appropriate.  However, this is
effectively new code because you're not removing the original, so we
should begin with at least a coherent API (i.e. make the corrections in
the original patch rather than incrementally).

Thanks,

James


> > 
> > > +int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
> > > +                 sector_t *first_bad, int *bad_sectors)
> > [...]
> > > +
> > > +/*
> > > + * Add a range of bad blocks to the table.
> > > + * This might extend the table, or might contract it
> > > + * if two adjacent ranges can be merged.
> > > + * We binary-search to find the 'insertion' point, then
> > > + * decide how best to handle it.
> > > + */
> > 
> > And this one, plus you don't document returns.  It looks like this
> > function returns 1 on success and zero on failure, which is really
> > counter-intuitive for the kernel: zero is usually returned on success
> > and negative error on failure.
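To illustrate the mismatch (this is not code from the patch, just a sketch of
what a caller written to the usual kernel convention ends up doing, with a
guessed errno since the patch doesn't define one):

/*
 * Hypothetical caller: with badblocks_set() returning 1 on success and
 * 0 on failure, the result has to be inverted to get 0/-errno behaviour.
 */
static int mark_bad_range(struct badblocks *bb, sector_t s, int sectors)
{
	if (!badblocks_set(bb, s, sectors, 0))
		return -ENOSPC;		/* guessed error code: table full */
	return 0;
}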
> > 
> > > +int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
> > > +                 int acknowledged)
> > [...]
> > > +
> > > +/*
> > > + * Remove a range of bad blocks from the table.
> > > + * This may involve extending the table if we split a region,
> > > + * but it must not fail.  So if the table becomes full, we just
> > > + * drop the remove request.
> > > + */
> > 
> > Docbook and document returns.  This time they're the kernel standard of
> > 0 on success and negative error on failure, making the convention for
> > badblocks_set even more counterintuitive.
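For badblocks_clear() that could look roughly like the sketch below (the
Return: description just restates the convention noted above; the rest is
guessed from the quoted comment):

/**
 * badblocks_clear() - remove a range of bad blocks from the table
 * @bb:      the badblocks structure holding the bad block table
 * @s:       first sector of the range to clear
 * @sectors: number of sectors to clear
 *
 * This may involve extending the table if we split a region, but it must
 * not fail, so if the table becomes full we just drop the remove request.
 *
 * Return:
 *  0: success
 *  negative errno: failure
 */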
> > 
> > > +int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
> > > +{
> > [...]
> > > +#define DO_DEBUG 1
> > 
> > Why have this at all if it's unconditionally defined and always set?
> 
> Neil - any reason or anything you had in mind for this? Or is it just an
> artifact that can be removed?
> 
> > 
> > > +ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,
> > > +                 int unack)
> > [...]
> > > +int badblocks_init(struct badblocks *bb, int enable)
> > > +{
> > > + bb->count = 0;
> > > + if (enable)
> > > +         bb->shift = 0;
> > > + else
> > > +         bb->shift = -1;
> > > + bb->page = kmalloc(PAGE_SIZE, GFP_KERNEL);
> > 
> > Why not __get_free_page(GFP_KERNEL)?  The problem with kmalloc of an
> > exactly page-sized quantity is that the slab tracker for this requires
> > two contiguous pages for each page because of the overhead.
> 
> Cool, I didn't know about __get_free_page - I can fix this up too.
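A minimal sketch of that change in badblocks_init() (assuming bb->page stays
a u64 * as in the md code, and that allocation failure is handled by
disabling the table and returning -ENOMEM - the quoted hunk cuts off before
the error path):

int badblocks_init(struct badblocks *bb, int enable)
{
	bb->count = 0;
	if (enable)
		bb->shift = 0;
	else
		bb->shift = -1;
	/* one whole page from the page allocator instead of a kmalloc'd slab object */
	bb->page = (u64 *)__get_free_page(GFP_KERNEL);
	if (!bb->page) {
		bb->shift = -1;
		return -ENOMEM;
	}
	return 0;
}

The matching teardown would then need free_page((unsigned long)bb->page)
rather than kfree(), since the memory now comes from the page allocator.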
> 
> > 
> > James
> > 