Use a slightly larger-than-average EC so that these PEBs are
reinitialised with erase counts that make them less likely to be
reused than other (perhaps less worn or less error-prone) PEBs.

We see more frequent ECC failures on reads of page 0 of some PEBs,
which most commonly manifest themselves during ubiattach. We believe
this is due to "program disturb" and want those PEBs to be re-used
later than average.

Signed-off-by: Andrew Worsley <amwors...@gmail.com>
---
 drivers/mtd/ubi/attach.c  | 2 +-
 drivers/mtd/ubi/fastmap.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
index 93ceea4f27d5..f97e2ba56fb2 100644
--- a/drivers/mtd/ubi/attach.c
+++ b/drivers/mtd/ubi/attach.c
@@ -1414,7 +1414,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
 
        /* Calculate mean erase counter */
        if (ai->ec_count)
-               ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
+               ai->mean_ec = div_u64(ai->ec_sum + ai->ec_count - 1, ai->ec_count);
 
        err = late_analysis(ubi, ai);
        if (err)
diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
index 462526a10537..91e513788f38 100644
--- a/drivers/mtd/ubi/fastmap.c
+++ b/drivers/mtd/ubi/fastmap.c
@@ -684,7 +684,7 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
                        be32_to_cpu(fmec->ec), 1);
        }
 
-       ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
+       ai->mean_ec = div_u64(ai->ec_sum + ai->ec_count - 1, ai->ec_count);
        ai->bad_peb_count = be32_to_cpu(fmhdr->bad_peb_count);
 
        /* Iterate over all volumes and read their EBA table */
-- 
2.11.0
