Re: [PATCH 5/5] lightnvm: pblk: fix erase counters on error fail

2017-04-23 Thread Javier González
> On 23 Apr 2017, at 19.59, Matias Bjørling  wrote:
> 
> On 04/22/2017 11:31 AM, Javier González wrote:
>>> On 22 Apr 2017, at 11.22, Matias Bjørling  wrote:
>>> 
>>> On 04/22/2017 01:32 AM, Javier González wrote:
>>>> When block erases fail, these blocks are marked bad. The number of valid
>>>> blocks in the line was not updated, which could cause an infinite loop
>>>> on the erase path.
>>>> 
>>>> Fix this counter and, in order to avoid taking an irq lock in the
>>>> interrupt context, make the erase counters atomic too.
>>> 
>>> I can't find where the counters are used in irq context. Can you
>>> point me in the right direction? I'd prefer for these counters to go
>>> under the existing line_lock.
>> 
>> This counter is line->blk_in_line, which is used in pblk_mark_bb. It is
>> triggered on the erase completion path. Note that we do not want to
>> wait until the recovery kicks in on the workqueue since the line might
>> start getting recycled straight away if we are close to reaching
>> capacity. This is indeed the scenario that triggers the race condition.
>> 
>> Making all erase counters atomic allows us to avoid taking the
>> line_lock. Note that the counters do not depend on each other.
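
To make the locking trade-off above concrete, here is a minimal sketch of
the pattern being described (illustrative only: the struct and function
names below are made up, and the real code is in the patch further down
the thread):

/*
 * The erase completion handler can run in interrupt context, so the
 * per-line block counter it touches is an atomic_t. It can then be
 * decremented there directly, without converting line->lock and all of
 * its other users to the irq-safe spin_lock_irqsave()/irqrestore()
 * variants.
 */
#include <linux/atomic.h>
#include <linux/spinlock.h>

struct line_counters {
	spinlock_t lock;	/* still protects the line bitmaps */
	atomic_t blk_in_line;	/* good blocks left in the line */
	atomic_t left_eblks;	/* blocks still to be erased */
};

/* Called from the erase completion path, possibly in irq context */
static void mark_block_bad(struct line_counters *c)
{
	atomic_dec(&c->blk_in_line);	/* no irq-safe lock needed */
}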
>> 
>>>> Also, in the case that a significant number of blocks become bad in a
>>>> line, the result is that the double shared metadata buffer (emeta) stops
>>>> the pipeline until all metadata is flushed to the media. Increase the
>>>> number of metadata lines from 2 to 4 to avoid this case.
>>> 
>>> How does moving to 4 lines solve this case? The way I read it is that
>>> it only postpones when this occurs?
>> 
>> The chances of having more than 2 lines running out of good blocks after
>> pre-conditioning are slim. Adding two more lines should be enough.
>> 
>>>> [...]
>>>> 
>>>> -#define PBLK_DATA_LINES 2
>>>> +#define PBLK_DATA_LINES 4
>>> 
>>> Why this change? I like to keep new features for 4.13. Only bugfixes for 
>>> 4.12.
>> 
>> This is the 4 lines referred to above. I see it as a bug fix since we risk
>> stalling the pipeline if a line goes above a certain number of bad
>> blocks on initialization, but we can leave it out and fix this when we
>> add in-line metadata on the write thread for 4.12.
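
As a toy model of the stall being described (illustrative only; the names
below are made up and this is not the pblk implementation), think of the
pre-allocated emeta buffers as a small pool that each open or closing line
holds on to until its metadata has been flushed:

/*
 * A line takes one buffer when it opens and releases it once its
 * metadata has hit the media. If every buffer is busy, the next line
 * cannot open and the write pipeline stalls. Lines with many bad blocks
 * close sooner, so several can be flushing at once; raising the pool
 * from 2 to 4 buffers makes exhausting it far less likely.
 */
#define NR_META_LINES	4	/* was 2 */

struct meta_pool {
	int busy;	/* emeta buffers held by lines still flushing */
};

static int can_open_line(const struct meta_pool *p)
{
	return p->busy < NR_META_LINES;	/* otherwise stall until a flush completes */
}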
>> 
>> Thanks,
>> Javier
> 
> Okay. It tickles me a bit. However, to make it pretty, some
> refactoring is needed, which we won't push for this release.
> 
> Reviewed-by: Matias Bjørling 


Yes, you are right. I think it will become much better as we implement
the FTL log and refactor the metadata path.

Thanks!
Javier




Re: [PATCH 5/5] lightnvm: pblk: fix erase counters on error fail

2017-04-23 Thread Matias Bjørling

On 04/22/2017 11:31 AM, Javier González wrote:



>> On 22 Apr 2017, at 11.22, Matias Bjørling wrote:
>> 
>> On 04/22/2017 01:32 AM, Javier González wrote:
>>> When block erases fail, these blocks are marked bad. The number of valid
>>> blocks in the line was not updated, which could cause an infinite loop
>>> on the erase path.
>>> 
>>> Fix this counter and, in order to avoid taking an irq lock in the
>>> interrupt context, make the erase counters atomic too.
>> 
>> I can't find where the counters are used in irq context. Can you
>> point me in the right direction? I'd prefer for these counters to go
>> under the existing line_lock.
> 
> This counter is line->blk_in_line, which is used in pblk_mark_bb. It is
> triggered on the erase completion path. Note that we do not want to
> wait until the recovery kicks in on the workqueue since the line might
> start getting recycled straight away if we are close to reaching
> capacity. This is indeed the scenario that triggers the race condition.
> 
> Making all erase counters atomic allows us to avoid taking the
> line_lock. Note that the counters do not depend on each other.
> 
>>> Also, in the case that a significant number of blocks become bad in a
>>> line, the result is that the double shared metadata buffer (emeta) stops
>>> the pipeline until all metadata is flushed to the media. Increase the
>>> number of metadata lines from 2 to 4 to avoid this case.
>> 
>> How does moving to 4 lines solve this case? The way I read it is that
>> it only postpones when this occurs?
> 
> The chances of having more than 2 lines running out of good blocks after
> pre-conditioning are slim. Adding two more lines should be enough.
> 
>>> [...]
>>> 
>>> -#define PBLK_DATA_LINES 2
>>> +#define PBLK_DATA_LINES 4
>> 
>> Why this change? I like to keep new features for 4.13. Only bugfixes for 4.12.
> 
> This is the 4 lines referred to above. I see it as a bug fix since we risk
> stalling the pipeline if a line goes above a certain number of bad
> blocks on initialization, but we can leave it out and fix this when we
> add in-line metadata on the write thread for 4.12.
> 
> Thanks,
> Javier



Okay. It tickles me a bit. However, to make it pretty, some refactoring 
is needed, which we won't push for this release.


Reviewed-by: Matias Bjørling 


Re: [PATCH 5/5] lightnvm: pblk: fix erase counters on error fail

2017-04-22 Thread Javier González

> On 22 Apr 2017, at 11.22, Matias Bjørling  wrote:
> 
> On 04/22/2017 01:32 AM, Javier González wrote:
>> When block erases fail, these blocks are marked bad. The number of valid
>> blocks in the line was not updated, which could cause an infinite loop
>> on the erase path.
>> 
>> Fix this counter and, in order to avoid taking an irq lock in the
>> interrupt context, make the erase counters atomic too.
> 
> I can't find where the counters are used in irq context. Can you
> point me in the right direction? I'd prefer for these counters to go
> under the existing line_lock.
> 

This counter is line->blk_in_line, which is used in pblk_mark_bb. It is
triggered on the erase completion path. Note that we do not want to
wait until the recovery kicks in on the workqueue since the line might
start getting recycled straight away if we are close to reaching
capacity. This is indeed the scenario that triggers the race condition.

Making all erase counters atomic allows us to avoid taking the
line_lock. Note that the counters do not depend on each other.

>> Also, in the case that a significant number of blocks become bad in a
>> line, the result is that the double shared metadata buffer (emeta) stops
>> the pipeline until all metadata is flushed to the media. Increase the
>> number of metadata lines from 2 to 4 to avoid this case.
> 
> How does moving to 4 lines solve this case? The way I read it is that
> it only postpones when this occurs?
> 

The chances of having more than 2 lines running out of good blocks after
pre-conditioning are slim. Adding two more lines should be enough.

>> 
>> [...]
>> 
>> -#define PBLK_DATA_LINES 2
>> +#define PBLK_DATA_LINES 4
> 
> Why this change? I like to keep new features for 4.13. Only bugfixes for 4.12.

This is the 4 lines referred to above. I see it as a bug fix since we risk
stalling the pipeline if a line goes above a certain number of bad
blocks on initialization, but we can leave it out and fix this when we
add in-line metadata on the write thread for 4.12.

Thanks,
Javier




Re: [PATCH 5/5] lightnvm: pblk: fix erase counters on error fail

2017-04-22 Thread Matias Bjørling

On 04/22/2017 01:32 AM, Javier González wrote:

> When block erases fail, these blocks are marked bad. The number of valid
> blocks in the line was not updated, which could cause an infinite loop
> on the erase path.
> 
> Fix this counter and, in order to avoid taking an irq lock in the
> interrupt context, make the erase counters atomic too.

I can't find where the counters are used in irq context. Can you
point me in the right direction? I'd prefer for these counters to go
under the existing line_lock.

> Also, in the case that a significant number of blocks become bad in a
> line, the result is that the double shared metadata buffer (emeta) stops
> the pipeline until all metadata is flushed to the media. Increase the
> number of metadata lines from 2 to 4 to avoid this case.

How does moving to 4 lines solve this case? The way I read it is that it
only postpones when this occurs?




> [...]

[PATCH 5/5] lightnvm: pblk: fix erase counters on error fail

2017-04-21 Thread Javier González
When block erases fail, these blocks are marked bad. The number of valid
blocks in the line was not updated, which could cause an infinite loop
on the erase path.

Fix this counter and, in order to avoid taking an irq lock in the
interrupt context, make the erase counters atomic too.

Also, in the case that a significant number of blocks become bad in a
line, the result is that the double shared metadata buffer (emeta) stops
the pipeline until all metadata is flushed to the media. Increase the
number of metadata lines from 2 to 4 to avoid this case.

Fixes: a4bd217b4326 ("lightnvm: physical block device (pblk) target")

Signed-off-by: Javier González 
---
 drivers/lightnvm/pblk-core.c  | 28 +++-
 drivers/lightnvm/pblk-gc.c|  2 +-
 drivers/lightnvm/pblk-init.c  |  9 ++---
 drivers/lightnvm/pblk-map.c   |  4 ++--
 drivers/lightnvm/pblk-rl.c|  6 --
 drivers/lightnvm/pblk-write.c |  4 ++--
 drivers/lightnvm/pblk.h   |  6 +++---
 7 files changed, 37 insertions(+), 22 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index ac3742b..5e44768 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -29,6 +29,7 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos);
atomic_long_inc(&pblk->erase_failed);
 
+   atomic_dec(&line->blk_in_line);
if (test_and_set_bit(pos, line->blk_bitmap))
pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
line->id, pos);
@@ -832,21 +833,28 @@ int pblk_line_erase(struct pblk *pblk, struct pblk_line *line)
struct ppa_addr ppa;
int bit = -1;
 
-   /* Erase one block at the time and only erase good blocks */
-   while ((bit = find_next_zero_bit(line->erase_bitmap, lm->blk_per_line,
-   bit + 1)) < lm->blk_per_line) {
+   /* Erase only good blocks, one at a time */
+   do {
+   spin_lock(&line->lock);
+   bit = find_next_zero_bit(line->erase_bitmap, lm->blk_per_line,
+   bit + 1);
+   if (bit >= lm->blk_per_line) {
+   spin_unlock(&line->lock);
+   break;
+   }
+
ppa = pblk->luns[bit].bppa; /* set ch and lun */
ppa.g.blk = line->id;
 
-   /* If the erase fails, the block is bad and should be marked */
-   line->left_eblks--;
+   atomic_dec(&line->left_eblks);
WARN_ON(test_and_set_bit(bit, line->erase_bitmap));
+   spin_unlock(&line->lock);
 
if (pblk_blk_erase_sync(pblk, ppa)) {
pr_err("pblk: failed to erase line %d\n", line->id);
return -ENOMEM;
}
-   }
+   } while (1);
 
return 0;
 }
@@ -1007,6 +1015,7 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
 static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
 {
struct pblk_line_meta *lm = &pblk->lm;
+   int blk_in_line = atomic_read(&line->blk_in_line);
 
line->map_bitmap = mempool_alloc(pblk->line_meta_pool, GFP_ATOMIC);
if (!line->map_bitmap)
@@ -1030,12 +1039,13 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
return -EINTR;
}
line->state = PBLK_LINESTATE_OPEN;
+
+   atomic_set(&line->left_eblks, blk_in_line);
+   atomic_set(&line->left_seblks, blk_in_line);
spin_unlock(&line->lock);
 
/* Bad blocks do not need to be erased */
bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
-   line->left_eblks = line->blk_in_line;
-   atomic_set(&line->left_seblks, line->left_eblks);
 
kref_init(&line->ref);
 
@@ -1231,7 +1241,7 @@ struct pblk_line *pblk_line_replace_data(struct pblk *pblk)
left_seblks = atomic_read(&new->left_seblks);
if (left_seblks) {
/* If line is not fully erased, erase it */
-   if (new->left_eblks) {
+   if (atomic_read(&new->left_eblks)) {
if (pblk_line_erase(pblk, new))
return NULL;
} else {
diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index f173fd4..eaf479c 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -332,7 +332,7 @@ static void pblk_gc_run(struct pblk *pblk)
}
 
line = list_first_entry(group_list, struct pblk_line, list);
-   nr_blocks_free += line->blk_in_line;
+   nr_blocks_free += atomic_read(&line->blk_in_line);
 
spin_lock(&line->lock);
WARN_ON(line->state != PBLK_LINESTATE_CLOSED);
diff --git a/drivers/lightnvm/pblk-init.c