On Tue, Aug 9, 2016 at 11:38 PM, Damien Le Moal wrote:
> Shaun,
>
> On 8/10/16 12:58, Shaun Tancheff wrote:
>>
>> On Tue, Aug 9, 2016 at 3:09 AM, Damien Le Moal
>> wrote:
On Aug 9, 2016, at 15:47, Hannes Reinecke wrote:
>>
>>
>> [trim]
>>
> Since disk type == 0 for everything that
On 8/5/2016 12:49 PM, Khan, Imran wrote:
> On 8/1/2016 2:58 PM, Khan, Imran wrote:
>> On 7/30/2016 7:54 AM, Akinobu Mita wrote:
>>> 2016-07-28 22:18 GMT+09:00 Khan, Imran :
Hi,
Recently we have observed increased latency in the CPU hotplug
event in the CPU online path. For onl
On Mon, Aug 15, 2016 at 11:00 PM, Damien Le Moal wrote:
>
> Shaun,
>
>> On Aug 14, 2016, at 09:09, Shaun Tancheff wrote:
> […]
>>> No, surely not.
>>> But one of the _big_ advantages for the RB tree is blkdev_discard().
>>> Without the RB tree any mkfs program will issue a 'discard' for ever
> "Tom" == Tom Yan writes:
Tom,
Tom> The thing is, as of ACS-4, blocks that carry DSM/TRIM LBA Range
Tom> Entries are always 512-byte.
Lovely. And SAT conveniently ignores this entirely.
Tom> Honestly, I have no idea how that would work on a 4Kn SSD, if it is
Tom> / will ever be a thing.
> "Tom" == Tom Yan writes:
Tom,
>> It would be pretty unusual for a device that is smart enough to
>> report a transfer length limit to be constrained to 1 MB and change.
Tom> Well, it is done pretty much for libata's SATL.
But why?
>> rw_max = min(BLK_DEF_MAX_SECTORS, q->limits.max_dev_s
> "Tom" == Tom Yan writes:
Tom,
>> 0x7f, the maximum number of block layer sectors that can be
>> expressed in a single bio.
Tom> Hmm, so when we queue any of the limits, we convert a certain
Tom> maximum number of physical sectors (which we have already been
Tom> doing)
logical sectors
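The unit conversion Martin is correcting here can be illustrated with a small standalone sketch. The helper name and the 4Kn example are mine, not kernel code; the only assumed fact is that the block layer counts in 512-byte sectors while a device may report limits in larger logical blocks:

```c
#include <assert.h>

/* Hypothetical helper, not the kernel's: convert a limit expressed in
 * a device's logical blocks into 512-byte block-layer sectors. */
#define BLOCK_SECTOR_SIZE 512u

static unsigned int logical_to_sectors(unsigned int logical_block_size,
                                       unsigned int nr_logical_blocks)
{
    /* Each logical block spans (logical_block_size / 512) sectors. */
    return nr_logical_blocks * (logical_block_size / BLOCK_SECTOR_SIZE);
}
```

On a 4Kn device a 256-block limit becomes 2048 block-layer sectors; on a 512n device the two units coincide.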
Shaun,
> On Aug 14, 2016, at 09:09, Shaun Tancheff wrote:
[…]
>>>
>> No, surely not.
>> But one of the _big_ advantages for the RB tree is blkdev_discard().
>> Without the RB tree any mkfs program will issue a 'discard' for every
>> sector. We will be able to coalesce those into one discard per
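The coalescing benefit described above can be sketched as a standalone model (this is not the actual RB-tree code; struct and function names are mine): adjacent single-sector discards collapse into one range, so one larger discard reaches the device instead of many tiny ones.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of discard coalescing, not kernel code. */
struct discard_range {
    unsigned long long start; /* first sector */
    unsigned long long len;   /* number of sectors */
};

/* Merge sorted, possibly adjacent ranges in place; returns new count. */
static size_t coalesce(struct discard_range *r, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (out && r[out - 1].start + r[out - 1].len == r[i].start)
            r[out - 1].len += r[i].len;   /* extend previous range */
        else
            r[out++] = r[i];              /* start a new range */
    }
    return out;
}
```

For example, single-sector discards at sectors 0, 1 and 2 coalesce into one range of length 3, while a discard at sector 10 stays separate.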
On 08/14/2016 11:23 AM, Dan Williams wrote:
[ adding Bart back to the cc ]
On Sun, Aug 14, 2016 at 11:08 AM, Dan Williams wrote:
On Sun, Aug 14, 2016 at 10:20 AM, James Bottomley
wrote:
[..]
I like it. I still think the bdi registration code should be in
charge of taking the extra referenc
On Mon, Aug 15, 2016 at 11:23:28AM -0700, Christoph Hellwig wrote:
> On Mon, Aug 15, 2016 at 11:11:22PM +0800, Ming Lei wrote:
> > After arbitrary bio size is supported, the incoming bio may
> > be very big. We have to split the bio into small bios so that
> > each holds at most BIO_MAX_PAGES bvecs
On Mon, Aug 15, 2016 at 11:11:22PM +0800, Ming Lei wrote:
> After arbitrary bio size is supported, the incoming bio may
> be very big. We have to split the bio into small bios so that
> each holds at most BIO_MAX_PAGES bvecs for safety reason, such
> as bio_clone().
I still think working around a
On Mon, Aug 15, 2016 at 12:16:30PM -0600, Jens Axboe wrote:
>> This really should be a:
>>
>> if (req_op(rq) != req_op(pos))
>>
>> I'll leave it up to Jens if he wants that in this patch or not, otherwise
>> I'll send an incremental patch.
>
> Let's get a v2 with that fixed up, it makes a big
On 08/15/2016 12:13 PM, Christoph Hellwig wrote:
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -366,7 +366,10 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
list_for_each_prev(entry, &q->queue_head) {
struct request *pos = list_entry_rq(entry);
On Mon, Aug 15, 2016 at 11:43:12AM -0500, Shaun Tancheff wrote:
> Hmm ... Since REQ_SECURE implied REQ_DISCARD doesn't this
> mean that we should include REQ_OP_SECURE_ERASE checking
> wherever REQ_OP_DISCARD is being checked now in drivers/scsi/sd.c ?
>
> (It's only in 3 spots so it's a quickie p
> --- a/block/elevator.c
> +++ b/block/elevator.c
> @@ -366,7 +366,10 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
> list_for_each_prev(entry, &q->queue_head) {
> struct request *pos = list_entry_rq(entry);
>
> - if ((req_op(rq) == REQ_
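The intent of the change can be modeled outside the kernel like this (simplified types; `req_op()` here is a stand-in for the kernel macro, and `can_sort_past()` is a name of my own): while elv_dispatch_sort() scans the queue backwards for an insertion point, it must stop at the first request with a different operation type rather than sorting past it.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the elv_dispatch_sort() op check, not kernel code. */
enum req_opf { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD, REQ_OP_SECURE_ERASE };

struct request {
    enum req_opf op;
    unsigned long long sector;
};

static enum req_opf req_op(const struct request *rq) { return rq->op; }

/* May rq be inserted ahead of pos during the backward scan? */
static bool can_sort_past(const struct request *rq, const struct request *pos)
{
    if (req_op(rq) != req_op(pos))  /* never cross an op-type boundary */
        return false;
    return rq->sector < pos->sector;
}
```

With only the sector comparison, a discard could be sorted in among reads and writes; the op check keeps the different request types from being interleaved.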
On 08/15/2016 10:15 AM, Jens Axboe wrote:
Can you reproduce at will? Would be nice to know if it hit the error case,
which is where it would hang.
Hello Jens,
Unfortunately this hang is only triggered sporadically by my tests.
In the past four weeks I have triggered several thousand
scsi_remo
On 08/15/2016 09:53 AM, Bart Van Assche wrote:
On 08/02/2016 10:21 AM, Jens Axboe wrote:
On 08/02/2016 06:58 AM, Jinpu Wang wrote:
Hi Jens,
I found that in blk_mq_register_disk we call blk_mq_disable_hotplug(), which in
turn takes mutex_lock(&all_q_mutex);
queue_for_each_hw_ctx(q, hctx, i) {
r
On Mon, Aug 15, 2016 at 9:07 AM, Adrian Hunter wrote:
> Commit 288dab8a35a0 ("block: add a separate operation type for secure
> erase") split REQ_OP_SECURE_ERASE from REQ_OP_DISCARD without considering
> all the places REQ_OP_DISCARD was being used to mean either. Fix those.
>
> Signed-off-by: Adr
On 08/15/2016 09:01 AM, Jinpu Wang wrote:
It's more likely you hit another bug; my colleague Roman fixed that:
http://www.spinics.net/lists/linux-block/msg04552.html
Hello Jinpu,
Interesting. However, I see that he wrote the following: "Firstly this
wrong sequence raises two kernel warnings: 1st.
Hi Bart,
>>
>> Nope, your analysis looks correct. This should fix it:
>>
>> http://git.kernel.dk/cgit/linux-block/commit/?h=for-linus&id=6316338a94b2319abe9d3790eb9cdc56ef81ac1a
>
> Hi Jens,
>
> Will that patch be included in stable kernels? I just encountered a
> deadlock with kernel v4.7 that lo
On 08/02/2016 10:21 AM, Jens Axboe wrote:
> On 08/02/2016 06:58 AM, Jinpu Wang wrote:
>> Hi Jens,
>>
>> I found that in blk_mq_register_disk we call blk_mq_disable_hotplug(), which in
>> turn takes mutex_lock(&all_q_mutex);
>> queue_for_each_hw_ctx(q, hctx, i) {
>> ret = blk_mq_register_hctx(hctx);
After arbitrary bio size is supported, the incoming bio may
be very big. We have to split the bio into small bios so that
each holds at most BIO_MAX_PAGES bvecs for safety reason, such
as bio_clone().
This patch fixes the following kernel crash:
> [ 172.660142] BUG: unable to handle kernel NULL
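The splitting rule described above reduces to a ceiling division. A minimal sketch, assuming BIO_MAX_PAGES is 256 as in kernels of this era (the helper name is mine, not the kernel's):

```c
#include <assert.h>

/* Illustrative only: a bio carrying more than BIO_MAX_PAGES bvecs must
 * be split into smaller bios of at most BIO_MAX_PAGES each, so helpers
 * like bio_clone() stay within their fixed-size bvec allocation. */
#define BIO_MAX_PAGES 256u

static unsigned int nr_split_bios(unsigned int nr_bvecs)
{
    /* Ceiling division: every BIO_MAX_PAGES bvecs becomes one bio. */
    return (nr_bvecs + BIO_MAX_PAGES - 1) / BIO_MAX_PAGES;
}
```

So a 257-bvec bio yields two bios after splitting, and a 1024-bvec bio yields four.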
Commit 288dab8a35a0 ("block: add a separate operation type for secure
erase") split REQ_OP_SECURE_ERASE from REQ_OP_DISCARD without considering
all the places REQ_OP_DISCARD was being used to mean either. Fix those.
Signed-off-by: Adrian Hunter
Fixes: 288dab8a35a0 ("block: add a separate operatio
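The shape of the fix can be sketched with a small helper (the function name is hypothetical, not from the patch): once REQ_OP_SECURE_ERASE is a separate op, every site that treated "discard" as covering both must check both values.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the fix's intent, not the actual kernel change. */
enum req_opf { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD, REQ_OP_SECURE_ERASE };

/* Hypothetical predicate: does this op unmap/erase data like a discard? */
static bool op_discards_data(enum req_opf op)
{
    return op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE;
}
```

A site that still tests only `op == REQ_OP_DISCARD` silently mishandles secure erase, which is exactly the class of bug the commit message describes.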