Re: [Xen-devel] [PATCH v2 05/20] block/xen-blkfront: Split blkif_queue_request in 2

2015-07-21 Thread Julien Grall
Hi Roger,

On 21/07/15 10:54, Roger Pau Monné wrote:
> On 09/07/15 at 22:42, Julien Grall wrote:
>> Currently, blkif_queue_request has 2 distinct execution paths:
>> - Send a discard request
>> - Send a read/write request
>>
>> The function also allocates grants to use for generating the
>> request, although these are only used for read/write requests.
>>
>> Rather than having a function with 2 distinct execution paths, split
>> the function in 2. This will also remove one level of indentation.
>>
>> Signed-off-by: Julien Grall 
>> Cc: Konrad Rzeszutek Wilk 
>> Cc: Roger Pau Monné 
>> Cc: Boris Ostrovsky 
>> Cc: David Vrabel 
> 
> Patch looks fine, although with so many indentation changes it's kind
> of hard to review.

I wasn't sure how to make this patch easier to review, and it seems
like diff is getting confused.

It's mostly removing one indentation layer (the if (req->cmd_flags ...))
and moving the discard code into a separate function.
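
(If it helps, viewing the change with whitespace differences ignored
should make the functional changes stand out, e.g. with git's standard
-w / --ignore-all-space diff option:

  git show -w <commit>

where <commit> stands for this patch applied to a local branch.)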

> Acked-by: Roger Pau Monné 

Thank you.

> Just one minor change below.
> 
> [...]
> 
>> @@ -595,6 +603,24 @@ static int blkif_queue_request(struct request *req)
>>  return 0;
>>  }
>>  
>> +/*
>> + * Generate a Xen blkfront IO request from a blk layer request.  Reads
>> + * and writes are handled as expected.
>> + *
>> + * @req: a request struct
>> + */
>> +static int blkif_queue_request(struct request *req)
>> +{
>> +	struct blkfront_info *info = req->rq_disk->private_data;
>> +
>> +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
>> +		return 1;
>> +
>> +	if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE)))
>> +		return blkif_queue_discard_req(req);
>> +	else
>> +		return blkif_queue_rw_req(req);
> 
> There's no need for the else clause.

I find it more readable and obvious to understand than:

if (...)
	return ...;
return ...;

when there is only one line in the else. IIRC, the resulting assembly
will be the same.
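
To make that concrete, here are the two candidate shapes side by side
(a purely illustrative sketch; the function names are the ones this
patch introduces):

	/* With the else clause, as in the patch: */
	if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE)))
		return blkif_queue_discard_req(req);
	else
		return blkif_queue_rw_req(req);

	/* Without it, as suggested: */
	if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE)))
		return blkif_queue_discard_req(req);
	return blkif_queue_rw_req(req);

Since both branches return, the compiler should emit the same code
either way.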

Anyway, I can drop the else if you really want.

Regards,

-- 
Julien Grall


Re: [PATCH v2 05/20] block/xen-blkfront: Split blkif_queue_request in 2

2015-07-21 Thread Roger Pau Monné
On 09/07/15 at 22:42, Julien Grall wrote:
> Currently, blkif_queue_request has 2 distinct execution paths:
> - Send a discard request
> - Send a read/write request
> 
> The function also allocates grants to use for generating the
> request, although these are only used for read/write requests.
> 
> Rather than having a function with 2 distinct execution paths, split
> the function in 2. This will also remove one level of indentation.
> 
> Signed-off-by: Julien Grall 
> Cc: Konrad Rzeszutek Wilk 
> Cc: Roger Pau Monné 
> Cc: Boris Ostrovsky 
> Cc: David Vrabel 

Patch looks fine, although with so many indentation changes it's kind of
hard to review.

Acked-by: Roger Pau Monné 

Just one minor change below.

[...]

> @@ -595,6 +603,24 @@ static int blkif_queue_request(struct request *req)
>   return 0;
>  }
>  
> +/*
> + * Generate a Xen blkfront IO request from a blk layer request.  Reads
> + * and writes are handled as expected.
> + *
> + * @req: a request struct
> + */
> +static int blkif_queue_request(struct request *req)
> +{
> +	struct blkfront_info *info = req->rq_disk->private_data;
> +
> +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> +		return 1;
> +
> +	if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE)))
> +		return blkif_queue_discard_req(req);
> +	else
> +		return blkif_queue_rw_req(req);

There's no need for the else clause.

Roger.


[PATCH v2 05/20] block/xen-blkfront: Split blkif_queue_request in 2

2015-07-09 Thread Julien Grall
Currently, blkif_queue_request has 2 distinct execution paths:
- Send a discard request
- Send a read/write request

The function also allocates grants to use for generating the
request, although these are only used for read/write requests.

Rather than having a function with 2 distinct execution paths, split
the function in 2. This will also remove one level of indentation.

Signed-off-by: Julien Grall 
Cc: Konrad Rzeszutek Wilk 
Cc: Roger Pau Monné 
Cc: Boris Ostrovsky 
Cc: David Vrabel 
---
Changes in v2:
- Patch added
---
 drivers/block/xen-blkfront.c | 280 +++
 1 file changed, 153 insertions(+), 127 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 6d89ed3..7107d58 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -392,13 +392,35 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode,
 	return 0;
 }
 
-/*
- * Generate a Xen blkfront IO request from a blk layer request.  Reads
- * and writes are handled as expected.
- *
- * @req: a request struct
- */
-static int blkif_queue_request(struct request *req)
+static int blkif_queue_discard_req(struct request *req)
+{
+	struct blkfront_info *info = req->rq_disk->private_data;
+	struct blkif_request *ring_req;
+	unsigned long id;
+
+	/* Fill out a communications ring structure. */
+	ring_req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
+	id = get_id_from_freelist(info);
+	info->shadow[id].request = req;
+
+	ring_req->operation = BLKIF_OP_DISCARD;
+	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
+	ring_req->u.discard.id = id;
+	ring_req->u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
+	if ((req->cmd_flags & REQ_SECURE) && info->feature_secdiscard)
+		ring_req->u.discard.flag = BLKIF_DISCARD_SECURE;
+	else
+		ring_req->u.discard.flag = 0;
+
+	info->ring.req_prod_pvt++;
+
+	/* Keep a private copy so we can reissue requests when recovering. */
+	info->shadow[id].req = *ring_req;
+
+	return 0;
+}
+
+static int blkif_queue_rw_req(struct request *req)
 {
struct blkfront_info *info = req->rq_disk->private_data;
struct blkif_request *ring_req;
@@ -418,9 +440,6 @@ static int blkif_queue_request(struct request *req)
struct scatterlist *sg;
int nseg, max_grefs;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return 1;
-
max_grefs = req->nr_phys_segments;
if (max_grefs > BLKIF_MAX_SEGMENTS_PER_REQUEST)
/*
@@ -450,139 +469,128 @@ static int blkif_queue_request(struct request *req)
id = get_id_from_freelist(info);
info->shadow[id].request = req;
 
-	if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE))) {
-		ring_req->operation = BLKIF_OP_DISCARD;
-		ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
-		ring_req->u.discard.id = id;
-		ring_req->u.discard.sector_number = (blkif_sector_t)blk_rq_pos(req);
-		if ((req->cmd_flags & REQ_SECURE) && info->feature_secdiscard)
-			ring_req->u.discard.flag = BLKIF_DISCARD_SECURE;
-		else
-			ring_req->u.discard.flag = 0;
+	BUG_ON(info->max_indirect_segments == 0 &&
+	       req->nr_phys_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
+	BUG_ON(info->max_indirect_segments &&
+	       req->nr_phys_segments > info->max_indirect_segments);
+	nseg = blk_rq_map_sg(req->q, req, info->shadow[id].sg);
+	ring_req->u.rw.id = id;
+	if (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST) {
+		/*
+		 * The indirect operation can only be a BLKIF_OP_READ or
+		 * BLKIF_OP_WRITE
+		 */
+		BUG_ON(req->cmd_flags & (REQ_FLUSH | REQ_FUA));
+		ring_req->operation = BLKIF_OP_INDIRECT;
+		ring_req->u.indirect.indirect_op = rq_data_dir(req) ?
+			BLKIF_OP_WRITE : BLKIF_OP_READ;
+		ring_req->u.indirect.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req->u.indirect.handle = info->handle;
+		ring_req->u.indirect.nr_segments = nseg;
 	} else {
-		BUG_ON(info->max_indirect_segments == 0 &&
-		       req->nr_phys_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
-		BUG_ON(info->max_indirect_segments &&
-		       req->nr_phys_segments > info->max_indirect_segments);
-		nseg = blk_rq_map_sg(req->q, req, info->shadow[id].sg);
-		ring_req->u.rw.id = id;
-		if (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST) {
+		ring_req->u.rw.sector_number = (blkif_sector_t)blk_rq_pos(req);
+		ring_req->u.rw.handle = info->handle;
+   ri