On 04/30/2017 13:48, Mike Christie wrote:
On 04/26/2017 01:25 AM, lixi...@cmss.chinamobile.com wrote:
        for_each_sg(data_sg, sg, data_nents, i) {
@@ -275,22 +371,26 @@ static void alloc_and_scatter_data_area(struct tcmu_dev *udev,
                from = kmap_atomic(sg_page(sg)) + sg->offset;
                while (sg_remaining > 0) {
                        if (block_remaining == 0) {
-                               block = find_first_zero_bit(udev->data_bitmap,
-                                               DATA_BLOCK_BITS);
                                block_remaining = DATA_BLOCK_SIZE;
-                               set_bit(block, udev->data_bitmap);
+                               dbi = tcmu_get_empty_block(udev, &to);
+                               if (dbi < 0)

I know you fixed the missing kunmap_atomic here and the missing unlock in
tcmu_queue_cmd_ring in the next patch, but normally people prefer that one
patch does not add a bug that the next patch then fixes.

Do you mean the following kmap_atomic()?

from = kmap_atomic(sg_page(sg)) + sg->offset;

No new kmap/kunmap is introduced in this patch. This is the old code, and
the kunmap is at the end of alloc_and_scatter_data_area().
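
To make that concrete, below is a trimmed sketch of how the copy loop looks with
the new helper in place. It is an illustration only: it leans on the driver's
existing struct tcmu_dev, DATA_BLOCK_SIZE and tcmu_get_empty_block() definitions,
drops part of the surrounding bookkeeping, and (unlike this patch on its own)
already unmaps on the error path, which is the kunmap the next patch adds.

/*
 * Trimmed sketch of the copy loop, only to show where the existing
 * kmap_atomic()/kunmap_atomic() pair sits relative to the new
 * tcmu_get_empty_block() call; not the exact patch contents.
 */
static int alloc_and_scatter_sketch(struct tcmu_dev *udev,
				    struct scatterlist *data_sg,
				    unsigned int data_nents)
{
	size_t copy_bytes, sg_remaining, block_remaining = 0;
	struct scatterlist *sg;
	void *from, *to = NULL;
	int i, dbi;

	for_each_sg(data_sg, sg, data_nents, i) {
		sg_remaining = sg->length;
		/* map the source sg page; paired with the kunmap below */
		from = kmap_atomic(sg_page(sg)) + sg->offset;
		while (sg_remaining > 0) {
			if (block_remaining == 0) {
				block_remaining = DATA_BLOCK_SIZE;
				/* new helper picks and maps a free data block */
				dbi = tcmu_get_empty_block(udev, &to);
				if (dbi < 0) {
					/*
					 * Unmap on the error path; this is the
					 * kunmap Mike mentions being added in
					 * the next patch.
					 */
					kunmap_atomic(from - sg->offset);
					return -ENOMEM;
				}
			}
			copy_bytes = min_t(size_t, sg_remaining,
					   block_remaining);
			memcpy(to + DATA_BLOCK_SIZE - block_remaining,
			       from + sg->length - sg_remaining,
			       copy_bytes);
			sg_remaining -= copy_bytes;
			block_remaining -= copy_bytes;
		}
		/*
		 * Old code: the unmap for this sg entry sits here, at the
		 * end of each alloc_and_scatter_data_area() iteration.
		 */
		kunmap_atomic(from - sg->offset);
	}

	return 0;
}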

This is the initial patch, so the memory still comes from the slab cache for
now. The second patch that follows will convert it to use memory pages from
the buddy allocator directly.
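
Just to illustrate the direction, the buddy-backed version of the helper could
end up looking roughly like the sketch below. The udev->data_pages array, the
allocation flags and the error codes are made up for this illustration and are
not taken from the actual follow-up patch.

/*
 * Hypothetical sketch of a buddy-page-backed tcmu_get_empty_block().
 * udev->data_pages (a per-device page pointer array) is an assumption
 * made up for this sketch, and a non-sleeping allocation is used because
 * the caller above holds an atomic kmap.
 */
static int tcmu_get_empty_block(struct tcmu_dev *udev, void **addr)
{
	struct page *page;
	int dbi;

	dbi = find_first_zero_bit(udev->data_bitmap, DATA_BLOCK_BITS);
	if (dbi >= DATA_BLOCK_BITS)
		return -ENOSPC;

	/* take a lowmem page straight from the buddy allocator */
	page = alloc_page(GFP_ATOMIC);
	if (!page)
		return -ENOMEM;

	udev->data_pages[dbi] = page;	/* assumed bookkeeping array */
	set_bit(dbi, udev->data_bitmap);
	*addr = page_address(page);	/* lowmem page, so no kmap needed here */

	return dbi;
}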


Thanks,

BRs
Xiubo Li


