srv_sess->sessname, dev_name);
+ kfree(full_path);
+ return ERR_PTR(-EINVAL);
}
/* eliminate duplicated slashes */
Looks good.
Acked-by: Guoqing Jiang
On 10/21/19 11:40 AM, Jan Kara wrote:
On Mon 21-10-19 11:27:36, Guoqing Jiang wrote:
On 10/21/19 11:12 AM, Jan Kara wrote:
On Mon 21-10-19 10:49:54, Guoqing Jiang wrote:
On 10/21/19 10:38 AM, Jan Kara wrote:
Currently, block device size is not updated on second and further open
for block
On 10/21/19 11:36 AM, Johannes Thumshirn wrote:
On 21/10/2019 11:27, Guoqing Jiang wrote:
static void bdev_disk_changed(struct block_device *bdev, bool
invalidate)
{
- if (invalidate)
- invalidate_partitions(bdev->bd_disk, bdev);
- else
- rescan_partitions(b
On 10/21/19 11:12 AM, Jan Kara wrote:
On Mon 21-10-19 10:49:54, Guoqing Jiang wrote:
On 10/21/19 10:38 AM, Jan Kara wrote:
Currently, block device size is not updated on second and further open
for block devices where partition scan is disabled. This is particularly
annoying for example
On 10/21/19 10:38 AM, Jan Kara wrote:
Currently, block device size is not updated on second and further open
for block devices where partition scan is disabled. This is particularly
annoying for example for DVD drives as that means block device size does
not get updated once the media is inser
On 9/19/19 11:45 AM, Hannes Reinecke wrote:
From: Hannes Reinecke
When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
need to requeue the I/O, but adding it to the global request list
will mess up the passed-in request list. So re-add the request
to the original list and le
On 9/12/19 5:29 AM, Yufen Yu wrote:
On 2019/9/12 10:46, Ming Lei wrote:
On Sat, Sep 07, 2019 at 06:24:50PM +0800, Yufen Yu wrote:
There is a race condition between timeout check and completion for
flush request as follow:
timeout_work issue flush issue flush
blk_i
Acked-by: Guoqing Jiang
Thanks,
Guoqing
Hi Neil,
On 9/9/19 8:58 AM, NeilBrown wrote:
Due to a bug introduced in Linux 3.14 we cannot determine the
correct layout for a multi-zone RAID0 array - there are two
possibiities.
possibilities.
It is possible to tell the kernel which to choose using a module
parameter, but this can be c
On 9/9/19 8:57 AM, NeilBrown wrote:
If the drives in a RAID0 are not all the same size, the array is
divided into zones.
The first zone covers all drives, to the size of the smallest.
The second zone covers all drives larger than the smallest, up to
the size of the second smallest - etc.
A c
Hi,
On 8/16/19 3:40 PM, Guilherme G. Piccoli wrote:
+static bool linear_is_missing_dev(struct mddev *mddev)
+{
+ struct md_rdev *rdev;
+ static int already_missing;
+ int def_disks, work_disks = 0;
+
+ def_disks = mddev->raid_disks;
+ rdev_for_each(rdev, mddev)
+
On 01/10/2018 02:13 PM, Paolo Valente wrote:
Il giorno 10 gen 2018, alle ore 02:41, Guoqing Jiang ha
scritto:
On 01/09/2018 05:27 PM, Paolo Valente wrote:
For each pair [device for which bfq is selected as I/O scheduler,
group in blkio/io], bfq maintains a corresponding bfq group. Each
vein,
bfqg_stats_xfer_dead is not executed for a root group.
This commit fixes bfq_pd_offline so that the latter executes the above
missing operations for a root group too.
Reported-by: Holger Hoffstätte
Reported-by: Guoqing Jiang
Signed-off-by: Davide Ferrari
Signed-off-by: Paolo Valente
On 01/03/2018 03:44 PM, Paolo Valente wrote:
Il giorno 03 gen 2018, alle ore 04:58, Guoqing Jiang ha
scritto:
Hi,
Hi
In my test, I found some issues when trying bfq with xfs.
The test basically just sets the disk's scheduler to bfq,
creates xfs on top of it, mounts the fs and writes some
Hi,
In my test, I found some issues when trying bfq with xfs.
The test basically just sets the disk's scheduler to bfq,
creates xfs on top of it, mounts the fs and writes something,
then unmounts the fs. After several rounds of iteration,
I can see different call traces appear.
For example, the one which ha
On 12/21/2017 03:53 PM, Paolo Valente wrote:
Il giorno 21 dic 2017, alle ore 08:08, Guoqing Jiang ha
scritto:
Hi,
On 12/08/2017 08:34 AM, Holger Hoffstätte wrote:
So plugging in a device on USB with BFQ as scheduler now works without
hiccup (probably thanks to Ming Lei's last
Hi,
On 12/08/2017 08:34 AM, Holger Hoffstätte wrote:
So plugging in a device on USB with BFQ as scheduler now works without
hiccup (probably thanks to Ming Lei's last patch), but of course I found
another problem. Unmounting the device after use, changing the scheduler
back to deadline or kyber
On 09/29/2017 02:45 AM, Liu Bo wrote:
On Thu, Sep 28, 2017 at 09:57:41AM +0800, Guoqing Jiang wrote:
On 09/28/2017 06:13 AM, Liu Bo wrote:
MD's rdev_set_badblocks() expects that badblocks_set() returns 1 if
badblocks are disabled, otherwise, rdev_set_badblocks() will record
super
On 09/28/2017 06:13 AM, Liu Bo wrote:
MD's rdev_set_badblocks() expects that badblocks_set() returns 1 if
badblocks are disabled, otherwise, rdev_set_badblocks() will record
superblock changes and return success in that case and md will fail to
report an IO error which it should.
This bug has
bio_io_error is capable of replacing the two lines.
Cc: Christoph Hellwig
Cc: Philipp Reisner
Cc: Lars Ellenberg
Cc: Mike Snitzer
Cc: Sagi Grimberg
Cc: Alexander Viro
Signed-off-by: Guoqing Jiang
---
drivers/block/drbd/drbd_int.h | 3 +--
drivers/md/dm-mpath.c | 3 +--
drivers
a good idea, and even the two patches can be put into
one, so how about the following patch?
Looks good.
Acked-by: Guoqing Jiang
Thanks,
Guoqing
Shaohua, what do you think of this one?
---
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 3d957ac1e109..7ffc622dd7fa 100644
--- a/drivers
On 06/26/2017 08:09 PM, Ming Lei wrote:
We will support multipage bvec soon, so initialize bvec
table using the standard way instead of writing the
table directly. Otherwise it won't work any more once
multipage bvec is enabled.
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Signed-off-by: Mi
bio_io_error was introduced in the commit 4246a0b
("block: add a bi_error field to struct bio"), so
use it directly.
Signed-off-by: Guoqing Jiang
---
block/blk-core.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c70685
Hi All,
As you know, Coly has proposed a general md raid discussion. I
would like to attend this discussion, and besides the topics listed
in the previous proposal, I think we can talk about improving the test
suite of mdadm to make it more robust (I can share a related test
suite which is used for clust
On 01/10/2017 12:38 AM, Coly Li wrote:
Hi Folks,
I'd like to propose a general md raid discussion; it is quite necessary
for most active md raid developers to sit together to discuss the current
challenges of Linux software raid and development trends.
In the last years, we have many development a