* Zdenek Kabelac wrote:
> On 11.7.2016 at 22:44, Jon Bernard wrote:
> > Greetings,
> >
> > I have recently noticed a large difference in performance between thick
> > and thin LVM volumes and I'm trying to understand why that is the case.
> >
> > In summary, for the same
* Jack Wang wrote:
> 2016-07-11 22:44 GMT+02:00 Jon Bernard:
> > Greetings,
> >
> > I have recently noticed a large difference in performance between thick
> > and thin LVM volumes and I'm trying to understand why that is the case.
> >
> > In
On Tue, Jul 12 2016 at 10:18pm -0400,
Eric Wheeler wrote:
> On Tue, 12 Jul 2016, NeilBrown wrote:
>
> > On Tue, Jul 12 2016, Lars Ellenberg wrote:
> >
> > >
> > > Instead, I suggest to distinguish between recursive calls to
> > > generic_make_request(), and
On Tue, Jul 12 2016 at 6:22pm -0400,
Kani, Toshimitsu wrote:
> On Fri, 2016-06-24 at 14:29 -0400, Mike Snitzer wrote:
> >
> > BTW, if in your testing you could evaluate/quantify any extra overhead
> > from DM that'd be useful to share. It could be there are bottlenecks
> >
On Fri, 2016-06-24 at 14:29 -0400, Mike Snitzer wrote:
>
> BTW, if in your testing you could evaluate/quantify any extra overhead
> from DM that'd be useful to share. It could be there are bottlenecks
> that need to be fixed, etc.
Here are some results from an fio benchmark. The test is
On Mon, Jul 11 2016 at 4:44pm -0400,
Jon Bernard wrote:
> Greetings,
>
> I have recently noticed a large difference in performance between thick
> and thin LVM volumes and I'm trying to understand why that is the case.
>
> In summary, for the same FIO test (attached), I'm
Hello Michal...
On 2016-07-12 16:07, Michal Hocko wrote:
/proc/slabinfo could at least point to who is eating that memory.
Thanks. I have made another test (and thus again put the RAID10 out of
sync for the 100th time, sigh) and made regular snapshots of slabinfo
which I have attached to
On Tue 12-07-16 14:42:12, Matthias Dahl wrote:
> Hello Michal...
>
> On 2016-07-12 13:49, Michal Hocko wrote:
>
> > I am not a storage expert (not even mention dm-crypt). But what those
> > counters say is that the IO completion doesn't trigger so the
> > PageWriteback flag is still set. Such a
Cc: Christophe Varoqui
Cc: device-mapper development
Signed-off-by: Xose Vazquez Perez
---
libmultipath/hwtable.c | 42 ++
1 file changed, 42 insertions(+)
diff --git
On Tue 12-07-16 13:49:20, Michal Hocko wrote:
> On Tue 12-07-16 13:28:12, Matthias Dahl wrote:
> > Hello Michal...
> >
> > On 2016-07-12 11:50, Michal Hocko wrote:
> >
> > > This smells like file pages are stuck in the writeback somewhere and the
> > > anon memory is not reclaimable because you
Hello Michal...
On 2016-07-12 13:49, Michal Hocko wrote:
I am not a storage expert (not even mention dm-crypt). But what those
counters say is that the IO completion doesn't trigger so the
PageWriteback flag is still set. Such a page is obviously not
reclaimable. So I would check the IO
On Tue 12-07-16 10:27:37, Matthias Dahl wrote:
> Hello,
>
> I posted this issue already on linux-mm, linux-kernel and dm-devel a
> few days ago and after further investigation it seems that this
> issue is somehow related to the fact that I am using an Intel Rapid
> Storage RAID10, so I am
Hello Michal...
On 2016-07-12 11:50, Michal Hocko wrote:
This smells like file pages are stuck in the writeback somewhere and the
anon memory is not reclaimable because you do not have any swap device.
Not having a swap device shouldn't be a problem -- and in this case, it
would cause even
On 11.7.2016 at 22:44, Jon Bernard wrote:
Greetings,
I have recently noticed a large difference in performance between thick
and thin LVM volumes and I'm trying to understand why that is the case.
In summary, for the same FIO test (attached), I'm seeing 560k iops on a
thick volume vs. 200k
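The job file Jon attached is not reproduced in the archive, but a random-read iops comparison of this sort is typically driven by an fio job along these lines (the device path and parameters here are hypothetical, not taken from the original test):

```ini
; hypothetical fio job: point filename at the thick LV, run it,
; then point it at the thin LV and run it again
[randread-iops]
filename=/dev/mapper/vg0-testlv
rw=randread
bs=4k
ioengine=libaio
iodepth=32
direct=1
numjobs=4
runtime=60
time_based=1
group_reporting=1
```

With direct=1 the page cache is bypassed, so the reported iops reflect the block layer and device-mapper path rather than cached reads.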
Hello,
I posted this issue already on linux-mm, linux-kernel and dm-devel a
few days ago and after further investigation it seems that this
issue is somehow related to the fact that I am using an Intel Rapid
Storage RAID10, so I am summarizing everything again in this mail
and include
Hello Mike...
It seems, at least, that the fact that this is an Intel Rapid
Storage RAID10 does play a role after all.
I did several tests this morning, after I finally managed to get
another disk hooked up via USB3 (the best I could do for now).
I tried dm-crypt w/ a single partition and
Changes since V5:
* Fix commit message typos in patch 1/3:
'EINVA vs EINVAL' and 'dedicate vs dedicated'
* Use $(LN) and $(RM) in Makefile in patch 3/3.
* Rebased to current master (c9aef428b1b16b8128c9fbed1cdefe30bed4ac6f).
Changes since V4:
* Remove the unused constant incorrectly added
Problem:
mpath_recv_reply() returns -EINVAL on the command 'show maps json' with 2k paths.
Root cause:
Commit 174e717d351789a3cb29e1417f8e910baabcdb16 introduced a limit of
65535 bytes on the reply string from multipathd.
With 2k paths (1k mpaths) simulated by scsi_debug, the
Enforce what mpath_cmd.h states, "-1 on failure (with errno set)", for
mpath_recv_reply() by setting errno and returning -1 on failure.
Signed-off-by: Gris Ge
---
libmpathcmd/mpath_cmd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/libmpathcmd/mpath_cmd.c