Thanks a lot, Patrick, for the detailed answer. I tried GNU parallel with dd
and overall the throughput increased locally, so you are right that it is
due to the client-side single-thread issue.
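Roughly what I ran, as a sketch only (the /ssd path, stream count and sizes
here are illustrative, not my exact setup):

# 8 parallel dd streams of 10 GiB each, direct I/O to bypass the page cache
parallel -j 8 dd if=/dev/zero of=/ssd/stream{} bs=1M count=10240 oflag=direct ::: $(seq 8)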
What about the 2nd challenge, exporting a bunch of NVMes from a single server as
a shared volume? I tried glusterfs (very
Hmm – It’s possible you’ve got an issue, but I think it’s more likely that your
chosen benchmarks aren’t capable of showing the higher speed.
I’m not really sure about your fio test: writing 4K random blocks will be
relatively slow and might not speed up with more disks, but I can’t speak to it
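A sequential, direct-I/O run with a deeper queue is more likely to show the
aggregate bandwidth of several NVMes; something along these lines, purely as a
sketch (the /ssd directory, sizes and job count are placeholders):

# sequential 1 MiB writes, queue depth 32, direct I/O, 4 jobs writing into /ssd
fio --name=seqwrite --ioengine=libaio --iodepth=32 --rw=write --bs=1M \
    --direct=1 --size=20G --numjobs=4 --directory=/ssd --group_reporting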
1) fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite
--bs=4k --direct=0 --size=20G --numjobs=4 --runtime=240 --group_reporting
2) time cp x x2
3) dd if=/dev/zero of=/ssd/d.data bs=10G count=4 iflag=fullblock
If there is any other way to test this, please let me know.
/Zee
On Tue, Aug 28,
Hi Andrew
Sorry, it is often confusing when multiple patches are tracked by the same JIRA
ticket reference. A number of patches that contribute towards 4.14 support were
included in 2.10.5 but there are still a couple that did not land in time (and
still need some work by the look of things -
Was LU-10560 supposed to land in 2.10.5? I tried to compile 2.10.5 against a 4.14
kernel and am getting errors that look like the patchset for "wait_queue_t changes
to wait_queue_entry_t" is missing.
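For what it's worth, a quick way to sanity-check whether that rework is in the
tree being built (assuming the usual lustre-release source layout; just a sketch):

# if the rework landed, the new type name should appear in the sources being built
grep -rl wait_queue_entry_t libcfs/ lnet/ lustre/ | head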
Andrew Prout
Lincoln Laboratory Supercomputing Center
MIT Lincoln Laboratory
244 Wood Street, Lex
How are you measuring write speed?
From: lustre-discuss on behalf of Zeeshan Ali Shah
Sent: Tuesday, August 28, 2018 1:30:03 AM
To: lustre-discuss@lists.lustre.org
Subject: [lustre-discuss] separate SSD only filesystem including HDD
Dear All, I recently deployed
The MDS situation is very basic: active/passive mds0/mds1 for both fsA & fsB.
fsA has the combined mgs/mdt in a single zfs filesystem, and fsB has its own
mdt in a separate zfs filesystem. mds0 is primary for all.
fsA & fsB DO both have changelogs enabled to feed robinhood databases.
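For concreteness, that layout corresponds roughly to targets created like the
following (pool/dataset names and the mgs nid are placeholders, not our actual
ones, and the zpool is assumed to already exist):

# fsA: combined MGS+MDT in one zfs dataset
mkfs.lustre --fsname=fsA --mgs --mdt --index=0 --backfstype=zfs mdtpool/fsA-mdt0
# fsB: its own MDT in a separate zfs dataset, pointing at the same MGS
mkfs.lustre --fsname=fsB --mdt --index=0 --backfstype=zfs --mgsnode=mds0@tcp mdtpool/fsB-mdt0
# one changelog consumer per filesystem for the robinhood feeds
lctl --device fsA-MDT0000 changelog_register
lctl --device fsB-MDT0000 changelog_register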
What’s t