Re: [lustre-discuss] MDS/MDT

2020-02-19 Thread Abe Asraoui
Adding lustre-devel alias
From: "Abe Asraoui (System)" 
Date: Friday, February 14, 2020 at 3:43 PM
To: "lustre-discuss@lists.lustre.org" , "Abe 
Asraoui (System)" 
Subject: MDS/MDT

 Hi All,

Has anyone done any metadata performance testing with dual-ported NVMe drives
or regular NVMe drives?

Can you share what IOPS were obtained versus SATA SSDs?


Thanks in Advance,
abe


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] MDS/MDT

2020-02-14 Thread Abe Asraoui
 Hi All,

Has anyone done any metadata performance testing with dual-ported NVMe drives
or regular NVMe drives?

Can you share what IOPS were obtained versus SATA SSDs?


Thanks in Advance,
abe


___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] ldiskfs performance degradation due to kernel swap hogging CPU

2018-12-28 Thread Abe Asraoui



+ lustre-discuss 



 Hi All,
We are seeing low performance with Lustre 2.11 in an ldiskfs configuration when
running obdfilter-survey; not sure if this is a known issue.

obdfilter-survey performance under ldiskfs is impacted by kernel swap hogging
CPU usage. The current configuration is as follows:
2 OSTs: ost1, ost2
/dev/sdc on /mnt/mdt type lustre (ro,context=unconfined_u:object_r:user_tmp_t:s0,svname=tempAA-MDT,mgs,osd=osd-ldiskfs,user_xattr,errors=remount-ro)
/dev/sdb on /mnt/ost1 type lustre (ro,context=unconfined_u:object_r:user_tmp_t:s0,svname=tempAA-OST0001,mgsnode=10.10.10.168@o2ib,osd=osd-ldiskfs,errors=remount-ro)
/dev/sda on /mnt/ost2 type lustre (ro,context=unconfined_u:object_r:user_tmp_t:s0,svname=tempAA-OST0002,mgsnode=10.10.10.168@o2ib,osd=osd-ldiskfs,errors=remount-ro)
[root@oss100 htop-2.2.0]#
[root@oss100 htop-2.2.0]# dkms status
lustre-ldiskfs, 2.11.0, 3.10.0-693.21.1.el7_lustre.x86_64, x86_64: installed
spl, 0.7.6, 3.10.0-693.21.1.el7_lustre.x86_64, x86_64: installed
[root@oss100 htop-2.2.0]#
sh ./obdsurvey-script.sh 
Mon Dec 10 17:19:52 PST 2018 Obdfilter-survey for case=disk from oss100
ost 2 sz 51200K rsz 1024K obj 2 thr 2 write 134.52 [ 49.99, 101.96] rewrite 132.09 [ 49.99, 78.99] read 2566.74 [ 258.96, 2068.71]
ost 2 sz 51200K rsz 1024K obj 2 thr 4 write 195.73 [ 76.99, 128.98] rewrite
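The contents of obdsurvey-script.sh are not included in the message; a minimal sketch of what such a wrapper around the lustre-iokit obdfilter-survey tool might contain, with parameter values inferred from the output above rather than taken from the original script, is:

#!/bin/sh
# Minimal sketch of a wrapper around the lustre-iokit obdfilter-survey tool,
# surveying the two local obdfilter devices in "disk" mode on the OSS.
# The size (in MB), object, and thread values are assumptions inferred from
# the "sz 51200K ... obj 2 thr 2/4" output shown above.
nobjlo=2 nobjhi=2 thrlo=2 thrhi=4 size=50 \
case=disk targets="tempAA-OST0001 tempAA-OST0002" \
    obdfilter-survey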
[root@oss100 htop-2.2.0]# lctl dl
0 UP osd-ldiskfs tempAA-MDT-osd tempAA-MDT-osd_UUID 9
1 UP mgs MGS MGS 4
2 UP mgc MGC10.10.10.168@o2ib 65f231a0-8fd8-001d-6b0f-3e986f914178 4
3 UP mds MDS MDS_uuid 2
4 UP lod tempAA-MDT-mdtlov tempAA-MDT-mdtlov_UUID 3
5 UP mdt tempAA-MDT tempAA-MDT_UUID 8
6 UP mdd tempAA-MDD tempAA-MDD_UUID 3
7 UP qmt tempAA-QMT tempAA-QMT_UUID 3
8 UP lwp tempAA-MDT-lwp-MDT tempAA-MDT-lwp-MDT_UUID 4
9 UP osd-ldiskfs tempAA-OST0001-osd tempAA-OST0001-osd_UUID 4
10 UP ost OSS OSS_uuid 2
11 UP obdfilter tempAA-OST0001 tempAA-OST0001_UUID 5
12 UP lwp tempAA-MDT-lwp-OST0001 tempAA-MDT-lwp-OST0001_UUID 4
13 UP osp tempAA-OST0001-osc-MDT tempAA-MDT-mdtlov_UUID 4
14 UP echo_client tempAA-OST0001_ecc tempAA-OST0001_ecc_UUID 2
15 UP osd-ldiskfs tempAA-OST0002-osd tempAA-OST0002-osd_UUID 4
16 UP obdfilter tempAA-OST0002 tempAA-OST0002_UUID 5
17 UP lwp tempAA-MDT-lwp-OST0002 tempAA-MDT-lwp-OST0002_UUID 4
18 UP osp tempAA-OST0002-osc-MDT tempAA-MDT-mdtlov_UUID 4
19 UP echo_client tempAA-OST0002_ecc tempAA-OST0002_ecc_UUID 2
[root@oss100 htop-2.2.0]#
[root@oss100 htop-2.2.0]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 152.8T 0 disk /mnt/ost2
sdb 8:16 0 152.8T 0 disk /mnt/ost1
sdc 8:32 0 931.5G 0 disk /mnt/mdt
sdd 8:48 0 465.8G 0 disk 
├─sdd1 8:49 0 200M 0 part /boot/efi
├─sdd2 8:50 0 1G 0 part /boot
└─sdd3 8:51 0 464.6G 0 part 
  ├─centos-root 253:0 0 50G 0 lvm /
  ├─centos-swap 253:1 0 4G 0 lvm [SWAP]
  └─centos-home 253:2 0 410.6G 0 lvm /home
nvme0n1 259:2 0 372.6G 0 disk 
└─md124 9:124 0 372.6G 0 raid1 
nvme1n1 259:0 0 372.6G 0 disk 
└─md124 9:124 0 372.6G 0 raid1 
nvme2n1 259:3 0 372.6G 0 disk 
└─md125 9:125 0 354G 0 raid1 
nvme3n1 259:1 0 372.6G 0 disk 
└─md125 9:125 0 354G 0 raid1
 
thanks,
Abe




___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] MDT test in rel2.11

2018-07-17 Thread Abe Asraoui

Hi Patrick,

I’m more interested in what improvements have been made in comparison to
rel 2.10. We have seen some performance degradation in rel 2.10 with ZFS and
are wondering if this is still the case in rel 2.11 with the inclusion of the
DoM (Data-on-MDT) feature, etc.


Thanks,
Abe


From: Patrick Farrell 
Date: Tuesday, July 17, 2018 at 8:32 PM
To: "Abe Asraoui (System)" , 
"lustre-de...@lists.lustre.org" , 
"lustre-discuss@lists.lustre.org" 
Subject: Re: MDT test in rel2.11


Abe,

Any benchmarking would be highly dependent on hardware, both client and server. 
 Is there a particular comparison (say, between versions) you’re interested in 
or something you’re concerned about?

- Patrick

From: lustre-devel  on behalf of Abe 
Asraoui 
Sent: Tuesday, July 17, 2018 9:23:10 PM
To: lustre-de...@lists.lustre.org; lustre-discuss@lists.lustre.org; Abe Asraoui
Subject: [lustre-devel] MDT test in rel2.11

Hi All,


Has anyone done any MDT testing under the latest rel2.11 and have benchmark 
data to share?


Thanks,
Abe


___
lustre-devel mailing list
lustre-de...@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-devel-lustre.org
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


[lustre-discuss] MDT test in rel2.11

2018-07-17 Thread Abe Asraoui
Hi All,


Has anyone done any MDT testing under the latest rel2.11 and have benchmark 
data to share?


Thanks,
Abe
 

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] [HPDD-discuss] Tiered storage

2017-07-13 Thread Abe Asraoui
Thanks, Andreas, for sharing the details here; we will most likely pick this up
in rel 2.11.

-Abe

-Original Message-
From: Dilger, Andreas [mailto:andreas.dil...@intel.com] 
Sent: Thursday, July 13, 2017 2:34 PM
To: Abe Asraoui (Server)
Cc: hpdd-disc...@lists.01.org; Cowe, Malcolm J; Xiong, Jinshan; Paciucci, 
Gabriele; Lustre Discuss
Subject: Re: [HPDD-discuss] Tiered storage


> On Jul 7, 2017, at 16:06, Abe Asraoui  wrote:
> 
> Hi All,
> 
> Does someone know of a configuration guide for Lustre tiered storage?

Abe,
I don't think there is an existing guide for this, but it is definitely 
something we are looking into.

Currently, the best way to manage different storage tiers in Lustre is via OST 
pools.  As of Lustre 2.9 it is possible to set a default OST pool on the whole 
filesystem (via "lfs setstripe" on the root directory) that is inherited for 
new files/directories that are created in directories that do not already have 
a default directory layout.  Also, some issues with OST pools were fixed in 2.9 
related to inheriting the pool from a parent/filesystem default if other 
striping parameters are specified on the command line (e.g. set pool on parent 
dir, then use "lfs setstripe -c 3" to create a new file).  Together, these make 
it much easier to manage different classes of storage within a single 
filesystem.
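
To make that concrete, a minimal sketch of the pool-based setup described above might look like the following; the filesystem name "temp", pool name "flash", and OST indices are placeholders rather than anything from this thread:

# On the MGS node: create a pool and add the flash OSTs to it (names hypothetical).
lctl pool_new temp.flash
lctl pool_add temp.flash temp-OST[0000-0003]

# On a client: set a filesystem-wide default layout on the root directory;
# new files and directories without their own default layout inherit it.
lfs setstripe -p flash /mnt/temp

# With the 2.9 fixes mentioned above, giving other striping options on the
# command line still inherits the pool from the parent/filesystem default.
lfs setstripe -c 3 /mnt/temp/newfile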

Secondly, "lfs migrate" (and the helper script lfs_migrate) allow migration 
(movement) of files between OSTs (relatively) transparently to the 
applications.  The "lfs migrate" functionality (added in Lustre 2.5 I think) 
keeps the same inode, while moving the data from one set of OSTs to another set 
of OSTs, using the same options as "lfs setstripe" to specify the new file 
layout.  It is possible to migrate files opened for read, but it isn't possible 
currently to migrate files that are being modified (either this will cause 
migration to fail, or alternately it is possible to block user access to the 
file while it is being migrated).
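
As an illustration, migrating an existing file between tiers could look roughly like this; the pool name and paths are hypothetical:

# Move a file's objects onto OSTs in the (hypothetical) "flash" pool while
# keeping the same inode; layout options are the same as for "lfs setstripe".
lfs migrate -p flash /mnt/temp/dataset/input.dat

# Or migrate a whole tree non-interactively with the helper script.
lfs_migrate -y /mnt/temp/dataset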

The File Level Redundancy (FLR) feature currently under development (target 
2.11) will improve tiered storage with Lustre, by allowing the file to be 
mirrored on multiple OSTs, rather than having to be migrated to have a copy 
exclusively on a single set of OSTs.  With FLR it would be possible to mirror 
input files into e.g. flash-based OST pool before a job starts, and drop the 
flash mirror after the job has completed, without affecting the original files 
on the disk-based OSTs.  It would also be possible to write new files onto the 
flash OST pool, and then mirror the files to the disk OST pool after they 
finish writing, and remove the flash mirror of the output files once the job is 
finished.
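
For reference, the FLR interface that eventually shipped in Lustre 2.11 expresses this workflow along roughly these lines; the file name, pool name, and mirror ID below are assumptions for illustration only:

# Stage an input file onto the flash OST pool by adding a mirror there.
lfs mirror extend -N --pool flash /mnt/temp/job/input.dat

# After the job completes, drop the flash mirror and keep the disk copy
# (assumes the flash mirror was assigned mirror ID 2).
lfs mirror split -d --mirror-id 2 /mnt/temp/job/input.dat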

There is still work to be done to integrate this FLR functionality into job 
schedulers and application workflows, and/or have a policy engine that manages 
storage tiers directly, but depending on what aspects you are looking at, some 
of the functionality is already available.


Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation







___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org