Hi all,
I found that when I create a file, the number of inodes used on the OSTs does
not increase.
Let's see the following experiment:
[root@client lustre]# lfs df -i
UUID               Inodes   IUsed   IFree IUse% Mounted on
data-MDT_UUID      524288    3428  520860    1%
You can check this in the "quota_slave/info" file of each OSD proc directory (for instance, /proc/fs/lustre/osd-ldiskfs/lustre-OST/quota_slave/info).
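For reference, a minimal sketch of how to inspect per-OST inode accounting (assuming the ldiskfs backend; the device name is a placeholder):

```shell
# Inode usage as the clients see it (MDT vs. OSTs):
lfs df -i /mnt/lustre

# On each OSS, the OSD's quota slave info shows the inode accounting
# it reports back to the quota master:
lctl get_param osd-ldiskfs.*.quota_slave.info

# Raw inode usage of the backing ldiskfs device (run on the OSS):
df -i /dev/sdX   # replace /dev/sdX with the OST's block device
```

One likely explanation for the observed behavior: the MDS precreates OST objects in batches, so creating a single file consumes an already-precreated object and the OST inode counters need not move on every create.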
-Original Message-
From: lustre-discuss-boun...@lists.lustre.org [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Chan Ching Y
Hi,
I'm testing quota with latest maintenance release (Lustre 2.4.2).
I found that Lustre still allows users to write a file over the hard limit.
See the example below: I've set a 500MB soft limit and a 530MB hard limit,
but I can write a 700MB file.
[patrick@client lustre]$ dd if=/dev/zero
HPDD
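For anyone reproducing this, a hedged sketch of the test (the mount point, file name, and user name are examples, not from the original mail):

```shell
# Set a 500MB soft / 530MB hard block quota for user patrick
# (lfs setquota -b/-B take kilobytes: 500*1024=512000, 530*1024=542720):
lfs setquota -u patrick -b 512000 -B 542720 /mnt/lustre

# Try to write past the hard limit:
dd if=/dev/zero of=/mnt/lustre/bigfile bs=1M count=700

# Check what the quota master reports:
lfs quota -u patrick /mnt/lustre
```

One plausible explanation for the overshoot: quota space is granted to each OST in chunks (qunits), so a write striped across several OSTs can temporarily exceed the hard limit before the unused grants are reclaimed by the quota master.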
On Sun, 2014-03-02 at 08:26 +0800, Chan Ching Yu, Patrick wrote:
Hi White,
tcp0(eth0) and tcp1(eth1) are connected to different segments
(connected to two virtual bridges in KVM).
Hi all,
In the old Lustre manual (version 1.8), I found that the order of LNET
in /et
d. Someone told me the order
doesn't matter; the file just lists all the available LNET devices to
use.
Does the order matter ONLY in old versions of Lustre?
Regards,
Patrick
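For context, a minimal sketch of the option being discussed (the file path and interface names are assumptions, since the original line is truncated):

```shell
# /etc/modprobe.d/lustre.conf
options lnet networks="tcp0(eth0),tcp1(eth1)"
```

As far as I know, in current Lustre all listed networks are brought up, and which one a peer actually uses is determined by the NIDs configured for that peer rather than by the order of this list.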
On Fri, 28 Feb 2014 21:20:58 +, White, Cliff
wrote:
> On 2/28/14, 1:17 AM, "Chan Ching Yu Patr
Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Feb 26, 2014, at 7:14 PM, "Chan Ching Yu, Patrick"
wrote:
[root@mds1 ~]# lctl list_nids
192.168.122.240@tcp
192.168.100.100@tcp1
[root@oss1 ~]# lctl list_nids
192.168.122.194@tcp
192.168.100.101@tcp1
[root@client ~]# lctl list_nids
192
Hi,
I'm always confused about which NID to use when multiple LNET
interfaces are available on the server and client.
Someone told me the
connection between a Lustre client and OSS is determined by which NID of
the MGS is specified when mounting.
To make it clear, I set up a VM
environment to verify:
In the
l5_lustre.1.8.5
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
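To make the "which NID" question concrete, a hedged example (the NIDs are reused from the list_nids output earlier in the thread; the fsname "data" and the mount point are assumptions):

```shell
# Mount via the MGS NID on the tcp1 network:
mount -t lustre 192.168.100.100@tcp1:/data /mnt/lustre

# Mount via the MGS NID on the tcp0 network instead:
mount -t lustre 192.168.122.240@tcp:/data /mnt/lustre
```

As I understand it, the client learns all server NIDs from the MGS at mount time and then picks, for each server, a NID on a network the client itself is configured on, so the MGS NID mainly needs to be reachable rather than dictating every connection.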
--
Chan Ching Yu, Patrick
Senior
Hi all,
Our Lustre 2.1.5 client rebooted itself and a kernel dump was generated in
/var/crash.
The backtrace output shows an ldlm function, which is Lustre-related. Any ideas?
Thanks very much.
# crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux
vmcore
KERNEL:
/usr/l
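For reference, the usual commands inside the crash shell for digging into a Lustre backtrace (a sketch; the vmlinux path is from the mail, the rest are standard crash commands):

```shell
crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux vmcore

# Inside the crash prompt:
#   bt       - backtrace of the panicking task
#   log      - kernel ring buffer; look for LBUG / LustreError lines
#   mod -S   - load module debuginfo so the ldlm symbols resolve fully
```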
Hi,
I would like to remove an OST permanently.
I've followed the Lustre manual to migrate the data files and deactivate the
device.
But how do I remove the OST permanently, so that the clients and servers no
longer see the OSC device?
These are my steps to remove an OST:
1) lctl --device 20 deactivate
2)
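For comparison, the sequence usually given for permanent OST removal (a sketch based on the manual; the device number and the target name data-OST0003 are examples):

```shell
# 1) Deactivate the OST on the MDS so no new objects are allocated there:
lctl --device 20 deactivate
# or, persistently, on the MGS:
lctl conf_param data-OST0003.osc.active=0

# 2) Migrate the remaining files off the OST (lfs find + lfs_migrate).

# 3) A deactivated OST still appears in the configuration; truly deleting
#    it requires regenerating the config logs with the filesystem stopped:
tunefs.lustre --writeconf /dev/<mdt_device>   # on the MDT and every OST
```

The writeconf step rewrites the configuration from the remaining targets, which is what finally makes the OSC device disappear from clients and servers.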
Dear all,
Just raising a probably stupid question: I always see the phrase "RPCs in flight".
What does it mean? How does it affect performance?
Thanks very much.
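Not a stupid question: "RPCs in flight" is the number of outstanding requests (sent but not yet answered) that a client keeps per server connection; allowing more of them keeps the pipe full on high-latency or high-bandwidth networks. The related tunables can be inspected like this (parameter names as in Lustre 2.x):

```shell
# Per-OSC data RPC concurrency toward each OST:
lctl get_param osc.*.max_rpcs_in_flight
lctl set_param osc.*.max_rpcs_in_flight=16

# Metadata RPC concurrency toward the MDS:
lctl get_param mdc.*.max_rpcs_in_flight
```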
I’ve tried the older 1.8.8 patchless client, writing to a 2.1.5 server.
I still got similar iozone results; does anyone have an idea?
From: Chan Ching Yu, Patrick
Sent: Friday, June 21, 2013 5:49 PM
To: lustre-discuss@lists.lustre.org
Subject: Poor Direct-IO Performance with Lustre-2.1.5
Hi,
I
Hi,
I am experiencing poor direct-IO performance using Lustre 2.1.5 (latest stable)
on CentOS 6.3.
Two OSS servers connect to the same MD3200 (daisy-chained with 4 MD1200s).
5 disks (from each MD) form a RAID-5 virtual disk as an OST.
8 OSTs are created in the file system.
RAID segment size is 256
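When comparing direct-I/O numbers, it is worth checking the transfer size, since on Lustre each O_DIRECT write is a synchronous RPC round trip; a sketch (the mount point and sizes are assumptions):

```shell
# Small direct-I/O blocks are slow: every 64k write waits for its RPC:
dd if=/dev/zero of=/mnt/lustre/testfile bs=64k count=1000 oflag=direct

# Larger blocks amortize the round trip and should come much closer
# to buffered-write throughput:
dd if=/dev/zero of=/mnt/lustre/testfile bs=4M count=100 oflag=direct
```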
Hi,
What are differences between lustre-tests and lustre-iokit?
By the way, I am using Lustre 2.1.5 (latest stable), but I can’t find the
lustre-iokit rpm for lustre 2.1.5.
http://downloads.whamcloud.com/public/lustre/lustre-2.2.0/el6/server/x86_64/
There is an rpm file lustre-iokit-1.4.0.noar
After installing openmpi with yum, I can install lustre-tests now.
Thanks very much.
Regards,
CY
-Original Message-
From: Diep, Minh
Sent: Thursday, June 20, 2013 11:07 PM
To: Chan Ching Yu, Patrick ; lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] Can't install lustre-tes
Hi,
I cannot install the lustre-tests rpm on my CentOS 6.3; it depends on the file
libmpi.so.1.
# rpm -ivh lustre-tests-2.1.5-2.6.32_279.19.1.el6_lustre.x86_64.x86_64.rpm
error: Failed dependencies:
libmpi.so.1()(64bit) is needed by
lustre-tests-2.1.5-2.6.32_279.19.1.el6_lustre.x86_64.x86_6
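The missing libmpi.so.1 comes from an MPI runtime; the fix reported later in this thread (installing openmpi with yum) looks roughly like this sketch for CentOS 6:

```shell
# Install the MPI runtime that provides libmpi.so.1:
yum install -y openmpi

# If rpm still complains, check which package provides the soname:
yum provides '*/libmpi.so.1'

# Then retry the install:
rpm -ivh lustre-tests-2.1.5-2.6.32_279.19.1.el6_lustre.x86_64.x86_64.rpm
```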
Hi,
I am considering the hard disk allocation for the Lustre storage system.
There are two Lustre I/O servers in total: one acts as MDS/OSS, the other as
a pure OSS.
Both I/O servers connect to an MD3200, which is daisy-chained with 4 MD1200s.
Each MD storage system is equipped with 12 600GB hard disks
: Tuesday, May 28, 2013 12:17 AM
To: Chan Ching Yu, Patrick
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] Help on installing Lustre 2.1.5 on stock CentOS
6.3
and ignore the "mount -t proc proc /proc" line; this is because I was building it in
a chroot'ed environment.
On 2
6_64.rpm
lustre-modules-2.1.5-2.6.32_279.el6.x86_64.x86_64.rpm
error: Failed dependencies:
lustre-backend-fs is needed by
lustre-modules-2.1.5-2.6.32_279.el6.x86_64.x86_64
That’s the same. Thx.
From: Wojciech Turek
Sent: Monday, May 27, 2013 11:24 PM
To: Chan Ching Yu, Patrick
Cc: l
Hi all,
I tried to install Lustre 2.1.5 client RPM (patchless kernel) on a CentOS 6.3
machine.
However, on the Whamcloud download site
(http://downloads.whamcloud.com/public/lustre/latest-maintenance-release/el6/client/RPMS/x86_64/),
I can only find the binary RPMs for kernel 2.6.32-279.19.1, wh
Hi all,
In my testing environment there are one MDS/OSS server and one Lustre client,
both running CentOS 6.3 with Lustre 2.1.5.
I tried powering off the MDS/OSS server abnormally while the Lustre filesystem
was still mounted on the Lustre client.
Then I powered off the Lustre client, started the MDS/OSS and Lus