[Lustre-discuss] Number of inodes on OST

2014-03-10 Thread Chan Ching Yu Patrick
Hi all, I found that when I create a file, the number of inodes on the OST does not increase. Consider the following experiment:
[root@client lustre]# lfs df -i
UUID             Inodes   IUsed   IFree    IUse%  Mounted on
data-MDT_UUID    524288   3428    520860
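A minimal way to reproduce the observation, assuming the filesystem (data, per the MDT UUID above) is mounted at /mnt/lustre on the client; the mount point and file name are illustrative:

    [root@client lustre]# lfs df -i                          # note IUsed per MDT and OST
    [root@client lustre]# touch /mnt/lustre/newfile          # one new MDT inode, plus OST object(s)
    [root@client lustre]# lfs getstripe /mnt/lustre/newfile  # shows which OST object(s) back the file
    [root@client lustre]# lfs df -i                          # compare IUsed again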

Re: [Lustre-discuss] Lustre Quota over Hard Limit

2014-03-04 Thread Chan Ching Yu Patrick
000/quota_slave/info)
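For context, the usual commands for inspecting quota state; the user name and mount point are hypothetical, and the get_param pattern is a guess at the full form of the truncated quota_slave fragment above:

    # lfs quota -u someuser /mnt/lustre                          # usage vs. soft/hard limits
    # lfs setquota -u someuser -b 307200 -B 309200 /mnt/lustre   # block limits in KB
    # lctl get_param osd-*.*.quota_slave.info                    # per-target quota slave state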

Re: [Lustre-discuss] Which NID to use?

2014-03-03 Thread Chan Ching Yu Patrick
On Sun, 2014-03-02 at 08:26 +0800, Chan Ching Yu, Patrick wrote: Hi White, tcp0(eth0) and tcp1(eth1) are connected to different segments (connected to two virtual bridges in KVM). Hi all, In the old Lustre manual (version 1.8), I found that the order of LNET in /et

Re: [Lustre-discuss] Which NID to use?

2014-03-01 Thread Chan Ching Yu, Patrick
me the order doesn't matter; the file just lists all the available LNET devices to use. Does the order matter ONLY in old versions of Lustre? Regards, Patrick On Fri, 28 Feb 2014 21:20:58 +0000, White, Cliff wrote: On 2/28/14, 1:17 AM, Chan Ching Yu Patrick wrote: Hi Mohr

Re: [Lustre-discuss] Which NID to use?

2014-02-28 Thread Chan Ching Yu Patrick
Hi Mohr, The reason why I made this setup is that I'm not sure how Lustre selects the interface in a multi-rail environment. Especially when all nodes have InfiniBand and Ethernet, how can I ensure InfiniBand is used between the client and OSS? Regards, Patrick On 02/27/2014 12:28 PM, Mohr Jr,

[Lustre-discuss] Which NID to use?

2014-02-26 Thread Chan Ching Yu, Patrick
Hi, I'm always confused about which NID to use if multiple LNET interfaces are available on the server and client. Someone told me the connection between the Lustre client and OSS is determined by which NID of the MGS is specified when mounting. To make it clear, I set up a VM environment to verify: In the
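A sketch of the two pieces this thread is asking about, with illustrative names and addresses: the lnet module options map LNET networks to interfaces on each node, and the client selects the server network by which MGS NID it uses at mount time.

    # /etc/modprobe.d/lustre.conf (e.g. a node with both InfiniBand and Ethernet)
    options lnet networks="o2ib0(ib0),tcp0(eth0)"

    # on the client: the network named in the MGS NID is the one used to reach it
    mount -t lustre 192.168.1.10@tcp0:/data /mnt/lustre
    mount -t lustre 10.0.0.10@o2ib0:/data /mnt/lustre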

Re: [Lustre-discuss] lustre 1.8.5 client failed to mount lustre

2013-10-16 Thread Chan Ching Yu Patrick
-- Chan Ching Yu, Patrick Senior System Engineer Cluster Technology Limited Modernize

[Lustre-discuss] Kernel Dump

2013-07-26 Thread Chan Ching Yu, Patrick
Hi all, Our Lustre 2.1.5 client rebooted itself and a kernel dump was generated in /var/crash. The backtrace output shows an ldlm function, which is Lustre-related. Any ideas? Thanks very much.
# crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux vmcore KERNEL:
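For reference, a typical crash(8) session over this dump; bt, log, and ps are standard crash commands, and the vmlinux path is the one quoted above:

    # crash /usr/lib/debug/lib/modules/2.6.32-279.19.1.el6_lustre.x86_64/vmlinux vmcore
    crash> bt    # backtrace of the panicking task (where the ldlm frames show up)
    crash> log   # kernel ring buffer leading up to the crash
    crash> ps    # task list at crash time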

[Lustre-discuss] Removing OSTs permanently

2013-06-28 Thread Chan Ching Yu, Patrick
Hi, I would like to remove an OST permanently. I’ve followed the Lustre manual to migrate the data files and deactivate the devices. But how do I remove the OST permanently, so that the clients and servers no longer see the osc device? These are my steps to remove an OST: 1) lctl --device 20 deactivate 2)
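A hedged sketch of the sequence, assuming the filesystem is named data and the retiring OST is index 3 (both illustrative; device 20 is taken from the message above):

    # on each client and the MDS: temporary deactivation, as in step 1
    lctl --device 20 deactivate
    # on the MGS: record it permanently so clients no longer instantiate the OSC
    lctl conf_param data-OST0003.osc.active=0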

[Lustre-discuss] RPCs in flight

2013-06-27 Thread Chan Ching Yu, Patrick
Dear all, Just raising a probably stupid question: I always see the phrase “RPCs in flight”. What does it mean? How does it affect performance? Thanks very much.
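“RPCs in flight” is the number of requests a client keeps outstanding to a server before waiting for replies; more in flight generally means better pipelining on high-latency links, at the cost of client memory. The per-OSC cap is tunable (32 below is just an example):

    lctl get_param osc.*.max_rpcs_in_flight     # current cap per OST connection
    lctl set_param osc.*.max_rpcs_in_flight=32  # raise the cap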

Re: [Lustre-discuss] Poor Direct-IO Performance with Lustre-2.1.5

2013-06-22 Thread Chan Ching Yu, Patrick
I’ve tried the older 1.8.8 patchless client, writing to the 2.1.5 server, and still got similar iozone results. Does anyone have an idea?

[Lustre-discuss] Poor Direct-IO Performance with Lustre-2.1.5

2013-06-21 Thread Chan Ching Yu, Patrick
Hi, I am experiencing poor direct-I/O performance using Lustre 2.1.5 (latest stable) on CentOS 6.3. Two OSS servers connect to the same MD3200 (daisy-chained with 4 MD1200s). 5 disks (from each MD) form a RAID-5 virtual disk as an OST; 8 OSTs are created in the file system. The RAID segment size is
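For reproduction, a typical iozone direct-I/O invocation (standard iozone flags; the record size, file size, and path are illustrative):

    # -I uses O_DIRECT; -i 0 -i 1 runs write then read tests
    iozone -I -i 0 -i 1 -r 1m -s 4g -f /mnt/lustre/iozone.tmp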

[Lustre-discuss] Can't install lustre-tests

2013-06-20 Thread Chan Ching Yu, Patrick
Hi, I cannot install the lustre-tests rpm on my CentOS 6.3; it depends on the file libmpi.so.1.
# rpm -ivh lustre-tests-2.1.5-2.6.32_279.19.1.el6_lustre.x86_64.x86_64.rpm
error: Failed dependencies:
    libmpi.so.1()(64bit) is needed by

Re: [Lustre-discuss] Can't install lustre-tests

2013-06-20 Thread Chan Ching Yu, Patrick
After installing openmpi with yum, I can install lustre-tests now. Thanks very much. Regards, CY
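The resolution in one place, as a sketch (the rpm filename is copied from the original message):

    yum install openmpi    # provides libmpi.so.1
    rpm -ivh lustre-tests-2.1.5-2.6.32_279.19.1.el6_lustre.x86_64.x86_64.rpm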

[Lustre-discuss] lustre-tests and lustre-iokit

2013-06-20 Thread Chan Ching Yu, Patrick
Hi, What are the differences between lustre-tests and lustre-iokit? By the way, I am using Lustre 2.1.5 (latest stable), but I can’t find the lustre-iokit rpm for Lustre 2.1.5. http://downloads.whamcloud.com/public/lustre/lustre-2.2.0/el6/server/x86_64/ There is an rpm file
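For what it's worth: lustre-tests carries the acceptance and regression test scripts, while lustre-iokit carries benchmarking surveys such as sgpdd-survey and obdfilter-survey. A hedged example of the latter, with illustrative parameters:

    # survey backend object throughput on a local OST (run on the OSS)
    nobjhi=2 thrhi=16 size=1024 sh obdfilter-survey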

[Lustre-discuss] Harddisk Allocation

2013-06-14 Thread Chan Ching Yu, Patrick
Hi, I am considering the hard disk allocation for the Lustre storage system. There are two Lustre I/O servers in total: one acts as MDS/OSS, the other as a pure OSS. Both I/O servers connect to an MD3200, which is daisy-chained with 4 MD1200s. Each MD storage system is equipped with 12 600GB

[Lustre-discuss] Mount takes long time after abnormal shutdown of MDS/OSS

2013-05-27 Thread Chan Ching Yu, Patrick
Hi all, In my testing environment there are one MDS/OSS server and one Lustre client, both running CentOS 6.3 with Lustre 2.1.5. I powered off the MDS/OSS server abnormally while the Lustre filesystem was still mounted on the client. Then I powered off the Lustre client, started the MDS/OSS and
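The slow mount is typically the recovery window, during which the server waits for previously connected clients to reconnect and replay their requests. When no other clients will return, recovery can be skipped when mounting the target; the device and mount point below are illustrative:

    # on the server, skip the recovery window for this target
    mount -t lustre -o abort_recov /dev/sdb /mnt/ost0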

[Lustre-discuss] Help on installing Lustre 2.1.5 on stock CentOS 6.3

2013-05-27 Thread Chan Ching Yu, Patrick
Hi all, I tried to install the Lustre 2.1.5 client RPM (patchless kernel) on a CentOS 6.3 machine. However, on the Whamcloud download site (http://downloads.whamcloud.com/public/lustre/latest-maintenance-release/el6/client/RPMS/x86_64/), I can only find binary RPMs for kernel 2.6.32-279.19.1,

Re: [Lustre-discuss] Help on installing Lustre 2.1.5 on stock CentOS 6.3

2013-05-27 Thread Chan Ching Yu, Patrick
-2.6.32_279.el6.x86_64.x86_64.rpm
error: Failed dependencies:
    lustre-backend-fs is needed by lustre-modules-2.1.5-2.6.32_279.el6.x86_64.x86_64
That’s the same. Thx.

Re: [Lustre-discuss] Help on installing Lustre 2.1.5 on stock CentOS 6.3

2013-05-27 Thread Chan Ching Yu, Patrick
and ignore the mount -t proc proc /proc line; this is because I was building it in a chroot'ed environment. On 27 May
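The constraint behind this thread: the prebuilt patchless client RPMs are compiled against one specific kernel, so the running kernel must match it exactly (or the RPMs must be rebuilt). A sketch of the check, with package names patterned on the downloads quoted above:

    uname -r                                  # must report 2.6.32-279.19.1.el6.x86_64 for the prebuilt 2.1.5 client RPMs
    yum install kernel-2.6.32-279.19.1.el6    # or rebuild the client RPMs against the running kernel
    rpm -ivh lustre-client-modules-2.1.5-*.rpm lustre-client-2.1.5-*.rpm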