Re: [lustre-discuss] lustre project quotas

2018-05-10 Thread Shuichi Ihara
Please check with 'dumpe2fs -h /dev/xxx | grep 'Filesystem features'' whether the 'project'
feature flag is enabled.
If it is not, you need to add it on all MDTs/OSTs, e.g. tune2fs -O project /dev/xxx.
This is described in detail in the Lustre manual.
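
For reference, the full sequence might look like the sketch below (the device name, mount point, project ID, and quota limit are hypothetical examples, not from this mail):

```shell
# On each server: check whether the 'project' feature flag is enabled
# (/dev/sdb1 is a hypothetical MDT/OST device)
dumpe2fs -h /dev/sdb1 | grep 'Filesystem features'

# If it is missing, enable the feature and project quota accounting
# (the target must be unmounted)
tune2fs -O project /dev/sdb1
tune2fs -Q prjquota /dev/sdb1

# On a client: attach a directory tree to project ID 1000 (inherited by
# new files via -s), then set and check a block limit for that project
lfs project -p 1000 -s /lustre/mydir
lfs setquota -p 1000 -B 10G /lustre
lfs quota -p 1000 /lustre
```

This is only a sketch of the procedure the mail refers to; the Lustre manual's project quota section is the authoritative reference.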

Thanks
Ihara

> On May 11, 2018, at 1:02, Einar Næss Jensen  wrote:
> 
> ldiskfs
> 
> Sent from my iPhone
> 
> On 10 May 2018, at 17:26, Alexander I Kulyavtsev  wrote:
> 
>> Do you use zfs or ldiskfs on the OST?
>> ZFS does not have project quotas yet. Alex.
>>  
>> From: lustre-discuss  on behalf of 
>> Einar Næss Jensen 
>> Date: Thursday, May 10, 2018 at 7:47 AM
>> To: "lustre-discuss@lists.lustre.org" 
>> Subject: Re: [lustre-discuss] lustre project quotas
>>  
>> Lustre server is 2.10.1
>> Lustre client is 2.10.3
>>  
>> -- 
>> Einar Næss Jensen
>> NTNU HPC Section
>> Norwegian University of Science and Technology
>> Address: Høgskoleringen 7i
>>  N-7491 Trondheim, NORWAY
>> tlf: +47 90990249
>> email:   einar.nass.jen...@ntnu.no
>> From: lustre-discuss  on behalf of 
>> Einar Næss Jensen 
>> Sent: Thursday, May 10, 2018 2:45 PM
>> To: lustre-discuss@lists.lustre.org
>> Subject: [lustre-discuss] lustre project quotas
>>  
>> Hello.
>>  
>> I have successfully installed Lustre and it works well, but I'm having 
>> trouble figuring out how to enable and set project quotas.
>> How can I verify that project quotas are enabled, and how do I set up 
>> projects and assign directories and users to the projects?
>>  
>>  
>> Best Regards
>> Einar Næss Jensen
>>  
>> -- 
>> Einar Næss Jensen
>> NTNU HPC Section
>> Norwegian University of Science and Technology
>> Address: Høgskoleringen 7i
>>  N-7491 Trondheim, NORWAY
>> tlf: +47 90990249
>> email:   einar.nass.jen...@ntnu.no
> ___
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [Lustre-discuss] configure error using Lustre 2.3 and OFED 3.5

2013-04-20 Thread Shuichi Ihara

Hi,

The Lustre patches needed to build with OFED-3.5 have landed in the master branch,
but they are not included in lustre-2.3.
Try this patch: http://review.whamcloud.com/3011 (you also need
http://review.whamcloud.com/6048).

btw, did you apply any patches to OFED-3.5 to build it with RHEL 6.4's kernel?
As far as I tested, compat-driver did not work on the latest RHEL 6.4 kernel.
I filed a bug on compat-driver's bugzilla and it was fixed in the latest branch:
https://bugzilla.kernel.org/show_bug.cgi?id=55971

Also, compat-rdma failed on the latest RHEL 6.3 kernel as well as on RHEL 6.4. That
was filed in the OFED bugzilla as well; see
http://bugs.openfabrics.org/show_bug.cgi?id=2421 and
https://jira.hpdd.intel.com/browse/LU-2975

Anyway, we are waiting for these fixes for Lustre.
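
As a rough sketch (the patch file names below are made up; the Gerrit changes would be downloaded beforehand), applying the two changes to a lustre-2.3 tree and rebuilding might look like:

```shell
# Apply the two Gerrit changes to the lustre-2.3 source tree
# (file names here are hypothetical)
cd lustre-2.3.0
patch -p1 < ../3011-ofed-3.5-support.patch
patch -p1 < ../6048-ofed-3.5-followup.patch

# Regenerate the build system, configure against the compat-rdma
# (OFED-3.5) tree, and rebuild the packages
sh autogen.sh
./configure --with-o2ib=/usr/src/compat-rdma-3.5
make rpms
```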

Thanks
Ihara
 
On Apr 20, 2013, at 2:15 PM, "Hebenstreit, Michael" 
 wrote:

> I could solve it - the problem was only in configure; to compile conftest.c 
> some changes in the OFED header were necessary (note - I think the changes 
> make sense there, as the header file uses functions that are otherwise undefined):
> 
> ofed/3.5/src/compat-rdma/include/rdma/ib_addr.h
> 
>#include 
> +   #include 
> +
> +   #if (LINUX_VERSION_CODE < KERNEL_VERSION(3,2,0))
> +
> +   extern int __ethtool_get_settings(struct net_device *dev,
> +  struct ethtool_cmd *cmd);
> +   #endif
> 
>struct rdma_addr_client {
>atomic_t refcount;
>struct completion comp;
>};
> 
> There is also another error, but without any impact. Configure tries to 
> source src/ofa_kernel/config.mk - in earlier versions this was a simple 
> config file of the form "PARAM=VALUE"; now the file is in Makefile format, and 
> sourcing that from configure as a Bourne shell script is not advisable.
> 
> Michael
> 
> -Original Message-
> From: lustre-discuss-boun...@lists.lustre.org 
> [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Hebenstreit, 
> Michael
> Sent: Friday, April 19, 2013 5:39 PM
> To: Diep, Minh; Lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] configure error using Lustre 2.3 and OFED 3.5
> 
> That's not my problem - OFED is working, Lustre is not willing to compile :P
> 
> Michael
> 
> -Original Message-
> From: Diep, Minh 
> Sent: Friday, April 19, 2013 5:33 PM
> To: Hebenstreit, Michael; Lustre-discuss@lists.lustre.org
> Subject: Re: [Lustre-discuss] configure error using Lustre 2.3 and OFED 3.5
> 
> OFED 3.5 does not support the RHEL 6.4 kernel yet. I believe 3.5.1 will.
> 
> Thanks
> -Minh
> 
> On 4/19/13 5:05 PM, "Hebenstreit, Michael" 
> wrote:
> 
>> Configure fails at testing for openib - anyone an idea?
>> Thanks
>> Michael
>> 
>> configure:10034: checking whether to enable OpenIB gen2 support
>> configure:10138: cp conftest.c build && make -d modules CC=gcc -f /home/mhebenst/lustre-2.3.0/build/Makefile LUSTRE_LINUX_CONFIG=/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/.config LINUXINCLUDE= -I/usr/local/ofed/3.5/src/compat-rdma-3.5/include -I/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/arch/x86/include -I/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/arch/x86/include/generated -I /admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/include -I/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/include -I/admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1/include2 -include include/linux/autoconf.h -o tmp_include_depends -o scripts -o include/config/MARKER -C /admin/extra/linux-2.6.32-358.2.1.el6.x86_64.crt1 EXTRA_CFLAGS=-Werror-implicit-function-declaration -g -I/home/mhebenst/lustre-2.3.0/libcfs/include -I/home/mhebenst/lustre-2.3.0/lnet/include -I/home/mhebenst/lustre-2.3.0/lustre/include -I/usr/local/ofed/3.5/src/compat-rdma-3.5/include M=/home/mhebenst/lustre-2.3.0/build
>> make[1]: Warning: File `/home/mhebenst/lustre-2.3.0/build/conftest.c' has modification time 7.2e+03 s in the future
>> In file included from /usr/local/ofed/3.5/src/compat-rdma-3.5/include/rdma/rdma_cm.h:39,
>>from /home/mhebenst/lustre-2.3.0/build/conftest.c:58:
>> /usr/local/ofed/3.5/src/compat-rdma-3.5/include/rdma/ib_addr.h: In function 'iboe_get_rate':
>> /usr/local/ofed/3.5/src/compat-rdma-3.5/include/rdma/ib_addr.h:223: error: implicit declaration of function 'rtnl_lock'
>> /usr/local/ofed/3.5/src/compat-rdma-3.5/include/rdma/ib_addr.h:224: error: implicit declaration of function '__ethtool_get_settings'
>> /usr/local/ofed/3.5/src/compat-rdma-3.5/include/rdma/ib_addr.h:225: error: implicit declaration of function 'rtnl_unlock'
>> make[1]: *** [/home/mhebenst/lustre-2.3.0/build/conftest.o] Error 1
>> make: *** [_module_/home/mhebenst/lustre-2.3.0/build] Error 2
>> configure:10141: $? = 2
>> configure: failed program was:
>> | /* confdefs.h.  */
>> | #define PACKAGE_NAME "Lustre"
>> | #define PACKAGE_TARNAME "lustre"
>> | #define PACKAGE_VERSION "LUSTRE_VERSION"
>> | #define PACKAGE_STRING "Lustre LUSTR

Re: [Lustre-discuss] Can one node mount more than one lustre cluster?

2011-03-14 Thread Shuichi Ihara

Yes, you can mount two filesystems that are served by separate MDS/OSS nodes over a
single fabric.
e.g.) assume two filesystems (both with the filesystem name 'lustre') are configured
as below:
FS1 - MDS/MGS: 192.168.1.1 and OSS: 192.168.1.2
FS2 - MDS/MGS: 192.168.1.11 and OSS: 192.168.1.12

On the client, you should be able to do:
mount -t lustre 192.168.1.1@o2ib:/lustre /lustre1
mount -t lustre 192.168.1.11@o2ib:/lustre /lustre2

btw, even if you configure two filesystems on the same MDS/OSS under different
filesystem names, you can mount both on the client.

Thanks
Ihara

On 3/15/11 3:15 PM, Brian O'Connor wrote:
>
> Hi,
>
> What are the constraints on a client mounting more than one lustre
> file system?
>
> I realise that a lustre cluster can have more than one file system
> configured, but can a client mount different file systems from different
> lustre clusters on the same network?;ie.
>
> Assume a Single IB fabric and two Lustre clusters with separate
> MGS/MDS/OSS. One lustre is Lincoln and the other is Washington
>
>   >list_nids
> 192.168.1.10@o2ib
>
>   >mount -t lustre lincoln:/lustre  /lincoln
>   >mount -t lustre washington:/lustre /washinton
>
> Is this doable, or should they be on separate IB fabrics?
>
>
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Enable async journals

2010-07-13 Thread Shuichi Ihara

Frank,

> tune2fs -O ^has_journal  on all OSSes for all all OSTs

What is the reason for removing the journal feature? You need the journal
regardless of whether you enable async_journal or not.
Also, as far as I have seen, without async journal enabled we generally see better
performance using external journals rather than the internal journal.
However, once async_journal is enabled, we see the same (in some cases better)
performance even when the internal journal is used.
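
For reference, attaching an external journal to an OST might look like this sketch (device names are hypothetical; the OST must be unmounted):

```shell
# Create a dedicated external journal device on a fast disk
# (block size must match the OST filesystem's block size)
mke2fs -O journal_dev -b 4096 /dev/sdd1

# Remove the internal journal from the OST device and point it
# at the external journal device instead
tune2fs -O ^has_journal /dev/sdc1
tune2fs -j -J device=/dev/sdd1 /dev/sdc1

# Independently of where the journal lives, async journal commits
# are enabled at runtime on each OSS with:
lctl set_param obdfilter.*.sync_journal=0
```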

Thanks
Ihara

(7/13/10 9:53 PM), Frank Heckes wrote:
> Hi all,
>
> we use SLES 11 and Lustre 1.8.1.1 + patches and would like to convert a Lustre FS
> using external journals to one with async journals enabled.
> Question is whether the procedure:
>
> umount  on all clients
> umounton all OSSes
> e2fsck  on all OSSes for all all OSTs
> tune2fs -O ^has_journal  on all OSSes for all all OSTs
> lctl set_param obdfilter.*.sync_journal=0  on all OSSes
> mount on all OSSes
> mount   on all clients
>
> is correct to do the job? (I hope it isn't necessary to recreate the FS
> from scratch.) Many thanks in advance.
>
> Cheers
>
> -Frank Heckes
>
> P.S.: 1.8.1.1 still contains some bugs which have been fixed in 1.8.3.
> Described setup is for test purpose only, but the procedure shall be
> used in the final environment (using lustre 1.8.3), too.
>
>
>
> 
> 
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDirig Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
> Dr. Ulrich Krafft (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> 
> 
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] MGS Nids

2010-05-20 Thread Shuichi Ihara

You need to list both MGS nodes in the 'mount' command on the clients.
e.g.) mount -t lustre 192.168.1...@tcp:192.168.1...@tcp:/lustre /lustre

The client will attempt to connect to the secondary MGS once the primary is not
available.
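
A sketch of how the failover pair gets declared at format time, with made-up addresses:

```shell
# Format the MGS/MDT with a failover partner so the configuration
# records both NIDs (192.168.1.1 is the primary node in this example,
# 192.168.1.2 the backup that can also serve the target)
mkfs.lustre --fsname=lustre --mgs --mdt --failnode=192.168.1.2@tcp /dev/sda1

# Clients list both MGS NIDs separated by ':' and fail over
# automatically when the primary becomes unreachable
mount -t lustre 192.168.1.1@tcp:192.168.1.2@tcp:/lustre /lustre
```

The actual failover of the target device between the two servers (who mounts it when) is still handled by an external HA tool; Lustre only handles the client-side retry.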

Thanks
Ihara

(5/20/10 9:22 PM), leen smit wrote:
> Ok, no VIP's then.. But how does failover work in lustre then?
> If I setup everything using the real IP and then mount from a client and
> bring down the active MGS, the client will just sit there until it comes
> back up again.
> As in, there is no failover to the second node.  So how does this
> internal lustre failover mechanism work?
>
> I've been going through the docs, and I must say there is very little on
> the failover mechanism, apart from mentions that a separate app should
> take care of that. That's the reason I'm implementing keepalived.
>
> At this stage I really am clueless, and can only think of creating a TUN
> interface, which will have the VIP address (thus, it becomes a real IP,
> not just a VIP).
> But I got a feeling that ain't the right approach either...
> Is there any docs available where a active/passive MGS setup is described?
> Is it sufficient to define a --failnode=nid,...  at creation time?
>
> Any help would be greatly appreciated!
>
> Leen
>
>
> On 05/20/2010 01:45 PM, Brian J. Murrell wrote:
>> On Thu, 2010-05-20 at 12:46 +0200, leen smit wrote:
>>
>>> Keepalive uses a VIP in a active/passive state. In a failover situation
>>> the VIP gets transferred to the passive one.
>>>
>> Don't use virtual IPs with Lustre.  Lustre clients know how to deal with
>> failover nodes that have different IP addresses and using a virtual,
>> floating IP address will just confuse it.
>>
>> b.
>>
>>
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre 1.8.1 and ISCSI Kernel support

2009-08-26 Thread Shuichi Ihara


Yes, it's now working well in my test environment. I built an rpm for
the patched kernel myself. You also need to stop the default iscsi-initiator
service which is enabled by iscsi-initiator-utils (/etc/init.d/iscsi stop).

Try my simple rpm spec file (attached); here is a quick procedure:
1. download the tarball open-iscsi-2.0-871.tar.gz from www.open-iscsi.org
2. build the open-iscsi rpm package with the attached spec file
3. uninstall the iscsi-initiator-utils package
4. install open-iscsi-2.0-871_ and start the initiator with 
/etc/init.d/open-iscsi start
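
The four steps, sketched as commands (the spec file name and build paths are assumptions; the tarball name is from the mail):

```shell
# 1. Download open-iscsi-2.0-871.tar.gz from www.open-iscsi.org and
#    place it where rpmbuild expects sources (RHEL 5 era layout assumed)
cp open-iscsi-2.0-871.tar.gz /usr/src/redhat/SOURCES/

# 2. Build the RPM from the attached spec file (file name assumed)
rpmbuild -bb open-iscsi.spec

# 3. Stop the stock initiator service and remove its package
/etc/init.d/iscsi stop
rpm -e iscsi-initiator-utils

# 4. Install the new package and start the initiator
rpm -ivh /usr/src/redhat/RPMS/x86_64/open-iscsi-2.0-871*.rpm
/etc/init.d/open-iscsi start
```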


Reykjavik hindisvik wrote:

Hi,

Thank you for your answer.
I've tried compiling open-iscsi, but it seems to be the same:
Aug 26 10:40:45 lfs1 kernel:  connection1:0: Could not create connection 
due to crc32c loading error. Make sure the crc32c module is built as a 
module or into the kernel


I've just compiled open-iscsi like this:
make KSRC=/usr/src/kernels/2.6.18-128.1.14.el5_lustre.1.8.1-x86_64
Everything seemed to be OK, but it is still impossible to mount the iSCSI target 
this way...
Have you compiled it like that? Does everything work fine for you? Could you 
tell me which rpm you use?


Does someone have another idea?

Thank you

Best regards

Hindisvik

2009/8/26 Shuichi Ihara <ih...@sun.com>


Hi,

I saw that similar problem was posted on ofed list and I got same
error before.
http://lists.openfabrics.org/pipermail/general/2009-June/060027.html

I'm also using the lustre with iscsi for just testing, but I built
open-iscsi
from the tarball and installed it. http://www.open-iscsi.org/

Hope this helps.

Thanks
-Ihara


Re: [Lustre-discuss] Lustre 1.8.1 and ISCSI Kernel support

2009-08-26 Thread Shuichi Ihara

Hi,

I saw that similar problem was posted on ofed list and I got same error before.
http://lists.openfabrics.org/pipermail/general/2009-June/060027.html

I'm also using Lustre with iSCSI, just for testing, but I built open-iscsi
from the tarball and installed it. http://www.open-iscsi.org/

Hope this helps.

Thanks
-Ihara


Reykjavik hindisvik wrote:
> 
> 
> 2009/8/26 Reykjavik hindisvik <hindis...@gmail.com>
> 
> Hi,
> 
> Thank you for your answer, it was a great idea to install 
> kernel-ib-1.4.1-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm since it
> provides the iSCSI modules needed! Thanks for that.
> 
> So now, I can start iscsid service and launch :
> iscsiadm -m discovery  --type sendtargets --portal 192.168.0.253
> ... but I still have a problem when I want to connect my target :
> iscsiadm -m node -T iqn.2008-07.fr.xxx:xxx.disk1.sys1.xyz -p
> 192.168.0.253 -l
> 
> It gives me the following error :
> iscsiadm: Could not login to [iface: default, target:
> iqn.2008-07.fr.xxx:xxx.disk1.sys1.xyz, portal: 192.168.0.253,3260]:
> iscsiadm: initiator reported error (9 - internal error)
> 
> Aug 26 09:19:24 lfs1 kernel:  session2: couldn't create a new
> connection.<6>scsi7 : iSCSI Initiator over TCP/IP
> Aug 26 09:19:24 lfs1 kernel:  connection3:0: Could not create
> connection due to crc32c loading error. Make sure the crc32c module
> is built as a module or into the kernel
> Aug 26 09:19:24 lfs1 iscsid: received iferror -12
> Aug 26 09:19:24 lfs1 iscsid: can't create connection (115)
> 
> It seems there's a problem with the crc32c module, which is needed
> to mount an iSCSI target. If I use a non-Lustre-patched kernel like
> 2.6.18-128.1.14.el5, the module is compiled into the kernel and
> everything works fine.
> 
> What can I do?
> Has someone encountered this problem?
> 
> Thank you in advance.
> 
> Best regards,
> 
> Hindisvik
> 
> 
> 
> 2009/8/25 Arne Wiebalck <arne.wieba...@cern.ch>
> 
> Hi Hindisvik,
> 
> iSCSI support is switched off in the lustre kernels, but it should
> be available from the kernel-ib package, see this thread:
> 
> http://lists.lustre.org/pipermail/lustre-discuss/2009-July/011068.html
> 
> I tried that with 1.8.0, but did not succeed: the iSCSI modules
> could not be loaded, so I compiled my own kernel in the end.
> 
> I did not check with 1.8.1, but if you succeed I would be very
> interested to know.
> 
> HTH,
>  Arne
> 
> 
> 
> 
> 
> Reykjavik hindisvik wrote:
> 
> Hello,
> 
> I've downloaded the last release of Lustre (1.8.1) in rpm :
> 
> e2fsprogs-1.41.6.sun1-0redhat.rhel5.x86_64.rpm
> *kernel-lustre-2.6.18-128.1.14.el5_lustre.1.8.1.x86_64.rpm*
> lustre-1.8.1-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm
> lustre-client-1.8.1-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm
> 
> lustre-client-modules-1.8.1-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm
> lustre-ldiskfs-3.0.9-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm
> lustre-modules-1.8.1-2.6.18_128.1.14.el5_lustre.1.8.1.x86_64.rpm
> 
> I'd like to use Lustre with an iSCSI storage device, and it
> seems this kernel does not support iSCSI (?!)
> (scsi_transport_iscsi.ko). Has someone encountered this
> problem? Is there another version with iSCSI support? What
> can I do?
> 
> Thank you in advance for any suggestion.
> 
> Best regards
> 
> Hindisvik
> 
> 
> 
> 
> 
> 
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] create LVM volume for OSSs

2008-07-05 Thread Shuichi Ihara
Please use mdadm(8) to build a RAID5 MD device.
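
A minimal sketch with hypothetical disks, including the follow-on step of formatting the MD device as an OST (the fsname and MGS NID are examples):

```shell
# Build a 4-disk RAID5 array for one OST
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial resync finish
cat /proc/mdstat

# Format the MD device as an OST and mount it on the OSS
mkfs.lustre --fsname=lustre --ost --mgsnode=192.168.1.1@tcp /dev/md0
mount -t lustre /dev/md0 /mnt/ost0
```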

Thanks
-Ihara

sd a wrote:
> Thanks for your reply,
> 
> I want to set up a striping/redundancy mechanism such as RAID 5 for the OSS
> to protect against data loss.
> 
> How do I do this?
> 
> On Fri, Jul 4, 2008 at 5:13 PM, Shuichi Ihara <[EMAIL PROTECTED]> wrote:
> 
> 
> Yes, lmc is configurator that is part of lustre-1.4 and as you
> mentioned we use
> "lustre.mount" instead of these lustre-1.4 utilities in the
> lustre-1.6.x later.
> But, your question was about the LVM. This is not part of lustre. If
> you want to
> know about LVM more detail, below RHEL administration manual would
> be useful.
> 
> 
> http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/index.html
> 
> Thanks
> -Ihara
> 
> sd a wrote:
> 
> Hi all,
> 
> I've read Lustre Quick Start
> (http://wiki.lustre.org/index.php?title=Lustre_Quick_Start)  and
> Lustre Operation manual.
> 
> As far as I know, the lmc command was removed in Lustre 1.6.5
> (?). So, I am quite confused about creating LVM volumes for OSTs.
> 
> Can anybody describe clearly how to create LVM and/or RAID for
> OSTs ?
> 
> Thanks a lot
> 
> 
> 
> 
> 
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> <mailto:Lustre-discuss@lists.lustre.org>
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
> 
> 
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] create LVM volume for OSSs

2008-07-04 Thread Shuichi Ihara

Yes, lmc is a configurator that was part of lustre-1.4; as you mentioned, these
lustre-1.4 utilities were replaced by mount-based configuration ("mount.lustre") in
lustre-1.6.x and later.
But your question was about LVM, which is not part of Lustre. If you want to know
more about LVM, the RHEL administration manual below is useful:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/index.html

Thanks
-Ihara

sd a wrote:
> Hi all,
> 
> I've read Lustre Quick Start 
> (http://wiki.lustre.org/index.php?title=Lustre_Quick_Start)  and Lustre 
> Operation manual.
> 
> As far as I know, the lmc command was removed in Lustre 1.6.5 (?). So, I 
> am quite confused about creating LVM volumes for OSTs.
> 
> Can anybody describe clearly how to create LVM and/or RAID for OSTs ?
> 
> Thanks a lot
> 
> 
> 
> 
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] state of sun x4500 drivers

2008-04-23 Thread Shuichi Ihara
Which kernel are you using?
It looks like your 0.81 is newer than the original RHEL 4.x (or RHEL 5) one.
This is not about the disk driver, but the X4500 (a.k.a. Thumper) needs the Linux 
RAID patches to be used as an OSS. These patches are included in the Lustre-patched 
kernel and can be used on RHEL 4.x (CentOS 4.x) right now.

Thanks,

-Ihara

Brian Behlendorf wrote:
> Recently I have also been doing some linux work with the x4500 and I have 
> been 
> using the sata_mv driver (v0.81).  The driver will properly detect all the 
> drives and you may access them safely.  However, from what I've seen the 
> driver needs some further development work to actually perform well.  I see 
> only 30 MB/s write rates to a single disk using a simple streaming dd test.  
> Much of this bad performance may simply be due to the fact that the driver 
> can not enable the disk write-back cache forcing you to use write-thru mode.
> 
> So currently the bottom line is linux will work on the x4500.  But to get it 
> working well someone is going to need to invest some development effort to 
> improve the linux driver.
> 
> Good luck,
> Brian
> 
> 
>> There was some discussion about the driver/module for the sata
>> controlers in the thumper (x4500) in the linux kernel.
>>
>> My question is if we bought one of these,  would the CFS kernel have
>> everything needed to use the thumper in a safe way.
>> Thank You.
>>
>> 
>>
>> ___
>> Lustre-discuss mailing list
>> Lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss