Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Daniel Kobras
Hi! On Mon, Aug 31, 2009 at 04:34:58PM -0400, Brian J. Murrell wrote: > On Mon, 2009-08-31 at 21:56 +0200, Daniel Kobras wrote: > > Lustre's > > standard config follows Posix and allows dirty client-side caches after > > close(). Performance improves as a result, of course, but in case something >

Re: [Lustre-discuss] SCSI driver problem on 2.6.22.19 + lustre 1.8.1?

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 19:17 -0400, Brian J. Murrell wrote: > Ahhh. So the only possible culprit WRT Lustre is a kernel > patch problem, do you agree? Assuming you are using the 2.6.22-vanilla.series, the only patches I see that were touched between those two were: * export-truncate-

Re: [Lustre-discuss] SCSI driver problem on 2.6.22.19 + lustre 1.8.1?

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 15:57 -0500, Nirmal Seenu wrote: > All these call traces are generated when the machine boots up and tries > to load the kernel and before the lustre modules are loaded Ahhh. So the only possible culprit WRT Lustre is a kernel patch problem, do you agree? IOW, thi

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Peter Grandi
Interesting discussion of NFS vs. Lustre even if they are so different in aims... [ ... ] lee> 3) It must support all of the transports we are interested in. Except for some corner cases (that an HEP site might well have) that today tends to reduce to the classic Ethernet/IP pair... lee> 4) It

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Nicolas Williams
On Mon, Aug 31, 2009 at 03:50:33PM -0600, Kevin Van Maren wrote: > Nicolas Williams wrote: > >Client death with dirty data is not all that different from process > >death with dirty data in user-land. Think of an application that does > >write(2), write(2), close(2), _exit(2), but dies between wri
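A minimal sketch of the sequence Nicolas describes (the path is illustrative, not from the thread): process death between the two write(2) calls loses the second buffer outright, and even after close() returns, the data may still be cached rather than on disk.

/* Sketch of the write/write/close/_exit sequence under discussion.
 * Death between the two write() calls loses the second buffer
 * entirely; even after close() returns success, the data may still
 * sit in the client's cache rather than on the server's disk. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/lustre/example.dat",
                  O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    if (write(fd, "first\n", 6) != 6)
        return 1;
    /* death here: "second" is never written */
    if (write(fd, "second\n", 7) != 7)
        return 1;
    close(fd);  /* success does not mean the data is on disk */
    _exit(0);
}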

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Kevin Van Maren
Nicolas Williams wrote: > Client death with dirty data is not all that different from process > death with dirty data in user-land. Think of an application that does > write(2), write(2), close(2), _exit(2), but dies between writes. > It is very different; with a user application crash, all th

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Nicolas Williams
On Mon, Aug 31, 2009 at 04:50:02PM -0400, Paul Nowoczynski wrote: > Yes this is the case on server failure but I think the true similarity > between lustre and a locally mounted filesystem lies in the failure of a > client holding dirty pages. Please correct me if I'm wrong but data > loss will

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Kevin Van Maren
Yes, the semantics are similar to a local filesystem: data can be lost after close() in several cases, including: 1) the client crashes before the server has written the data to disk (data that made it to the server should be written, but that is asynch), 2) the server returns an error to th
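For applications that cannot tolerate these windows, the remedy on Lustre is the same as on any local POSIX filesystem: flush explicitly before close(). A minimal sketch (path and payload are illustrative):

/* fsync() before close() narrows the loss windows listed above:
 * once fsync() returns 0, the data has been committed on the
 * server, so a subsequent client crash cannot lose it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/lustre/important.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return 1;
    if (write(fd, "payload\n", 8) != 8)
        return 1;
    if (fsync(fd) != 0) {  /* server-side write errors surface here */
        perror("fsync");
        return 1;
    }
    return close(fd) != 0 ? 1 : 0;
}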

Re: [Lustre-discuss] SCSI driver problem on 2.6.22.19 + lustre 1.8.1?

2009-08-31 Thread Nirmal Seenu
All these call traces are generated when the machine boots up and tries to load the kernel and before the lustre modules are loaded (I don't load them automatically during boot-up). There is no kernel panic generated but the kernel is unresponsive. I could quickly try reverting one of the server

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Paul Nowoczynski
Brian J. Murrell wrote: >> Lustre's >> standard config follows Posix and allows dirty client-side caches after >> close(). Performance improves as a result, of course, but in case something >> goes wrong on the net or the server, users potentially lose data just like on >> any local Posix filesyst

Re: [Lustre-discuss] SCSI driver problem on 2.6.22.19 + lustre 1.8.1?

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 14:19 -0500, Nirmal Seenu wrote: > > I was running 1.8.0.1 without any problem on the same kernel (2.6.22.19) > and all these problems started when I upgraded to 1.8.1 using the same > kernel. Just based on logic, if all that was changed was really only the Lustre upgrade,

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 21:56 +0200, Daniel Kobras wrote: > Hi! Hi, > Lustre's > standard config follows Posix and allows dirty client-side caches after > close(). Performance improves as a result, of course, but in case something > goes wrong on the net or the server, users potentially lose data j

[Lustre-discuss] Vanilla 2.6.27.31 + Lustre server 1.8.1?

2009-08-31 Thread Nirmal Seenu
Is it safe to use the SLES11 Lustre patches on the 2.6.27.31 vanilla kernel source tree? I tried using quilt to patch the vanilla kernel and the only patch that seems to fail is md-raid5; I don't need md-raid5 support on my Lustre servers. Thanks Nirmal

Re: [Lustre-discuss] NFS vs Lustre

2009-08-31 Thread Daniel Kobras
Hi! On Sun, Aug 30, 2009 at 04:12:11PM -0500, Nicolas Williams wrote: > NFSv4 can't handle O_APPEND, and has those close-to-open semantics. > Those are the two large departures from POSIX in NFSv4. Along these lines, it's probably worth mentioning commit-on-close as well, an area where NFS (v3 an

Re: [Lustre-discuss] CRC32C not usable for ISCSI in IB RPM

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 11:56 -0600, Daniel Kulinski wrote: > I am trying to update to the latest Lustre packages. We have an iSCSI > backend and when I install kernel-ib and try to log into my devices I > get an error about CRC32C being unavailable. Well, to be completely honest, iSCSI is not re

[Lustre-discuss] SCSI driver problem on 2.6.22.19 + lustre 1.8.1?

2009-08-31 Thread Nirmal Seenu
I have been having a lot of trouble with my SCSI devices on 2.6.22.19 + lustre 1.8.1. I am not able to boot my servers if the fibre is connected and I have to physically remove the fibre every time that I need to reboot the server. I am using a SATABeast for OSTs and it is connected to the serve

Re: [Lustre-discuss] CRC32C not usable for ISCSI in IB RPM

2009-08-31 Thread Jeffrey Bennett
I have successfully used Lustre with iSCSI, though never together with IB. jab

Re: [Lustre-discuss] mds and ost question on older hardware

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 10:50 -0400, Mag Gam wrote: > Would running an MDS and OST on 32-bit hardware hamper any > performance or scalability? Possibly. Depending on load. > These boxes all have 4GB of memory. A 32-bit kernel can only access the first GB of memory or so. The other 3GB would be

[Lustre-discuss] CRC32C not usable for ISCSI in IB RPM

2009-08-31 Thread Daniel Kulinski
I am trying to update to the latest Lustre packages. We have an iSCSI backend and when I install kernel-ib and try to log into my devices I get an error about CRC32C being unavailable. I have this in the boot messages: alg: No test for crc32c (crc32c-generic) Then, when I try to log in

[Lustre-discuss] mds and ost question on older hardware

2009-08-31 Thread Mag Gam
Would running an MDS and OST on 32-bit hardware hamper any performance or scalability? These boxes all have 4GB of memory.

Re: [Lustre-discuss] Changing error behaviour to kernel panic.

2009-08-31 Thread Brian J. Murrell
On Mon, 2009-08-31 at 10:56 +0200, Roy Dragseth wrote: > Hi. Hi, > We have a few problems with our storage hw where we get file system > corruption > on some OSTs once in a while. Lustre prefers to try to continue, but I/O > operations to the OSTs in question fail, making applications crash.

Re: [Lustre-discuss] modify client kernel support

2009-08-31 Thread Brian J. Murrell
On Sun, 2009-08-30 at 23:44 -0400, Dr. Hung-Sheng Tsao (LaoTsao) wrote: > hi Hi, > how does Lustre handle clients that need to modify the kernel due to some > local requirement? I'm not really sure what you are asking. Lustre clients should be able to use whatever kernel you want to build it wit

[Lustre-discuss] list of OSS

2009-08-31 Thread jherold
Hello, How can I get the list of OSSes and the OSTs connected to them from the /proc filesystem on the MDT? I need it to monitor some issues (probably hardware problems) in our Lustre installation. Best Regards Jacek Herold
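On 1.8 servers the MDS keeps an osc device for every OST it talks to, and each osc directory under /proc/fs/lustre exposes the UUID of its OSS connection. A small sketch that walks those entries; the ost_conn_uuid file name is taken from the 1.8 proc layout and should be verified against your installation:

/* Walk /proc/fs/lustre/osc/<*>/ost_conn_uuid and print, for each
 * OST's osc device, the OSS connection it reports.
 * Assumes the Lustre 1.8 proc layout; verify the paths locally. */
#include <glob.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    glob_t g;
    if (glob("/proc/fs/lustre/osc/*/ost_conn_uuid", 0, NULL, &g) != 0) {
        fprintf(stderr, "no osc entries found; is Lustre running here?\n");
        return 1;
    }
    for (size_t i = 0; i < g.gl_pathc; i++) {
        FILE *f = fopen(g.gl_pathv[i], "r");
        char uuid[256] = "";
        if (!f)
            continue;
        if (fgets(uuid, sizeof(uuid), f))
            uuid[strcspn(uuid, "\n")] = '\0';
        fclose(f);
        printf("%s -> %s\n", g.gl_pathv[i], uuid);
    }
    globfree(&g);
    return 0;
}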

[Lustre-discuss] Changing error behaviour to kernel panic.

2009-08-31 Thread Roy Dragseth
Hi. We have a few problems with our storage hw where we get file system corruption on some OSTs once in a while. Lustre prefers to try to continue, but I/O operations to the OSTs in question fail, making applications crash. I would prefer to have a full hang instead of a partially working syst