Re: [Lustre-discuss] Lustre and disk tuning

2008-01-31 Thread Andreas Dilger
On Jan 30, 2008 18:32 -0800, Dan wrote: > I was a little uncertain of the stripe size calculation, so here we go... > My chunk size is 128k and there are 23 disks in RAID 6 (one hot spare > leaves 23). That means 21 data disks? Judging by your formula I take 23 * > 128k, which is 2944. Is this even
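For reference, a quick sketch of the arithmetic with the RAID-6 parity accounted for (not quoted from the thread; it assumes two of the 23 disks hold parity in each stripe):

   # full-stripe width in KB = data disks x chunk size
   echo $(( (23 - 2) * 128 ))    # 21 * 128 = 2688 KB, not 2944 KB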

[Lustre-discuss] running read only client and OSS on the same server

2008-01-31 Thread Dai, Manhong
Hi, Running an OSS and a client on the same server can cause problems. How about a read-only client? A read-only client is useful because the files on the Lustre file system can be backed up to another file system on the same server, so there is no network traffic. I appreciate any input. Best, Ma
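If the read-only route is taken, a minimal sketch of the client mount would look something like the line below (the MGS NID and fsname are placeholders, and the behaviour of -o ro on a client co-located with an OSS is not something verified in this thread):

   mount -t lustre -o ro mgsnode@tcp0:/lustre /mnt/lustre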

Re: [Lustre-discuss] Lustre and disk tuning

2008-01-31 Thread Dan
Thanks Andreas. I'll reconfigure the RAID and give it another shot today. Would it be reasonable to attribute the stalled writes to this I/O mismatch I have? Dan On Thu, 2008-01-31 at 01:40 -0700, Andreas Dilger wrote: > On Jan 30, 2008 18:32 -0800, Dan wrote: > > I was a little uncertain of

Re: [Lustre-discuss] Off-topic: largest existing Lustre file system?

2008-01-31 Thread Weikuan Yu
I would throw in some of my experience for discussion, as Shane mentioned my name here :) (1) First, I am not under the impression that collective I/O is designed to reveal the peak performance of a particular system. Well, there are publications claiming that collective I/O might be a prefer

Re: [Lustre-discuss] lustre+samba

2008-01-31 Thread Brian J. Murrell
On Thu, 2008-01-31 at 11:10 +0100, Papp Tamas wrote: > Dear All, > > I am trying to use our cluster through a samba share. Everything works fine, but > I think we should have -o flock at Lustre mount time. Is this a single samba server exporting Lustre? Do you really need lock coherency between Lustre cl
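For context, the client mount options in question look roughly like the lines below (host and fsname are placeholders): flock gives locking that is coherent across all Lustre clients, while localflock (on releases that support it) keeps locks local to one client, which may be enough when a single samba server exports the file system:

   mount -t lustre -o flock mgsnode@tcp0:/lustre /mnt/lustre       # coherent across clients
   mount -t lustre -o localflock mgsnode@tcp0:/lustre /mnt/lustre  # client-local only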

Re: [Lustre-discuss] lustre+samba

2008-01-31 Thread Klaus Steden
Tamas, I've had issues with software that used POSIX file locking calls with Lustre 1.6 ... the same thing worked on Lustre 1.4, but I suspect it just failed transparently and didn't actually involve any record locking. I never saw it panic anything, but I know that 1.6 doesn't support l

Re: [Lustre-discuss] running read only client and OSS on the same server

2008-01-31 Thread Andreas Dilger
On Jan 31, 2008 09:19 -0500, Dai, Manhong wrote: > Running an OSS and a client on the same server can cause problems. How > about a read-only client? > > A read-only client is useful because the files on the Lustre file system > can be backed up to another file system on the same server, so there > is no

Re: [Lustre-discuss] WBC subcomponents.

2008-01-31 Thread Vladimir V. Saveliev
Hello, On Wed, 2008-01-23 at 00:10 +0300, Nikita Danilov wrote: > Hello, > > below is a tentative list of tasks into which the WBC effort can be > sub-divided. I also provided a less exact list for the EPOCH component, > and an incomplete list for the STL component. > > WBC tasks are estimated in lin

Re: [Lustre-discuss] Off-topic: largest existing Lustre file system?

2008-01-31 Thread Marty Barnaby
I concur that at 160, my processor count was low. At the time, I had access to as many as 1000 on our big Cray XT3, Redstorm, but now it is not available to me at all. Through my trials, I found that, for appending to a single, shared file, matching the lfs maximum stripe count of 160, with the
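As an illustration (not taken from Marty's mail), striping a shared output file across every available OST before the application opens it can be done with something like the following; the option syntax varies between Lustre releases, and the path is a placeholder:

   lfs setstripe -c -1 /mnt/lustre/shared_output   # -1 = stripe over all available OSTs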

Re: [Lustre-discuss] Lustre and disk tuning

2008-01-31 Thread Andreas Dilger
On Jan 31, 2008 08:25 -0800, Dan wrote: > Thanks Andreas. I'll reconfigure the RAID and give it another shot > today. Would it be reasonable to attribute the stalled writes to this > I/O mismatch I have? It would definitely hurt performance... Also, placing the MDT on the same RAID6 is not very
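A rough sketch of the split layout being hinted at (device names and fsname are invented here): the MDT goes on its own small RAID-10 volume, with the RAID-6 set left to the OSTs:

   mkfs.lustre --fsname=testfs --mgs --mdt /dev/md0                    # small RAID-10 LUN for metadata
   mkfs.lustre --fsname=testfs --ost --mgsnode=mgsnode@tcp0 /dev/md1   # RAID-6 LUN for data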

Re: [Lustre-discuss] Off-topic: largest existing Lustre file system?

2008-01-31 Thread Tom.Wang
Marty Barnaby wrote: > I concur that at 160, my processor count was low. At the time, I had > access to as many as 1000 on our big Cray XT3, Redstorm, but now it is > not available to me at all. Through my trials, I found that, for > appending to a single, shared file, matching the lfs maximum s

[Lustre-discuss] Lustre process/system tuning ...

2008-01-31 Thread Klaus Steden
Hello, I'm seeing some interesting behaviour from one of the nodes in our cluster when two applications attempt to read from the Lustre file system. Specifically, one application is a real-time video player, and the other is just interacting with the file system conventionally ... but if the player is runnin
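One knob worth checking in a situation like this is client readahead, which both readers on that node share; the parameter below is the standard llite readahead setting rather than anything given in this message, and on releases without lctl get_param the equivalent file lives under /proc/fs/lustre/llite/*/max_read_ahead_mb:

   lctl get_param llite.*.max_read_ahead_mb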