The main issue is that if tune2fs changes the superblock while the journal has
not yet been recovered, those changes will be lost when the journal is
replayed. It is hard to get this 100% correct, since it is also possible to set
some tunables on a mounted superblock, and replaying the journal in that case
would be bad.
On 2010-08-14, at 2:28, Adrian Ulrich adr...@blinkenlights.ch wrote:
- the on-disk structure of the object directory for this OST is corrupted.
Run e2fsck -fp /dev/{ostdev} on the unmounted OST filesystem.
e2fsck fixed it: the OST has now been running for 40 minutes without problems:
But
On 2010-08-14, at 1:32, Michael Kluge michael.kl...@tu-dresden.de wrote:
How does Lustre handle write() requests to files opened with O_DIRECT?
Does the OSS enforce that the OST has physically written the data before
the operation completes, or does the write() call return as soon as the
data reaches the OST filesystem?
Cheers, Andreas
--
Andreas Dilger
Lustre Technical Lead
Oracle Corporation Canada Inc.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
Why are you running mkfs.lustre on the client? Please read the manual for
instructions on how to do initial setup of Lustre.
Cheers, Andreas
On 2010-08-11, at 11:55, Vilobh Meshram vilobh.mesh...@gmail.com wrote:
Hi,
I get the following error when I try to mount lustre on the clients.
of changes in 1.8.4 in the {lustre,lnet,ldiskfs}/ChangeLog files.
That said, 1.8.4 does have a fair number of bug fixes brought in from Cray and
LLNL, so I would recommend that everyone use it (when it finally appears on the
download site).
Cheers, Andreas
a difference.
Cheers, Andreas
On 2010-08-12, at 14:52, burlen wrote:
Andreas Dilger wrote:
On 2010-08-11, at 23:36, burlen wrote:
I am interested in how write()s are buffered in Lustre on the client,
server, and the network in between. Specifically, I'd like to understand what
happens during writes when large number
readcache_max_filesize) it will either discard the page immediately, or keep it
in memory and let the VM evict it when there is memory pressure (if not
accessed).
On 08/12/2010 12:35 PM, Andreas Dilger wrote:
On 2010-08-11, at 23:36, burlen wrote:
I am interested in how write()s are buffered
Cheers, Andreas
On 2010-08-10, at 17:26, Paul Nowoczynski pa...@psc.edu wrote:
FYI I noticed today that rpmbuild of the 2.6.27 src rpm fails due to the
missing file linux-2.6.27-lustre.patch.
thanks,
Hi Paul, that patch name doesn't look familiar. We definitely apply kernel
patches for the server, but they
Cheers, Andreas
the clients/servers to handle varying network and storage latency,
instead of having a fixed timeout.
Cheers, Andreas
lustre-client-modules-1.8.1.1-2.6.27.29_0.1_lustre.1.8.1.1_default
reshpc115:~ # rpm -qa | grep -i kernel-ib
kernel-ib-1.4.2-2.6.27.29_0.1_default
Cheers, Andreas
debug levels adjustments I think it was
somewhat improved.
Useful would be to run strace -tttT to get timestamps for each operation to
see for which operations it is slower on Lustre than NFS.
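One way to rank the slow operations from such a trace (a sketch; the
application name and trace file are placeholders, and the pipeline assumes the
standard strace -tttT line format):

```shell
# strace -tttT prefixes each syscall with a timestamp and appends its elapsed
# time in angle brackets, e.g.:
#   1281571200.123456 open("foo", O_RDONLY) = 3 <0.000123>
# Collect one trace per filesystem:
#   strace -tttT -o app.strace ./my_app     # ./my_app is a placeholder
# Then rank calls by duration (same pipeline on a real trace file);
# demonstrated here on two made-up sample lines:
printf '%s\n' \
  '1281571200.1 stat("a", ...) = 0 <0.000050>' \
  '1281571200.2 open("b", O_RDONLY) = 3 <0.250000>' |
  sort -t'<' -k2 -rn | head -20
```

Running the same pipeline against traces from Lustre and NFS makes the calls
that dominate the difference stand out immediately.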
Cheers, Andreas
On 2010-08-02, at 23:06, Sebastian Gutierrez gut...@cs.stanford.edu wrote:
I have found some mention on lustre-discuss that using a tool that does a
backup of the xattrs is preferable. I am assuming that cp -a should be
sufficient, since it is supposed to preserve everything. In the
On 2010-07-30, at 13:14, Sebastian Gutierrez gut...@cs.stanford.edu wrote:
If you are planning on expanding this at the RAID6 level to be an 8+2
configuration, you should specify -E stripe=256,stride=64.
Are there any potential negatives here? I initially used a 6 disk raid 10
but I
in every call?
Thanks,
Arifa.
-Original Message-
From: Andreas Dilger [mailto:andreas.dil...@oracle.com]
Sent: Thursday, July 29, 2010 11:41 PM
To: Arifa Nisar
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] Read ahead / prefetching
On 2010-07-29, at 14:02, Arifa
is interrupted for some reason.
Cheers, Andreas
that
are all 0.
Cheers, Andreas
On 2010-07-29, at 04:47, Daire Byrne wrote:
I was wondering if it is possible to have the client completely cache
a recursive listing of a lustre filesystem such that on a second run
it doesn't have to talk to the MDT again? Taking the simplest case
where I only have one client that is
think was fixed in the 2.0.0 server) and it can't
hurt to do some testing in your environment.
Cheers, Andreas
, it will always read ahead at least a
full RPC at a time (by default 1MB), unless the application is reading larger
chunks than this, in which case it reads ahead in units of the IO size aligned
to RPC-sized boundaries.
-Original Message-
From: Andreas Dilger [mailto:andreas.dil...@oracle.com
.
Cheers, Andreas
.
Cheers, Andreas
it. No patch as yet, but it would be worthwhile to
subscribe to for updates.
Cheers, Andreas
, Andreas
If someone who is familiar with SELinux had the time, I'd be thrilled to find
some way to exclude Lustre mountpoints from SELinux automatically, and then
submit it upstream.
Cheers, Andreas
On 2010-07-21, at 8:24, William Olson lustre_ad...@reachone.com wrote:
When it comes to
This isn't something we test here, but in theory it should work. The OST object
ids have nothing to do with the on-disk inode numbers, so inode renumbering
during the resize shouldn't cause any Lustre-visible issues.
I would recommend doing a raw copy of the OST filesystem, finding some files with
restored to a mounted lustre
filesystem to preserve the striping. Otherwise, regular RHEL5 tar should be
enough for a backup/restore of the MDT xattrs mounted with -t ldiskfs.
Cheers, Andreas
The use of ext3 or ext4 and the filesystem feature flags has nothing to do with
the setting of the incorrect target. I don't know how you got to that state,
but there are a number of places where the OST index is stored that need to be
verified and fixed.
There is the mountdata file, which you
-16, at 10:06, William Olson lustre_ad...@reachone.com wrote:
On 7/15/2010 5:48 PM, Andreas Dilger wrote:
On 2010-07-15, at 08:33, William Olson wrote:
Somebody, anybody? I'm sure it's something fairly simple, but it
escapes me, assistance would be greatly appreciated!
I can't
On 2010-07-16, at 0:27, Maxence Dunnewind maxe...@dunnewind.net wrote:
I just tried with qt4, and it compiles correctly. The results are:
-j 16: 30min35 against 32min
-j 8: same time (34min25 vs 34min36)
Thanks for testing this. What it means is that there is very little contention
on the
On 2010-07-15, at 15:46, Adesanya, Adeyemi y...@slac.stanford.edu wrote:
We are working on coming up with a backup plan for our Lustre filesystem in
case we ever lose an OST in the future. I like the idea of backing up the
filesystem at the client level and then identifying what files were
in the thread.
Andreas Dilger wrote:
My only other suggestion is to dump the Lustre kernel debug log on the NFS
server after a mount failure to see where/why it is getting the permission
error.
# lctl clear
# (mount NFS client)
# lctl dk /tmp/debug
Then search through the logs for -2
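A sketch of sifting the dumped log for failing return codes (the sample lines
below are made up; real debug-log lines carry much more context, but negative
errno values generally appear as "rc = -N", and -2 is ENOENT):

```shell
# Count occurrences of each negative return code in a dumped debug log;
# against a real dump this would be:
#   grep -oE 'rc = -[0-9]+' /tmp/debug | sort | uniq -c | sort -rn
# Demonstrated here on made-up sample lines:
printf '%s\n' \
  'ldlm_cli_enqueue: rc = -2' \
  'mdt_getattr: rc = 0' \
  'ldlm_cli_enqueue: rc = -2' |
  grep -oE 'rc = -[0-9]+' | sort | uniq -c | sort -rn
```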
On 2010-07-16, at 16:55, William Olson wrote:
On 7/16/2010 9:16 AM, Andreas Dilger wrote:
My only other suggestion is to dump the Lustre kernel debug log on the NFS
server after a mount failure to see where/why it is getting the permission
error.
# lctl clear
# (mount NFS client)
# lctl
On 2010-07-16, at 18:09, William Olson wrote:
On 7/16/2010 4:50 PM, Andreas Dilger wrote:
Then search through the logs for -2 errors (-ENOENT).
Well, that improved the debug level, but didn't reveal any -2 errors. In
fact I can't seem to find a line with an error
Cheers, Andreas
data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not been formatted with
mkfs.lustre
This error message should probably be fixed also.
Cheers, Andreas
. If you rebooted, or did this already, and
this error is still present then it looks like you somehow didn't build the
modules correctly.
Cheers, Andreas
Cheers, Andreas
It is possible for the clients to mount the whole filesystem read-only, which
sets a flag on the MDS and OSTs for that client, making them return -EROFS for
any filesystem-modifying operations.
However, it isn't possible to mount the OST itself read-only today. At one time
there were patches in
Unmount the MDS, mount it as type ldiskfs, and list the ROOT directory. If
there are no files there, then it seems that somehow you have deleted or
reformatted the MDS filesystem.
You could also check lost+found at that point, in case your files were moved
there by e2fsck for some reason.
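The check described above might look like this (a sketch only; the device name
and mountpoints are hypothetical, and the MDS service must be stopped first):

```shell
MDT_DEV=/dev/sdX                 # hypothetical MDT block device
mkdir -p /mnt/mdt-ldiskfs
mount -t ldiskfs "$MDT_DEV" /mnt/mdt-ldiskfs
ls /mnt/mdt-ldiskfs/ROOT         # the filesystem namespace lives under ROOT
ls /mnt/mdt-ldiskfs/lost+found   # anything e2fsck relocated ends up here
umount /mnt/mdt-ldiskfs
```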
Check
of the clients.
Cheers, Andreas
in performance.
Cheers, Andreas
This will give you a sorted list of the top 20 clients that are sending the
most RPCs to the ost_io service, along with the operation being done (3 =
OST_READ, 4 = OST_WRITE).
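The counting can be sketched with a small pipeline (the req_history field
layout below is an assumption; check the actual record format on your Lustre
version before relying on the field numbers):

```shell
# On a live OSS the history would come from something like:
#   lctl get_param -n ost.OSS.ost_io.req_history
# Each record carries the client NID and the opcode; extract and count them.
# Demonstrated on made-up colon-separated sample records (NID in field 3):
printf '%s\n' \
  '1:lustre-OST0000:10.0.0.1@tcp:x1:448:Complete:opc 4' \
  '2:lustre-OST0000:10.0.0.2@tcp:x2:448:Complete:opc 3' \
  '3:lustre-OST0000:10.0.0.1@tcp:x3:448:Complete:opc 4' |
  awk -F: '{print $3, $NF}' | sort | uniq -c | sort -rn | head -20
```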
Cheers, Andreas
.
Cheers, Andreas
important to note is that both ZFS and the new lfsck are designed
to be able to validate the filesystem continuously as it is being used, so
there is no need to take a 100h outage before putting the filesystem back into
use.
Cheers, Andreas
be
possible to automatically create these quota files the first time that a new
OST is mounted, since we know at that point that the filesystem is empty and
there will be no quota usage for any user on the OST.
Cheers, Andreas
On 2010-07-03, at 15:02, pg_...@lus.for.sabi.co.uk wrote:
Note that if you are not running with writeback cache enabled
on the disks, then you shouldn't have to run an fsck on the
filesystems after a crash.
This seems to me extremely bad advice, based on these rather
extraordinarily
On 2010-07-01, at 11:52, Craig Prescott presc...@hpc.ufl.edu wrote:
We do the fsck from the command line and look at the output. If there
were no filesystem modifications (this is the usual case), we then start
the Lustre services interactively.
Note that if you are not running with
them, and when is each used?
LUSTRE_SUPER_MAGIC is used for server mountpoints; LL_SUPER_MAGIC is used for
client mountpoints.
Cheers, Andreas
of the input files on the clients to see if
eliminating the small-file reads was a source of improvement?
I will try directly on the mds (so on only one node) to compare.
I look forward to your results.
Cheers, Andreas
Michael, Joshua,
you should also investigate the ip2nets option. This allows using the same
modprobe.conf options on both the clients and servers, since it uses the IP
addresses to determine the LNET networks rather than having to specify the
interface names directly.
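A minimal sketch of such a shared configuration (the network names and
addresses here are made-up examples):

```shell
# /etc/modprobe.conf fragment, identical on every node.  Each node joins the
# LNET network whose pattern matches one of its own IP addresses:
#   - nodes with a 10.10.x.x address join o2ib0 (InfiniBand)
#   - nodes with a 192.168.1.x address join tcp0 (ethernet)
options lnet ip2nets="o2ib0 10.10.*.*; tcp0 192.168.1.*"
```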
Cheers, Andreas
On
how performance compares. :-)
Cheers, Andreas
hard to say, but starting testing on it will definitely speed up the
process.
Cheers, Andreas
The client will try to resend forever, until either the request
succeeds or the process is interrupted.
Cheers, Andreas
On 2010-06-18, at 7:33, Tonney Kaiven Cheung zhangy...@gmail.com
wrote:
Dear all!
Inside the Lustre filesystem, if a request from a client
times out,
Since the event is unknown, it is hard to know in advance whether it
can be ignored or not. Some protocols encode in the message type
whether it is 'mandatory' to handle or 'optional'; Lustre instead
negotiates in advance which operations are understood and never
sends unknown requests
incorrectly, or the multipath driver is
broken).
You need to fix this, and then Lustre should work. I'm assuming that you
configured the MDT device correctly to use the right block device, and it isn't
accidentally using the raw underlying device and avoiding the multipath.
Cheers, Andreas
cmd fd(4)
cmd(c00466a4){t:'f';sz:4} arg(ffb0a34c) on /data
Do you have 32-bit userspace running on a 64-bit kernel? We have a problem
with the IOC numbers not being correctly defined and so the userspace tools
need to match the kernel.
Cheers, Andreas
Also setting the max RPC size on the client to be 768kB would avoid
the need for each RPC to generate 2 IO requests.
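A sketch of the two knobs involved (device name and RAID geometry are
hypothetical; option spellings can vary between e2fsprogs versions, so check
tune2fs(8) on your system):

```shell
# Tell mballoc the underlying RAID geometry so allocations align with full
# stripes (here: 64-block chunks, 3 data disks => 192-block stripe width):
tune2fs -E stride=64,stripe_width=192 /dev/sdX

# Cap client RPCs at 768kB = 192 x 4kB pages, so each RPC maps onto one
# RAID stripe instead of generating 2 IO requests:
lctl set_param osc.*.max_pages_per_rpc=192
```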
It is possible with newer tune2fs to set the RAID stripe size and the
allocator (mballoc) will use that size. There is a bug open to
transfer this optimal size to the
libraries installed. This might happen if you installed 2 different
versions of e2fsprogs at the same time.
Cheers, Andreas
the
caller has a 1-page buffer to receive the extents).
On 06/10/2010 06:30 PM, Andreas Dilger wrote:
On 2010-06-10, at 08:07, Bradley W. Settlemyer wrote:
Is there a mechanism within Lustre for querying the populated
extents
in a sparse lustre file? Perhaps some kind of bmap support
extents in file offset order, but this
would need a Lustre patch to implement (which is currently not a priority
task).
Cheers, Andreas
}
80187cd0{sys_read+69}
8010ae5e{system_call+126}
[...]
Cheers, Andreas
configuration appropriately, and then the upgrade to 1.8 should work cleanly,
and (in theory) it should still be possible to downgrade to 1.4 if needed.
That said, I'd recommend trying this first and/or making a backup (which are
always good to have).
Cheers, Andreas
the pages?
Cheers, Andreas
this at boot time and
the eth0 interface just isn't set up yet.
There was a thread recently about using the _netdev mount option (which works
on some distros), or using an rc script to mount after the network setup has
completed.
Cheers, Andreas
shouldn't
install the lustre-patched kernel on the client.
Cheers, Andreas
.
Cheers, Andreas
are using flock to lock the files. You need to mount the
clients with -o flock to get globally-coherent flock (at some performance
impact) or -o localflock to get local-node-only flock (at minimal performance
impact).
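For example (the MGS NID, filesystem name, and mountpoint are placeholders):

```shell
# Globally coherent flock across all clients (some performance impact):
mount -t lustre -o flock mgs@tcp0:/lustre /mnt/lustre

# Or flock coherent only within each node (minimal performance impact):
mount -t lustre -o localflock mgs@tcp0:/lustre /mnt/lustre
```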
Cheers, Andreas
spec file for it but since you're using Ubuntu I
assume
that's of no use to you.
This is against 1.41.5.sun1
Jim
On Tue, Jun 01, 2010 at 10:19:05AM -0700, Andreas Dilger wrote:
On 2010-06-01, at 07:25, Ramiro Alba Queipo wrote:
On Tue, 2010-06-01 at 02:15 -0600, Andreas Dilger wrote
On 28 May 2010, at 21:34, Andreas Dilger wrote:
On 2010-05-27, at 04:15
around this?
There is a bug in lfs find that it tries to get the file size unnecessarily.
You can use lfs getstripe -obd ... instead, and it should work even if the
OST is down.
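A sketch of the suggested command (the OST UUID and path are placeholders;
check "lfs help getstripe" for the exact option spelling on your release):

```shell
# List files in a tree that have objects on the given OST, without the
# file-size lookups that make 'lfs find' hang while the OST is down:
lfs getstripe --obd lustre-OST0003_UUID --recursive /mnt/lustre/somedir
```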
Cheers, Andreas
)
then the writers to OST1 will get ENOSPC.
Cheers, Andreas
...@lists.lustre.org] On Behalf Of Andreas Dilger
Sent: Wednesday, June 02, 2010 3:03 PM
To: Andy Pace
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [Lustre-discuss] Storage management question
On 2010-06-02, at 12:04, Andy Pace wrote:
but what I'm wondering is how the metadata handles
for any OST, on the OST -- only from
the client. So I'm assuming it's a client-side utility?
Right.
The tool that does this (basically just a shell script) is called lfs_migrate
and will hopefully show up in the next Lustre release.
-Original Message-
From: Andreas Dilger
-discuss
Cheers, Andreas
Cheers, Andreas
) ?
This is one of the features we developed for Lustre ldiskfs that was later
added upstream into ext4. It is present in all ldiskfs modules for some years
already.
Cheers, Andreas
On 2010-06-01, at 07:25, Ramiro Alba Queipo wrote:
On Tue, 2010-06-01 at 02:15 -0600, Andreas Dilger wrote:
On 2010-06-01, at 01:23, Ramiro Alba Queipo wrote:
I've just compiled the last patched e2fsprogs (1.41.10) package suitable
for the last lustre version (1.8.3) and I had some booting
.
Cheers, Andreas
are really needed, as there
are different packages for SLES and RH, so I try to stick to the docs as
closely as possible.
For upgrading, I understand that I need to use tunefs.lustre et al., and I'm
not sure whether these commands trigger any of the e2fsprogs tools.
Cheers, Andreas
.
Cheers, Andreas
There have been some reports of problems with automount and Lustre
that have never been tracked down. If someone with automount
experience and config, and time to track this down could investigate
I'm sure we could work it out.
Cheers, Andreas
On 2010-05-27, at 12:24, David Noriega
The problem with SELinux is that it is trying to access the security
xattr for each file access but Lustre does not cache xattrs on the
client.
The other main question about SELinux is whether it even makes sense
in a distributed environment.
For now (see bug) we have just disabled the
Cheers, Andreas
bumping the
LAST_ID value in the case that it is currently 2 and the MDS is requesting
some large value.
On May 26, 2010, at 1:29 PM, Andreas Dilger wrote:
On 2010-05-26, at 13:18, Mervini, Joseph A wrote:
I have migrated all the files that were on a damaged OST and have recreated
active report for this OSC on a client?
Cheers, Andreas
similar message
MDS + OSS's version : CentOS 5.4 and lustre version 1.8.1.1
Clients version : CentOS 4.8 and lustre version 1.6.7.2
This is really a problem between the MDS and the OSS. Is there anything in the
OSS logs?
Cheers, Andreas
and replacement OSTs.
Cheers, Andreas
On 2010-05-21, at 6:34, Christopher Huhn c.h...@gsi.de wrote:
What worries us is that the Lustre server patches do not appear to
progress towards integration into the mainline kernel but rather away
from it, which makes porting to Debian (and up-to-date kernels in
general) more and more
On 2010-05-21, at 5:49, Stefano Elmopi stefano.elm...@sociale.it
wrote:
I realized that the time differed greatly across machines;
there were at least a few hours of difference.
I'm doing tests and had not been paying attention to time
synchronization,
but now I have aligned the
syntaxes is one of them.
I'm not sure what release they are slated for.
Cheers, Andreas
The SLES11 kernel is at 2.6.27 so it could be usable for this. Also, I
thought that there were Debian packages for Lustre, why not use those?
Cheers, Andreas
On 2010-05-20, at 9:48, Ramiro Alba Queipo r...@cttc.upc.edu wrote:
Hi all,
On Wed, 2010-05-19 at 14:43 +0200, Bernd Schubert wrote:
On 2010-05-20, at 11:33, Ramiro Alba Queipo r...@cttc.upc.edu wrote:
On Thu, 2010-05-20 at 10:16 -0600, Andreas Dilger wrote:
The SLES11 kernel is at 2.6.27 so it could be usable for this.
Also, I
Ok, I am getting
http://downloads.lustre.org/public/kernels/sles11/linux-2.6.27.39-0.3.1
P. Kevin Canady
Vice President,
ClusterStor Inc.
415.505.7701
kevin.can...@clusterstor.com
On May 19, 2010, at 8:01 AM, Andreas Dilger wrote:
I've used a SLES kernel on an FC install for a long time on my home
system. With newer distros there are also fewer changes to the base
kernel, so
in the lustre AND
e2fsprogs branches.
I'm not sure what you mean. The e2fsprogs patches have always been in a
separate repository from the core Lustre code, and all of the Lustre/ldiskfs
kernel patches are in the Git repository.
Cheers, Andreas