Dear,
I have a pre-installed Lustre file system with two MDSs and two OSSs. Each OSS
has 6 arrays of disks. In OSS01, one of the arrays is not working properly due
to a physical error in one of the disks of the array. I wanted to remove
this array from OSS01. I simply restarted the MDSs and the
On Wed, 2010-05-26 at 09:43 +0300, Mohamed Adel wrote:
Dear,
I have a pre-installed Lustre file system with two MDSs and two OSSs. Each
OSS has 6 arrays of disks. In OSS01, one of the arrays is not working
properly due to a physical error in one of the disks of the array. I wanted
to
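
For reference, taking a failing OST out of service on a 1.8 system usually
looks something like the sketch below; the OST name and device number are
placeholders, not values from this site:

  # on the MDS, find the OSC device that points at the failed OST (name is a placeholder)
  lctl dl | grep OST0003
  # deactivate it so no new objects are allocated there (temporary, lost on restart)
  lctl --device 11 deactivate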
These lines only appear when we are on version 1.8.3
May 22 13:19:12 semldxludwig kernel: LustreError:
12412:0:(lib-move.c:2441:LNetPut()) Error sending PUT to 12345-10.0.0@tcp:
-113
May 22 13:19:51 semldxluis kernel: Lustre:
13460:0:(fsfilt-ldiskfs.c:1385:fsfilt_ldiskfs_setup())
Nope, wish I had ... maximal mount count reached
Thanks,
Mathias Gustavsson
Linux System Manager
AstraZeneca R&D Mölndal
SE-431 83 Mölndal, Sweden
Phone: +46 31 776 12 58
mathias.gustavs...@astrazeneca.com
--
Hi,
My Lustre system has 7 OSTs on 3 OSSs. Two of the OSTs have less than 200 GB
free while the other 5 have 3+ TB free.
I am not using striping. Question: when creating a new file, does Lustre try to
write to the OST with the most free space? If not, how does it choose which OST
to write
On Wed, 2010-05-26 at 09:26 -0400, Scott wrote:
Hi,
Hi,
You don't provide any particulars (lustre version, lfs df output, etc.)
so I will answer generically...
I am not using striping. Question: when creating a new file, does Lustre try
to write to the OST with the most free space?
That
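
Incidentally, the per-OST free space in question is easy to check from any
client with lfs df; a minimal example, assuming a client mount at /mnt/lustre:

  # block usage per MDT/OST
  lfs df -h /mnt/lustre
  # inode usage per MDT/OST
  lfs df -i /mnt/lustre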
Guy Coates wrote:
One thing to watch out for in your kernel configs is to make sure that:
CONFIG_SECURITY_FILE_CAPABILITIES=N
I hope this is not the case for the now obsolete:
CONFIG_EXT3_FS_SECURITY=y
which appears to be enabled by default on RHEL5.x
It's not entirely clear to me what this
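
A quick way to check how the running kernel was actually built (the config
path below assumes a stock RHEL/SLES install):

  grep CONFIG_SECURITY_FILE_CAPABILITIES /boot/config-$(uname -r)
  grep CONFIG_EXT3_FS_SECURITY /boot/config-$(uname -r)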
The problem with SELinux is that it is trying to access the security
xattr for each file access but Lustre does not cache xattrs on the
client.
The other main question about SELinux is whether it even makes sense
in a distributed environment.
For now (see bug) we have just disabled the
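
For illustration only, and not necessarily what was done in this case,
disabling SELinux on a Lustre client typically amounts to:

  # switch to permissive mode immediately
  setenforce 0
  # and keep SELinux off across reboots
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config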
On 26/05/10 16:18, Andreas Dilger wrote:
The problem with SELinux is that it is trying to access the security
xattr for each file access but Lustre does not cache xattrs on the client.
The other main question about SELinux is whether it even makes sense in
a distributed environment.
Just to
Hi Guy,
On Wed, 2010-05-26 at 14:59 +0100, Guy Coates wrote:
The SLES11 kernel is at 2.6.27 so it could be usable for this. Also, I
Ok, I am getting
http://downloads.lustre.org/public/kernels/sles11/linux-2.6.27.39-0.3.1.tar.bz2
but, please, where can I get a suitable
Hi,
My version of Lustre is 1.8.3
My filesystem is composed of one MGS/MDS server and two OSSs.
While testing, I tried to delete an OST and replace it with another OST,
and now the situation is this:
cat /proc/fs/lustre/lov/lustre01-mdtlov/target_obd
0: lustre01-OST_UUID ACTIVE
2:
On 26/05/10 16:31, Ramiro Alba Queipo wrote:
Hi Guy,
On Wed, 2010-05-26 at 14:59 +0100, Guy Coates wrote:
The SLES11 kernel is at 2.6.27 so it could be usable for this. Also, I
Ok, I am getting
http://downloads.lustre.org/public/kernels/sles11/linux-2.6.27.39-0.3.1.tar.bz2
but,
If you are creating files that are a significant fraction of the free
space on your OSTs, you _should_ stripe
them across multiple OSTs (lfs setstripe).
Also note that ENOSPC is likely to be returned before the OST is
actually out of space: due to the
OST pre-granting space to the clients,
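
A minimal sketch of striping a directory with lfs setstripe; the path, stripe
count and stripe size below are examples only:

  # stripe new files in this directory across 4 OSTs with a 1 MB stripe size
  lfs setstripe -c 4 -s 1M /mnt/lustre/bigfiles
  # confirm the layout
  lfs getstripe /mnt/lustre/bigfiles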
On 26/05/10 17:25, Ramiro Alba Queipo wrote:
On Wed, 2010-05-26 at 16:48 +0100, Guy Coates wrote:
One thing to watch out for in your kernel configs is to make sure that:
CONFIG_SECURITY_FILE_CAPABILITIES=N
OK. But the question is whether this issue still applies to lustre-1.8.3 and
the SLES kernel
Hoping for a quick sanity check:
I have migrated all the files that were on a damaged OST and have recreated the
software RAID array and put a Lustre file system on it.
I am now at the point where I want to re-introduce it to the scratch file
system as if it was never
gone. I used:
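
For reference, re-creating an OST so that it comes back under its old index
generally looks something like this on 1.8; the fsname, index, MGS NID and
device below are placeholders, not the actual values used here:

  # reformat the rebuilt array as the same OST index it had before (all values are placeholders)
  mkfs.lustre --reformat --ost --fsname=scratch --mgsnode=mds01@tcp --index=42 /dev/md0
  # then mount it as usual
  mount -t lustre /dev/md0 /mnt/ost002a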
On 2010-05-26, at 13:18, Mervini, Joseph A wrote:
I have migrated all the files that were on a damaged OST and have recreated
the software RAID array and put a Lustre file system on it.
I am now at the point where I want to re-introduce it to the scratch file
system as if it was never
Andreas,
I migrated all the files off the target with lfs_migrate. I didn't realize that
I would need to retain any of the ldiskfs data if everything was moved. (I must
have misinterpreted your earlier comment.)
So this is my current scenario:
1. All data from a failing OST has been migrated
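
As an aside, one way to double-check that nothing is left on the old target
(OST UUID and client mount point below are placeholders):

  # list any files that still have objects on the old OST
  lfs find --obd scratch-OST002a_UUID /mnt/scratch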
On 2010-05-26, at 13:47, Mervini, Joseph A wrote:
I migrated all the files off the target with lfs_migrate. I didn't realize
that I would need to retain any of the ldiskfs data if everything was moved.
(I must have misinterpreted your earlier comment.)
So this is my current scenario:
1.
Hi,
A while ago, we experienced multi-disk failures on a RAID6 OST. We
managed to migrate some data off the OST (lfs_migrate), and the
process was long (the software RAID was often failing).
We reconstructed the target from scratch, which introduced a new OST.
Following the Lustre documentation on
On 2010-05-26, at 16:49, Florent Parent wrote:
A while ago, we experienced multi-disk failures on a RAID6 OST. We
managed to migrate some data off the OST (lfs_migrate), and the
process was long (the software RAID was often failing).
We reconstructed the target from scratch, which introduced a
On Wed, May 26, 2010 at 19:08, Andreas Dilger andreas.dil...@oracle.com wrote:
On 2010-05-26, at 16:49, Florent Parent wrote:
on MGS:
lctl conf_param osc.lustre1-OST002f-osc.active=0
If you specified the above, it is possible that you only deactivated it on
the MDS, not on the
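
To illustrate the distinction, roughly: the first command below affects only
the running MDS, while the conf_param form is recorded on the MGS and is seen
by the MDS and all clients (the device number is a placeholder from lctl dl):

  # temporary: deactivate the OSC on the MDS only (forgotten after a restart)
  lctl --device 14 deactivate
  # permanent: record it on the MGS so the whole filesystem stops using the OST
  lctl conf_param lustre1-OST002f.osc.active=0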
To clarify: it would still have some vestiges of the old OST, and one would
have to follow the other procedure if the OST index is reused, but the
writeconf should remove all mention of the OST from lctl dl, right?
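
For reference, the writeconf step is normally done with the whole filesystem
stopped, along these lines (device names are placeholders):

  # regenerate the configuration logs
  tunefs.lustre --writeconf /dev/mdt_device      # on the MDS
  tunefs.lustre --writeconf /dev/ost_device      # on every OSS, for each OST
  # then remount the MGS/MDT first and the OSTs afterwards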
Kevin Van Maren wrote:
Andreas,
This isn't the same as the similar thread, where an