In Lustre you MUST have an MGS _AND_ an MDS node, and an associated MDT (file
system) running on the MDS. Typically the MDS and MGS are configured on the
same node.
If you don't have this, Lustre won't work.
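As a rough sketch of what that looks like, a combined MGS/MDT is formatted with both roles on a single target (device, mount point and fsname below are made up):
mkfs.lustre --fsname=scratch2 --mgs --mdt --index=0 /dev/sdb   # one target acting as both MGS and MDT
mkdir -p /mnt/mdt
mount -t lustre /dev/sdb /mnt/mdt                              # mounting it starts the MGS and MDS services
The OSTs are then formatted with --mgsnode= pointing at that node so they can register.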
Joe Mervini
Sandia National Laboratories
High Performance Computing
505.844.6770
I just ran into this same issue last week. There is a JIRA ticket on it at
Intel, but in a nutshell mkfs.lustre on ZFS will only record the last mgsnode
you specify in your command. To add an additional failover node you can use the
zfs command to update the configuration:
zfs set
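A sketch of the kind of command involved, assuming the target records its Lustre settings as lustre:* user properties on the dataset (the property name, the colon-separated NID format, and the pool/dataset names below are assumptions; compare against zfs get output from a correctly formatted target first):
zfs set lustre:mgsnode="172.16.0.1@o2ib:172.16.0.2@o2ib" mdtpool/mdt0   # record both MGS NIDs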
Oh - BTW, you will need to do the same thing with your OSTs to set both
mgsnodes.
Also, you can use zfs get all <zpool name>/<dataset name> to get the same info
as you would with tunefs.lustre.
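For example (pool/dataset name made up; the lustre:* property names are the ones we believe mkfs.lustre writes, so double-check on your system):
zfs get all ostpool/ost0 | grep lustre:
# shows properties such as lustre:fsname, lustre:svname, lustre:index and lustre:mgsnode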
Joe Mervini
Sandia National Laboratories
High Performance Computing
505.844.6770
Hi,
I just upgraded our servers from RHEL 5.4 to RHEL 5.5 and went from Lustre
1.8.3 to 1.8.5.
Now when I try to mount the OSTs I'm getting:
[root@aoss1 ~]# mount -t lustre /dev/disk/by-label/scratch2-OST0001
/mnt/lustre/local/scratch2-OST0001
mount.lustre: mount
:52 PM, Cliff White wrote:
It is called when truncating a file - afaik it is showing you the number of
truncates, more or less.
cliffw
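If it helps, llobdstat is just summarizing the per-OST stats files, so the raw counter can be read directly; a sketch using 1.8-style names:
lctl get_param obdfilter.*.stats | grep punch
# the punch counter is the number of truncate (object punch) requests each OST has handled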
On Thu, Jun 16, 2011 at 10:52 AM, Mervini, Joseph A
jame...@sandia.gov wrote:
Hi,
I have been covertly trying for a long time to find
Hi,
I have been covertly trying for a long time to find out what punch means as far
as Lustre llobdstat output goes, but have not really found anything definitive.
Can someone answer that for me? (BTW: I am not alone in my ignorance... :) )
Thanks.
Joe Mervini
Sandia National Laboratories
High Performance Computing
of started threads.
There is a patch I wrote to also reduce the number of running threads, but it
hasn't landed yet.
Cheers, Andreas
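For completeness, the knobs being discussed are the OSS service thread counts; a hedged example of checking and raising the I/O thread maximum at runtime (parameter names as in 1.8.x; whether the change takes effect immediately should be verified on your build):
lctl get_param ost.OSS.ost_io.threads_started ost.OSS.ost_io.threads_max
lctl set_param ost.OSS.ost_io.threads_max=512
As noted above, this can let more threads start, but it will not reduce the number already running.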
On 2011-02-24, at 14:04, Mervini, Joseph A jame...@sandia.gov wrote:
I'm inclined to agree. So apparently the only time that modifying the
runtime max
Quick question: Has runtime modification of the number of OST threads been
implemented in Lustre-1.8.3?
Joe Mervini
Sandia National Laboratories
High Performance Computing
505.844.6770
jame...@sandia.gov
Cool! Thank you Johann.
Joe Mervini
Sandia National Laboratories
High Performance Computing
505.844.6770
jame...@sandia.gov
On Feb 24, 2011, at 11:05 AM, Johann Lombardi wrote:
On Thu, Feb 24, 2011 at 10:48:32AM -0700, Mervini, Joseph A wrote:
Quick question: Has runtime modification
Maren wrote:
However, I don't think you can decrease the number of running threads.
See https://bugzilla.lustre.org/show_bug.cgi?id=22417 (and also
https://bugzilla.lustre.org/show_bug.cgi?id=22516 )
Kevin
Mervini, Joseph A wrote:
Cool! Thank you Johann.
Joe Mervini
Sandia
Hoping for a quick sanity check:
I have migrated all the files that were on a damaged OST, and have recreated the
software RAID array and put a Lustre file system on it.
I am now at the point where I want to re-introduce it to the scratch file
system as if it were never
gone. I used:
Based on the above conditions, what do I need to do to get this OST back into
the file system?
Thanks in advance.
Joe
On May 26, 2010, at 1:29 PM, Andreas Dilger wrote:
On 2010-05-26, at 13:18, Mervini, Joseph A wrote:
I have migrated all the files that were on a damaged OST and have recreated
Hi,
We encountered a multi-disk failure on one of our mdadm RAID6 8+2 OSTs. Two
drives failed in the array within the space of a couple of hours and were
replaced. It is questionable whether both drives are actually bad because we
are seeing the same behavior in a test environment where a bad
It is possible, but it's painful and probably depends on the reason. I had a
situation a while back where the script I was using to run mkfs.lustre had the
wrong fsname applied, and as a result added the OST to the wrong Lustre file
system.
After realizing my mistake I backed out and reformatted
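For anyone in the same spot, a sketch of the usual back-out (fsname, OST index, NID and device are all made up here; check the manual for your release before relying on it):
# on the MGS/MDS, permanently deactivate the OST that ended up in the wrong file system
lctl conf_param wrongfs-OST0007.osc.active=0
# then reformat the target with the intended fsname and index
mkfs.lustre --reformat --ost --fsname=rightfs --index=7 --mgsnode=10.0.0.1@tcp /dev/sdX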
I'm not really sure why writethrough_cache_enable is being disabled, but the
method we have used to disable read_cache_enable is
echo 0 > /proc/fs/lustre/obdfilter/<OST name>/read_cache_enable, without any issues.
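The same settings can also be made through lctl; a sketch, assuming your lctl supports the wildcard form:
lctl set_param obdfilter.*.read_cache_enable=0
lctl set_param obdfilter.*.writethrough_cache_enable=0
# note: set_param changes are not persistent across an OSS restart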