Try using mount -t lustre 192.168.1...@o2ib:/lustre /mnt/lustre
On Thu, Nov 4, 2010 at 12:00 AM, ren yufei wrote:
> Dear all,
>
> I have set up some nodes (including MDT/MGS/OSS/Client) with Mellanox 40G RNICs
> connected via a Mellanox MTS3600 switch, and deployed a Lustre FS in this
> cluster. The MD
We recently had a hardware failure on one of our OSTs, which has caused
some major problems for our 1.6.6-based array.
We're now getting the error:
Serious error: objid 517386 already exists; is this filesystem corrupt?
on one of our OSTs. If I mount this OST as ldiskfs and look in O/0/d*,
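For reference, a minimal sketch of inspecting an OST's object store directly (device path is hypothetical; the object layout by objid modulo 32 is standard for 1.x ldiskfs OSTs):

```shell
# Hypothetical device path; adjust for the failed OST.
# Mount the OST backing device read-only as ldiskfs:
mount -t ldiskfs -o ro /dev/sdX /mnt/ost_ldiskfs

# Objects live under O/0/d0 .. O/0/d31, bucketed by objid % 32;
# 517386 % 32 = 10, so the conflicting object would be in O/0/d10:
ls -l /mnt/ost_ldiskfs/O/0/d10/517386
```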
As the error message says, the Lustre modules were probably not loaded when you
tried to mount the Lustre client.
Please provide more information - specifically, the loaded modules and the full
dmesg log.
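Something along these lines would confirm the module state (module names are per standard Lustre packaging):

```shell
# Check whether the Lustre client modules are loaded:
lsmod | grep -E 'lustre|lnet'

# Load them if missing, then retry the mount:
modprobe lustre

# Capture the recent kernel log for the list:
dmesg | tail -50
```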
On 2010-11-04, at 2:30 AM, ren yufei wrote:
> Dear all,
>
> I have set up some nodes (including MDT/MGS/OSS/
Hi all,
we recently updated our servers (from CentOS 5.2, Lustre 1.8.1.1) and
clients (CentOS 5.4, from Lustre 1.8.3) to CentOS 5.4 and Lustre
2.0.0.1. Since then we have been experiencing massive problems with users
accessing the Lustre filesystem through a Samba share. We did not have
these problems b
On 2010-11-03, at 13:25, Thomas Roth wrote:
> my attempt to format a new OST failed (mkfs.lustre: Unable to mount
> /dev/sdd: Invalid argument), obviously because sdd has "device size =
> 15253504MB", and the log tells me 'LDISKFS-fs does not support
> filesystems greater than 8TB and can cause dat
Hi all,
my attempt to format a new OST failed (mkfs.lustre: Unable to mount
/dev/sdd: Invalid argument), obviously because sdd has "device size =
15253504MB", and the log tells me 'LDISKFS-fs does not support
filesystems greater than 8TB and can cause data corruption.'
However, this is a Lustre 1
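Two workarounds commonly discussed for the 8 TB ldiskfs limit were to split the device into sub-8TB LUNs, or, on ext4-based ldiskfs builds of 1.8, to override the check explicitly. A sketch, assuming /dev/sdd is the 15 TB device and the fsname/MGS NID are placeholders:

```shell
# Option 1: split the device into sub-8TB partitions and format each as an OST.
parted /dev/sdd mklabel gpt
parted /dev/sdd mkpart ost0 0% 50%
parted /dev/sdd mkpart ost1 50% 100%

# Option 2 (ext4-based ldiskfs builds only): allow >8TB explicitly.
mkfs.lustre --ost --fsname=lustre --mgsnode=mgs@o2ib \
    --mountfsoptions="errors=remount-ro,force_over_8tb" /dev/sdd
```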
Dear all,
I have set up some nodes (including MDT/MGS/OSS/Client) with Mellanox 40G RNICs
connected via a Mellanox MTS3600 switch, and deployed a Lustre FS in this cluster.
The MDT/MGS node and the OSS nodes work, but the client could not mount this FS.
The error information is as follows.
Client: 192.168.
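A typical o2ib client checklist looks like the following (all addresses are placeholders; substitute your own NIDs):

```shell
# 1. Ensure LNET is configured for o2ib, e.g. in /etc/modprobe.d/lustre.conf:
#      options lnet networks=o2ib0(ib0)

# 2. Load the modules and verify the local NID:
modprobe lustre
lctl list_nids          # expect something like 192.168.1.10@o2ib

# 3. Ping the MGS over LNET before attempting the mount:
lctl ping 192.168.1.1@o2ib

# 4. Mount (MGS NID here is an assumption):
mount -t lustre 192.168.1.1@o2ib:/lustre /mnt/lustre
```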
On Mon, 1 Nov 2010, Martin Pokorny wrote:
...
> FWIW, we've been using MPICH2's MPI-IO/ROMIO/ADIO with Lustre (v 1.8)
> for several months now, and it's been working reliably. We do mount the
> Lustre filesystem with "flock"; at one time I thought it necessary, but
> I don't recall if I verified th
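For context, Lustre only provides coherent cross-client file locking when the client is mounted with the flock option; without it, fcntl/flock calls fail or are no-ops. A sketch (mount point and MGS NID are placeholders):

```shell
# Cluster-wide coherent flock/fcntl locking (needed by some MPI-IO workloads):
mount -t lustre -o flock mgs@o2ib:/lustre /mnt/lustre

# Alternatively, "localflock" grants locks that are only consistent
# within a single client node, which is cheaper but not cluster-safe:
mount -t lustre -o localflock mgs@o2ib:/lustre /mnt/lustre
```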