Jim McCusker wrote:
The mds is outside of the NAT'ed network and I would have to jump
through many hoops to get it inside that network, because it's a virtual
network that doesn't exist physically.
If your client cannot reach the MDS, your client will not be able to
mount. The MDS connection is necessary.
cliffw
Jim
Felix, Evan J wrote:
Do you have a NID on the MDS for the NAT'ed network? The MDS should
have NIDs [EMAIL PROTECTED] and, say, [EMAIL PROTECTED] (I made that up);
then you could connect to the MDS on the other (NAT'ed) network interface.
Evan
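[A minimal sketch of what that could look like in /etc/modprobe.conf on the MDS, assuming the MDS host can be given an interface on the NAT'ed network -- the interface names and network labels below are made up for illustration:

```shell
# /etc/modprobe.conf on the MDS (illustrative sketch only)
# eth0 = public network, eth1 = hypothetical interface on the NAT'ed network
# This gives the MDS one NID per LNET network: ...@tcp0 and ...@tcp1
options lnet networks="tcp0(eth0),tcp1(eth1)"
```

Clients on the NAT'ed network would then reach the MDS via its tcp1 NID rather than the public one.]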
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim McCusker
Sent: Monday, June 11, 2007 9:32 AM
To: [email protected]
Subject: [Lustre-discuss] Problem mounting volumes inside vmware NAT
(1.4.10)
I have a vmware server running several vms that are stored on a
lustre volume. When I use bridged networking, I am able to mount the
volume inside the vm with no problem. I would like to have some vms
on a private NAT'ed network as database servers (with the data files
stored on lustre), but when I change the NIC to be on the NAT
network, the mount fails with the following:
[EMAIL PROTECTED] ~]# /etc/init.d/lustrefs start
Mounting Lustre filesystems: mount.lustre: mount(chai.med.yale.edu:mds1/client, /vol) failed: Input/output error
mds nid 0: [EMAIL PROTECTED]
mds name: mds1
profile: client
options: rw,flock
retry: 0
[FAILED]
/var/log/messages shows:
Jun 11 03:13:06 localhost kernel: LustreError: 2525:0:(socklnd_cb.c:2160:ksocknal_recv_hello()) Error -104 reading HELLO from 128.36.115.13
Jun 11 03:13:06 localhost kernel: LustreError: Connection to [EMAIL PROTECTED] at host 128.36.115.13 on port 988 was reset: is it running a compatible version of Lustre and is [EMAIL PROTECTED] one of its NIDs?
Jun 11 03:13:11 localhost kernel: LustreError: 3931:0:(client.c:947:ptlrpc_expire_one_request()) @@@ timeout (sent at 1181545986, 5s ago) [EMAIL PROTECTED] x9/t0 o38->[EMAIL PROTECTED]@tcp:12 lens 240/272 ref 1 fl Rpc:/0/0 rc 0/0
Jun 11 03:13:11 localhost kernel: LustreError: mdc_dev: The
configuration 'client' could not be read from the MDS 'mds1'. This
may be the result of communication errors between the client and the
MDS, or if the MDS is not running.
Jun 11 03:13:11 localhost kernel: LustreError: 3928:0:(llite_lib.c:962:lustre_fill_super()) Unable to process log: client
Jun 11 03:13:11 localhost mount: mount.lustre: mount(chai.med.yale.edu:mds1/client, /vol) failed: Input/output error
Jun 11 03:13:11 localhost mount: mds nid 0: [EMAIL PROTECTED]
Jun 11 03:13:11 localhost mount: mds name: mds1
Jun 11 03:13:11 localhost mount: profile: client
Jun 11 03:13:11 localhost mount: options: rw,flock
Jun 11 03:13:11 localhost mount: retry: 0
Jun 11 03:13:11 localhost lustrefs: Mounting Lustre filesystems: failed
Only the vmware server itself has more than one NIC enabled, and it
has no trouble connecting, so "options lnet networks=tcp(eth0)"
doesn't seem like the right fix.
Jim
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss