On Wed, May 29, 2019 at 12:10 AM Andreas Dilger wrote:
What version of Lustre are you using? I haven't heard of such problems,
but I'm not sure how many users use automount either. I know _some_ use
it, since we had a bug open about it a few years ago, but haven't heard
much since.
On May 27, 2019, at 02:05, Youssef Eldakar wrote:
In our Rocks cluster, we added entries to /etc/auto.share to conveniently
mount our Lustre file systems on all nodes:
lfs01 -fstype=lustre 192.168.230.238@o2ib:/lustrefs
lfs02 -fstype=lustre 192.168.230.239@o2ib:/lustrefs
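
For reference, each map entry above resolves to an ordinary Lustre client
mount once autofs triggers it. Assuming /share is the autofs mount point
configured in /etc/auto.master (the /share path here is a guess, not taken
from the original message), the lfs01 entry would correspond to:

    # /etc/auto.master (assumed mount point for this indirect map)
    /share  /etc/auto.share

    # manual equivalent of the lfs01 map entry
    mount -t lustre 192.168.230.238@o2ib:/lustrefs /share/lfs01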
On login nodes, the lfs01 or lfs02 mount points would occasionally give "No