Hi all,

We are trying to mount a remote filesystem in oVirt from an IBM ESS3500, but it
seems to be working against us.

Every time I try to mount it, I get this in supervdsm.log (two different attempts):

MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,808::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call 
mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, 
'/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 
'rw,relatime,dev=essovirt01', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,808::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 
/usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 
/rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/7::DEBUG::2023-03-31 
10:55:41,941::commands::230::root::(execCmd) FAILED: <err> = b'mount: 
/rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file 
handle.\n'; <rc> = 32
MainProcess|jsonrpc/7::ERROR::2023-03-31 
10:55:41,941::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error 
in mount
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in 
wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, 
in mount
    cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in 
_mount
    _runcmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in 
_runcmd
    raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 
'rw,relatime,dev=essovirt01', '/essovirt01', 
'/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: 
/rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file 
handle.\n'
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:48,993::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call 
dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:48,993::commands::137::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-63 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:49,000::commands::82::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2023-03-31 
10:55:49,000::supervdsm_server::85::SuperVdsm.ServerCallback::(wrapper) return 
dmsetup_run_status with b'360050764008100e42800000000000223: 0 629145600 
multipath 2 0 1 0 2 1 A 0 1 2 8:192 A 0 0 1 E 0 1 2 8:144 A 0 0 1 
\n360050764008100e42800000000000229: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 
8:208 A 0 0 1 E 0 1 2 8:160 A 0 0 1 \n360050764008100e4280000000000022a: 0 
10485760 multipath 2 0 1 0 2 1 A 0 1 2 8:176 A 0 0 1 E 0 1 2 8:224 A 0 0 1 
\n360050764008100e42800000000000260: 0 1048576000 multipath 2 0 1 0 2 1 A 0 2 2 
8:16 A 0 0 1 8:80 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:112 A 0 0 1 
\n360050764008100e42800000000000261: 0 209715200 multipath 2 0 1 0 2 1 A 0 2 2 
8:64 A 0 0 1 8:128 A 0 0 1 E 0 2 2 8:32 A 0 0 1 8:96 A 0 0 1 
\n360050764008102edd8000000000001ab: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 
65:128 A 0 0 1 E 0 1 2 65:32 A 0 0 1 \n360050764008102f558000000000001a9: 0 
8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A 
 0 0 1 \n3600507640081820ce800000000000077: 0 838860800 multipath 2 0 1 0 2 1 A 
0 1 2 8:240 A 0 0 1 E 0 1 2 65:0 A 0 0 1 \n3600507680c800058d000000000000484: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:160 A 0 0 1 E 0 1 2 66:80 A 0 0 1 
\n3600507680c800058d000000000000485: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
65:176 A 0 0 1 E 0 1 2 66:96 A 0 0 1 \n3600507680c800058d000000000000486: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:112 A 0 0 1 E 0 1 2 65:192 A 0 0 1 
\n3600507680c800058d000000000000487: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
66:128 A 0 0 1 E 0 1 2 65:208 A 0 0 1 \n3600507680c800058d000000000000488: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:224 A 0 0 1 E 0 1 2 66:144 A 0 0 1 
\n3600507680c800058d000000000000489: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 
66:160 A 0 0 1 E 0 1 2 65:240 A 0 0 1 \n3600507680c800058d00000000000048a: 0 
20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:0 A 0 0 1 E 0 1 2 66:176 A 0 0 1 
\n3600507680c800058d00000000000048b: 0 20971520 multipath 2 0 1 0 2 1
  A 0 1 2 66:192 A 0 0 1 E 0 1 2 66:16 A 0 0 1 
\n3600507680c800058d00000000000048c: 0 419430400 multipath 2 0 1 0 2 1 A 0 1 2 
65:144 A 0 0 1 E 0 1 2 66:64 A 0 0 1 \n3600507680c800058d0000000000004b1: 0 
41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:208 A 0 0 1 E 0 1 2 66:32 A 0 0 1 
\n3600507680c800058d0000000000004b2: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 
66:48 A 0 0 1 E 0 1 2 66:224 A 0 0 1 \n360050768108100c9d0000000000001aa: 0 
8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:64 A 0 0 1 E 0 1 2 65:112 A 0 0 1 
\n360050768108180ca48000000000001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 
65:96 A 0 0 1 E 0 1 2 65:80 A 0 0 1 \n'
MainProcess|jsonrpc/0::DEBUG::2023-03-31 
10:55:49,938::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call 
mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, 
'/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 'rw,relatime', 
'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/0::DEBUG::2023-03-31 
10:55:49,939::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 
/usr/bin/mount -t gpfs -o rw,relatime /essovirt01 
/rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2023-03-31 
10:55:49,944::commands::230::root::(execCmd) FAILED: <err> = b'mount: 
/rhev/data-center/mnt/_essovirt01: wrong fs type, bad option, bad superblock on 
/essovirt01, missing codepage or helper program, or other error.\n'; <rc> = 32
MainProcess|jsonrpc/0::ERROR::2023-03-31 
10:55:49,944::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error 
in mount
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in 
wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, 
in mount
    cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in 
_mount
    _runcmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in 
_runcmd
    raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 
'rw,relatime', '/essovirt01', '/rhev/data-center/mnt/_essovirt01'] failed with 
rc=32 out=b'' err=b'mount: /rhev/data-center/mnt/_essovirt01: wrong fs type, 
bad option, bad superblock on /essovirt01, missing codepage or helper program, 
or other error.\n'
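
To narrow this down, the failing call can also be reproduced by hand, outside
of vdsm, and the GPFS side checked with the standard Spectrum Scale commands.
A minimal sketch (essovirt01 is the filesystem name from the log above; the mm
commands live under /usr/lpp/mmfs/bin):

# the exact command supervdsm runs
/usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 /rhev/data-center/mnt/_essovirt01
# check whether GPFS itself sees and can mount the remote filesystem
/usr/lpp/mmfs/bin/mmmount essovirt01
/usr/lpp/mmfs/bin/mmlsmount essovirt01 -L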

Earlier, on the same oVirt version, I mapped a SAN LUN directly to the
hypervisor and created a local GPFS filesystem on it, and that mounted fine
with the following parameters in the GUI:

Storage Type : POSIX Compliant FS
HOST : The host that has scale installed and mounted
Path : /essovirt01
VFS Type: gpfs
Mount Options: rw,relatime,dev=essovirt01
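
With these GUI fields, vdsm effectively runs the same mount command that
appears in the supervdsm.log excerpt above:

/usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 /rhev/data-center/mnt/_essovirt01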

This works, but with a remote filesystem it does not, and I am not sure why,
as it should be pretty close to a local filesystem: it is granted with 'File
system access: essovirt01 (rw, root allowed)' and is mounted as '/essovirt01
on /essovirt01 type gpfs (rw,relatime,seclabel)'.
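
For reference, the cross-cluster access setup can be verified on both sides
with the standard Spectrum Scale commands. A minimal sketch (assuming the
usual mmauth/mmremotecluster/mmremotefs configuration for a remote mount):

# on the owning (ESS) cluster: show what access has been granted
/usr/lpp/mmfs/bin/mmauth show all
# on the accessing (oVirt host) cluster: show the remote cluster and
# remote filesystem definitions
/usr/lpp/mmfs/bin/mmremotecluster show all
/usr/lpp/mmfs/bin/mmremotefs show all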

Does anybody have a clue what to do? It seems weird that there should be such
a difference between a locally owned and a remotely owned filesystem when it
is mounted through the GPFS remotefs/remotecluster option.

And just to verify, I have given the filesystem the correct permissions for
oVirt:

drwxr-xr-x.   2 vdsm kvm  262144 Mar 31 10:33 essovirt01

Thanks in advance.
Christiansen

