Hi,
I had a permissions issue on the storage domain. It was fixed today; after
setting the right ownership I was able to add the domain successfully.
Thanks :)
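
For reference, the fix amounted to something like this on the NFS server
(oVirt expects the export to be owned by vdsm:kvm, i.e. uid:gid 36:36;
the path below is an example, not the actual one):

# chown -R 36:36 /export/my_domain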

On Thu, Jan 2, 2020 at 7:02 PM Nir Soffer <nsof...@redhat.com> wrote:

> On Thu, Jan 2, 2020 at 6:47 PM Dana Elfassy <delfa...@redhat.com> wrote:
> >
> > Thanks,
> > I can mount it manually, but when trying to read the file I'm getting the error: /usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py: No such file or directory
> > The directory's user & group are set to my own user (I'm unable to change the ownership for some reason), but I earlier set the permissions of the directory (which I tried to add as a storage domain) to 777
>
> This does not sound like a healthy storage domain configuration.
>
> The way to set up an NFS export for a storage domain is:
>
> On the NFS server, add entries to /etc/exports like these:
>
> $ cat /etc/exports
> /export/1  *(rw,async,anonuid=36,anongid=36)
> /export/2  *(rw,async,anonuid=36,anongid=36)
>
> Note: async is not safe for production; I'm using it here to simulate a
> fast NFS server. You probably want sync, which is the default.
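>
> A production-safe entry would look something like this (same example path):
>
> /export/1  *(rw,sync,anonuid=36,anongid=36)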
>
> Reload the NFS server configuration:
>
> # exportfs -r
>
> If you check the export you will see:
>
> # exportfs -v
> /export/1
> <world>(async,wdelay,hide,no_subtree_check,anonuid=36,anongid=36,sec=sys,rw,secure,root_squash,no_all_squash)
> /export/2
> <world>(async,wdelay,hide,no_subtree_check,anonuid=36,anongid=36,sec=sys,rw,secure,root_squash,no_all_squash)
>
> Then change the owner of the exported directory to vdsm:kvm
> (you may need to add a vdsm user on the NFS server):
>
> # chown -R vdsm:kvm /path/to/export
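>
> You can verify the result with numeric ids, which should show 36:36, e.g.:
>
> # ls -ldn /export/2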
>
> On the host, the mount should look like:
>
> # mount | grep nfs1
> nfs1:/export/2 on /rhev/data-center/mnt/nfs1:_export_2 type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.122.20,local_lock=none,addr=192.168.122.30)
>
> (I'm using NFS 4.2, which is much faster; you should use it too.)
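>
> To test such a mount manually, something like this should work
> (/mnt/test is just an example mount point):
>
> # mount -t nfs -o vers=4.2,soft nfs1:/export/2 /mnt/test
> # sudo -u vdsm touch /mnt/test/write_test
> # sudo -u vdsm rm /mnt/test/write_test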
>
> # tree -pug /rhev/data-center/mnt/nfs1\:_export_2/
> /rhev/data-center/mnt/nfs1:_export_2/
> └── [drwxr-xr-x vdsm     kvm     ]  55255570-983a-4d82-907a-19b964abf7ed
>     ├── [drwxr-xr-x vdsm     kvm     ]  dom_md
>     │   ├── [-rw-rw---- vdsm     kvm     ]  ids
>     │   ├── [-rw-rw---- vdsm     kvm     ]  inbox
>     │   ├── [-rw-rw---- vdsm     kvm     ]  leases
>     │   ├── [-rw-r--r-- vdsm     kvm     ]  metadata
>     │   ├── [-rw-rw---- vdsm     kvm     ]  outbox
>     │   └── [-rw-rw---- vdsm     kvm     ]  xleases
>     ├── [drwxr-xr-x vdsm     kvm     ]  images
>     │   ├── [drwxr-xr-x vdsm     kvm     ]  17b81130-f4a4-4764-ab5a-258a803c7706
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  66738f70-bb6c-4ce0-a98e-c7109d3b4275
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  66738f70-bb6c-4ce0-a98e-c7109d3b4275.lease
>     │   │   └── [-rw-r--r-- vdsm     kvm     ]  66738f70-bb6c-4ce0-a98e-c7109d3b4275.meta
>     │   ├── [drwxr-xr-x vdsm     kvm     ]  6fd5eafc-572b-4335-b59b-42282ba10464
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  46e25ec2-04ae-4829-b32d-972c1e15da09
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  46e25ec2-04ae-4829-b32d-972c1e15da09.lease
>     │   │   └── [-rw-r--r-- vdsm     kvm     ]  46e25ec2-04ae-4829-b32d-972c1e15da09.meta
>     │   ├── [drwxr-xr-x vdsm     kvm     ]  b2cdc744-9992-4a97-8d80-37c40c5b8f84
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  12b58c9b-ccd7-4375-a0df-fc956075f313
>     │   │   ├── [-rw-rw---- vdsm     kvm     ]  12b58c9b-ccd7-4375-a0df-fc956075f313.lease
>     │   │   └── [-rw-r--r-- vdsm     kvm     ]  12b58c9b-ccd7-4375-a0df-fc956075f313.meta
>     │   └── [drwxr-xr-x vdsm     kvm     ]  e95c50b2-d066-499b-9c58-6a479b50e515
>     │       ├── [-rw-rw---- vdsm     kvm     ]  69435001-9c25-41fd-85d3-7f7321a503e6
>     │       ├── [-rw-rw---- vdsm     kvm     ]  69435001-9c25-41fd-85d3-7f7321a503e6.lease
>     │       └── [-rw-r--r-- vdsm     kvm     ]  69435001-9c25-41fd-85d3-7f7321a503e6.meta
>     └── [drwxr-xr-x vdsm     kvm     ]  master
>         ├── [drwxr-xr-x vdsm     kvm     ]  tasks
>         └── [drwxr-xr-x vdsm     kvm     ]  vms
>
>
> > On Thu, Jan 2, 2020 at 5:24 PM Nir Soffer <nsof...@redhat.com> wrote:
> >>
> >> On Thu, Jan 2, 2020 at 4:44 PM Dana Elfassy <delfa...@redhat.com> wrote:
> >> >
> >> > Hi,
> >> > When trying to add a storage domain to a 4.4 host I'm getting this error message:
> >> > Error while executing action New NFS Storage Domain: Unexpected exception
> >> >
> >> > The errors from vdsm.log:
> >> > 2020-01-02 09:38:33,578-0500 ERROR (jsonrpc/0) [storage.initSANLock] Cannot initialize SANLock for domain 6ca1e203-5595-47e5-94b8-82a7e69d99a9 (clusterlock:259)
> >> > Traceback (most recent call last):
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 250, in initSANLock
> >> >     lockspace_name, idsPath, align=alignment, sector=block_size)
> >> > sanlock.SanlockException: (-202, 'Sanlock lockspace write failure', 'IO timeout')
> >>
> >> This means the sanlock operation failed with a timeout - either the
> >> storage was too slow, or there is some issue on the storage side.
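> >>
> >> One way to rule out slow storage is to time a direct write to the
> >> mounted export, e.g. (the path is an example):
> >>
> >> # time dd if=/dev/zero of=/rhev/data-center/mnt/server:_export/test.io bs=1M count=100 oflag=direct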
> >>
> >> Can you mount this manually and access the files as user vdsm?
> >>
> >> > 2020-01-02 09:38:33,579-0500 INFO  (jsonrpc/0) [vdsm.api] FINISH createStorageDomain error=Could not initialize cluster lock: () from=::ffff:192.168.100.1,36452, flow_id=17dc614e, task_id=05c2107a-4d59-48d0-a2f7-0938f051c9ab (api:52)
> >> > 2020-01-02 09:38:33,582-0500 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='05c2107a-4d59-48d0-a2f7-0938f051c9ab') Unexpected error (task:874)
> >> > Traceback (most recent call last):
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 250, in initSANLock
> >> >     lockspace_name, idsPath, align=alignment, sector=block_size)
> >> > sanlock.SanlockException: (-202, 'Sanlock lockspace write failure', 'IO timeout')
> >>
> >> Looks like the same error raised again - a common issue in legacy code.
> >>
> >> > During handling of the above exception, another exception occurred:
> >> >
> >> > Traceback (most recent call last):
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 881, in _run
> >> >     return fn(*args, **kargs)
> >> >   File "<decorator-gen-121>", line 2, in createStorageDomain
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
> >> >     ret = func(*args, **kwargs)
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2622, in createStorageDomain
> >> >     max_hosts=max_hosts)
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/nfsSD.py", line 120, in create
> >> >     fsd.initSPMlease()
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 974, in initSPMlease
> >> >     return self._manifest.initDomainLock()
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 620, in initDomainLock
> >> >     self._domainLock.initLock(self.getDomainLease())
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 308, in initLock
> >> >     block_size=self._block_size)
> >> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 260, in initSANLock
> >> >     raise se.ClusterLockInitError()
> >> > vdsm.storage.exception.ClusterLockInitError: Could not initialize cluster lock: ()
> >>
> >> There is no information in this error; it looks like the useless
> >> public error we raise on top of the underlying error seen before.
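> >>
> >> A friendlier pattern would be to chain the original exception so it
> >> is not lost - a sketch, not the actual vdsm code:
> >>
> >>     try:
> >>         sanlock.write_lockspace(lockspace_name, idsPath, align=alignment, sector=block_size)
> >>     except sanlock.SanlockException as e:
> >>         raise se.ClusterLockInitError() from e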
> >>
>
>