Hi Huseyin,
 
Thanks for responding. I did spend quite a bit of time sorting out the 
nova and libvirt uid/gid, so hopefully they are correct. That's also one of the 
reasons I resorted to testing with a basic text file as root: to rule ownership 
out.
 
Host 2:
 
root@devops-kvm02:/var/lib/nova/instances# ls -l
total 0
drwxr-xr-x 1 nova nova 0 Apr 21 12:51 74af50aa-d771-41d5-af2a-36c6dacd9539
-rw-r--r-- 1 nova nova 0 Apr 20 15:47 compute_nodes
drwxr-xr-x 1 nova nova 0 Apr 20 15:43 locks
-rw-r--r-- 1 root root 0 Apr 21 13:38 test
 
root@devops-kvm02:/var/lib/nova/instances# id nova
uid=120(nova) gid=120(nova) groups=120(nova),114(libvirtd)

Host 1:
 
root@devops-kvm01:/var/lib/nova/instances# ls -l
total 0
drwxr-xr-x 1 nova nova 0 Apr 21 12:51 74af50aa-d771-41d5-af2a-36c6dacd9539
-rw-r--r-- 1 nova nova 0 Apr 20 15:47 compute_nodes
drwxr-xr-x 1 nova nova 0 Apr 20 15:43 locks
-rw-r--r-- 1 root root 0 Apr 21 13:38 test
 
root@devops-kvm01:/var/lib/nova/instances# id nova
uid=120(nova) gid=120(nova) groups=120(nova),114(libvirtd)
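Since ownership and uid/gid line up on both hosts, one other thing that can produce "Operation not permitted" on a CephFS mount is the cephx key the mount authenticates with lacking capabilities on the CephFS data pool (metadata operations can still appear to work). A sketch of what could be checked on each host; `client.admin` is only an example name, substitute whatever `name=` option the mounts actually use:

```shell
# Which pools back the CephFS file system
ceph fs ls
# Caps of the key used for the mount -- the osd caps must cover the data pool
ceph auth get client.admin
# Confirm which key each host's mount is using (the name= option)
grep ceph /proc/mounts
# Any errors logged by the kernel CephFS client
dmesg | tail
```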
 
thanks,
 
Neville


 
Date: Tue, 21 Apr 2015 15:49:43 +0300
From: hco...@gmail.com
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS concurrency question


    Dear Neville,

    

    Could you please share the output of "ls -l /var/lib/nova/instances" on
    both hosts? The user IDs of nova on the two hosts are probably different.
    You can check the /etc/passwd files on both hosts.
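That comparison can be scripted; a minimal sketch, assuming passwd-style lines copied from each host (the values below are illustrative and happen to match the uid/gid reported elsewhere in this thread; substitute the real `getent passwd nova` output from each host):

```shell
# passwd entries for the nova user, copied from each host's /etc/passwd
# (illustrative values -- replace with the real `getent passwd nova` output)
host1='nova:x:120:120::/var/lib/nova:/bin/false'
host2='nova:x:120:120::/var/lib/nova:/bin/false'

# Fields 3 and 4 of a passwd line are the numeric uid and gid
uid1=$(echo "$host1" | cut -d: -f3); gid1=$(echo "$host1" | cut -d: -f4)
uid2=$(echo "$host2" | cut -d: -f3); gid2=$(echo "$host2" | cut -d: -f4)

if [ "$uid1" = "$uid2" ] && [ "$gid1" = "$gid2" ]; then
    echo "nova uid/gid match: $uid1/$gid1"
else
    echo "nova uid/gid MISMATCH: $uid1/$gid1 vs $uid2/$gid2"
fi
```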

    

    Regards,

    

    Huseyin COTUK

    

    On 21-04-2015 15:43, Neville wrote:

    
    
      
      I'm trying to set up live migration in OpenStack
        using Ceph RBD-backed volumes. From what I understand, I also
        need to put the libvirt folder /var/lib/nova/instances on shared
        storage for it to work, since Nova checks for this as part of the
        migration process. I decided to look at using CephFS for this
        purpose.

         

        I've created a CephFS file system and mounted it on my two
        compute nodes as /var/lib/nova/instances, but I'm getting some
        strange results. It seems that after one of the hosts accesses a
        file, it becomes locked and the other host can't access it.
        For example, I complete the following steps:

         

        1\ Create a new OpenStack instance using the boot-from-image
        (create new volume) option. The new instance is created with an
        RBD-backed volume as expected. On the relevant compute host I see
        the instance folder created under /var/lib/nova/instances/{instance
          id} with two files inside, libvirt.xml and console.log. If
        I cat the libvirt.xml file it works as expected.

        2\ Live migrate the instance to the other host. This appears to
        work as expected, although the instance status in Horizon stays as
        "migrating" forever. I can see the instance has moved to the second
        host by running virsh list on both hosts.

        3\ Now, if I attempt to cat the libvirt.xml file on the new host
        I get "Operation not permitted".

         

        I'm assuming this isn't what's expected?

         

        To test this further I tried the following basic tests:

         

        On Host 2:

         

        root@devops-kvm02:/var/lib/nova/instances# echo hello > test
        root@devops-kvm02:/var/lib/nova/instances# cat test
        hello
        root@devops-kvm02:/var/lib/nova/instances#

        

        Then from Host 1:

         

        root@devops-kvm01:/var/lib/nova/instances# cat test
        cat: test: Operation not permitted
        root@devops-kvm01:/var/lib/nova/instances#

        

        Then back on Host 2:

        

        root@devops-kvm02:/var/lib/nova/instances# cat test
        cat: test: Operation not permitted
        root@devops-kvm02:/var/lib/nova/instances#

        

        Should this even work? My understanding is that CephFS allows
        concurrent access, but I'm not sure whether there is some file
        locking going on that I need to understand.
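For reference, the manual test can be condensed into a script; on a healthy shared mount, both reads should simply return the file contents rather than EPERM. A self-contained sketch (it uses a temporary local directory so it runs anywhere; on the real setup DIR would be /var/lib/nova/instances, with the write on one host and the second read on the other):

```shell
# Reproduction of the two-host test in one script. On real hardware,
# run the write/first read on devops-kvm02 and the second read on
# devops-kvm01; a temp directory stands in here so the sketch is
# self-contained.
DIR=$(mktemp -d)

echo hello > "$DIR/test"           # on host 2
first=$(cat "$DIR/test")           # on host 2: expect "hello"
second=$(cat "$DIR/test")          # on host 1: expect "hello", not EPERM

echo "host2 read: $first"
echo "host1 read: $second"
rm -rf "$DIR"
```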

         

        Thanks,

         

        Neville

      _______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
