Re: [Gluster-users] What should I do to improve performance ?

2015-03-25 Thread Joe Topjian
Just to clarify: the Cinder Gluster driver, which is comparable to the
Cinder NFS driver, uses FUSE:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/glusterfs.py#L489-L499

Unless I'm mistaken, bootable volumes with Nova will use libgfapi (if
configured with qemu_allowed_storage_drivers), but the best that ephemeral
instances can do is be stored on a GlusterFS-mounted
/var/lib/nova/instances.
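
For reference, enabling the libgfapi path for those volumes is a single
nova.conf flag on the compute nodes (a sketch; the exact section may vary
by release):

[DEFAULT]
qemu_allowed_storage_drivers = gluster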

We've also seen performance issues when running ephemeral instances on a
GlusterFS-mounted directory. However, we're using the Gluster Cinder driver
for volumes and have been mostly happy with it.
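
A quick way to tell which path a guest is on is the disk element in its
libvirt XML. Roughly, and with illustrative names and paths rather than
output from a real instance:

<!-- FUSE: qemu opens a plain file on the glusterfs mount -->
<disk type='file' device='disk'>
  <source file='/var/lib/nova/instances/UUID/disk'/>
</disk>

<!-- libgfapi: qemu speaks to the volume directly -->
<disk type='network' device='disk'>
  <source protocol='gluster' name='volumes/volume-UUID'>
    <host name='gluster-server' port='24007'/>
  </source>
</disk>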

On Tue, Mar 24, 2015 at 9:22 AM, Joe Julian  wrote:

> Nova ephemeral disks don't use libgfapi, only Cinder volumes.
>
> On March 24, 2015 7:27:52 AM PDT, marianna cattani <
> marianna.catt...@gmail.com> wrote:
>
>> OpenStack doesn't have vdsm; it should be a configuration option:
>> "qemu_allowed_storage_drivers=gluster"
>>
>> However, the machine is generated with the XML that you see.
>>
>> Now I'll try writing to the OpenStack mailing list.
>>
>> tnx
>>
>> M
>>
>> 2015-03-24 15:14 GMT+01:00 noc :
>>
>>> On 24-3-2015 14:39, marianna cattani wrote:
>>> > Many many thanks 
>>> >
>>> > Mine is different 
>>> >
>>> > :'(
>>> >
>>> > root@nodo-4:~# virsh -r dumpxml instance-002c | grep disk
>>> > <disk type='file' device='disk'>
>>> >   <source
>>> file='/var/lib/nova/instances/ef84920d-2009-42a2-90de-29d9bd5e8512/disk'/>
>>> > </disk>
>>> >
>>> >
>>> >
>>> That's a FUSE connection. I'm running oVirt + GlusterFS, where vdsm is a
>>> special version with libgfapi support. Don't know if OpenStack has that
>>> too?
>>>
>>> Joop
>>>
>>>
>> --
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] libgfapi on ubuntu

2014-01-14 Thread Joe Topjian
Hi Bernhard,

(dropping -devel as I'm not subscribed)

I've been able to compile qemu using Ubuntu 12.04 and the OpenStack Havana
cloud archive (https://wiki.ubuntu.com/ServerTeam/CloudArchive) which comes
with its own version of qemu. Maybe you can just pull qemu from that repo
without having to use 12.04.
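
On 12.04, enabling that archive looks roughly like this (a sketch from
memory, so double-check the suite name):

sudo apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main" | \
  sudo tee /etc/apt/sources.list.d/havana.list
sudo apt-get update && sudo apt-get install qemu-kvm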

If you do get qemu to compile, though, I'd be curious if you run into the
issue described here:

http://www.gluster.org/pipermail/gluster-users/2013-December/thread.html#38302

I've been trying for weeks to clear out some time to debug further, but just
haven't been able to.
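
Also, regarding the dpkg-shlibdeps error you hit with the trusty sources:
libgfapi.so.0 evidently ships without shlibs metadata, so one possible
workaround is to relax dh_shlibdeps in debian/rules (an untested sketch,
assuming a dh-style rules file):

override_dh_shlibdeps:
	dh_shlibdeps -- --ignore-missing-info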

Thanks,
Joe


On Tue, Jan 14, 2014 at 1:59 AM, Bernhard Glomm
wrote:

> I'm running Ubuntu 13.10 and am trying to recompile qemu as described here
> https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1224517
> in order to test libgfapi.
> Unfortunately, compiling failed with messages like this:
> 
> In file included from /usr/include/libfdt.h:55:0,
>  from
> /usr/src/rebuild-qemu/qemu-1.5.0+dfsg/device_tree.c:28:
> /usr/include/fdt.h:58:2: error: unknown type name 'fdt32_t'
>   fdt32_t magic; /* magic word FDT_MAGIC */
>   ^
> ...
> these two files are causing the trouble:
>  /usr/include/fdt.h and
>  /usr/include/libfdt.h
>
> I tried the qemu sources from "trusty" (Ubuntu 14.04);
> with them, make fails with:
> ...
> dpkg-shlibdeps: error: no dependency information found for
> /usr/lib/libgfapi.so.0 (used by
> debian/qemu-system-misc/usr/bin/qemu-system-s390x)
> dh_shlibdeps: dpkg-shlibdeps -Tdebian/qemu-system-misc.substvars
> debian/qemu-system-misc/usr/bin/qemu-system-microblazeel
> debian/qemu-system-misc/usr/bin/qemu-system-or32
> debian/qemu-system-misc/usr/bin/qemu-system-xtensaeb
> debian/qemu-system-misc/usr/bin/qemu-system-alpha
> debian/qemu-system-misc/usr/bin/qemu-system-sh4eb
> debian/qemu-system-misc/usr/bin/qemu-system-unicore32
> debian/qemu-system-misc/usr/bin/qemu-system-m68k
> debian/qemu-system-misc/usr/bin/qemu-system-cris
> debian/qemu-system-misc/usr/bin/qemu-system-lm32
> debian/qemu-system-misc/usr/bin/qemu-system-moxie
> debian/qemu-system-misc/usr/bin/qemu-system-microblaze
> debian/qemu-system-misc/usr/bin/qemu-system-s390x
> debian/qemu-system-misc/usr/bin/qemu-system-sh4
> debian/qemu-system-misc/usr/bin/qemu-system-xtensa returned exit code 2
> make: *** [install] Error 2
> dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit
> status 2
> debuild: fatal error at line 1361:
> dpkg-buildpackage -rfakeroot -D -us -uc -j10 failed
> 
> anyone an idea how to proceed?
>
> Since the above-mentioned bug report is closed with "won't fix"
> until there is a MIR filed for glusterfs, may I ask:
> will that be done?
> Otherwise, how can I use libgfapi on an Ubuntu system?
> TIA
> Bernhard
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] qemu remote insecure connections

2013-12-19 Thread Joe Topjian
Hi Vijay,

Thank you for the reply.

It does look like a connection has been established but qemu-img is blocked
> on something. Can you please start qemu-img with strace -f and capture the
> output?
>

Sure. I have added two new files to the gist:

https://gist.github.com/jtopjian/7981763

One is an strace as the "root" user, where the command succeeds. The other
is as the "libvirt" user; I eventually ctrl-c'd that strace.
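
Roughly, the invocations looked like this (the output file names are just
what I picked for the gist):

strace -f -o strace-root.txt \
  qemu-img create gluster://192.168.1.11/volumes/v.img 1M

sudo -u libvirt-qemu strace -f -o strace-libvirt.txt \
  qemu-img create gluster://192.168.1.11/volumes/v.img 1M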

Let me know if I can collect any more info for you.

Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] qemu remote insecure connections

2013-12-15 Thread Joe Topjian
Hello,

I apologize for the delayed reply.

I've collected some logs and posted them here:
https://gist.github.com/jtopjian/7981763

I stopped the Gluster service on 192.168.1.11, moved /var/log/glusterfs to
a backup, then started Gluster so that the log files were more succinct.
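
Concretely, that was roughly (the service name is from the Ubuntu packages
mentioned below; it may differ on other distributions):

service glusterfs-server stop
mv /var/log/glusterfs /var/log/glusterfs.bak
service glusterfs-server start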

I then used the qemu-img command as mentioned before as root, which was
successful. Then I ran the command as libvirt-qemu and let the command hang
for 2 minutes before I killed it.

It might also be helpful to note that this is on an Ubuntu 12.04 server
using the packages found here:
https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4

Thanks,
Joe


On Sat, Dec 14, 2013 at 12:41 AM, Vijay Bellur  wrote:

> On 12/13/2013 10:58 AM, Joe Topjian wrote:
>
>> Hello,
>>
>> I'm having a problem getting remote servers to connect to Gluster with
>> qemu.
>>
>> I have 5 servers, 4 of which run Gluster and host a volume. The qemu
>> user on all 5 servers has the same uid.
>>
>> storage.owner-uid and storage.owner-gid are set to that user.
>>
>> In addition, server.allow-insecure is on and is also set in the
>> glusterd.vol file. glusterd has also been restarted (numerous times).
>>
>> When attempting to create a qemu file by connecting to the same server,
>> everything works:
>>
>> qemu@192.168.1.11> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
>> Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
>> qemu@192.168.1.11>
>>
>> But when trying to do it remotely, the command hangs indefinitely:
>>
>> qemu@192.168.1.12> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
>> Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
>> ^C
>>
>> Yet when 192.168.1.12 connects to gluster://192.168.1.12, the command
>> works and the file shows up in the distributed volume.
>>
>> Further, when turning server.allow-insecure off, I get an immediate
>> error no matter what the source and destination connection is:
>>
>> qemu@192.168.1.12> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
>> Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
>> qemu-img: Gluster connection failed for server=192.168.1.11 port=0
>> volume=volumes image=v.img transport=tcp
>> qemu-img: gluster://192.168.1.11/volumes/v.img: error while creating raw:
>> No data available
>>
>> Does anyone have any ideas how I can have an unprivileged user connect
>> to remote gluster servers?
>>
>>
> Can you please provide glusterd and glusterfsd logs from 192.168.1.11?
>
> -Vijay
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] qemu remote insecure connections

2013-12-12 Thread Joe Topjian
Hello,

I'm having a problem getting remote servers to connect to Gluster with qemu.

I have 5 servers, 4 of which run Gluster and host a volume. The qemu user
on all 5 servers has the same uid.

storage.owner-uid and storage.owner-gid are set to that user.

In addition, server.allow-insecure is on and is also set in the
glusterd.vol file. glusterd has also been restarted (numerous times).
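
For the record, the relevant settings look roughly like this (the volume
here is named "volumes"; <qemu-uid>/<qemu-gid> stand in for the actual ids):

gluster volume set volumes server.allow-insecure on
gluster volume set volumes storage.owner-uid <qemu-uid>
gluster volume set volumes storage.owner-gid <qemu-gid>

and the glusterd.vol counterpart is:

volume management
    ...
    option rpc-auth-allow-insecure on
end-volume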

When attempting to create a qemu file by connecting to the same server,
everything works:

qemu@192.168.1.11> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
qemu@192.168.1.11>

But when trying to do it remotely, the command hangs indefinitely:

qemu@192.168.1.12> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
^C

Yet when 192.168.1.12 connects to gluster://192.168.1.12, the command works
and the file shows up in the distributed volume.

Further, when turning server.allow-insecure off, I get an immediate error
no matter what the source and destination connection is:

qemu@192.168.1.12> qemu-img create gluster://192.168.1.11/volumes/v.img 1M
Formatting 'gluster://192.168.1.11/volumes/v.img', fmt=raw size=1048576
qemu-img: Gluster connection failed for server=192.168.1.11 port=0
volume=volumes image=v.img transport=tcp
qemu-img: gluster://192.168.1.11/volumes/v.img: error while creating raw:
No data available

Does anyone have any ideas how I can have an unprivileged user connect to
remote gluster servers?

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster / KVM Filesystem Benchmarks

2012-09-09 Thread Joe Topjian
Hello,

These benchmarks were not testing the Gluster / KVM native integration
patch. The benchmarks were testing performance of qcow2 files running on
Gluster volumes.

I just wanted to clear that up.

Thanks,
Joe


On Sun, Sep 9, 2012 at 4:32 AM, Andrei Mikhailovsky wrote:

> Guys, do you know if this patch supports glusterfs over RDMA?
>
> I am running glusterfs over InfiniBand and KVM performance is quite
> nasty. As an example, I get around 600-800 MB/s for read/write on the
> glusterfs partition mounted on the KVM server. However, VMs stored on
> this partition can only read/write around 40-50 MB/s. I would love to try
> this patch if there is RDMA support.
>
> Thanks
>
>
>
>
> ----------
> *From: *"Joe Topjian" 
> *To: *gluster-users@gluster.org
> *Sent: *Wednesday, 5 September, 2012 6:47:09 AM
> *Subject: *[Gluster-users] Gluster / KVM Filesystem Benchmarks
>
>
> Hello,
>
> I did a few filesystem benchmarks with Gluster (3.3) and KVM using iozone
> and have compiled a spreadsheet with the results:
>
> https://docs.google.com/open?id=0B6GzZufzohwFZmozTFRSSHk5T0E
>
> Just a heads up: It is an Excel spreadsheet.
>
> All of the details that were used to generate the results are described in
> the spreadsheet. Of most interest would be the second tab titled "Gluster".
> The results that do not have "vm" in the description were iozone procs
> running directly on a mounted replicated Gluster volume (2 bricks). The
> "vm" results are iozone procs running in KVM virtual machines stored in
> qcow2 files.
>
> The first tab, General, is just some simple non-Gluster benchmarks that I
> ran for comparison.
>
> The third tab, Gluster old, was me doing iozone benchmarks on files with
> sizes ranging from 8 MB to 512 MB. I noticed that there was very little
> difference in the results so I decided to work with only 128 MB and 256 MB
> sized files.
>
> If you do not have access to Excel or something compatible, you can still
> view most of the information in the Google Doc. Here is a jpeg image of the
> main graph that was generated:
>
> https://docs.google.com/open?id=0B6GzZufzohwFWGtFS3I5UEllTkU
>
> Questions I have:
>
> * The "optimized settings" that I used were pulled from a Gluster
> Performance Tuning presentation. It doesn't look like the settings did very
> much in terms of optimization. Can someone comment on these settings? Are
> there better settings to use?
>
> * I'm a bit confused about how the KVM / qcow2 reads are much higher than
> the reads directly on the Gluster volume. Any idea why that is?
>
> * I ran all tests with the cache-io translator on and off. Like the
> "optimized settings", it wasn't of much use. Did I use this incorrectly?
>
> * The reason I did all tests with 128 MB and 256 MB sized files was to
> highlight the very bizarre trait where certain increments (16, 64, 256)
> gave very poor results while increments such as 8, 32, and 128 had good
> results. Any idea why that is?
>
> * Can anyone comment on whether these results are of any use? Or are the
> stats I collected and the way I collected them incorrect?
>
> Please let me know if anyone has any questions or needs anything clarified.
>
> Thanks,
> Joe
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Gluster / KVM Filesystem Benchmarks

2012-09-04 Thread Joe Topjian
Hello,

I did a few filesystem benchmarks with Gluster (3.3) and KVM using iozone
and have compiled a spreadsheet with the results:

https://docs.google.com/open?id=0B6GzZufzohwFZmozTFRSSHk5T0E

Just a heads up: It is an Excel spreadsheet.

All of the details that were used to generate the results are described in
the spreadsheet. Of most interest would be the second tab titled "Gluster".
The results that do not have "vm" in the description were iozone procs
running directly on a mounted replicated Gluster volume (2 bricks). The
"vm" results are iozone procs running in KVM virtual machines stored in
qcow2 files.

The first tab, General, is just some simple non-Gluster benchmarks that I
ran for comparison.

The third tab, Gluster old, was me doing iozone benchmarks on files with
sizes ranging from 8 MB to 512 MB. I noticed that there was very little
difference in the results so I decided to work with only 128 MB and 256 MB
sized files.
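
For the curious, the runs were plain iozone invocations along these lines
(parameters illustrative, not the exact command lines behind the
spreadsheet):

iozone -i 0 -i 1 -r 64k -s 256m -f /mnt/gluster/iozone.tmp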

If you do not have access to Excel or something compatible, you can still
view most of the information in the Google Doc. Here is a jpeg image of the
main graph that was generated:

https://docs.google.com/open?id=0B6GzZufzohwFWGtFS3I5UEllTkU

Questions I have:

* The "optimized settings" that I used were pulled from a Gluster
Performance Tuning presentation. It doesn't look like the settings did very
much in terms of optimization. Can someone comment on these settings? Are
there better settings to use?

* I'm a bit confused about how the KVM / qcow2 reads are much higher than
the reads directly on the Gluster volume. Any idea why that is?

* I ran all tests with the cache-io translator on and off. Like the
"optimized settings", it wasn't of much use. Did I use this incorrectly?

* The reason I did all tests with 128 MB and 256 MB sized files was to
highlight the very bizarre trait where certain increments (16, 64, 256)
gave very poor results while increments such as 8, 32, and 128 had good
results. Any idea why that is?

* Can anyone comment on whether these results are of any use? Or are the
stats I collected and the way I collected them incorrect?
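
The "optimized settings" from the first question were of this flavor (a
sketch; the exact options and values came from the presentation):

gluster volume set <vol> performance.cache-size 256MB
gluster volume set <vol> performance.io-thread-count 16
gluster volume set <vol> performance.write-behind-window-size 1MB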

Please let me know if anyone has any questions or needs anything clarified.

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread Joe Topjian
Hi James,

FWIW: I tend to agree on this point. The other question that comes to
> mind is what "quick" bash scripts are you writing for managing gluster?
>

"quick" might not have been the best word to use -- maybe "small" would
have been better.

For example, "peer status" + columns + grep + awk could be used for a small
Nagios check command.
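
A minimal sketch of such a check (a hypothetical script; the state string
is what "peer status" prints today):

#!/bin/sh
peers=$(gluster peer status | awk '/^Number of Peers:/ {print $NF}')
connected=$(gluster peer status | grep -c 'Peer in Cluster (Connected)')
if [ "$connected" -eq "$peers" ]; then
    echo "OK: $connected/$peers peers connected"; exit 0
fi
echo "CRITICAL: $connected/$peers peers connected"; exit 2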

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread Joe Topjian
Hi Pranith,

On Tue, Aug 28, 2012 at 6:51 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
> Why not use --xml at the end of the command? That will print the output
> in XML format. Would that make it easy to parse?


IMO, I can see --xml being useful for more "sophisticated" scripts that
utilize Perl, Python, or Ruby; however, for quick shell scripting, I can
see it being very difficult to use, if not unusable.

Having the output printed in fixed columns, though it might wrap on
<=80-character terminals, would make it very easy to parse with awk.
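
For example, something like this (a sketch against the current output
format) already extracts host/state pairs:

gluster peer status | awk -F': ' '/^Hostname/ {h=$2} /^State/ {print h, $2}'

With fixed columns, a bare awk '{print $1, $3}' style pass would do.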

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Typical setup questions

2012-08-28 Thread Joe Topjian
Hi Matt,

On Tue, Aug 28, 2012 at 9:29 AM, Matt Weil  wrote:

> Since we are on the subject of hardware, what would be the perfect fit for
> a gluster brick? We were looking at a PowerEdge C2100 Rack Server.
>

Just a note: the C2100 has been superseded by the Dell R720xd. Although the
R720xd is not part of the C-series, it's the official replacement.


> During testing I found it pretty easy to saturate 1 Gig network links.
> This was also the case when multiple links were bonded together. Are
> there any cheap 10 Gig switch alternatives that anyone would suggest?


While not necessarily cheap, I've had great luck with Arista 7050 switches.

We implement them in sets of two, linked together. We then use dual-port
10Gb NICs and connect each port to a different switch. That gives multiple
layers of redundancy plus a theoretical 20Gb/s of throughput per server.
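
On the server side that's ordinary bonding; e.g. on Ubuntu, roughly
(interface names and addressing are illustrative, and 802.3ad assumes the
switch pair is configured accordingly):

auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100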

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Ownership changed to root

2012-08-26 Thread Joe Topjian
Hi Brian,


> This sounds extremely unlikely. mdadm and LVM both work at the block device
> layer - reading and writing 512-byte blocks. They have no understanding of
> filesystems and no understanding of user IDs.
>

Agreed.


> I suspect there were other differences between the tests. For example, did
> you do one with an ext4 filesystem and one with xfs? Or did you have a
> failed drive in your RAID1, which meant that some writes were timing out?
>

Both environments were xfs. The only difference is that the original
environment was not using LVM or mdadm -- similar to what I ended up doing
on the second environment in the end. I do plan on testing this in more
detail soon, as I'd like to narrow the issue down further.


> FWIW, I've also seen the "files owned by root" occasionally in testing, but
> wasn't able to pin down the cause.
>

Good to know - if anything, it means I'm not going crazy :)

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Ownership changed to root

2012-08-24 Thread Joe Topjian
I figured out how to work around this but I'm not sure of the exact reason
why it happened.

The Gluster bricks I was using were LVM LV partitions that sat on top of a
software RAID1. I broke the software RAID and dedicated one hard drive to
LVM so that OpenStack could use it for nova-volumes. I then used the other
drive strictly for the Gluster brick.

This took mdadm and LVM out of the equation and the problem went away. I
then tried with just LVM and still did not see the problem.

Unfortunately I don't have enough hardware at the moment to create another
RAID1 mirror, so I can't single that out. I will try when I get a chance --
unless anyone else knows if it would cause a problem? Or maybe it is the
mdadm+LVM combination?
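
In other words, the layout ended up roughly like this (device names are
illustrative):

pvcreate /dev/sdb                # whole disk for the nova-volumes VG
vgcreate nova-volumes /dev/sdb
mkfs.xfs /dev/sdc                # other disk dedicated to the brick
mount /dev/sdc /export/brick1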

Thanks,
Joe

On Fri, Aug 24, 2012 at 3:17 PM, Joe Topjian  wrote:

> Hello,
>
> I'm seeing a weird issue with OpenStack and Gluster.
>
> I have /var/lib/nova/instances mounted as a glusterfs volume. The owner of
> /var/lib/nova/instances is nova:nova.
>
> When I launch a vm and watch it launching, I see the following:
>
> root@c01:/var/lib/nova/instances/instance-0012# ls -l
> total 8
> -rw-rw---- 1 nova nova    0 Aug 24 14:22 console.log
> -rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml
>
> This is correct.
>
> Then it changes ownership to libvirt-qemu:
>
> root@c01:/var/lib/nova/instances/instance-0012# ls -l
> total 22556
> -rw-rw---- 1 libvirt-qemu kvm 0 Aug 24 14:22 console.log
> -rw-r--r-- 1 libvirt-qemu kvm  27262976 Aug 24 14:22 disk
> -rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml
>
> Again, this is correct.
>
> But then it changes to root:
>
> root@c01:/var/lib/nova/instances/instance-0012# ls -l
> total 22556
> -rw-rw---- 1 root root 0 Aug 24 14:22 console.log
> -rw-r--r-- 1 root root 27262976 Aug 24 14:22 disk
> -rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml
>
> OpenStack then errors out due to not being able to correctly access the
> files.
>
> If I remove the /var/lib/nova/instances mount and just use the normal
> filesystem, the root ownership part does not happen.
>
> I have successfully had Gluster working with OpenStack in this way on a
> different installation, so I'm not sure why I'm seeing this issue now.
>
> Any ideas?
>
> Thanks,
> Joe
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Ownership changed to root

2012-08-24 Thread Joe Topjian
Hello,

I'm seeing a weird issue with OpenStack and Gluster.

I have /var/lib/nova/instances mounted as a glusterfs volume. The owner of
/var/lib/nova/instances is nova:nova.
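
For context, the mount itself is a plain glusterfs fstab entry, roughly
(server and volume names here are illustrative):

gluster1:/nova-vol  /var/lib/nova/instances  glusterfs  defaults,_netdev  0  0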

When I launch a vm and watch it launching, I see the following:

root@c01:/var/lib/nova/instances/instance-0012# ls -l
total 8
-rw-rw---- 1 nova nova    0 Aug 24 14:22 console.log
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

This is correct.

Then it changes ownership to libvirt-qemu:

root@c01:/var/lib/nova/instances/instance-0012# ls -l
total 22556
-rw-rw---- 1 libvirt-qemu kvm 0 Aug 24 14:22 console.log
-rw-r--r-- 1 libvirt-qemu kvm  27262976 Aug 24 14:22 disk
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

Again, this is correct.

But then it changes to root:

root@c01:/var/lib/nova/instances/instance-0012# ls -l
total 22556
-rw-rw---- 1 root root 0 Aug 24 14:22 console.log
-rw-r--r-- 1 root root 27262976 Aug 24 14:22 disk
-rw-rw-r-- 1 nova nova 1459 Aug 24 14:22 libvirt.xml

OpenStack then errors out due to not being able to correctly access the
files.

If I remove the /var/lib/nova/instances mount and just use the normal
filesystem, the root ownership part does not happen.

I have successfully had Gluster working with OpenStack in this way on a
different installation, so I'm not sure why I'm seeing this issue now.

Any ideas?

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users