I finally got vagrant working. I had to roll back to v1.3.5 to get it
going. I can get the puppet VM up, but the provisioning step always
fails with a puppet error [1]: Puppet complains about not finding a
declared class. This happens with and without vagrant-cachier. I'm
using the latest box (uplo
Hi Vijay, everyone,
On 01/26/2014 04:22 PM, Vijay Bellur wrote:
On 01/26/2014 07:57 PM, Paul Boven wrote:
The result of this was that migrations completely stopped
working. Trying to do a migration would cause the process on the sending
machine to hang, and on the receiving machine, libvirt b
On Sun, Jan 26, 2014 at 10:22 AM, Vijay Bellur wrote:
>> So I set 'option base-port 50152' in /etc/glusterfs/glusterd.vol (note
>> that the bug talks about /etc/glusterfs/gluster.vol, which doesn't
>> exist).
It is glusterd.vol. As a side note, Puppet-Gluster supports
managing this option:
http
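For reference, this is roughly where that option lives in glusterd.vol on my
boxes (a minimal sketch of the management volume block; the stock contents
vary a bit by version and distro):

  volume management
      type mgmt/glusterd
      option working-directory /var/lib/glusterd
      option transport-type socket
      # bricks and migration targets get ports allocated from here upwards
      option base-port 50152
  end-volume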
On 01/26/2014 07:57 PM, Paul Boven wrote:
Hi folks,
While debugging the migration issue, I noticed that we did in fact
occasionally hit bug 987555 ("Address already in use") when doing a
live-migration, so I decided to implement the advice in the aforementioned bug.
So I set 'option base-
Hi folks,
While debugging the migration issue, I noticed that we did in fact
occasionally hit bug 987555 ("Address already in use") when doing a
live-migration, so I decided to implement the advice in the aforementioned bug.
So I set 'option base-port 50152' in /etc/glusterfs/glusterd.v
thanks, I will try this.
On Sun, Jan 26, 2014 at 7:23 PM, James wrote:
> On Sun, Jan 26, 2014 at 6:13 AM, Mingfan Lu wrote:
> > we have lots of files (really a lot) on our gluster brick servers, and every
> > day we will generate lots more; the number of files increases very quickly. Could I
> > disable
Hi Guys,
I'm trying to use a GUI client such as S3 Browser or CloudBerryLab Explorer
with my glusterfs. As far as I know, the S3-compatible interface is implemented
by OpenStack Swift, and I am able to read from / write to the drive with the
curl tool. My questions are:
1. To be able to use those cli
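For context, this is roughly how I read and write with curl today (the host,
port, account and credentials below are placeholders for my test setup, not
anything official):

  # get a token from the Swift/UFO tempauth endpoint (placeholders)
  curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' \
      http://gluster-host:8080/auth/v1.0
  # upload a file using the returned storage URL and token
  curl -v -X PUT -H 'X-Auth-Token: AUTH_tk...' -T ./myfile.txt \
      http://gluster-host:8080/v1/AUTH_myvolume/mycontainer/myfile.txt
  # read it back
  curl -v -H 'X-Auth-Token: AUTH_tk...' \
      http://gluster-host:8080/v1/AUTH_myvolume/mycontainer/myfile.txt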
Have you tried setting the uid/gid as part of the gluster volume? For
oVirt it uses 36:36 for virt
gluster volume set DATA storage.owner-uid 36
gluster volume set DATA storage.owner-gid 36
I'm assuming that setting this will enforce these ownership permissions on all
files.
On Sun, Jan 26, 2014 at
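If it helps, a quick sanity check after setting them (a sketch; the brick path
below is just a placeholder for your layout):

  # the two storage.owner-* options should show under "Options Reconfigured"
  gluster volume info DATA
  # and the brick root itself should now be owned 36:36
  ls -ldn /path/to/brick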
Hi Bernhard,
Indeed I see the same behaviour:
When a guest is running, it is owned by libvirt:kvm on both servers.
When a guest is stopped, it is owned by root:root on both servers.
In a failed migration, the ownership changes to root:root.
I'm not convinced though that it is a simple unix permi
Hi James, everyone,
When debugging things, I already came across this bug. It is unlikely to
be the cause of our issues:
Firstly, we migrated from 3.4.0 to 3.4.1, so we already had the possible
port number conflict, but things worked fine with 3.4.0.
Secondly, I don't see the 'address alrea
On Sun, Jan 26, 2014 at 6:13 AM, Mingfan Lu wrote:
> we have lots of files (really a lot) on our gluster brick servers, and every day
> we will generate lots more; the number of files increases very quickly. Could I
> disable updatedb on the brick servers? If we do, will the glusterfs servers be
> impacted?
Yes, read m
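Roughly, the relevant knobs are in mlocate's /etc/updatedb.conf; a sketch, with
the brick path only as an example for a typical layout:

  PRUNE_BIND_MOUNTS = "yes"
  # skip the brick backend directories on the servers
  PRUNEPATHS = "/tmp /var/spool /media /export/bricks"
  # and skip gluster FUSE client mounts by filesystem type
  PRUNEFS = "NFS nfs nfs4 rpc_pipefs fuse.glusterfs"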
we have lots of files (really a lot) on our gluster brick servers, and every day
we will generate lots more; the number of files increases very quickly. Could I
disable updatedb on the brick servers? If we do, will the glusterfs servers be
impacted?
I found that on the brick servers, .glusterfs/indices/xattrop of one volume
has many stale files (> 260,000; most of them were created 2 months ago).
Could I delete them directly?
Another question: how are these stale files left behind? I think when a file is
created or self-healed, the files should be removed
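For context, this is roughly how I am counting the aged entries before touching
anything (the brick path and volume name are just examples for my layout):

  # count index entries older than ~60 days on one brick
  find /export/brick1/.glusterfs/indices/xattrop -type f -mtime +60 | wc -l
  # cross-check against what self-heal still considers pending
  gluster volume heal myvolume info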