On 11/19/2014 02:27 AM, Juan José Pavlik Salles wrote:
Seems like a big jump to take, updating from 3.3.2 to 3.5, is it a
plug&play upgrade?
Yes, it was easy.
tamas
On 11/19/2014 03:23 AM, Lindsay Mathieson wrote:
> Just some basic questions on the heal process, please just point me to the
> docs if they are there :)
>
> - How is the need for a heal detected? I presume nodes can detect when they
> can't sync writes to the other nodes. This is flagged (xattr?) for healing
> when the other nodes are back up?
Seems like a big jump to take, updating from 3.3.2 to 3.5, is it a
plug&play upgrade?
2014-11-18 17:51 GMT-03:00 Tamas Papp :
>
> On 11/18/2014 09:41 PM, Juan José Pavlik Salles wrote:
>
>> Did you have the same problem? Is it a memory leak?
>>
>
> Yes.
>
> After the upgrade it works quite well.
Just some basic questions on the heal process, please just point me to the docs
if they are there :)
- How is the need for a heal detected? I presume nodes can detect when they
can't sync writes to the other nodes. This is flagged (xattr?) for healing
when the other nodes are back up? (A quick way to inspect those markers is sketched below.)
- How is
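For anyone wondering about those markers: the replicate (AFR) translator records pending operations in trusted.afr.<volname>-client-N extended attributes on each brick, and a non-zero value flags the file for self-heal. A rough way to look at them directly on a brick (a sketch only; the volume name and brick path are taken from later in this thread, the values shown are made up):

# Dump the AFR changelog xattrs for a file, run on the brick itself:
getfattr -d -m trusted.afr -e hex \
    /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2
# Example output (made-up values):
# trusted.afr.datastore1-client-0=0x000000000000000000000000
# trusted.afr.datastore1-client-1=0x000000050000000000000000
# The non-zero entry means this replica has pending operations against the
# other brick, i.e. the file is queued for self-heal.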
On 11/18/2014 09:41 PM, Juan José Pavlik Salles wrote:
Did you have the same problem? Is it a memory leak?
Yes.
After the upgrade it works quite well.
tamas
Did you have the same problem? Is it a memory leak?
2014-11-18 16:28 GMT-03:00 Tamas Papp :
> Switching to 3.5 helped us a _lot_.
>
> --
> Sent from mobile
>
> On November 18, 2014 7:48:45 PM Juan José Pavlik Salles <
> jjpav...@gmail.com> wrote:
>
>> Hi guys, I've a small cluster with 5 nodes
Hello, I was wondering if there has been any progress on reproducing this error
or if there is any more info I can provide.
Thanks, Jon
On Tue, 18 Nov 2014 08:26:39 PM you wrote:
> I can CC you to the bugzilla, so that you can see the update on the bug
> once it is fixed. Do you want to be CCed to the bug?
Yes please,
thanks
P.S. Switched back to diff and all heals finished this morning :)
--
Lindsay
The Gluster community is pleased to announce updated releases for the 3.4 and
3.5 families. With the release of 3.6 a few weeks ago, this brings all current
members of the GlusterFS family to a more stable, production-ready status.
The GlusterFS 3.4.6 release is focused on bug fixes.
On 11/18/2014 06:56 AM, Pranith Kumar Karampuri wrote:
On 11/18/2014 05:35 PM, Lindsay Mathieson wrote:
I have a VM image which is a sparse file - 512GB allocated, but only
32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
On 11/18/2014 05:35 PM, Lindsay Mathieson wrote:
I have a VM image which is a sparse file - 512GB allocated, but only
32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
I switched to full sync and rebooted.
On Mon, Nov 17, 2014 at 08:57:01PM +0100, Niels de Vos wrote:
> Hi all,
>
> Tomorrow (Tuesday) we will have an other Gluster Community Bug Triage
> meeting.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> - date: every Tuesday
> - time: 12:00 UTC, 13:00 CET (in your terminal
I have a VM image which is a sparse file - 512GB allocated, but only 32GB used.
root@vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
total 31G
-rw--- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
I switched to full sync and rebooted.
heal was started on the image and it seemed t
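A side note on the sizes above: ls shows the apparent (sparse) size while du shows the blocks actually allocated, and qemu-img reports how much of the qcow2 is really in use. Nothing Gluster-specific, just the standard tools against the same path (a sketch, assuming the image sits on the brick as listed above):

ls -lh /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2   # apparent size (513G)
du -h  /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2   # allocated blocks (~31G)
qemu-img info /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2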
On Tue, 18 Nov 2014 06:16:53 AM you wrote:
> The heal info command that you executed basically gives a list of files to be
> healed. So in the above output, 1 entry is possibly getting healed and the
> other 7 need to be healed.
> >
> >
> >
> > And what is a gfid?
>
> In glusterfs, gfid (glusterfs ID) i
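To expand on the gfid point: every file in a GlusterFS volume carries a 128-bit UUID in the trusted.gfid xattr, and each brick keeps a hard link to the file under its .glusterfs directory named after that UUID. A hedged sketch with a made-up UUID (brick path taken from this thread):

# Read the gfid of a file directly on the brick:
getfattr -n trusted.gfid -e hex \
    /mnt/gluster-brick1/datastore/images/100/vm-100-disk-1.qcow2
# trusted.gfid=0xd0405f44b62c4bc6a0447e28e5d029e4   (example value only)
# The brick keeps a hard link to the same inode under .glusterfs,
# keyed by the first two byte pairs of the UUID:
ls -l /mnt/gluster-brick1/datastore/.glusterfs/d0/40/d0405f44-b62c-4bc6-a044-7e28e5d029e4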
- Original Message -
> From: "Lindsay Mathieson"
> To: "gluster-users"
> Sent: Tuesday, November 18, 2014 4:24:51 PM
> Subject: [Gluster-users] gluster volume heal info question
>
>
>
> When I run the subject I get:
>
>
>
> root@vnb:~# gluster volume heal datastore1 info
>
> Bri
On 11/18/2014 04:14 PM, Lindsay Mathieson wrote:
On Tue, 18 Nov 2014 02:36:19 PM Pranith Kumar Karampuri wrote:
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri wrote:
However given the files are tens of GB in size, won't it thrash my
network?
When I run the subject I get:
root@vnb:~# gluster volume heal datastore1 info
Brick vnb:/mnt/gluster-brick1/datastore/
/images/100/vm-100-disk-1.qcow2 - Possibly undergoing heal
/images/102/vm-102-disk-1.qcow2
/images/400/vm-400-disk-1.qcow2
Number of entries: 8
it has 8 entries but only o
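For completeness, the heal queue can also be looked at from a couple of other angles; these CLI subcommands exist in the 3.5/3.6 releases, though the exact output varies by version:

gluster volume heal datastore1 info split-brain   # only entries in split-brain
gluster volume heal datastore1 statistics         # per-crawl self-heal statistics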
On Tue, 18 Nov 2014 02:36:19 PM Pranith Kumar Karampuri wrote:
> On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
> > On 18 November 2014 17:40, Pranith Kumar Karampuri wrote:
> >
> > However given the files are tens of GB in size, won't it thrash my
> > network?
>
> Yes you are right. I wonder
On 11/18/2014 01:17 PM, Lindsay Mathieson wrote:
On 18 November 2014 17:40, Pranith Kumar Karampuri wrote:
Sorry, didn't see this one. I think this is happening because of the 'diff' based
self-heal, which does full-file checksums; I believe that is the root cause.
Could you execute 'gluster volume
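For reference, the data self-heal algorithm is a per-volume option, so switching between the checksum-based "diff" mode and the whole-file "full" mode looks roughly like this (a sketch; volume name taken from this thread):

# Whole-file copy instead of rolling checksums:
gluster volume set datastore1 cluster.data-self-heal-algorithm full
# Back to checksum-based healing:
gluster volume set datastore1 cluster.data-self-heal-algorithm diff
# "gluster volume info datastore1" lists any reconfigured options.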
On 18 November 2014 18:05, Franco Broi wrote:
>
> Can't see how any of that could account for 1000% cpu unless it's just
> stuck in a loop.
Currently still varying between 400% to 950%
Can glusterfsd be killed without affecting the libgfapi clients? (KVMs)
Can't see how any of that could account for 1000% cpu unless it's just
stuck in a loop.
On Tue, 2014-11-18 at 18:00 +1000, Lindsay Mathieson wrote:
> On 18 November 2014 17:46, Franco Broi wrote:
> >
> > Try strace -Ff -e file -p 'glusterfsd pid'
>
> Thanks, Attached
>
On 18 November 2014 17:46, Franco Broi wrote:
>
> Try strace -Ff -e file -p 'glusterfsd pid'
Thanks, Attached
Process 27115 attached with 25 threads - interrupt to quit
[pid 27122] stat("/mnt/gluster-brick1/datastore", {st_mode=S_IFDIR|0755,
st_size=4, ...}) = 0
[pid 11840] lstat("/mnt/gluster-b
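On the earlier CPU question: besides strace, per-thread CPU usage of the brick process can be checked with plain Linux tools. A small sketch, assuming a single glusterfsd on the node (pidstat comes from the sysstat package):

top -H -p "$(pgrep -x glusterfsd | head -1)"          # per-thread CPU view
pidstat -t -p "$(pgrep -x glusterfsd | head -1)" 1    # per-thread stats, once a second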