[Gluster-users] Gluster 3.9 repository?

2017-01-11 Thread Andrus, Brian Contractor
All,
I notice the main page lists Gluster 3.9 as:
"Gluster 3.9 is the latest major release as of November 2016."

Yet if you click on the download link, there is no mention of Gluster 3.9 on
that page at all. It says:
"GlusterFS version 3.8 is the latest version at the moment."

Is there going to be a 3.9 repository set up like the one for 3.8? And could the
webpage be updated for 3.9?

Thanks!

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Replace replicate in single brick

2017-01-10 Thread Andrus, Brian Contractor
Gandalf,

From what I see, it doesn’t:
reset-brick  

He showed the syntax for ‘replace-brick’, not ‘reset-brick’, which, since I am
on 3.8.5, will hopefully work even when the source brick is missing….
Anyone know if that would be true? Or do I have to upgrade to 3.9 to be able to 
replace a completely unavailable, missing brick replica?

I guess I will find out; if the source brick must exist, I can always put the
old, failing disk back in.
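
For reference, a rough sketch of what I plan to try on 3.8.5, using the volume
and brick names from my original post (the new brick path is just a placeholder
I made up):

# after physically swapping the disk and creating an empty directory for the
# new brick on node01:
gluster volume replace-brick volume1 \
    node01:/export/sdb1/brick node01:/export/sdb1/brick_new \
    commit force
# then watch self-heal repopulate the new brick from node02:
gluster volume heal volume1 info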

Brian

From: Gandalf Corvotempesta [mailto:gandalf.corvotempe...@gmail.com]
Sent: Tuesday, January 10, 2017 12:31 AM
To: Ravishankar N 
Cc: gluster-users@gluster.org; Andrus, Brian Contractor 
Subject: Re: [Gluster-users] Replace replicate in single brick

On 10 Jan 2017 at 05:59, "Ravishankar N" <ravishan...@redhat.com> wrote:
If you are using glusterfs 3.9 and want to give the replaced brick the same 
name as the old one, there is the reset-brick command. The commit message in 
http://review.gluster.org/#/c/12250/ gives you some information on the steps to 
run.
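
A rough sketch of that sequence (the volume and brick names are placeholders;
double-check the exact keywords against the 3.9 CLI help):

gluster volume reset-brick VOLNAME HOST:/path/to/brick start
# ...swap the disk, recreate the empty brick directory...
gluster volume reset-brick VOLNAME HOST:/path/to/brick HOST:/path/to/brick commit force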
If you are okay with using a different name for the brick, then there is
`gluster volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit force`.
I think this command works from glusterfs 3.7.10 onward.
-Ravi

If reset-brick is used to replace a brick with another of the same name, why
does this command ask for both source and destination brick names?
They would always be the same; if they were different, the replace-brick command
should be used instead.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Replace replicate in single brick

2017-01-09 Thread Andrus, Brian Contractor
All,
We have a small cluster with a single volume and a single replicated brick:

Volume Name: volume1
Type: Replicate
Number of Bricks: 1 x 2 = 2
Bricks:
Brick1: node01:/export/sdb1/brick
Brick2: node02:/export/sdb1/brick

Now the disk on node01 is going bad and needs to be replaced, but I am uncertain
whether there is a safe way to do this with gluster. There are not enough connections
in the system to hook up a new disk and then add it. I have to physically 
replace the disk with the new one.

Is there a way to do that?

Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Bricks show much space, mount shows little...

2016-04-04 Thread Andrus, Brian Contractor
All,

I am a little confused here...
I have a 7.2 terabyte mirrored gluster implementation. The backing filesystem 
is zfs.

This is what I am seeing:
node45: Filesystem   Size  Used Avail Use% Mounted on
node45: /dev/sdb1190M   33M  148M  19% /boot
node45: /dev/sdb3854G   12G  798G   2% /
node45: node45:/RDATA  6.2T  6.1T  193G  97% /RDATA
node45: tmpfs127G   64K  127G   1% /dev/shm
node45: zpool1   7.2T  1.3T  6.0T  17% /zpool1

node46: Filesystem   Size  Used Avail Use% Mounted on
node46: /dev/sda1190M   33M  148M  18% /boot
node46: /dev/sda3854G   11G  800G   2% /
node46: node43:/RDATA  6.2T  6.1T  193G  97% /RDATA
node46: tmpfs127G 0  127G   0% /dev/shm
node46: zpool1   7.2T  1.3T  6.0T  18% /zpool1

I do not understand why my zpools are only 18% full while the gluster-mounted
view of the same data shows 97% full...

Any ideas what is going on here?
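
In case it helps with diagnosis, gluster's own per-brick view can be compared
against the zfs numbers; something like the following should show free/total
space per brick (assuming the volume is simply named RDATA, matching the mount):

gluster volume info RDATA            # confirm the brick paths
gluster volume status RDATA detail   # per-brick disk usage as gluster sees it
df -h /zpool1                        # what the backing zfs dataset reports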

Brian Andrus

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Moved files from one directory to another, now gone

2016-03-03 Thread Andrus, Brian Contractor
Pranith,

Sorry for taking so long. Work required my time.

So I can look at this a little more right now.
I am using glusterfs-3.7.6-1
Here is some pertinent info:
]# gluster volume info DATA

Volume Name: DATA
Type: Disperse
Volume ID: 8374314b-796a-48d0-acf7-dea51fdd83ae
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: server45:/zpool1/DATABRICK
Brick2: server46:/zpool1/DATABRICK
Brick3: server47:/zpool1/DATABRICK
Options Reconfigured:
performance.io-cache: off
performance.quick-read: off
performance.readdir-ahead: disable

# df -h /zpool1 /DATA
Filesystem  Size  Used Avail Use% Mounted on
zpool1  7.2T  1.2T  6.0T  17% /zpool1
server45:/DATA   15T  2.5T   12T  18% /DATA

# du -hsc /DATA/
232G/DATA/
232Gtotal

I have been cleaning up some of the stuff, but there are 2 directories I cannot 
remove:
# ls /DATA/
FINISHED_LOGS  home  JOBS  LOGS  m4p2  OLDLOGS

# ls /zpool1/DATABRICK/
FINISHED_LOGS/  .glusterfs/  home/  JOBS/  LOGS/  m4p2/  OLDLOGS/  .trashcan/

# ls /zpool1/DATABRICK/
FINISHED_LOGS  home  JOBS  LOGS  m4p2  OLDLOGS

# rm -rf /DATA/OLDLOGS/
# rm -rf /DATA/FINISHED_LOGS/
# ls /DATA/
FINISHED_LOGS  home  JOBS  LOGS  m4p2  OLDLOGS

Those two directories are empty on the glusterfs mount, but still have files 
when I look on the brick.
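
If it is useful, I can also gather the brick-side metadata for the two stuck
directories; roughly the following, run on each of server45/46/47 (standard
gluster xattrs assumed):

ls -la /zpool1/DATABRICK/OLDLOGS /zpool1/DATABRICK/FINISHED_LOGS
getfattr -d -m . -e hex /zpool1/DATABRICK/OLDLOGS    # dump the gfid and other xattrs
gluster volume heal DATA info                        # anything pending heal?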

Strange bug. I will just be reformatting the whole thing once some of this work
completes, but I can run some tests and provide info if you want to try to
track it down.

Brian Andrus



From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, January 28, 2016 10:36 PM
To: Andrus, Brian Contractor ; gluster-users@gluster.org
Subject: Re: [Gluster-users] Moved files from one directory to another, now gone


On 01/29/2016 12:50 AM, Andrus, Brian Contractor wrote:
All,

I have a glusterfs setup with a disperse volume over 3 zfs bricks, one on each 
node.

I just did a 'mv' of some log files from one directory to another and when I 
look in the directory, they are not there at all!
Neither is any of the data I used to have. It is completely empty. I try doing 
a 'touch' of a file and nothing shows up.

I do see the files on the zpool mount, but they seem to be partially there. 
They are all just text files, but they look incomplete.
There seem to be many errors in the log:

[2016-01-28 18:57:41.353265] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 
0-DATA-disperse-0: Heal failed [Transport endpoint is not connected]

Any ideas on what this is? Other directories seem ok.

Could you give me the following information:
1) Output of: gluster volume info
2) Log files of the setup? Both client and brick logs, they should be present 
in /var/log/glusterfs
3) What is the version of glusterfs you are using?

Pranith


Thanks in advance,
Brian Andrus





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] ZFS and snapshots

2016-02-04 Thread Andrus, Brian Contractor
All,

It seems that snapshotting for volumes based on ZFS is still 'in the works'. Is 
that the case?

snapshot create: failed: Snapshot is supported only for thin provisioned LV. 
Ensure that all bricks of DATA are thinly provisioned LV.

Using glusterfs-3.7.6-1.el6.x86_64

Brian Andrus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Moved files from one directory to another, now gone

2016-01-28 Thread Andrus, Brian Contractor
All,

I have a glusterfs setup with a disperse volume over 3 zfs bricks, one on each 
node.

I just did a 'mv' of some log files from one directory to another and when I 
look in the directory, they are not there at all!
Neither is any of the data I used to have. It is completely empty. I try doing 
a 'touch' of a file and nothing shows up.

I do see the files on the zpool mount, but they seem to be partially there. 
They are all just text files, but they look incomplete.
There seem to be many errors in the log:

[2016-01-28 18:57:41.353265] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 
0-DATA-disperse-0: Heal failed [Transport endpoint is not connected]

Any ideas on what this is? Other directories seem ok.

Thanks in advance,
Brian Andrus

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glusterd on one node using 89% of memory

2016-01-26 Thread Andrus, Brian Contractor
All,

I have one gluster node (of 4) that is using almost all of the available memory
on the box. Its memory usage has been growing and is now up to 89%.
I have already done 'echo 2 > /proc/sys/vm/drop_caches'
There seems to be no effect.

Are there any gotchas to just restarting glusterd?
This is a CentOS 6.6 system with gluster 3.7.6
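
If it helps, I can grab a statedump from the bloated glusterd before restarting
it; as I understand it, sending SIGUSR1 makes the process write a dump under
/var/run/gluster (the path may differ depending on the build):

kill -USR1 $(pidof glusterd)
ls -l /var/run/gluster/        # look for glusterdump.<pid>.dump.* files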

Thanks in advance,

Brian Andrus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] concurrent writes not all being written

2015-12-13 Thread Andrus, Brian Contractor
All,

I have a small gluster filesystem on 3 nodes.
I have a multi-threaded Perl program in which each thread writes its output to
one of three files, depending on the results.

My trouble is that I am seeing missing lines from the output.
The input is a file of 500 lines. Depending on the line, it would be written to 
one of three files, but when I total the lines put out, I am missing anywhere 
from 4 to 8 lines.

This is even the case if I use an input file that should all go to a single 
file.

BUT... when I have it write to /tmp or /dev/shm, all of the lines expected are 
there.
This leads me to think there is something not happy with gluster and concurrent 
writes.

Here is the code for the actual write:

flock(GOOD_FILES, LOCK_EX) or die $!;
seek(GOOD_FILES, 0, SEEK_END) or die $!;
print GOOD_FILES $lines_to_process[$tid-1] ."\n";
flock(GOOD_FILES, LOCK_UN) or die $!;

So I would expect the proper file locking is taking place.
Is it possible that gluster is not writing because of a race condition?
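
For completeness, here is a trimmed, self-contained version of the write path;
the output filename and the $line value are stand-ins, since the real program
opens the three output files once per thread:

use strict;
use warnings;
use Fcntl qw(:flock SEEK_END);
use IO::Handle;

open(my $good, '>>', '/glusterfs/mount/good.out') or die $!;   # append mode
my $line = "example result";

flock($good, LOCK_EX)    or die $!;  # exclusive lock across all writers
seek($good, 0, SEEK_END) or die $!;  # re-seek after the lock, in case another
                                     # writer appended while we were waiting
print {$good} $line, "\n";
$good->flush;                        # push the buffered line out before unlocking
flock($good, LOCK_UN)    or die $!;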

Any insight as to where to look for a solution is appreciated.


Brian Andrus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Why oh why does gluster delay?

2015-11-30 Thread Andrus, Brian Contractor
All,

I am seeing, VERY consistently, that when I run 'gluster peer status' or
'gluster pool list', the command 'hangs' for up to a minute before spitting back
results.

I have 10 nodes, all on the same network, and currently ZERO volumes or bricks
configured. I am just trying to get the peers in the cluster talking to each
other properly.


What is gluster doing that takes so long to respond? Seems there may be a more 
efficient way of doing it, whatever it is...



Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] New install of 3.7.6 peers disconnect intermittently

2015-11-25 Thread Andrus, Brian Contractor
All,

I am trying to do an install of gluster 3.7.6
OS is CentOS 6.5
I start running peer probe commands to add the servers, but I constantly get
intermittent rpc_clnt_ping_timer_expired messages in the logs, and arbitrary
servers show 'Disconnected' when I run 'gluster pool list'.

I have not even created any bricks or a filesystem yet. This is just to get the 
nodes in place that will be servers.

All nodes are in a Seamicro chassis and share networking, so it isn't even the 
network switch.

Is there anything that is known that can cause such a lockup? This is getting 
very frustrating.

Thanks in advance,

Brian Andrus

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] port 988

2015-07-31 Thread Andrus, Brian Contractor
All,

I am seeing a problem with conflicting ports.
I am running a relatively simple gluster implementation (4 x 2 = 8)
But on the same nodes I also run lustre.
I find that since gluster starts first, it seems to take over port 988, which 
lnet needs.
Unfortunately, I do not see where I can affect that particular port in the 
configs.

If I start glusterd after lnet, things seem happier.

Any ideas how to get gluster to avoid that port?
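
For the record, the only knobs I have found so far that look related are the
"insecure ports" options, i.e. telling gluster not to insist on reserved (<1024)
source ports. I have not verified whether they actually keep it off port 988,
so treat this as a guess:

# in /etc/glusterfs/glusterd.vol on each server (restart glusterd afterwards):
option rpc-auth-allow-insecure on

# and per volume:
gluster volume set <VOLNAME> server.allow-insecure on
gluster volume set <VOLNAME> client.bind-insecure on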


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster healing VM images

2015-07-24 Thread Andrus, Brian Contractor
I had this as well. It was a BIG pain for me. FWIW, after upgrading to gluster 
3.7, I have not had an issue.
YMMV


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of André Bauer
Sent: Friday, July 24, 2015 7:44 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster healing VM images

Same here.
GlusterFS 3.5.5.
4 nodes, distributed-replicated.
Qemu uses libgfapi to access the VM images.

When one node goes down, the VM's hard disk goes read-only.
It works again immediately after a VM restart.


On 21.07.2015 at 12:21, Gregor Burck wrote:
> On Tuesday, 21 July 2015 at 10:04:25, Andrew Roberts wrote:
>> The VMs using the healing image files freeze completely, also 
>> freezing Virt-Manager, and then all of the other VMs either freeze or become 
>> slow.
> This is the same as in my test environment.
> 
> Working in the VM isn't possible after the freeze for me, because the
> filesystem in the VM is read-only... No solution yet, but I am testing a few things.
> 
> On which OS did you run the glusterfs-server?
> 
> Bye
> 
> Gregor
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


--
Kind regards
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com


Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Michael Keith 
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

--
The information in this email is intended only for the addressee named above. 
Access to this email by anyone else is unauthorized. If you are not the 
intended recipient of this message any disclosure, copying, distribution or any 
action taken in reliance on it is prohibited and may be unlawful. MAGIX does 
not warrant that any attachments are free from viruses or other defects and 
accepts no liability for any losses resulting from infected email 
transmissions. Please note that any views expressed in this email may be those 
of the originator and do not necessarily represent the agenda of the company.
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] downgrade client from 3.7.1 to 3.6.3

2015-06-11 Thread Andrus, Brian Contractor
Usually works:

yum downgrade 
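
For example (the package list and version-release string below are a guess;
check `rpm -qa | grep gluster` and the 3.6.3 repo for the exact names):

yum downgrade glusterfs-3.6.3-1.el7 glusterfs-fuse-3.6.3-1.el7 \
              glusterfs-libs-3.6.3-1.el7 glusterfs-api-3.6.3-1.el7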


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238



-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kingsley
Sent: Thursday, June 11, 2015 7:39 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] downgrade client from 3.7.1 to 3.6.3

Hi,

We have a 3.6.3 cluster and all the clients were running the same version of 
glusterfs until I accidentally upgraded one of the client machines (which uses 
the fuse mount) to 3.7.1 when doing a yum update.

I'd prefer to not mix the versions and don't want to upgrade the lot to 3.7.x 
yet, so how do I downgrade the client back to 3.6.3?

I did "yum remove glusterfs glusterfs-fuse" and then edited the repo file 
(pasted below), but when I then do a subsequent "yum install glusterfs", it 
says it's going to install 3.7.1 again. How do I stop it doing this?

This is on CentOS 7.

Cheers,
Kingsley.

# cat /etc/yum.repos.d/glusterfs-epel.repo
# Place this file in your /etc/yum.repos.d/ directory

[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several 
petabytes.
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/pub.key
#gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several 
petabytes.
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/EPEL.repo/pub.key
#gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key

[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several 
petabytes. - Source 
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node restart

2015-06-09 Thread Andrus, Brian Contractor
Roger,
I was using the then-latest 3.7.0, and before that 3.6.3.
So far I have NOT had the issue with 3.7.1, so that makes me quite happy.

Brian Andrus


-Original Message-
From: Roger Lehmann [mailto:roger.lehm...@marktjagd.de] 
Sent: Monday, June 08, 2015 11:46 PM
To: Andrus, Brian Contractor; Justin Clift
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node 
restart

Hi Brian,

which version were you using before 3.7.1? Has the problem occurred with
version 3.7.1 in the meantime?

Greetings,
Roger Lehmann

On 04.06.2015 at 16:55, Andrus, Brian Contractor wrote:
> I have similar issues with gluster and am starting to wonder if it really is 
> stable for VM images.
>
> My setup is simple: 1X2=2
> I am mirroring a disk, basically.
>
> Trouble has been that the VM images (qcow2 files) go split-brained when one
> of the VMs gets busier than usual. Once that happens, heal often doesn't work,
> and while it is in that state the VM often becomes unresponsive. I've had to
> pick one of the qcow files from a brick, copy it somewhere, delete the file
> from gluster, and then copy the file from the brick back into gluster.
>
> Usually that works, but sometimes I have to run fsck on the image at boot to 
> clean things up.
> This is NOT stable, to be sure.
>
> Hopefully it is moot as I recently upgraded to 3.7.1 and we will see how that 
> goes. So far, so good.
>
>
> Brian Andrus
> ITACS/Research Computing
> Naval Postgraduate School
> Monterey, California
> voice: 831-656-6238
>
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Justin Clift
> Sent: Thursday, June 04, 2015 7:33 AM
> To: Roger Lehmann
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on 
> cluster node restart
>
> On 4 Jun 2015, at 15:08, Roger Lehmann  wrote:
> 
>> I couldn't really reproduce this in my test environment with GlusterFS 3.6.2 
>> but I had other problems while testing (may also be because of a virtualized 
>> test environment), so I don't want to upgrade to 3.6.2 until I definitely 
>> know the problems I encountered are fixed in 3.6.2.
> 
>
> Just to point out, version 3.6.3 was released a while ago.  It's 
> effectively 3.6.2 + bug fixes.  Have you looked at testing that? :)
>
> + Justin
>
> --
> GlusterFS - http://www.gluster.org
>
> An open source, distributed file system scaling to several petabytes, and 
> handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] replacing bad hd in a 1x2 replica

2015-06-04 Thread Andrus, Brian Contractor
All,

I have noticed that one of my HDs in a 1x2 replica is throwing sector errors.
Now I am wondering the best way to fix or replace it. The backing filesystem is 
xfs, so xfs_repair can only be used when it is offline.

So, can I unmount the brick, run the repairs to block out the bad sectors and 
bring it back online? When it comes back, do I have to worry about performance? 
The main files on the system are running VM images.
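
For completeness, the rough sequence I have in mind (untested; the volume name
is a placeholder, and only the one affected brick would be taken offline):

gluster volume status VOLNAME        # note the PID of the brick on the bad disk
kill <brick-pid>                     # stop just that brick process
umount /export/brick1                # placeholder mount point for the xfs brick
xfs_repair /dev/sdX1
mount /export/brick1
gluster volume start VOLNAME force   # respawn the repaired brick
gluster volume heal VOLNAME full     # let the healthy replica catch it up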


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node restart

2015-06-04 Thread Andrus, Brian Contractor
I have similar issues with gluster and am starting to wonder if it really is 
stable for VM images.

My setup is simple: 1X2=2
I am mirroring a disk, basically.

Trouble has been that the VM images (qcow2 files) go split-brained when one of
the VMs gets busier than usual. Once that happens, heal often doesn't work, and
while it is in that state the VM often becomes unresponsive. I've had to pick
one of the qcow files from a brick, copy it somewhere, delete the file from
gluster, and then copy the file from the brick back into gluster.
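
(For anyone hitting the same thing: one way to see which files are affected is
the standard heal-info output; the volume name below is a placeholder.)

gluster volume heal VOLNAME info               # files pending heal
gluster volume heal VOLNAME info split-brain   # files gluster considers split-brained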

Usually that works, but sometimes I have to run fsck on the image at boot to 
clean things up.
This is NOT stable, to be sure.

Hopefully it is moot as I recently upgraded to 3.7.1 and we will see how that 
goes. So far, so good.


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Justin Clift
Sent: Thursday, June 04, 2015 7:33 AM
To: Roger Lehmann
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node 
restart

On 4 Jun 2015, at 15:08, Roger Lehmann  wrote:

> I couldn't really reproduce this in my test environment with GlusterFS 3.6.2 
> but I had other problems while testing (may also be because of a virtualized 
> test environment), so I don't want to upgrade to 3.6.2 until I definitely 
> know the problems I encountered are fixed in 3.6.2.


Just to point out, version 3.6.3 was released a while ago.  It's effectively 
3.6.2 + bug fixes.  Have you looked at testing that? :)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several petabytes, and 
handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users