Gluster Community Newsletter, June 2016
Announcing 3.8!
As of June 14, 3.8 is released for general use.
The 3.8 release focuses on:
containers with inclusion of Heketi
hyperconvergence
ecosystem integration
protocol improvements with NFS Ganesha
2016-06-27 21:25 GMT+02:00 Joe Julian :
> In large clusters you're deploying them using automation tools. It's not
> that hard. That being said, the disperse translator can manage redundancy
> over a non-replicated volume which would give you that feature.
Also, disperse is nowhere near as powerful as CRUSH mapping, which can
be configured to be rack-aware. It would be nice to have that level of
configuration eventually.
On 06/27/2016 12:25 PM, Joe Julian wrote:
In large clusters you're deploying them using automation tools. It's not
that hard. That being said, the disperse translator can manage
redundancy over a non-replicated volume which would give you that feature.
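The disperse behaviour Joe describes can be sketched with the standard gluster CLI. This is a hedged example: the volume name, server names, and brick paths are hypothetical, and the redundancy level is just one possible choice.

```shell
# Create a dispersed (erasure-coded) volume: 6 bricks total, any 2 of
# which may fail without data loss -- redundancy without replication.
gluster volume create ec-vol disperse 6 redundancy 2 \
    server{1..6}:/data/brick1/ec-vol
gluster volume start ec-vol
```

With `disperse 6 redundancy 2`, usable capacity is that of 4 bricks, trading some space for fault tolerance without keeping full replicas.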
On 06/27/2016 12:18 PM, Gandalf Corvotempesta wrote:
If I remember properly, the brick order when creating a replicated
volume is important, as we have to list bricks in order to preserve
redundancy.
Any plans to fix this by allowing a 'random' order like Ceph? In Ceph
there is no need to create volumes by setting bricks in a particular order.
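The ordering constraint under discussion can be illustrated with the CLI. A hedged sketch, with hypothetical host and brick names: with `replica 2`, consecutive bricks in the argument list form a replica set, so the order decides which servers mirror each other.

```shell
# Bricks are grouped in listed order: (server1,server2) form one replica
# pair and (server3,server4) another. Listing two bricks of the same
# server consecutively would place both copies on one machine,
# silently losing host-level redundancy.
gluster volume create rep-vol replica 2 \
    server1:/data/brick1/rep-vol server2:/data/brick1/rep-vol \
    server3:/data/brick1/rep-vol server4:/data/brick1/rep-vol
```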
Dear Aravinda
My bad, you were right, I had used the hostname without the FQDN when
creating my slave volume... I guess I will simply re-create it.
Regards
ML
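Re-creating the session with the slave's FQDN might look like the following. This is a hedged sketch with hypothetical volume and host names; flags such as `push-pem` depend on how the slave was prepared.

```shell
# Re-create the geo-replication session using the slave's fully
# qualified domain name rather than the bare hostname
gluster volume geo-replication mastervol slave1.example.com::slavevol \
    create push-pem
gluster volume geo-replication mastervol slave1.example.com::slavevol start
```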
On Monday, June 27, 2016 7:55 AM, Aravinda wrote:
Hi ML,
Please let us know the output of gluster volume
@Anoop,
Where can I find the coredump file ?
The crash occurred 2 times in the last 7 days, each time on a Sunday
morning for no apparent reason, with no increase in traffic or anything
like that; the volume had been mounted for 15 days.
The bricks are used like a CDN, distributing small images and CSS
files.
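For anyone else hunting for the coredump, a hedged sketch of where to look; the actual location depends on the distribution, on `kernel.core_pattern`, and on whether systemd-coredump is in use.

```shell
# Where does this system write core dumps?
cat /proc/sys/kernel/core_pattern

# On systemd-based distributions, coredumpctl can list and extract them
coredumpctl list glusterfsd 2>/dev/null || true

# Otherwise, cores often land in the crashed process's working
# directory (typically / for gluster daemons) as core or core.<pid>
ls -l /core* 2>/dev/null || true
```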
On Mon, 2016-06-27 at 09:47 +0200, Yann LEMARIE wrote:
> Hi,
>
> I've been using GlusterFS for many years and have never seen this
> problem, but this is the second time in one week ...
>
> I have 3 volumes with 2 bricks, and 1 volume crashed for no reason,
Did you observe the crash while mounting the
On Mon, 2016-06-27 at 13:54 +0530, Atin Mukherjee wrote:
> +Anoop, Jiffin
>
> On Mon, Jun 27, 2016 at 1:50 PM, Yann LEMARIE
> wrote:
> > Thanks Atin,
> >
> > Here is the log with "backtrace"
> >
> > > time of crash:
> > > 2016-06-26 09:27:44
> > > configuration details:
>
This is probably a naive question, but could I know from which version
onwards extended ACLs were supported? Specifically, I wanted to
know from which version all setfacl/getfacl commands would work as expected.
Also, a broader question emanating from this is: would it be useful for a
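As a concrete illustration of the extended-ACL behaviour being asked about, a hedged sketch; it assumes a volume mounted with ACL support (`-o acl`), and the host, volume, mount point, and user name are all hypothetical.

```shell
# Mount the volume with ACL support enabled
mount -t glusterfs -o acl server1:/gv0 /mnt/gv0

# Grant user 'alice' read/write on a file via an extended ACL entry
touch /mnt/gv0/report.txt
setfacl -m u:alice:rw- /mnt/gv0/report.txt

# Inspect the resulting ACL entries
getfacl /mnt/gv0/report.txt
```

If `setfacl` fails with "Operation not supported", the mount is missing the `acl` option.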
On Mon, 2016-06-27 at 15:20 +0530, Anoop C S wrote:
> On Sun, 2016-06-26 at 14:48 +, Alan Hartless wrote:
> >
> > I had a glusterd server restart and now glusterfsd refused to
> > start.
> > When I try to start glusterd with
> > /usr/sbin/glusterd -N -p /var/run/glusterd.pid
> >
> > I get
On Sun, 2016-06-26 at 14:48 +, Alan Hartless wrote:
> I had a glusterd server restart and now glusterfsd refused to start.
> When I try to start glusterd with
> /usr/sbin/glusterd -N -p /var/run/glusterd.pid
>
> I get this:
> librdmacm: couldn't read ABI version.
> librdmacm: assuming: 4
>
On 06/27/2016 01:08 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Avra Sengupta"
To: "Vijay Bellur" , "Alastair Neil" ,
"gluster-users"
, "Niels de Vos" ,
+Anoop, Jiffin
On Mon, Jun 27, 2016 at 1:50 PM, Yann LEMARIE wrote:
> Thanks Atin,
>
> Here is the log with "backtrace"
>
> time of crash:
> 2016-06-26 09:27:44
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
Grep for "backtrace" in /var/log/glusterfs, that should give you a
particular log file name by which you can determine which process crashed.
You should also be able to see a coredump; could you attach it
for further analysis?
~Atin
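The grep step Atin suggests can be sketched as follows, assuming the default log location /var/log/glusterfs:

```shell
# Find which log file, and therefore which process, recorded the crash
grep -rl "backtrace" /var/log/glusterfs/

# Show the crash timestamp and surrounding context from matching logs
grep -rA5 "time of crash" /var/log/glusterfs/
```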
On Mon, Jun 27, 2016 at 1:17 PM, Yann LEMARIE
Hi,
I've been using GlusterFS for many years and have never seen this
problem, but this is the second time in one week ...
I have 3 volumes with 2 bricks, and 1 volume crashed for no reason; I just
have to stop/start the volume to bring it up again.
The only logs I can find are in syslog :
Jun 26
- Original Message -
> From: "Avra Sengupta"
> To: "Vijay Bellur" , "Alastair Neil"
> , "gluster-users"
> , "Niels de Vos" , "Raghavendra
> Gowdappa"
>
On 06/27/2016 12:04 PM, Avra Sengupta wrote:
On 06/25/2016 01:19 AM, Vijay Bellur wrote:
On 06/24/2016 02:12 PM, Alastair Neil wrote:
I upgraded my Fedora 23 system to F24 a couple of days ago, and now I am
unable to mount my Gluster cluster.
The update installed:
glusterfs-3.8.0-1.fc24.x86_64
glusterfs-libs-3.8.0-1.fc24.x86_64