[Gluster-users] 'ERROR: parsing the volfile failed' on fresh install

2017-12-12 Thread Ben Mabey
Hi all,
I’m trying out gluster by following the Quick Start guide on two fresh installs 
of Ubuntu 16.04. On one node I was able to install and start gluster just 
fine. On the other node I am running into the following:

$ sudo service glusterd start
Job for glusterd.service failed because the control process exited with error 
code. See "systemctl status glusterd.service" and "journalctl -xe" for details.



$ sudo journalctl -xe | tail
-- Unit glusterd.service has begun starting up.
Dec 12 20:17:46 u101410 GlusterFS[29360]: [glusterfsd.c:2004:parse_cmdline] 
0-glusterfs: ERROR: parsing the volfile failed [No such file or directory]
Dec 12 20:17:46 u101410 glusterd[29360]: USAGE: /usr/sbin/glusterd [options] 
[mountpoint]
Dec 12 20:17:46 u101410 systemd[1]: glusterd.service: Control process exited, 
code=exited status=255
Dec 12 20:17:46 u101410 systemd[1]: Failed to start GlusterFS, a clustered 
file-system server.
-- Subject: Unit glusterd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit glusterd.service has failed.


I wasn’t able to find much information regarding this error. I have tried 
removing gluster (via sudo apt autoremove glusterfs-server) and reinstalling, 
but that did not help.
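
The error reads like glusterd cannot find its own volfile at startup, so my 
next step is to check whether the default config survived the remove/reinstall. 
Roughly this (I'm assuming the package ships the volfile as 
/etc/glusterfs/glusterd.vol, so treat the paths as a guess):

$ ls -l /etc/glusterfs/glusterd.vol     # is the default volfile there at all?
$ sudo apt-get purge glusterfs-server   # purge so the config gets laid down again
$ sudo apt-get install glusterfs-server
$ sudo service glusterd start
$ ls -lt /var/log/glusterfs/            # newest glusterd log should show the real error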

Any ideas on what to try?

Thanks,
Ben
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Consultants?

2017-12-12 Thread Ben Mabey
Hi all,
The gluster website links to a consultants page that 404s: 
https://www.gluster.org/consultants/ 

Does anyone know of an actual list of consultants that offer gluster support? 
Or are there any on this mailing list?

Thanks,
Ben
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] active/active failover

2017-12-12 Thread Alex Chekholko
My understanding is that with GlusterFS you cannot change the type of an
existing volume. E.g. if your volume is composed of 2-way replicas, you can
add more replica pairs; if it's a distributed volume, you can add more
bricks.  But you can't convert a volume from one type to the other.
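
As a quick sketch of what I mean (volume and brick names are made up), growing
a 2-way replicated volume means adding bricks in pairs rather than changing
its type:

# add one more replica pair to an existing replica-2 volume
gluster volume add-brick myvol server3:/data/brick server4:/data/brick

# rebalance so existing data spreads onto the new pair
gluster volume rebalance myvol start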


On Tue, Dec 12, 2017 at 2:09 PM, Stefan Solbrig 
wrote:

> Hi Alex,
>
> Thank you for the quick reply!
>
> Yes, I'm aware that using “plain” hardware with replication is more what
> GlusterFS is designed for. I can't go into prices here in detail, but for me
> it more or less evens out.  Moreover, I have SAN hardware (because of Lustre)
> that I'd rather re-use than buy new hardware.  I'll test more to understand
> what precisely "replace-brick" changes.  I understand the mode of operation
> in case of replicated volumes.  But I was surprised (in a good way) that it
> also works for distributed volumes, i.e., I was surprised that gluster does
> not complain that the new brick already contains data.  Is there some
> technical documentation of the inner workings of glusterfs?
>
> This leads me to the question: if I wanted to extend my current
> installation (the one that uses SANs) with more standard hardware, is it
> possible to mix replicated and non-replicated bricks?  (I assume no... but
> I still dare to ask.)
>
> best wishes,
> Stefan
>
> On 11.12.2017 at 23:07, Alex Chekholko wrote:
>
> Hi Stefan,
>
> I think what you propose will work, though you should test it thoroughly.
>
> I think more generally, "the GlusterFS way" would be to use 2-way
> replication instead of a distributed volume; then you can lose one of your
> servers without outage.  And re-synchronize when it comes back up.
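>
> (A sketch of what I mean, with made-up volume and brick names:)
>
> # create a 2-way replicated volume across two servers
> gluster volume create gv0 replica 2 server1:/data/brick server2:/data/brick
> gluster volume start gv0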
>
> Chances are, if you weren't using the SAN volumes, you could have purchased
> two servers each with enough disk to make two copies of the data, all for
> less money...
>
> Regards,
> Alex
>
>
> On Mon, Dec 11, 2017 at 12:52 PM, Stefan Solbrig 
> wrote:
>
>> Dear all,
>>
>> I'm rather new to glusterfs but have some experience running larger Lustre
>> and BeeGFS installations. These filesystems provide active/active
>> failover.  Now, I discovered that I can also do this in glusterfs, although
>> I didn't find detailed documentation about it. (I'm using glusterfs 3.10.8)
>>
>> So my question is: can I really use glusterfs to do failover in the way
>> described below, or am I misusing glusterfs? (and potentially corrupting my
>> data?)
>>
>> My setup is: I have two servers (qlogin and gluster2) that access a
>> shared SAN storage. Both servers connect to the same SAN (SAS multipath)
>> and I implement locking via lvm2 and sanlock, so I can mount the same
>> storage on either server.
>> The idea is that normally each server serves one brick, but in case one
>> server fails, the other server can serve both bricks. (I'm not interested
>> in automatic failover, I'll always do this manually.  I could also use this
>> to do maintenance on one server, with only minimal downtime.)
>>
>>
>> #normal setup:
>> [root@qlogin ~]# gluster volume info g2
>> #...
>> # Volume Name: g2
>> # Type: Distribute
>> # Brick1: qlogin:/glust/castor/brick
>> # Brick2: gluster2:/glust/pollux/brick
>>
>> #  failover: let's artificially fail one server by killing one glusterfsd:
>> [root@qlogin] systemctl status glusterd
>> [root@qlogin] kill -9 
>>
>> # unmount brick
>> [root@qlogin] umount /glust/castor/
>>
>> # deactivate the LV
>> [root@qlogin] lvchange  -a n vgosb06vd05/castor
>>
>>
>> ###  now do the failover:
>>
>> # activate the same storage on the other server:
>> [root@gluster2] lvchange  -a y vgosb06vd05/castor
>>
>> # mount on other server
>> [root@gluster2] mount /dev/mapper/vgosb06vd05-castor  /glust/castor
>>
>> # now move the "failed" brick to the other server
>> [root@gluster2] gluster volume replace-brick g2
>> qlogin:/glust/castor/brick gluster2:/glust/castor/brick commit force
>> ### The last line is the one I have doubts about
>>
>> #now I'm in failover state:
>> #Both bricks on one server:
>> [root@qlogin ~]# gluster volume info g2
>> #...
>> # Volume Name: g2
>> # Type: Distribute
>> # Brick1: gluster2:/glust/castor/brick
>> # Brick2: gluster2:/glust/pollux/brick
>>
>>
>> Is it intended to work this way?
>>
>> Thanks a lot!
>>
>> best wishes,
>> Stefan
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] active/active failover

2017-12-12 Thread Stefan Solbrig
Hi Alex,

Thank you for the quick reply! 

Yes, I'm aware that using “plain” hardware with replication is more what 
GlusterFS is designed for. I can't go into prices here in detail, but for me it 
more or less evens out.  Moreover, I have SAN hardware (because of Lustre) that 
I'd rather re-use than buy new hardware.  I'll test more to understand what 
precisely "replace-brick" changes.  I understand the mode of operation in case 
of replicated volumes.  But I was surprised (in a good way) that it also works 
for distributed volumes, i.e., I was surprised that gluster does not complain 
that the new brick already contains data.  Is there some technical 
documentation of the inner workings of glusterfs?
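
For what it's worth, this is roughly how I plan to check what replace-brick 
actually changed (the volume name is from my setup; inspecting the xattrs is my 
assumption about where gluster keeps its brick metadata, so treat it as a 
sketch):

# compare the brick list before and after the replace-brick
gluster volume info g2

# inspect the extended attributes on the brick root, where gluster stores
# its volume id and layout information
getfattr -d -m . -e hex /glust/castor/brick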

This leads me to the question: if I wanted to extend my current installation 
(the one that uses SANs) with more standard hardware, is it possible to mix 
replicated and non-replicated bricks?  (I assume no... but I still dare to 
ask.)

best wishes,
Stefan

> On 11.12.2017 at 23:07, Alex Chekholko wrote:
> 
> Hi Stefan,
> 
> I think what you propose will work, though you should test it thoroughly.
> 
> I think more generally, "the GlusterFS way" would be to use 2-way replication 
> instead of a distributed volume; then you can lose one of your servers 
> without outage.  And re-synchronize when it comes back up.
> 
> Chances are, if you weren't using the SAN volumes, you could have purchased 
> two servers each with enough disk to make two copies of the data, all for 
> less money...
> 
> Regards,
> Alex
> 
> 
> On Mon, Dec 11, 2017 at 12:52 PM, Stefan Solbrig wrote:
> Dear all,
> 
> I'm rather new to glusterfs but have some experience running larger Lustre and 
> BeeGFS installations. These filesystems provide active/active failover.  Now, 
> I discovered that I can also do this in glusterfs, although I didn't find 
> detailed documentation about it. (I'm using glusterfs 3.10.8)
> 
> So my question is: can I really use glusterfs to do failover in the way 
> described below, or am I misusing glusterfs? (and potentially corrupting my 
> data?)
> 
> My setup is: I have two servers (qlogin and gluster2) that access a shared 
> SAN storage. Both servers connect to the same SAN (SAS multipath) and I 
> implement locking via lvm2 and sanlock, so I can mount the same storage on 
> either server.
> The idea is that normally each server serves one brick, but in case one 
> server fails, the other server can serve both bricks. (I'm not interested in 
> automatic failover, I'll always do this manually.  I could also use this to 
> do maintenance on one server, with only minimal downtime.)
> 
> 
> #normal setup:
> [root@qlogin ~]# gluster volume info g2
> #...
> # Volume Name: g2
> # Type: Distribute
> # Brick1: qlogin:/glust/castor/brick
> # Brick2: gluster2:/glust/pollux/brick
> 
> #  failover: let's artificially fail one server by killing one glusterfsd:
> [root@qlogin] systemctl status glusterd
> [root@qlogin] kill -9 
> 
> # unmount brick
> [root@qlogin] umount /glust/castor/
> 
> # deactivate the LV
> [root@qlogin] lvchange  -a n vgosb06vd05/castor
> 
> 
> ###  now do the failover:
> 
> # activate the same storage on the other server:
> [root@gluster2] lvchange  -a y vgosb06vd05/castor
> 
> # mount on other server
> [root@gluster2] mount /dev/mapper/vgosb06vd05-castor  /glust/castor
> 
> # now move the "failed" brick to the other server
> [root@gluster2] gluster volume replace-brick g2 qlogin:/glust/castor/brick 
> gluster2:/glust/castor/brick commit force
> ### The last line is the one I have doubts about
> 
> #now I'm in failover state:
> #Both bricks on one server:
> [root@qlogin ~]# gluster volume info g2
> #...
> # Volume Name: g2
> # Type: Distribute
> # Brick1: gluster2:/glust/castor/brick
> # Brick2: gluster2:/glust/pollux/brick
> 
> 
> Is it intended to work this way?
> 
> Thanks a lot!
> 
> best wishes,
> Stefan
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users 
> 
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Impossible to add new brick

2017-12-12 Thread Benjamin Knoth
Dear all,

I would like to add a new brick to the running Gluster volume.

I have 3 bricks on 3 different servers. Everything is fine on the
running system, and the application's connection to the filesystem is ok.

All 3 servers show a load between 0.5 and 1.5 in top. iotop is mostly idle,
and iftop shows between 1 and 5 MB of traffic.

Now I plan to add a new brick, backed by LVM, to the volume so that I can
create snapshots.


Adding this brick to the running gluster volume is easy; roughly the commands
below.
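
(Volume and brick names here are placeholders, and I'm assuming the new brick
raises the replica count, which is why self-heal kicks in afterwards:)

# brick directory on the new LVM logical volume
mkdir -p /bricks/newlv/brick

# add it to the running volume, stating the new replica count explicitly
gluster volume add-brick myvol replica 4 newserver:/bricks/newlv/brick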

Shortly after the brick is successfully added, the load in top explodes.

Self-heal is running on the new brick, and on all servers I see a load
of 40 and more.

The sync rate is very slow (150 MB in 5 minutes).

I have 3 VMs with 8 GB RAM and 2 CPU cores each, and 30 GB is used on the
Gluster volume.


What's wrong with the Gluster cluster?

I use Debian Jessie with the available glusterfs-server package from the
default repositories (Glusterfs-server 3.5.2).

Is it possible to turn self-heal on/off for a brick? (Something along the
lines of the options below is what I have in mind.)
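
(I'm not sure these options exist in 3.5.2, so take the names as an
assumption; the volume name is a placeholder:)

# stop the self-heal daemon for the volume
gluster volume set myvol cluster.self-heal-daemon off

# and/or disable client-side healing
gluster volume set myvol cluster.data-self-heal off
gluster volume set myvol cluster.metadata-self-heal off
gluster volume set myvol cluster.entry-self-heal off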

Could it help to upgrade to a newer version of glusterfs-server (3.8.8
from Jessie-Backports or 3.10.8 from gluster.org)? Is it possible to
directly upgrade the glusterfs-server from 3.5.2 to 3.8.8 or 3.10.8?


Thanks

Ben




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Developer Conversations, Jan 16, 15:00 UTC

2017-12-12 Thread Amye Scavarda
Interested in coming and giving a 5 minute lightning talk?
We'll do our next meeting on the 16th so you have plenty of time to
prepare (or forget about it over the end of the year, your choice!)

Respond on this thread with what you'd like to present; I'll post
another reminder as we get closer.

Here are the call details:
To join the meeting on a computer or mobile phone:
https://bluejeans.com/6203770120?src=calendarLink
Just want to dial in on your phone?

1.) Dial one of the following numbers:
 408-915-6466 (US)
 +1.888.240.2560 (US Toll Free)

See all numbers: https://www.redhat.com/en/conference-numbers
2.) Enter Meeting ID: 6203770120

3.) Press #

Want to test your video connection?
https://bluejeans.com/111

Thanks!

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users