Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-11-26 Thread André Bauer
> file when starting a VM. I can use qemu-img to create a blank file
> using the Gluster protocol but I cannot then start a VM using that
> file.
> 
> Error message:
> 
>  [MSGID: 104007] [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk]
> 0-glfs-mgmt: failed to fetch volume file (key:VM) [Invalid argument]
> [2016-08-20 11:28:02.985483] E [MSGID: 104024]
> [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect
> with remote-host: 127.0.0.1 (Permission denied) [Permission denied]
> 2016-08-20T11:28:03.979968Z qemu-system-x86_64: -drive
> 
> file=gluster://127.0.0.1/VM/vm1.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
> 
> Gluster connection failed for server=127.0.0.1 port=0 volume=VM
> image=vm1.qcow2 transport=tcp: Permission denied
> 
> Any assistance on changes to permissions or AppArmor in 16.04 would
> be greatly appreciated.
> 
> thanks
> Stephen
> 
> _______
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

-- 
Mit freundlichen Grüßen
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com <http://www.magix.com/>

Geschäftsführer | Managing Director: Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

Find us on:

<http://www.facebook.com/MAGIX> <http://www.twitter.com/magix_de>
<http://www.youtube.com/wwwmagixcom> <http://www.magixmagazin.de>
--
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What application workloads are too slow for you on gluster?

2016-09-27 Thread André Bauer
Ditto...

Am 24.09.2016 um 17:29 schrieb Kevin Lemonnier:
> On Sat, Sep 24, 2016 at 07:48:53PM +0530, Pranith Kumar Karampuri wrote:
>>hi,
>> I want to get a sense of the kinds of applications you tried
>> out on gluster but you had to find other alternatives because gluster
>> didn't perform well enough or the solution would become too expensive if
>> you moved to an all-SSD kind of setup.
> 
> Hi,
> 
> Web hosting is what comes to mind for me. Applications like PrestaShop,
> WordPress, some custom apps... I try to use DRBD as much as I can for
> that, since GlusterFS makes the sites just way too slow to use. I tried
> both fuse and NFS (not Ganesha, since I'm always on Debian; don't know
> if that matters). Using things like OPcache and moving the application's
> cache outside of the volume helps a lot, but that brings a whole load of
> other problems you can't always deal with, so most of the time I just
> don't use gluster for that.
> 
> The last time I really had to use gluster to host a web app, I ended up
> installing a VM with a disk stored on GlusterFS and configuring a simple
> NFS server; that was way faster than mounting a gluster volume directly
> on the web servers. At least that proves VM hosting works pretty well
> now, though!
> 
> I can't try tiering; unfortunately I don't have the option of getting
> hardware for that, but maybe that would indeed solve it, if it makes
> looking up lots of tiny files quicker.
> 
> 



Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-07-05 Thread André Bauer
Just for the record...

In the meantime I have also filed a bug in the AppArmor bug tracker:

https://bugs.launchpad.net/apparmor/+bug/1595451

Unfortunately, they have not been able to help so far either :-(

Regards
André

Am 22.06.2016 um 12:42 schrieb André Bauer:
> Hi Vijay,
> 
> I just used "tail -f /var/log/glusterfs/*.log" and also "tail -f
> /var/log/glusterfs/bricks/glusterfs-vmimages.log" on all 4 nodes to
> check for new log entries when trying to migrate a VM to the host.
> 
> There are no new log entries from the start of the VM migration until
> the error.
> 
> Does anybody have this (qemu / libgfapi access) running on Ubuntu 16.04?
> 
> Regards
> André
> 
> 
> 
> Am 17.06.2016 um 04:44 schrieb Vijay Bellur:
>> On Wed, Jun 15, 2016 at 8:07 AM, André Bauer  wrote:
>>> Hi Prasanna,
>>>
>>> Am 15.06.2016 um 12:09 schrieb Prasanna Kalever:
>>>
>>>>
>>>> I think you have missed enabling bind-insecure, which is needed for
>>>> libgfapi access. Please try again after following the steps below:
>>>>
>>>> => edit /etc/glusterfs/glusterd.vol by adding "option
>>>> rpc-auth-allow-insecure on" # (on all nodes)
>>>> => gluster vol set $volume server.allow-insecure on
>>>> => systemctl restart glusterd # (on all nodes)
>>>>
>>>
>>> No, that's not the case. All services are up and running correctly,
>>> allow-insecure is set, and the volume works fine with libgfapi access
>>> from my Ubuntu 14.04 KVM/Qemu servers.
>>>
>>> Only the server that was updated to Ubuntu 16.04 can't access the
>>> volume via libgfapi anymore (a fuse mount still works).
>>>
>>> The GlusterFS logs are empty when trying to access the GlusterFS
>>> nodes, so I think the requests are blocked on the client side.
>>>
>>> Maybe AppArmor again?
>>>
>>
>> It might be worth checking again whether there are any errors in
>> glusterd's log file on the server. libvirtd seems to indicate that
>> fetching the volume configuration file from glusterd failed.
>>
>> If there are no errors in glusterd or glusterfsd (brick) logs, then we
>> can possibly blame apparmor ;-).
>>
>> Regards,
>> Vijay
>>
> 
> 



Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-22 Thread André Bauer
Hi Vijay,

I just used "tail -f /var/log/glusterfs/*.log" and also "tail -f
/var/log/glusterfs/bricks/glusterfs-vmimages.log" on all 4 nodes to
check for new log entries when trying to migrate a VM to the host.

There are no new log entries from the start of the VM migration until
the error.

Does anybody have this (qemu / libgfapi access) running on Ubuntu 16.04?

Regards
André



Am 17.06.2016 um 04:44 schrieb Vijay Bellur:
> On Wed, Jun 15, 2016 at 8:07 AM, André Bauer  wrote:
>> Hi Prasanna,
>>
>> Am 15.06.2016 um 12:09 schrieb Prasanna Kalever:
>>
>>>
>>> I think you have missed enabling bind-insecure, which is needed for
>>> libgfapi access. Please try again after following the steps below:
>>>
>>> => edit /etc/glusterfs/glusterd.vol by adding "option
>>> rpc-auth-allow-insecure on" # (on all nodes)
>>> => gluster vol set $volume server.allow-insecure on
>>> => systemctl restart glusterd # (on all nodes)
>>>
>>
>> No, that's not the case. All services are up and running correctly,
>> allow-insecure is set, and the volume works fine with libgfapi access
>> from my Ubuntu 14.04 KVM/Qemu servers.
>>
>> Only the server that was updated to Ubuntu 16.04 can't access the
>> volume via libgfapi anymore (a fuse mount still works).
>>
>> The GlusterFS logs are empty when trying to access the GlusterFS
>> nodes, so I think the requests are blocked on the client side.
>>
>> Maybe AppArmor again?
>>
> 
> It might be worth checking again whether there are any errors in
> glusterd's log file on the server. libvirtd seems to indicate that
> fetching the volume configuration file from glusterd failed.
> 
> If there are no errors in glusterd or glusterfsd (brick) logs, then we
> can possibly blame apparmor ;-).
> 
> Regards,
> Vijay
> 



Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread André Bauer
Hi Prasanna,

Am 15.06.2016 um 12:09 schrieb Prasanna Kalever:

>
> I think you have missed enabling bind-insecure, which is needed for
> libgfapi access. Please try again after following the steps below:
>
> => edit /etc/glusterfs/glusterd.vol by adding "option
> rpc-auth-allow-insecure on" # (on all nodes)
> => gluster vol set $volume server.allow-insecure on
> => systemctl restart glusterd # (on all nodes)
>
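For reference, the three steps quoted above can be sketched as a short shell sequence. The glusterd.vol fragment below is a minimal stand-in written to /tmp so the edit is safe to demonstrate anywhere; on a real node you would edit /etc/glusterfs/glusterd.vol itself, and the volume name is a placeholder:

```shell
# Minimal stand-in for /etc/glusterfs/glusterd.vol, written to /tmp
# so the edit can be demonstrated without touching a real node.
cat > /tmp/glusterd.vol <<'EOF'
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
end-volume
EOF

# Splice the option in before "end-volume" unless it is already present.
grep -q 'rpc-auth-allow-insecure' /tmp/glusterd.vol || \
    sed -i '/end-volume/i option rpc-auth-allow-insecure on' /tmp/glusterd.vol

grep 'rpc-auth-allow-insecure' /tmp/glusterd.vol
# -> option rpc-auth-allow-insecure on

# On a real cluster (placeholders, not runnable here):
#   gluster volume set <volume> server.allow-insecure on   # once, any node
#   systemctl restart glusterd                             # on every node
```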

No, that's not the case. All services are up and running correctly,
allow-insecure is set, and the volume works fine with libgfapi access
from my Ubuntu 14.04 KVM/Qemu servers.

Only the server that was updated to Ubuntu 16.04 can't access the
volume via libgfapi anymore (a fuse mount still works).

The GlusterFS logs are empty when trying to access the GlusterFS nodes,
so I think the requests are blocked on the client side.

Maybe AppArmor again?

Regards
André

>
> --
> Prasanna
>
>>
>> I don't see anything in the apparmor logs when setting everything to
>> complain or audit.
>>
>> It also seems GlusterFS servers don't get any request because brick logs
>> are not complaining anything.
>>
>> Any hints?
>>
>>
>> --
>> Regards
>> André Bauer
>>
>



[Gluster-users] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread André Bauer
Hi Lists,

I just updated one of my Ubuntu KVM servers from 14.04 (Trusty) to 16.04
(Xenial).

I use the GlusterFS packages from the official Ubuntu PPA and my own
Qemu packages (
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 )
which have libgfapi enabled.

On Ubuntu 14.04 everything works fine. I only had to add the
following lines to the AppArmor config in
/etc/apparmor.d/abstractions/libvirt-qemu to get it working:

# for glusterfs
/proc/sys/net/ipv4/ip_local_reserved_ports r,
/usr/lib/@{multiarch}/glusterfs/**.so mr,
/tmp/** rw,
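After editing that abstraction, the profile that includes it has to be reloaded before the change takes effect. A sketch for Ubuntu; the profile path is an assumption, so check /etc/apparmor.d on your system:

```shell
# Reload the libvirt daemon profile so the edited abstraction is re-read
# (path as shipped on Ubuntu; adjust if your profile lives elsewhere).
apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd

# Per-domain profiles (libvirt-<uuid>) are regenerated by libvirtd when
# the guest is next started, so restarting the VM picks up the change.
```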

On Ubuntu 16.04 I'm not able to start my VMs via libvirt or to
create new images via qemu-img using libgfapi.

Mounting the volume via fuse does work without problems.

Examples:

qemu-img create gluster://storage.mydomain/vmimages/kvm2test.img 1G
Formatting 'gluster://storage.intdmz.h1.mdd/vmimages/kvm2test.img',
fmt=raw size=1073741824
[2016-06-15 08:15:26.710665] E [MSGID: 108006]
[afr-common.c:4046:afr_notify] 0-vmimages-replicate-0: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2016-06-15 08:15:26.710736] E [MSGID: 108006]
[afr-common.c:4046:afr_notify] 0-vmimages-replicate-1: All subvolumes
are down. Going offline until atleast one of them comes back up.

Libvirtd log:

[2016-06-13 16:53:57.055113] E [MSGID: 104007]
[glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch
volume file (key:vmimages) [Invalid argument]
[2016-06-13 16:53:57.055196] E [MSGID: 104024]
[glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
remote-host: storage.intdmz.h1.mdd (Permission denied) [Permission denied]
2016-06-13T16:53:58.049945Z qemu-system-x86_64: -drive
file=gluster://storage.intdmz.h1.mdd/vmimages/checkbox.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback:
Gluster connection failed for server=storage.intdmz.h1.mdd port=0
volume=vmimages image=checkbox.qcow2 transport=tcp: Permission denied

I don't see anything in the AppArmor logs when setting everything to
complain or audit.

It also seems the GlusterFS servers don't get any requests, because the
brick logs are not complaining about anything.

Any hints?


-- 
Regards
André Bauer


Re: [Gluster-users] Automatic arbiter volumes on distributed/replicated volumes with replica 2?

2016-03-31 Thread André Bauer
OK, thanks.

As I understand it, quorum does not work on a 2-node replica 2 cluster.
That's the reason VM images go read-only if one node goes down.

To get it working you need replica 3 and therefore a full third node, or
at least an arbiter.

Why is this also the case with a 4-node replica 2 cluster?

I use the other 2 nodes for distributed/replicated volumes.
Imho this should be enough to get proper quorum?
If not, why?

Imho the distribution nodes could also do the work the arbiter does in
a 4-node replicated/distributed setup?
Is this something that makes sense from a technical view?

If it's technically possible but just not a feature at the moment, I
would really like to see it in the future.
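For anyone landing here: until something like that exists, the quorum behaviour itself is tunable per volume. A hedged sketch of the relevant options (the volume name is a placeholder; the exact semantics are in the admin guide Ravi links in his reply):

```shell
# Client-side quorum: with "auto", writes need a majority of the replica
# set, which is why replica 2 goes read-only when one brick is down.
gluster volume set <volume> cluster.quorum-type auto

# Server-side quorum: glusterd stops bricks when the trusted pool loses
# its majority; the ratio is set cluster-wide on "all".
gluster volume set <volume> cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
```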

Regards
André


Am 30.03.2016 um 03:15 schrieb Ravishankar N:
> On 03/30/2016 01:33 AM, André Bauer wrote:
>> Am 24.03.2016 um 13:56 schrieb Ravishankar N:
>>> On 03/24/2016 04:30 PM, André Bauer wrote:
>>>> So if you have a 4 node cluster, is it really needed to have a third
>>>> replica? Imho 2 of the nodes could also be used as arbiters?
>>> I'm not sure I understand. The 'arbiter' volume is a special type of
>>> replica volume where the 3rd brick of that replica (for every replica)
>>> only holds metadata. So if you're asking if this brick itself can be
>>> co-located on a node which holds the other 'data' bricks of the volume,
>>> then yes that is possible.
>> My question is:
>>
>> If I have 4 nodes and use replica 2, why should I need to add 2 more
>> arbiter nodes, when I also have 2 (distributed) nodes which could do the
>> arbiter job automatically?
> Again, the term 'arbiter' is used to refer to a type of replica-3 volume
> in gluster parlance (at least until another feature comes that uses the
> same terminology ;-) ). A replica-2 configuration does not have an
> 'arbiter'.
> 
>>
>> Imho 4 nodes should be enough to get proper quorum even if only replica
>> 2 is used.
> There are both client and server quorums in gluster.
> http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
> has more information.
> Thanks,
> Ravi
>>
>> Regards
>> André
>>
>>> -Ravi
>>>> Does it make sense to open a feature request in the bugtracker?
>>>>
>>>> Regards
>>>> André
>>>>
>>>> Am 24.03.2016 um 11:02 schrieb Ravishankar N:
>>>>> On 03/24/2016 02:39 PM, André Bauer wrote:
>>>>>> Hi List,
>>>>>>
>>>>>> we just upgraded our 4-node cluster from 3.5.8 to 3.7.8.
>>>>>>
>>>>>> Because of replica 2 on all volumes I ran into problems with
>>>>>> read-only file systems of VM images when running 3.5.x. As I know
>>>>>> now, the solution would be to have replica 3 or at least use
>>>>>> arbiter volumes.
>>>>>>
>>>>>> Yesterday I stumbled over this post on the list, which I missed
>>>>>> before (damn spam filter):
>>>>>>
>>>>>> https://www.gluster.org/pipermail/gluster-users/2015-November/024191.html
>>>>>>
>>>>>>
>>>>>>
>>>>>> Steve Dainard points out that 3.7.x uses an automatic arbiter
>>>>>> when you have 4 nodes configured as distributed/replicated.
>>>>>>
>>>>>> Is this true? I could not find anything about it in the
>>>>>> documentation :-/
>>>>> There is no 'automatic' arbiter for replica 2. I think he was
>>>>> referring
>>>>> to the dummy node peer probed for maintaining server quorum.
>>>>> -Ravi
>>>>>> It would be nice if I could save on having 2 more nodes this way.
>>>>>>
>>>>>> If not, is there a chance to see such a feature in the future?
>>>>>>
>>>>>>
>>>>>
>>>
>>>
>>
> 
> 
> 



Re: [Gluster-users] Automatic arbiter volumes on distributed/replicated volumes with replica 2?

2016-03-31 Thread André Bauer
Am 24.03.2016 um 13:56 schrieb Ravishankar N:
> On 03/24/2016 04:30 PM, André Bauer wrote:
>> So if you have a 4 node cluster, is it really needed to have a third
>> replica? Imho 2 of the nodes could also be used as arbiters?
> I'm not sure I understand. The 'arbiter' volume is a special type of
> replica volume where the 3rd brick of that replica (for every replica)
> only holds metadata. So if you're asking if this brick itself can be
> co-located on a node which holds the other 'data' bricks of the volume,
> then yes that is possible.

My question is:

If I have 4 nodes and use replica 2, why should I need to add 2 more
arbiter nodes, when I also have 2 (distributed) nodes which could do the
arbiter job automatically?

Imho 4 nodes should be enough to get proper quorum even if only replica
2 is used.

Regards
André

> -Ravi
>>
>> Does it make sense to open a feature request in the bugtracker?
>>
>> Regards
>> André
>>
>> Am 24.03.2016 um 11:02 schrieb Ravishankar N:
>>> On 03/24/2016 02:39 PM, André Bauer wrote:
>>>> Hi List,
>>>>
>>>> we just upgraded our 4-node cluster from 3.5.8 to 3.7.8.
>>>>
>>>> Because of replica 2 on all volumes I ran into problems with
>>>> read-only file systems of VM images when running 3.5.x. As I know
>>>> now, the solution would be to have replica 3 or at least use
>>>> arbiter volumes.
>>>>
>>>> Yesterday I stumbled over this post on the list, which I missed
>>>> before (damn spam filter):
>>>>
>>>> https://www.gluster.org/pipermail/gluster-users/2015-November/024191.html
>>>>
>>>>
>>>> Steve Dainard points out that 3.7.x uses an automatic arbiter
>>>> when you have 4 nodes configured as distributed/replicated.
>>>>
>>>> Is this true? I could not find anything about it in the
>>>> documentation :-/
>>> There is no 'automatic' arbiter for replica 2. I think he was referring
>>> to the dummy node peer probed for maintaining server quorum.
>>> -Ravi
>>>> It would be nice if I could save on having 2 more nodes this way.
>>>>
>>>> If not, is there a chance to see such a feature in the future?
>>>>
>>>>
>>>
>>>
>>
> 
> 
> 



Re: [Gluster-users] Automatic arbiter volumes on distributed/replicated volumes with replica 2?

2016-03-24 Thread André Bauer
So if you have a 4 node cluster, is it really needed to have a third
replica? Imho 2 of the nodes could also be used as arbiters?

Does it make sense to open a feature request in the bug tracker?

Regards
André

Am 24.03.2016 um 11:02 schrieb Ravishankar N:
> On 03/24/2016 02:39 PM, André Bauer wrote:
>> Hi List,
>>
>> we just upgraded our 4-node cluster from 3.5.8 to 3.7.8.
>>
>> Because of replica 2 on all volumes I ran into problems with read-only
>> file systems of VM images when running 3.5.x. As I know now, the
>> solution would be to have replica 3 or at least use arbiter volumes.
>>
>> Yesterday I stumbled over this post on the list, which I missed before
>> (damn spam filter):
>>
>> https://www.gluster.org/pipermail/gluster-users/2015-November/024191.html
>>
>> Steve Dainard points out that 3.7.x uses an automatic arbiter when
>> you have 4 nodes configured as distributed/replicated.
>>
>> Is this true? I could not find anything about it in the documentation :-/
> There is no 'automatic' arbiter for replica 2. I think he was referring
> to the dummy node peer probed for maintaining server quorum.
> -Ravi
>> It would be nice if I could save on having 2 more nodes this way.
>>
>> If not, is there a chance to see such a feature in the future?
>>
>>
> 
> 
> 



[Gluster-users] Automatic arbiter volumes on distributed/replicated volumes with replica 2?

2016-03-24 Thread André Bauer
Hi List,

we just upgraded our 4-node cluster from 3.5.8 to 3.7.8.

Because of replica 2 on all volumes I ran into problems with read-only
file systems of VM images when running 3.5.x. As I know now, the
solution would be to have replica 3 or at least use arbiter volumes.

Yesterday I stumbled over this post on the list, which I missed before
(damn spam filter):

https://www.gluster.org/pipermail/gluster-users/2015-November/024191.html

Steve Dainard points out that 3.7.x uses an automatic arbiter when
you have 4 nodes configured as distributed/replicated.

Is this true? I could not find anything about it in the documentation :-/

It would be nice if I could save on having 2 more nodes this way.

If not, is there a chance to see such a feature in the future?


-- 
Regards
André Bauer

Re: [Gluster-users] Arbiter doesn't create

2016-03-23 Thread André Bauer
The third brick should be the arbiter.
I'm not sure whether it is supposed to be marked as an arbiter in the
volume info.

Try putting data on it.
Brick 3 should stay empty and receive only metadata.
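A quick way to verify that the third brick really behaves as an arbiter is to write through the mount and inspect the brick directly. The mount point is an assumption and the brick paths are taken from this thread; adjust both to your setup:

```shell
# On a client, write through the mounted volume:
echo probe > /mnt/gv0/arbiter-probe.txt

# On the third node (d90034 here): a true arbiter brick keeps the file
# name and metadata but no data, so the size should be 0 bytes.
stat -c '%s' /data/brick0/arbiter-probe.txt

# The gluster metadata lives in extended attributes:
getfattr -d -m . -e hex /data/brick0/arbiter-probe.txt
```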

Regards
André

Am 23.03.2016 um 14:33 schrieb Ralf Simon:
> Hello,
> 
> I've installed 
> 
> # yum info glusterfs-server
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
> Installed Packages
> Name: glusterfs-server
> Arch: x86_64
> Version : 3.7.6
> Release : 1.el7
> Size: 4.3 M
> Repo: installed
> From repo   : latest
> Summary : Clustered file-system server
> URL : http://www.gluster.org/docs/index.php/GlusterFS
> License : GPLv2 or LGPLv3+
> Description : GlusterFS is a distributed file-system capable of scaling
> to several
> : petabytes. It aggregates various storage bricks over
> Infiniband RDMA
> : or TCP/IP interconnect into one large parallel network file
> : system. GlusterFS is one of the most sophisticated file
> systems in
> : terms of features and extensibility.  It borrows a
> powerful concept
> : called Translators from GNU Hurd kernel. Much of the code
> in GlusterFS
> : is in user space and easily manageable.
> :
> : This package provides the glusterfs server daemon.
> 
> I wanted to build a ...
> 
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0
> volume create: gv0: success: please start the volume to access data
> 
> ... but I got a ...
> 
> # gluster volume info
> 
> Volume Name: gv0
> Type: Replicate
> Volume ID: 329325fc-ceed-4dee-926f-038f44281678
> Status: Created
> Number of Bricks: *1 x 3 = 3*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Options Reconfigured:
> performance.readdir-ahead: on
> 
> ... without the requested arbiter !
> 
> The same situation with 6 bricks ...
> 
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0 d90029:/data/brick1
> d90031:/data/brick1 d90034:/data/brick1
> volume create: gv0: success: please start the volume to access data
> [root@d90029 ~]# gluster vol info
> 
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 2b8dbcc0-c4bb-41e3-a870-e164d8d10c49
> Status: Created
> Number of Bricks: *2 x 3 = 6*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Brick4: d90029:/data/brick1
> Brick5: d90031:/data/brick1
> Brick6: d90034:/data/brick1
> Options Reconfigured:
> performance.readdir-ahead: on
> 
> 
> In contrast, the documentation says:
> 
> 
> *Arbiter configuration*
> 
> The arbiter configuration a.k.a. the arbiter volume is the perfect sweet
> spot between a 2-way replica and 3-way replica to avoid files getting
> into split-brain, */without the 3x storage space/* as mentioned earlier.
> The syntax for creating the volume is:
> 
> *gluster volume create <VOLNAME> replica 3 arbiter 1 host1:brick1
> host2:brick2 host3:brick3*
> 
> For example:
> 
> *gluster volume create testvol replica 3 arbiter 1
> 127.0.0.2:/bricks/brick{1..6} force*
> 
> volume create: testvol: success: please start the volume to access data
> 
> *gluster volume info*
> 
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: ae6c4162-38c2-4368-ae5d-6bad141a4119
> Status: Created
> Number of Bricks: *2 x (2 + 1) = 6*
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/bricks/brick1
> Brick2: 127.0.0.2:/bricks/brick2
> Brick3: 127.0.0.2:/bricks/brick3 *(arbiter)*
> Brick4: 127.0.0.2:/bricks/brick4
> Brick5: 127.0.0.2:/bricks/brick5
> Brick6: 127.0.0.2:/bricks/brick6 *(arbiter)*
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> 
> 
> 
> What's going wrong? Can anybody help?
> 
> Kind Regards
> Ralf Simon
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
aba...@magix.net <mailto:Email>
www.magix.com <http://www.magix.com/>

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

Find us on:

Re: [Gluster-users] Trying XenServer again with Gluster

2016-03-22 Thread André Bauer
Hi Russell,

I'm a KVM user, but IMHO Xen also supports accessing VM images through
libgfapi, so you don't need to mount via NFS or the FUSE client.

Infos:
http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
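For reference, a libgfapi-backed disk in a libvirt domain definition looks roughly like this (a sketch following the page linked above; the volume name, image name, and host are assumptions):

```xml
<!-- sketch: disk served over the gluster network protocol (libgfapi),
     no FUSE/NFS mount involved -->
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='vmimages/vm1.qcow2'>
    <host name='storage1.domain.local' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```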

The second point is that you need at least 3 replicas to get a working
HA setup, because server quorum does not work for 2 replicas.

Infos:
https://www.gluster.org/pipermail/gluster-users/2015-November/024189.html

Regards
André


On 20.03.2016 at 19:41, Russell Purinton wrote:
> Hi all, Once again I’m trying to get XenServer working reliably with
> GlusterFS storage for the VHDs. I’m mainly interested in the ability to
> have a pair of storage servers, where if one goes down, the VMs can keep
> running uninterrupted on the other server. So, we’ll be using the
> replicate translator to make sure all the data resides on both servers.
> 
> So initially, I tried using the Gluster NFS server. XenServer supports
> NFS out of the box, so this seemed like a good way to go without having
> to hack XenServer much. I found some major performance issues with this
> however.
> 
> I’m using a server with 12 SAS drives on a single RAID card, with dual
> 10GbE NICs. Without Gluster, using the normal Kernel NFS server, I can
> read and write to this server at over 400MB/sec. VMS run well. However
> when I switch to Gluster for the NFS server, my write performance drops
> to 20MB/sec. Read performance remains high. I found out this is due to
> XenServer’s use of O_DIRECT for VHD access. It helped a lot when the
> server had DDR cache on the RAID card, but for servers without that the
> performance was unusable.
> 
> So I installed the gluster-client in XenServer itself, and mounted the
> volume in dom0. I then created a SR of type “file”. Success, sort of! I
> can do just about everything on that SR, VMs run nicely, and performance
> is acceptable at 270MB/sec, BUT…. I have a problem when I transfer an
> existing VM to it. The transfer gets only so far along then data stops
> moving. XenServer still says it’s copying, but no data is being sent. I
> have to force restart the XenHost to clear the issue (and the VM isn’t
> moved). Other file access to the FUSE mount still works, and other VMs
> are unaffected.
> 
> I think the problem may possibly involve file locks or perhaps a
> performance translator. I’ve tried disabling as many performance
> translators as I can, but no luck.
> 
> I didn’t find anything interesting in the logs, and no crash dumps. I
> tried to do a volume statedump to see the list of locks, but it seemed
> to only output some cpu stats in /tmp.
> 
> Is there a generally accepted list of volume options to use with Gluster
> for volumes meant to store VHDs? Has anyone else had a similar
> experience with VHD access locking up?
> 
> Russell
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 




Re: [Gluster-users] Convert existing volume to shard volume

2016-03-22 Thread André Bauer
Thanks for the Info...

On 17.03.2016 at 18:21, Krutika Dhananjay wrote:
> If you want the existing files in your volume to get sharded, you would
> need to
> a. enable sharding on the volume and configure block size, both of which
> you have already done,
> b. cp the file(s) into the same volume with temporary names
> c. once done, you can rename the temporary paths back to their old names.
> 
> HTH,
> Krutika
> 
> On Thu, Mar 17, 2016 at 9:51 PM, André Bauer <aba...@magix.net> wrote:
> 
> Hi List,
> 
> I just upgraded from 3.5.8 to 3.7.8 and want to convert my existing VM
> images volume to a shard volume now:
> 
> gluster volume set dis-rep features.shard on
> gluster volume set dis-rep features.shard-block-size 16MB
> 
> How are the existing image files handled?
> Do I need to start a rebalance to convert the existing files?
> 
> Or is it better to start with an empty volume?
> If so, why?
> 
> 
> --
> Regards
> André
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
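The copy-and-rename procedure Krutika describes in steps (b) and (c) can be sketched as a small shell loop. This is only a sketch: MOUNT and the image names are assumptions (point MOUNT at the FUSE mount of the volume, e.g. /mnt/dis-rep; it falls back to a scratch directory here so the commands run anywhere).

```shell
# Sketch of steps (b) and (c): copy each image to a temporary name,
# then rename it back. The copy is written through the volume, so on a
# shard-enabled volume the new file gets sharded.
MOUNT=${MOUNT:-$(mktemp -d)}
touch "$MOUNT/vm1.qcow2"      # stand-in for an existing, unsharded image
for f in "$MOUNT"/*.qcow2; do
  cp "$f" "$f.tmp"            # write a sharded copy under a temporary name
  mv "$f.tmp" "$f"            # rename the sharded copy to the original name
done
ls "$MOUNT"                   # -> vm1.qcow2
```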



[Gluster-users] Convert existing volume to shard volume

2016-03-20 Thread André Bauer
Hi List,

I just upgraded from 3.5.8 to 3.7.8 and want to convert my existing VM
images volume to a shard volume now:

gluster volume set dis-rep features.shard on
gluster volume set dis-rep features.shard-block-size 16MB

How are the existing image files handled?
Do I need to start a rebalance to convert the existing files?

Or is it better to start with an empty volume?
If so, why?


-- 
Regards
André


Re: [Gluster-users] [Gluster-devel] VM fs becomes read only when one gluster node goes down

2015-11-02 Thread André Bauer
Thanks for the hints, guys :-)

I think I will try to use an arbiter. As I use distributed/replicated
volumes, I think I have to add 2 arbiters, right?

My nodes have 10 GBit interfaces. Would 1 GBit be enough for the arbiter(s)?
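Yes, a 2 x 2 distributed-replicated volume needs one arbiter brick per replica pair. Assuming a GlusterFS release whose add-brick command supports converting a replica-2 volume to arbiter (this form is not available in all releases, and the host/brick paths below are made up), the change could look like:

```shell
# sketch: add one arbiter brick per replica pair (two pairs here)
gluster volume add-brick vmimages replica 3 arbiter 1 \
  arbiter-host:/glusterfs/arbiter0 arbiter-host:/glusterfs/arbiter1
```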

Regards
André


On 28.10.2015 at 14:38, Diego Remolina wrote:
> I am running Ovirt and self-hosted engine with additional vms on a
> replica two gluster volume. I have an "arbiter" node and set quorum
> ratio to 51%. The arbiter node is just another machine with the
> glusterfs bits installed that is part of the gluster peers but has no
> bricks to it.
> 
> You will have to be very careful where you put these three machines if
> they are going to go in separate server rooms or buildings. There are
> pros and cons to distribution of the nodes and network topology may
> also influence that.
> 
> In my case, this is on a campus, I have machines in 3 separate
> buildings and all machines are on the same main campus router (we have
> more than one main router). All machines connected via 10 gbps. If I
> had one node with bricks and the arbiter in the same building and that
> building went down (power/AC/chill water/network), then the other node
> with bricks would be useless. This is why I have machines in 3
> different buildings. Oh, and this is because most of the client
> systems are not even in the same building as the servers. If my client
> machines and servers where in the same building, then doing one node
> with bricks and arbiter in that same building could make sense.
> 
> HTH,
> 
> Diego
> 
> 
> 
> 
> On Wed, Oct 28, 2015 at 5:25 AM, Niels de Vos  wrote:
>> On Tue, Oct 27, 2015 at 07:21:35PM +0100, André Bauer wrote:
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA256
>>>
>>> Hi Niels,
>>>
>>> my network.ping-timeout was already set to 5 seconds.
>>>
>>> Unfortunately it seems I don't have the timeout setting in Ubuntu 14.04
>>> for my vda disk.
>>>
>>> ls -al /sys/block/vda/device/ gives me only:
>>>
>>> drwxr-xr-x 4 root root    0 Oct 26 20:21 ./
>>> drwxr-xr-x 5 root root    0 Oct 26 20:21 ../
>>> drwxr-xr-x 3 root root    0 Oct 26 20:21 block/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 device
>>> lrwxrwxrwx 1 root root    0 Oct 27 18:13 driver ->
>>> ../../../../bus/virtio/drivers/virtio_blk/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 features
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 modalias
>>> drwxr-xr-x 2 root root    0 Oct 27 18:13 power/
>>> -r--r--r-- 1 root root 4096 Oct 27 18:13 status
>>> lrwxrwxrwx 1 root root    0 Oct 26 20:21 subsystem ->
>>> ../../../../bus/virtio/
>>> -rw-r--r-- 1 root root 4096 Oct 26 20:21 uevent
>>> -r--r--r-- 1 root root 4096 Oct 26 20:21 vendor
>>>
>>>
>>> Is the quorum setting a problem if you only have 2 replicas?
>>>
>>> My volume has this quorum options set:
>>>
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>>
>>> As I understand the documentation (
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
>>> ), cluster.server-quorum-ratio is set to "< 50%" by default, which can
>>> never happen if you only have 2 replicas and one node goes down, right?
>>>
>>> Do I need cluster.server-quorum-ratio = 50% in this case?
>>
>> Replica 2 for VM storage is troublesome. Sahine just responded very
>> nicely to a very similar email:
>>
>>   
>> http://thread.gmane.org/gmane.comp.file-systems.gluster.user/22818/focus=22823
>>
>> HTH,
>> Niels
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 



Re: [Gluster-users] [Gluster-devel] VM fs becomes read only when one gluster node goes down

2015-10-27 Thread André Bauer

Hi Niels,

my network.ping-timeout was already set to 5 seconds.

Unfortunately it seems I don't have the timeout setting in Ubuntu 14.04
for my vda disk.

ls -al /sys/block/vda/device/ gives me only:

drwxr-xr-x 4 root root    0 Oct 26 20:21 ./
drwxr-xr-x 5 root root    0 Oct 26 20:21 ../
drwxr-xr-x 3 root root    0 Oct 26 20:21 block/
-r--r--r-- 1 root root 4096 Oct 27 18:13 device
lrwxrwxrwx 1 root root    0 Oct 27 18:13 driver ->
../../../../bus/virtio/drivers/virtio_blk/
-r--r--r-- 1 root root 4096 Oct 27 18:13 features
-r--r--r-- 1 root root 4096 Oct 27 18:13 modalias
drwxr-xr-x 2 root root    0 Oct 27 18:13 power/
-r--r--r-- 1 root root 4096 Oct 27 18:13 status
lrwxrwxrwx 1 root root    0 Oct 26 20:21 subsystem ->
../../../../bus/virtio/
-rw-r--r-- 1 root root 4096 Oct 26 20:21 uevent
-r--r--r-- 1 root root 4096 Oct 26 20:21 vendor


Is the quorum setting a problem if you only have 2 replicas?

My volume has this quorum options set:

cluster.quorum-type: auto
cluster.server-quorum-type: server

As I understand the documentation (
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html
), cluster.server-quorum-ratio is set to "< 50%" by default, which can
never happen if you only have 2 replicas and one node goes down, right?

Do I need cluster.server-quorum-ratio = 50% in this case?
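For completeness, the server-quorum ratio is set cluster-wide rather than per volume; a hedged sketch of raising it above 50% (the 51% value is an assumption, and on a two-node cluster this trades availability for consistency, since losing one node stops the bricks on the survivor):

```shell
# applies to all volumes; "all" is required for this option
gluster volume set all cluster.server-quorum-ratio 51%
```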



@ Josh

Qemu had this in its log for the time the VM got a read-only fs:

[2015-10-22 17:44:42.60] E [socket.c:2244:socket_connect_finish]
0-vmimages-client-2: connection to 192.168.0.43:24007 failed
(Connection refused)
[2015-10-22 17:45:03.411721] E
[client-handshake.c:1760:client_query_portmap_cbk]
0-vmimages-client-2: failed to get the port number for remote
subvolume. Please run 'gluster volume status' on server to see if
brick process is running.

netstat looks good. As expected, I've got connections to all 4 GlusterFS
nodes at the moment.



@ Eivind
I don't think I had a split brain.
Only the VM got a read-only filesystem, not the file on the GlusterFS node.



Regards
André

On 26.10.2015 at 21:56, Niels de Vos wrote:
> 
> There are at least two timeouts that are involved in this problem:
> 
> 1. The filesystem in a VM can go read-only when the virtual disk
> where the filesystem is located does not respond for a while.
> 
> 2. When a storage server that holds a replica of the virtual disk 
> becomes unreachable, the Gluster client (qemu+libgfapi) waits for 
> max. network.ping-timeout seconds before it resumes I/O.
> 
> Once a filesystem in a VM goes read-only, you might be able to fsck
> and re-mount it read-writable again. It is not something a VM will
> do by itself.
> 
> 
> The timeouts for (1) are set in sysfs:
> 
> $ cat /sys/block/sda/device/timeout
> 30
> 
> 30 seconds is the default for SD-devices, and for testing you can
> change it with an echo:
> 
> # echo 300 > /sys/block/sda/device/timeout
> 
> This is not a persistent change; you can create a udev rule to apply
> this change at bootup.
> 
> Some of the filesystem offer a mount option that can change the 
> behaviour after a disk error is detected. "man mount" shows the
> "errors" option for ext*. Changing this to "continue" is not
> recommended, "abort" or "panic" will be the most safe for your
> data.
> 
> 
> The timeout mentioned in (2) is for the Gluster Volume, and checked
> by the client. When a client does a write to a replicated volume,
> the write needs to be acknowledged by both/all replicas. The client
> (libgfapi) delays the reply to the application (qemu) until
> both/all replies from the replicas has been received. This delay is
> configured as the volume option network.ping-timeout (42 seconds by
> default).
> 
> 
> Now, if the VM returns block errors after 30 seconds, and the
> client waits up to 42 seconds for recovery, there is an issue...
> So, your solution could be to increase the timeout for error
> detection of the disks inside the VMs, and/or decrease the
> network.ping-timeout.
> 
> It would be interesting to know if adapting these values prevents
> the read-only occurrences in your environment. If you do any
> testing with this, please keep me informed about the results.
> 
> Niels
> 
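The udev rule Niels mentions could look roughly like this (a sketch: the file name and match keys are assumptions, and note that this attribute exists for SCSI-type sd* devices only, not for virtio-blk vda devices, which is exactly the gap discussed above):

```
# /etc/udev/rules.d/99-disk-timeout.rules (sketch)
# Make the 300-second disk error timeout persistent across reboots.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="300"
```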



Re: [Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-26 Thread André Bauer
Just some. But I think the reason is that some VM images are replicated
on nodes 1 & 2 and some on nodes 3 & 4, because I use a
distributed/replicated volume.

You're right. I think i have to try it on a testsetup.

At the moment I'm also not completely sure if it's a GlusterFS problem
(not connecting to the node with the replicated file immediately when
read/write fails) or a problem of the filesystem (ext4 goes read-only
on error too early)?


Regards
André

On 26.10.2015 at 20:23, Josh Boon wrote:
> Hmm even five should be OK.  Do you lose all VMs or just some? 
> 
> Also, we had issues with
> 
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> 
> and had to instead go with
> 
> cluster.server-quorum-type: none
> cluster.quorum-type: none
> 
> though we only replicate instead of distribute and replicate, so I'd be wary of 
> changing those without advice from folks more familiar with the impact on 
> your config. 
> 
> gfapi upon connect gets the volume file and is aware of the configuration and 
> changes to it so it should be OK when a node is lost since it knows where the 
> other nodes are. 
> 
> If you have a lab with your gluster config setup and you lose all of your 
> VM's I'd suggest trying my config to see what happens.  The gluster logs and 
> qemu clients could also have some tips on what happens when a node 
> disappears. 
> - Original Message -
> From: "André Bauer" 
> To: "Josh Boon" 
> Cc: "Krutika Dhananjay" , "gluster-users" 
> , gluster-de...@gluster.org
> Sent: Monday, October 26, 2015 7:08:15 PM
> Subject: Re: [Gluster-users] VM fs becomes read only when one gluster node 
> goes down
> 
> Thanks guys!
> My volume info is attached at the bottom of this mail...
> 
> @ Josh
> As you can see, i already have a 5 second ping timeout set. I will try
> it with 3 seconds.
> 
> Not sure if I want to have errors=continue on the fs level, but I will
> give it a try if it's the only possibility to get automatic failover working.
> 
> 
> @ Roman
> I use qemu with libgfapi to access the images. So no glusterfs entries
> in fstab for my vm hosts. It also seems this is kind of deprecated:
> 
> http://blog.gluster.org/category/mount-glusterfs/
> 
> "`backupvolfile-server` - This option did not really do much rather than
> provide a 'shell' script based failover which was highly racy and
> wouldn't work during many occasions.  It was necessary to remove this to
> make room for better options (while it is still provided for backward
> compatibility in the code)"
> 
> 
> @ all
> Can anybody tell me how GlusterFS handles this internally?
> Is the libgfapi client already aware of the server which replicates the
> image?
> Is there a way I can configure it manually for a volume?
> 
> 
> 
> 
> Volume Name: vmimages
> Type: Distributed-Replicate
> Volume ID: 029285b2-dfad-4569-8060-3827c0f1d856
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: storage1.domain.local:/glusterfs/vmimages
> Brick2: storage2.domain.local:/glusterfs/vmimages
> Brick3: storage3.domain.local:/glusterfs/vmimages
> Brick4: storage4.domain.local:/glusterfs/vmimages
> Options Reconfigured:
> network.ping-timeout: 5
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> auth.allow:
> 192.168.0.21,192.168.0.22,192.168.0.23,192.168.0.24,192.168.0.25,192.168.0.26
> server.allow-insecure: on
> storage.owner-uid: 2000
> storage.owner-gid: 2000
> 
> 
> 
> Regards
> André
> 
> 
> Am 26.10.2015 um 17:41 schrieb Josh Boon:
>> Andre,
>>
>> I've not explored using a DNS solution to publish the gluster cluster
>> addressing space but things you'll want to check out
>> are network.ping-timeout and whether or not your VM goes read-only on
>> filesystem error. If your network is consistent and robust
>> tuning network.ping-timeout to a very low value such as three seconds
>> will instruct the client to drop that client on failure. The default
>> value for this is 42 seconds which will cause your VM to go read-only as
>> you've seen. You could also choose to have your VM's mount their
>> partitions errors=continue as well depending on the filesystem they run.
>> Our setup has timeout at seven seconds and errors=continue and has
>> survived both testing and storage node segfaults. No data integrity
> issues have presented yet

Re: [Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-26 Thread André Bauer
Thanks guys!
My volume info is attached at the bottom of this mail...

@ Josh
As you can see, i already have a 5 second ping timeout set. I will try
it with 3 seconds.

Not sure if I want to have errors=continue on the fs level, but I will
give it a try if it's the only possibility to get automatic failover working.


@ Roman
I use qemu with libgfapi to access the images. So no glusterfs entries
in fstab for my vm hosts. It also seems this is kind of deprecated:

http://blog.gluster.org/category/mount-glusterfs/

"`backupvolfile-server` - This option did not really do much rather than
provide a 'shell' script based failover which was highly racy and
wouldn't work during many occasions.  It was necessary to remove this to
make room for better options (while it is still provided for backward
compatibility in the code)"


@ all
Can anybody tell me how GlusterFS handles this internally?
Is the libgfapi client already aware of the server which replicates the
image?
Is there a way I can configure it manually for a volume?




Volume Name: vmimages
Type: Distributed-Replicate
Volume ID: 029285b2-dfad-4569-8060-3827c0f1d856
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: storage1.domain.local:/glusterfs/vmimages
Brick2: storage2.domain.local:/glusterfs/vmimages
Brick3: storage3.domain.local:/glusterfs/vmimages
Brick4: storage4.domain.local:/glusterfs/vmimages
Options Reconfigured:
network.ping-timeout: 5
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
auth.allow:
192.168.0.21,192.168.0.22,192.168.0.23,192.168.0.24,192.168.0.25,192.168.0.26
server.allow-insecure: on
storage.owner-uid: 2000
storage.owner-gid: 2000



Regards
André


On 26.10.2015 at 17:41, Josh Boon wrote:
> Andre,
> 
> I've not explored using a DNS solution to publish the gluster cluster
> addressing space but things you'll want to check out
> are network.ping-timeout and whether or not your VM goes read-only on
> filesystem error. If your network is consistent and robust
> tuning network.ping-timeout to a very low value such as three seconds
> will instruct the client to drop that client on failure. The default
> value for this is 42 seconds which will cause your VM to go read-only as
> you've seen. You could also choose to have your VM's mount their
> partitions errors=continue as well depending on the filesystem they run.
> Our setup has timeout at seven seconds and errors=continue and has
> survived both testing and storage node segfaults. No data integrity
> issues have presented yet but our data is mostly temporal so integrity
> hasn't been tested thoroughly. Also we're qemu 2.0 running gluster 3.6
> on ubuntu 14.04 for those curious. 
> 
> Best,
> Josh 
> 
> 
> *From: *"Roman" 
> *To: *"Krutika Dhananjay" 
> *Cc: *"gluster-users" , gluster-de...@gluster.org
> *Sent: *Monday, October 26, 2015 1:33:57 PM
> *Subject: *Re: [Gluster-users] VM fs becomes read only when one gluster
> node goes down
> 
> Hi,
> got backupvolfile-server=NODE2NAMEHERE in fstab ? :)
> 
> 2015-10-23 5:24 GMT+03:00 Krutika Dhananjay <kdhan...@redhat.com>:
> 
> Could you share the output of 'gluster volume info', and also
> information as to which node went down on reboot?
> 
> -Krutika
> 
> 
> *From: *"André Bauer" <aba...@magix.net>
> *To: *"gluster-users" <gluster-users@gluster.org>
> *Cc: *gluster-de...@gluster.org
> *Sent: *Friday, October 23, 2015 12:15:04 AM
> *Subject: *[Gluster-users] VM fs becomes read only when one
> gluster node goesdown
> 
> Hi,
> 
> i have a 4 node Glusterfs 3.5.6 Cluster.
> 
> My VM images are in an replicated distributed volume which is
> accessed
> from kvm/qemu via libgfapi.
> 
> Mount is against storage.domain.local which has IPs for all 4
> Gluster
> nodes set in DNS.
> 
> When one of the Gluster nodes goes down (accidently reboot) a
> lot of the
> vms getting read only filesystem. Even when the node comes back up.
> 
> How can i prevent this?
> I expect that the vm just uses the replicated file on the other
> node,
> without getting ro fs.

[Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-22 Thread André Bauer
Hi,

I have a 4-node GlusterFS 3.5.6 cluster.

My VM images are in a replicated distributed volume which is accessed
from kvm/qemu via libgfapi.

Mount is against storage.domain.local which has IPs for all 4 Gluster
nodes set in DNS.

When one of the Gluster nodes goes down (accidental reboot), a lot of the
VMs get a read-only filesystem, even when the node comes back up.

How can i prevent this?
I expect that the VM just uses the replicated file on the other node,
without getting a read-only fs.

Any hints?

Thanks in advance.

-- 
Regards
André Bauer


Re: [Gluster-users] [Gluster-devel] Ubuntu Repo for Bareos with GlusterFS support available

2015-10-08 Thread André Bauer

I know. I was there ;-)

Already submitted Ubuntu patches to Bareos:
https://github.com/bareos/bareos/pull/33

I'm in contact with Jörg Steffens. It seems they will build Ubuntu
packages, built against the Gluster packages from the Ubuntu universe
repo, in the near future, starting with packages for Ubuntu 15.04 :-)

Regards
André


On 07.10.2015 at 00:35, Niels de Vos wrote:
> On Mon, Oct 05, 2015 at 11:24:37AM +0200, André Bauer wrote:
>> Hey Gluster lists,
>> 
>> just added Bareos with Glusterfs Libgfapi support to my Ubuntu
>> PPA:
>> 
>> https://launchpad.net/~monotek
>> 
>> Packages are available for Ubuntu Trusty and Vivid with GlusterFS
>> Plugins build against GlusterFS 3.5, 3.6 and 3.7.
>> 
>> Feel free to try the packages. Feedback is welcome :-)
>> 
>> Bareos is a 100% open source fork of the backup project from
>> bacula.org.
>> 
>> With the Glusterfs Packages for Bareos its possible to use
>> Glusterfs via libgfapi as backend or backup GlusterFS volumes
>> directly without using the mountpoint.
> 
> Very cool, thanks for providing these packages!
> 
> Last week the Open Source Backup Conference took place, and we've
> had one presentation about the Bareos and Gluster integration. A
> description on how to configure Bareos to store backups on Gluster
> can be found in the Administrator Guide:
> http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Bareos/
>
>  The slides from the presentation are on 
> http://gluster.readthedocs.org/en/latest/presentations/ but were
> created with Bareos 14.2 (like the doc link above). The just
> released 15.x version adds the option to make backups from Gluster
> volumes using the new gfapi-fd File Daemon.
> 
> Cheers, Niels
> 




[Gluster-users] Ubuntu Repo for Bareos with GlusterFS support available

2015-10-05 Thread André Bauer
Hey Gluster lists,

just added Bareos with Glusterfs Libgfapi support to my Ubuntu PPA:

https://launchpad.net/~monotek

Packages are available for Ubuntu Trusty and Vivid, with GlusterFS plugins
built against GlusterFS 3.5, 3.6 and 3.7.

Feel free to try the packages. Feedback is welcome :-)

Bareos is a 100% open source fork of the backup project from bacula.org.

With the GlusterFS packages for Bareos it's possible to use GlusterFS via
libgfapi as a storage backend, or to back up GlusterFS volumes directly
without going through the mountpoint.
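For reference, a libgfapi storage backend is declared as a gfapi device in the Bareos storage daemon configuration. A minimal sketch, assuming the plugin packages above are installed; the host name "gluster01" and the volume/directory "backup-volume/bareos" are placeholders, not taken from this mail:

```ini
# bareos-sd Device resource sketch (hypothetical names): Bareos writes
# volumes straight into the Gluster volume via libgfapi, no mountpoint.
Device {
  Name = GlusterStorage
  Device Type = gfapi
  Archive Device = "gluster://gluster01/backup-volume/bareos"
  Media Type = GlusterFile
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```

A Storage resource in the director then points at this device by its Name and Media Type, as with any other Bareos device.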


-- 
Regards
André Bauer


Re: [Gluster-users] Tuning for small files

2015-10-01 Thread André Bauer
Are these options enabled by default, at least for new volumes in 3.7?
If not, why?

Regards
André

Am 30.09.2015 um 16:22 schrieb Ben Turner:
> - Original Message -
>> From: "Iain Milne" 
>> To: gluster-users@gluster.org
>> Sent: Wednesday, September 30, 2015 2:48:57 AM
>> Subject: Re: [Gluster-users] Tuning for small files
>>
>>> Where you run into problems with smallfiles on gluster is latency of
>>> sending data over the wire.  For every smallfile create there are a
>>> bunch of different file operations we have to do on every file.  For
>>> example we will have to do at least 1 lookup per brick to make sure
>>> that the file doesn't exist anywhere before we create it.  We actually
>>> got it down to 1 per brick with lookup-optimize on; it's 2 IIRC (maybe
>>> more?) with it disabled.
>>
>> Is this lookup optimize something that needs to be enabled manually with
>> 3.7, and if so, how?
> 
> Here are all 3 of the settings I was talking about:
> 
> gluster v set testvol client.event-threads 4
> gluster v set testvol server.event-threads 4
> gluster v set testvol performance.lookup-optimize on
> 
> Yes, lookup optimize needs to be enabled.
> 
> -b
> 
>> Thanks
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> _______
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer



Re: [Gluster-users] Tuning for small files

2015-09-28 Thread André Bauer
If you're not already on GlusterFS 3.7.x, I would recommend an update first.

Am 25.09.2015 um 17:49 schrieb Thibault Godouet:
> Hi,
> 
> There are quite a few tuning parameters for Gluster (as seen in Gluster
> volume XYZ get all), but I didn't find much documentation on those.
> Some people do seem to set at least some of them, so the knowledge must
> be somewhere...
> 
> Is there a good source of information to understand what they mean, and
> recommendation on how to set them to get a good small file performance?
> 
> Basically what I'm trying to optimize is for svn operations (e.g. svn
> checkout, or svn branch) on a replicated 2 x 1 volume (hosted on 2 VMs,
> 16GB ram, 4 cores each, 10Gb/s network tested at full speed), using a
> NFS mount which appears much faster than fuse in this case (but still
> much slower than when served by a normal NFS server).
> Any recommendation for such a setup?
> 
> Thanks,
> Thibault.
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer



Re: [Gluster-users] Gluster-Nagios

2015-09-24 Thread André Bauer
I would also love to see packages for Ubuntu.

Are the sources of the Nagios plugins available somewhere?

Regards
André

Am 20.09.2015 um 11:02 schrieb Prof. Dr. Michael Schefczyk:
> Dear All,
> 
> In June 2014, the gluster-nagios team (thanks!) published the availability of 
> gluster-nagios-common and gluster-nagios-addons on this list. As far as I can 
> tell, this quite extensive gluster nagios monitoring tool is available for 
> el6 only. Are there known plans to make this available for el7 outside the 
> RHEL-repos 
> (http://ftp.redhat.de/pub/redhat/linux/enterprise/7Server/en/RHS/SRPMS/), 
> e.g. for use with oVirt / Centos 7 also? It would be good to be able to 
> monitor gluster without playing around with scripts from sources other than a 
> rpm repo.
> 
> Regards,
> 
> Michael
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer



[Gluster-users] News of the week missing

2015-09-10 Thread André Bauer
Hey guys,

it seems the news blog for the last week(s) is missing?
The last one I found on the planet.gluster.org blog is for week 32:
https://atinmu.wordpress.com/2015/08/17/gluster-news-of-week-322015-2/

I already put some stuff in the Etherpad, in case it's because of a lack
of news: https://public.pad.fsfe.org/p/gluster-weekly-news

Maybe it's also an idea to switch to monthly news, if that would be
easier to maintain.

Maybe it's also a good idea to make the news more prominent on
gluster.org. You could filter for news blogs (only) in the "Planet
Gluster News" section, or add a separate "Gluster news" section?

-- 
Regards
André Bauer


Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2015-08-24 Thread André Bauer
Have you already added my PPA?

https://launchpad.net/~monotek/+archive/ubuntu/samba-vfs-glusterfs-3.7

This is needed because Ubuntu's Samba packages are built without GlusterFS support.
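With a Samba build that includes the Gluster VFS module, a share is then exported per volume in smb.conf. A minimal sketch, assuming a volume named "gv0" served from the local node (share, volume and log names are placeholders):

```ini
; smb.conf share sketch (hypothetical names): exports Gluster volume "gv0"
; through vfs_glusterfs, bypassing the FUSE mount on the host.
[gv0]
    path = /
    read only = no
    kernel share modes = no
    vfs objects = glusterfs
    glusterfs:volume = gv0
    glusterfs:logfile = /var/log/samba/glusterfs-gv0.%M.log
    glusterfs:loglevel = 7
```

Note that with vfs_glusterfs loaded, `path` is interpreted relative to the root of the Gluster volume, not the host filesystem.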

Regards
André


Am 06.08.2015 um 11:30 schrieb Mayur Patel:
> André Bauer  writes:
> 
>>
>> Ok, Seems i need to create a blog tomorrow :D
>> IMHO you could also link to the ppa... No plan to remove it...On 25. März
> 2014 19:31:01 MEZ, John Mark Walker  gluster.org> wrote:
>> If someone wants to put this on their blog, I'll make sure to syndicate on
> gluster.org. Hint, hint... ;)-JM- Original Message - On 03/25/2014
> 02:52 PM, André Bauer wrote: Am 25.03.2014 04:40, schrieb Lalatendu Mohanty:
> Yes, Gluster server and Samba server can be on different servers.
> Theoretically this should work. I think I have seen some mails/configuration
>  from community around it. In my test set-up, I had kept gluster and Samba
> on same server and haven't tried gluster and Samba on different servers till
> now.
>>  Thanks. I found the needed smb.conf configuration option yesterday. Just
>> wrote it all together in the PPA:
>> https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs
>>
>>  Awesome! :) Thanks.
>>
>>
>>
>>
> 
> 
> I am getting below error message when trying to install
> samba-vfs-glusterfs-3.7 on ubuntu 14.04.
> 
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> E: Unable to locate package samba-vfs-glusterfs-3.7
> E: Couldn't find any package by regex 'samba-vfs-glusterfs-3.7'
> 
> And because of above there is below entry in Samba log.
> 
> Error loading module '/usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so':
> /usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so: cannot open shared object
> file: No such file or directory
> 
> Kind regards,
> Mayur Patel
> 
> Please help. 
> _______
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer


Re: [Gluster-users] Bareos backup from Gluster mount

2015-07-30 Thread André Bauer
Hi David,

I have never used Bareos until now. We would like to switch over from
Bacula in the future, but I think this will not happen before the next
Ubuntu LTS release (16.04).

I also never directly compared it with rsync, but I think rsync is
faster at transferring because it does not have to do any compression
and so on...

What I can say about Bacula on GlusterFS volumes is that copying big
files works at reasonable speed, while small files (especially if there
are a lot of them) are a bit slow, which lies in GlusterFS's nature in
versions prior to 3.6(?).

With GlusterFS 3.6 / 3.7 this should be a bit faster in the meantime,
but I have no experience with the performance gains because I'm still
on GlusterFS 3.5.5.

In conclusion, I still prefer Bacula over rsync, even if it's slower.

Some more info about Glusterfs small file performance can be found here:

https://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/Small%20File%20Performance/

Regards
André

Am 30.07.2015 um 15:23 schrieb David F. Robinson:
> Andre,
> 
> I am looking at a backup alternative to rsync for gluster. My storage system 
> is growing and rsync takes too long on my system (300TB). Do you have any 
> idea of the relative performance of bareos as compared to that of rsync? Can 
> it be run in a multi-threaded mode? Rsync takes an extremely long time just 
> searching the directory tree to figure out what to copy. Before digging into 
> bareos, I was wondering if you had any thoughts on performance for gluster. 
> 
> David  (Sent from mobile)
> 
> ===
> David F. Robinson, Ph.D. 
> President - Corvid Technologies
> 704.799.6944 x101 [office]
> 704.252.1310  [cell]
> 704.799.7974  [fax]
> david.robin...@corvidtec.com
> http://www.corvidtechnologies.com
> 
>> On Jul 29, 2015, at 1:36 PM, André Bauer  wrote:
>>
>> We're using Bacula (Bareos is a fork of it) for backups.
>> Never had any problems doing backups of Gluster volumes.
>>
>>> Am 27.07.2015 um 23:02 schrieb Ryan Clough:
>>> Hello,
>>>
>>> I have cross-posted this question in the bareos-users mailing list.
>>>
>>> Wondering if anyone has tried this because I am unable to backup data
>>> that is mounted via Gluster Fuse or Gluster NFS. Basically, I have the
>>> Gluster volume mounted on the Bareos Director which also has the tape
>>> changer attached.
>>>
>>> Here is some information about versions:
>>> Bareos version 14.2.2
>>> Gluster version 3.7.2
>>> Scientific Linux version 6.6
>>>
>>> Our Gluster volume consists of two nodes in distribute only. Here is the
>>> configuration of our volume:
>>> [root@hgluster02 ~]# gluster volume info
>>>
>>> Volume Name: export_volume
>>> Type: Distribute
>>> Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
>>> Status: Started
>>> Number of Bricks: 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: hgluster01:/gluster_data
>>> Brick2: hgluster02:/gluster_data
>>> Options Reconfigured:
>>> performance.io-thread-count: 24
>>> server.event-threads: 20
>>> client.event-threads: 4
>>> performance.readdir-ahead: on
>>> features.inode-quota: on
>>> features.quota: on
>>> nfs.disable: off
>>> auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
>>> server.allow-insecure: on
>>> server.root-squash: on
>>> performance.read-ahead: on
>>> features.quota-deem-statfs: on
>>> diagnostics.brick-log-level: WARNING
>>>
>>> When I try to backup a directory from Gluster Fuse or Gluster NFS mount
>>> and I monitor the network communication I only see data being pulled
>>> from the hgluster01 brick. When the job finishes Bareos thinks that it
>>> completed without error but included in the messages for the job are
>>> lots and lots of permission denied errors like this:
>>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
>>> "/export/rclough/psdv-2014-archives-2/scan_111.tar.bak": ERR=Permission
>>> denied.
>>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
>>> "/export/rclough/psdv-2014-archives-2/run_219.tar.bak": ERR=Permission
>>> denied.
>>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
>>> "/export/rclough/psdv-2014-archives-2/scan_112.tar.bak": ERR=Permission
>>> denied.
>>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
>>> "/export/rclough/psdv-2014-archives-2/run_220.tar.bak": ERR=Permission
>>> denied.

Re: [Gluster-users] Bareos backup from Gluster mount

2015-07-30 Thread André Bauer
Hi Ryan,

all my servers are on Ubuntu 14.04 64 bit.
The Bacula version is 5.2.6+dfsg-9.1ubuntu3.
I'm using GlusterFS 3.5.5 packages from: https://launchpad.net/~gluster
4 Gluster nodes; all volumes are distributed-replicate,
mounted via FUSE or NFS.

The only "problem" I have is poor small-file performance while backing
up. The workaround is to mount an image via libgfapi in my VMs; this
works faster than a FUSE / NFS mount.
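Attaching such an image over libgfapi is done at the VM definition level rather than via a host mount. A minimal libvirt disk sketch under assumed names (the volume "vmimages", the image "backup-disk.qcow2" and the host "gluster01" are placeholders):

```xml
<!-- libvirt <disk> sketch (hypothetical names): the guest accesses the
     qcow2 image on Gluster volume "vmimages" directly via libgfapi,
     bypassing any FUSE/NFS mount on the host. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source protocol='gluster' name='vmimages/backup-disk.qcow2'>
    <host name='gluster01' port='24007'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Inside the guest the disk then shows up as a regular block device (here vdb) that can be formatted and used as the backup target.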


Regards
André

Am 29.07.2015 um 20:03 schrieb Ryan Clough:
> Can you tell me some information about your setup? I would be interested
> in the OS and version of OS on the Bareos Director, version of Gluster,
> and the version of Bacula that you are using. Also, what type(dist,
> repl) of Gluster cluster? Thank you for taking the time to try to help
> me with this.
> 
> ___
> ¯\_(ツ)_/¯
> Ryan Clough
> Information Systems
> Decision Sciences International Corporation
> <http://www.decisionsciencescorp.com/>
> 
> On Wed, Jul 29, 2015 at 10:36 AM, André Bauer  <mailto:aba...@magix.net>> wrote:
> 
> We're using Bacula (Bareos is a fork of it) for backups.
> Never had any problems doing backups of Gluster volumes.
> 
> Am 27.07.2015 um 23:02 schrieb Ryan Clough:
> > Hello,
> >
> > I have cross-posted this question in the bareos-users mailing list.
> >
> > Wondering if anyone has tried this because I am unable to backup data
> > that is mounted via Gluster Fuse or Gluster NFS. Basically, I have the
> > Gluster volume mounted on the Bareos Director which also has the tape
> > changer attached.
> >
> > Here is some information about versions:
> > Bareos version 14.2.2
> > Gluster version 3.7.2
> > Scientific Linux version 6.6
> >
> > Our Gluster volume consists of two nodes in distribute only. Here
> is the
> > configuration of our volume:
> > [root@hgluster02 ~]# gluster volume info
> >
> > Volume Name: export_volume
> > Type: Distribute
> > Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
> > Status: Started
> > Number of Bricks: 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: hgluster01:/gluster_data
> > Brick2: hgluster02:/gluster_data
> > Options Reconfigured:
> > performance.io-thread-count: 24
> > server.event-threads: 20
> > client.event-threads: 4
> > performance.readdir-ahead: on
> > features.inode-quota: on
> > features.quota: on
> > nfs.disable: off
> > auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
> > server.allow-insecure: on
> > server.root-squash: on
> > performance.read-ahead: on
> > features.quota-deem-statfs: on
> > diagnostics.brick-log-level: WARNING
> >
> > When I try to backup a directory from Gluster Fuse or Gluster NFS
> mount
> > and I monitor the network communication I only see data being pulled
> > from the hgluster01 brick. When the job finishes Bareos thinks that it
> > completed without error but included in the messages for the job are
> > lots and lots of permission denied errors like this:
> > 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> > "/export/rclough/psdv-2014-archives-2/scan_111.tar.bak":
> ERR=Permission
> > denied.
> > 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> > "/export/rclough/psdv-2014-archives-2/run_219.tar.bak": ERR=Permission
> > denied.
> > 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> > "/export/rclough/psdv-2014-archives-2/scan_112.tar.bak":
> ERR=Permission
> > denied.
> > 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> > "/export/rclough/psdv-2014-archives-2/run_220.tar.bak": ERR=Permission
> > denied.
> > 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> > "/export/rclough/psdv-2014-archives-2/scan_114.tar.bak":
> ERR=Permission
> > denied.
> >
> > At first I thought this might be a root-squash problem but, if I
> try to
> > read/copy a file using the root user from the Bareos server that is
> > trying to do the backup, I can read files just fine.
> >
> > When the job finishes is reports that it finished "OK -- with
> warnings"
> > but, again the log for the job is filled with "ERR=Permission denied"

Re: [Gluster-users] Bareos backup from Gluster mount

2015-07-29 Thread André Bauer
We're using Bacula (Bareos is a fork of it) for backups.
Never had any problems doing backups of Gluster volumes.

Am 27.07.2015 um 23:02 schrieb Ryan Clough:
> Hello,
> 
> I have cross-posted this question in the bareos-users mailing list.
> 
> Wondering if anyone has tried this because I am unable to backup data
> that is mounted via Gluster Fuse or Gluster NFS. Basically, I have the
> Gluster volume mounted on the Bareos Director which also has the tape
> changer attached.
> 
> Here is some information about versions:
> Bareos version 14.2.2
> Gluster version 3.7.2
> Scientific Linux version 6.6
> 
> Our Gluster volume consists of two nodes in distribute only. Here is the
> configuration of our volume:
> [root@hgluster02 ~]# gluster volume info
>  
> Volume Name: export_volume
> Type: Distribute
> Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: hgluster01:/gluster_data
> Brick2: hgluster02:/gluster_data
> Options Reconfigured:
> performance.io-thread-count: 24
> server.event-threads: 20
> client.event-threads: 4
> performance.readdir-ahead: on
> features.inode-quota: on
> features.quota: on
> nfs.disable: off
> auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
> server.allow-insecure: on
> server.root-squash: on
> performance.read-ahead: on
> features.quota-deem-statfs: on
> diagnostics.brick-log-level: WARNING
> 
> When I try to backup a directory from Gluster Fuse or Gluster NFS mount
> and I monitor the network communication I only see data being pulled
> from the hgluster01 brick. When the job finishes Bareos thinks that it
> completed without error but included in the messages for the job are
> lots and lots of permission denied errors like this:
> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> "/export/rclough/psdv-2014-archives-2/scan_111.tar.bak": ERR=Permission
> denied.
> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> "/export/rclough/psdv-2014-archives-2/run_219.tar.bak": ERR=Permission
> denied.
> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> "/export/rclough/psdv-2014-archives-2/scan_112.tar.bak": ERR=Permission
> denied.
> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> "/export/rclough/psdv-2014-archives-2/run_220.tar.bak": ERR=Permission
> denied.
> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:  Cannot open
> "/export/rclough/psdv-2014-archives-2/scan_114.tar.bak": ERR=Permission
> denied.
> 
> At first I thought this might be a root-squash problem but, if I try to
> read/copy a file using the root user from the Bareos server that is
> trying to do the backup, I can read files just fine.
> 
> When the job finishes it reports that it finished "OK -- with warnings"
> but, again the log for the job is filled with "ERR=Permission denied"
> messages. In my opinion, this job did not finish OK and should be
> Failed. Some of the files from the HGluster02 brick are backed up but
> all of the ones with permission errors do not. When I restore the job,
> all of the files with permission errors are empty.
> 
> Has anyone successfully used Bareos to backup data from Gluster mounts?
> This is an important use case for us because this is the largest single
> volume that we have to prepare large amounts of data to be archived.
> 
> Thank you for your time,
> ___
> ¯\_(ツ)_/¯
> Ryan Clough
> Information Systems
> Decision Sciences International Corporation
> <http://www.decisionsciencescorp.com/>
> 
> This email and its contents are confidential. If you are not the
> intended recipient, please do not disclose or use the information within
> this email or its attachments. If you have received this email in error,
> please report the error to the sender by return email and delete this
> communication from your records.
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer


Re: [Gluster-users] Performance issues with one node

2015-07-27 Thread André Bauer
Some more info:
http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/Small%20File%20Performance/index.html?highlight=small%20file%20performance

Am 24.07.2015 um 20:15 schrieb Mathieu Chateau:
> Hello,
> 
> Gluster performance is not good with large numbers of small files.
> Recent versions do a better job with them, but not yet what I would enjoy.
> 
> As you are starting with Gluster on an existing architecture, you
> should first set up a lab to learn about it. Else you will learn the hard way.
> Don't play with turning off nodes, as you may create more issues than you solve.
> 
> just my 2cents
> 
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
> 
> 2015-07-24 19:34 GMT+02:00 John Kennedy  <mailto:skeb...@gmail.com>>:
> 
> I am new to Gluster and have not found anything useful from my
> friend Google. I have not dealt with physical hardware in a few
> years (my last few jobs have been VM's and AWS based)
> 
> I inherited a 4 node gluster configuration. There are 2 bricks, one
> is 9TB the other 11TB.
> 
> The 11TB brick has a HUGE number of small files taking up only 1.3TB
> of the brick. For some reason, even a simple ls command can take
> hours to even start listing files. I removed a node by shutting down
> gluster on that node. The change in performance is dramatic. If I
> try and do ls on the 11TB brick on the downed node, I am still
> getting the slow response. I have narrowed the issue down to this
> one node as a result.
> 
> When I start gluster on the bad node, glusterfsd hits over 1000%CPU
> use (The server has dual 8 core CPU's) and the load will jump to
> 25-30 within 5 minutes. As such, I think this is a gluster issue and
> not a hardware issue. I am trying to not reinstall gluster yet. 
> 
> Is there something I am missing in my checks or will I need to
> reinstall gluster on that node?
> 
> Thanks,
> John
> 
> John Kennedy  (_8(|)
> Sometimes it happens, sometimes it doesn't - Pedro Catacora
> 
> Just because nobody disagrees with you doesn't mean you are correct.
> 
> Anatidaephobia is the fear that somehow, somewhere a duck is
> watching you - urbandictionary.com <http://urbandictionary.com>
> 
> The Dunning-Kruger effect occurs when incompetent people not only
> fail to realize their incompetence, but consider themselves much
> more competent than everyone else. Basically - they're too stupid to
> know that they're stupid.
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer



Re: [Gluster-users] Gluster healing VM images

2015-07-24 Thread André Bauer
Same here.
GlusterFS 3.5.5.
4 nodes, distributed-replicate.
Qemu uses libgfapi to access the VM images.

When one node goes down, the VM's hard disk goes read-only.
It works again immediately after a VM restart.


Am 21.07.2015 um 12:21 schrieb Gregor Burck:
> Am Dienstag, 21. Juli 2015, 10:04:25 schrieb Andrew Roberts:
>> The VMs using the healing image files freeze completely, also freezing
>> Virt-Manager, and then all of the other VMs either freeze or become slow.
> This is the same as in my test enviroment.
> 
> Work in the vm isn't possible after freeze for me, cause the filesystem in vm 
> is ro,... No solution for yet, but I test somethings on.
> 
> On which OS did you run the glusterfs-server?
> 
> Bye
> 
> Gregor
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


-- 
Mit freundlichen Grüßen
André Bauer



Re: [Gluster-users] back to problems: gluster 3.5.4, qemu and debian 8

2015-07-13 Thread André Bauer
Hey Roman,

sorry, but this does not help.

If you want help, you should provide more detailed information, i.e. logs.

I was not able to find any logs in your old mails.

Regards
André




On 13.07.2015 at 21:54, Roman wrote:
> Hi,
> 
> You can check the list history for my reports without answers. Yes, the
> problem I've faced was/is only with Debian8. Other OSes installed and
> ran smoothly.
> In short:
> Errors are pretty random during the installation process: it can stop at
> the mirror choosing step, just saying there is no connection to any
> mirror, or it can stop at the package installation step (it just throws
> an error on a random package about dependencies etc.). Or, it can
> install fine, but after the first boot I can't even install and run
> Apache due to corrupted modules. It says there are no modules, but they
> are in place, just corrupted.
> If I choose a raw disk during VM setup, the installation process takes
> ages to complete. With qcow2 I get the moments I described above.
> At the end of my last debugging I thought it was due to bad NIC in one
> of the servers (due to errors random nature, I was lucky enough to
> install the D8 on 3 of 4 nodes few times in a row without problems, but
> not tested them live.) But today on one of those 3 nodes I wasn't able
> to make a clean install of D8 with Gnome3 and Mate DE-s. Same problem:
> dependency errors during installation. Base install was fine. At the
> same moment, I can install D8 on local drives without problems. And no,
> there are no networking problems between nodes and gluster servers.
> 
> Please, try to install the D8 base install and then install mate or
> gnome3 using your VE.
> 
> Tonight I upgraded to 3.6.4, i will check out if the problem still exists.
> 
> 2015-07-13 22:07 GMT+03:00 André Bauer  <mailto:aba...@magix.net>>:
> 
> What does "out of luck" mean?
> What's the error message in Debian?
> What's in the GlusterFS logs?
> 
> I run Glusterfs Server 3.5.5 on Ubuntu 14.04 accessing the Qemu images
> via Libgfapi. No Problems so far...
> 
> On 13.07.2015 at 18:59, Roman wrote:
> > Hi,
> >
> > I've reported a lot about this, but every time there was something
> > that made me think it was not a GlusterFS problem, but it seems it is.
> >
> > Please, someone from the devs, set up a very simple installation:
> >
> > latest Proxmox
> > glusterfs server and client 3.5.4
> > and try to install Debian Jessie (8) on such a platform.
> >
> > You will be out of luck on every single try, with random errors. Even
> > if you get lucky and install the OS itself, you will fail on any
> > installation that requires a lot of small files to download and
> > unpack, like a GNOME 3 installation, i.e. the files are being corrupted.
> >
> > Need help with this ASAP.
> >
> > --
> > Best regards,
> > Roman.
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
> 
> 

Re: [Gluster-users] back to problems: gluster 3.5.4, qemu and debian 8

2015-07-13 Thread André Bauer
What does "out of luck" mean?
What's the error message in Debian?
What's in the GlusterFS logs?

I run Glusterfs Server 3.5.5 on Ubuntu 14.04 accessing the Qemu images
via Libgfapi. No Problems so far...

On 13.07.2015 at 18:59, Roman wrote:
> Hi,
> 
> I've reported a lot about this, but every time there was something that
> made me think it was not a GlusterFS problem, but it seems it is.
> 
> Please, someone from the devs, set up a very simple installation:
> 
> latest Proxmox
> glusterfs server and client 3.5.4
> and try to install Debian Jessie (8) on such a platform.
> 
> You will be out of luck on every single try, with random errors. Even if
> you get lucky and install the OS itself, you will fail on any
> installation that requires a lot of small files to download and unpack,
> like a GNOME 3 installation, i.e. the files are being corrupted.
> 
> Need help with this ASAP.
> 
> --
> Best regards,
> Roman.
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Updates on Gluster.org website

2015-07-10 Thread André Bauer
Hi @ all,

I would really like to see the gluster.org website updated a bit more
frequently. For me it's really hard to keep up with Gluster without
reading the mailing list, which is not exactly what I want.

1.) If somebody opens http://www.gluster.org/ it seems version 3.5.4 is
the current version.

2.) If http://www.gluster.org/news/ is opened what you see is:
- "Looking for a Gluster Community Manager"
- "GlusterFS v3.7.0 is available"

No word about Gluster 3.7.2 or Gluster 3.5.5.

3.) The "Planet Gluster news" is too off-topic for me. Sure, running
is fun (at least for some ;-) ) but I don't think the Gluster site is the
right place to talk about "A Year Of Running".

Also some other tech topics seem to be far away from the Gluster topic.


Before, I used the old documentation and looked through recent changes to
see what's going on, but IMHO this is no longer possible with the new
documentation system.

IMHO the Elasticsearch guys are doing a great job keeping the community
updated. Have a look at their blog: https://www.elastic.co/blog

I would love to have a "This week in Gluster" post which shows me what's
going on.

Thanks in advance and keep up the good work :-)


Regards
André

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS 3.6.1 breaks VM images on cluster node restart

2015-06-08 Thread André Bauer
I saw similar behaviour when the file permissions of the VM image were set
to root:root instead of the hypervisor user.

"chown -R libvirt-qemu:kvm /var/lib/libvirt/images" before starting the VM
did the trick for me...
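The workaround can be sketched as follows; the path and the libvirt-qemu:kvm user/group are Ubuntu defaults and may differ on other distributions, and the domain name is a placeholder:

```shell
# Check who currently owns the image files.
ls -l /var/lib/libvirt/images

# Hand them to the user/group qemu runs as on Ubuntu, then start the VM.
chown -R libvirt-qemu:kvm /var/lib/libvirt/images
virsh start vm-200   # "vm-200" is a placeholder domain name
```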


On 04.06.2015 at 16:08, Roger Lehmann wrote:
> Hello, I'm having a serious problem with my GlusterFS cluster.
> I'm using Proxmox 3.4 for high available VM management which works with
> GlusterFS as storage.
> Unfortunately, when I restart every node in the cluster sequentially one
> by one (with online migration of the running HA VM first of course) the
> qemu image of the HA VM gets corrupted and the VM itself has problems
> accessing it.
> 
> May 15 10:35:09 blog kernel: [339003.942602] end_request: I/O error, dev
> vda, sector 2048
> May 15 10:35:09 blog kernel: [339003.942829] Buffer I/O error on device
> vda1, logical block 0
> May 15 10:35:09 blog kernel: [339003.942929] lost page write due to I/O
> error on vda1
> May 15 10:35:09 blog kernel: [339003.942952] end_request: I/O error, dev
> vda, sector 2072
> May 15 10:35:09 blog kernel: [339003.943049] Buffer I/O error on device
> vda1, logical block 3
> May 15 10:35:09 blog kernel: [339003.943146] lost page write due to I/O
> error on vda1
> May 15 10:35:09 blog kernel: [339003.943153] end_request: I/O error, dev
> vda, sector 4196712
> May 15 10:35:09 blog kernel: [339003.943251] Buffer I/O error on device
> vda1, logical block 524333
> May 15 10:35:09 blog kernel: [339003.943350] lost page write due to I/O
> error on vda1
> May 15 10:35:09 blog kernel: [339003.943363] end_request: I/O error, dev
> vda, sector 4197184
> 
> 
> After the image is broken, it's impossible to migrate the VM or start it
> when it's down.
> 
> root@pve2 ~ # gluster volume heal pve-vol info
> Gathering list of entries to be healed on volume pve-vol has been
> successful
> 
> Brick pve1:/var/lib/glusterd/brick
> Number of entries: 1
> /images//200/vm-200-disk-1.qcow2
> 
> Brick pve2:/var/lib/glusterd/brick
> Number of entries: 1
> /images/200/vm-200-disk-1.qcow2
> 
> Brick pve3:/var/lib/glusterd/brick
> Number of entries: 1
> /images//200/vm-200-disk-1.qcow2
> 
> 
> 
> I couldn't really reproduce this in my test environment with GlusterFS
> 3.6.2 but I had other problems while testing (may also be because of a
> virtualized test environment), so I don't want to upgrade to 3.6.2 until
> I definitely know the problems I encountered are fixed in 3.6.2.
> Anybody else experienced this problem? I'm not sure if issue 1161885
> (Possible file corruption on dispersed volumes) is the issue I'm
> experiencing. I have a 3 node replicate cluster.
> Thanks for your help!
> 
> Regards,
> Roger Lehmann
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.6.3 Ubuntu PPA

2015-05-26 Thread André Bauer
Louis Zuckerman was/is working on automatic package builds for Ubuntu.

Infos: https://github.com/semiosis/glusterfs-debian/issues/5

Regards
André

On 23.05.2015 at 19:00, Tom Pepper wrote:
> Just wondering if we can expect 3.6.3 to make it to launchpad anytime soon?
> 
> Thanks,
> -t
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Dovecot and glusterfs

2014-12-23 Thread André Bauer

Hi,

do you have both Dovecot servers active?
I'm not completely sure this will work.
IMHO you should try an active/passive setup...

Regards
André

On 21.12.2014 at 08:34, Michael Schwartzkopff wrote:
> Hi,
> 
> I wanted to test Gluster a little bit for usage on Dovecot
> mailbox servers.
> 
> I set up two servers with a replicated Gluster volume and mounted the
> bricks via the Gluster client. On these two nodes I installed Dovecot,
> so I created a highly available test scenario.
> 
> I configured Dovecot to use the maildir format and mainly copied the
> Dovecot options for an NFS setup:
> 
> mail_fsync = always
> mail_nfs_storage = yes
> mail_nfs_index = yes
> mmap_disable = yes
> 
> Basically that tells dovecot to use fcntl() locks.
> 
> With the Thunderbird IMAP client I accessed the two mailboxes on the
> two nodes: one client on the first Dovecot node, the other client on
> the second node. Both used the same user, so they accessed the same
> maildir files.
> 
> First everything looked good, but within ten minutes of creating, 
> moving and deleting mails and folders I produced the following 
> log:
> 
> node2 dovecot: imap(us...@example.net): Error: Corrupted 
> transaction log file /srv/mail/us...@example.net/dovecot.index.log 
> seq 3: file_seq=3, min_file_offset (4768) > max_file_offset (3392) 
> (sync_offset=3392)
> 
> node2 dovecot: imap(us...@example.net): Error: Index 
> /srv/mail/us...@example.net/dovecot.index: Lost log for seq=3 
> offset=2948
> 
> node2 dovecot: imap(us...@example.net): Warning: fscking index
> file /srv/mail/us...@example.net/dovecot.index
> 
> The mailbox was not accessible any more and dovecot did not accept 
> any new mails for the user.
> 
> Any ideas what went wrong? Is this a legitimate use case for
> Gluster? Could I prevent this from happening if I restrict one
> user's access to only one Gluster node?
> 
> Mit freundlichen Grüßen,
> 
> Michael Schwartzkopff
> 
> 
> 
> ___________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 


- -- 
Mit freundlichen Grüßen

André Bauer

MAGIX Software GmbH
André Bauer
Administrator
Postfach 200914
01194 Dresden

Tel. Support Deutschland: 0900/1771115 (1,24 Euro/Min.)
Tel. Support Österreich:  0900/454571 (1,56 Euro/Min.)
Tel. Support Schweiz: 0900/454571 (1,50 CHF/Min.)

Email: mailto:aba...@magix.net
Web:   http://www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Michael Keith
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful.

MAGIX does not warrant that any attachments are free from viruses or
other defects and accepts no liability for any losses resulting from
infected email transmissions. Please note that any views expressed in
this email may be those of the originator and do not necessarily
represent the agenda of the company.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs performance tuning

2014-12-18 Thread André Bauer
I just saw there is an official Gluster PPA now, which also contains the
Qemu packages.

Maybe it's better to use these:

https://launchpad.net/~gluster


On 19.12.2014 at 08:01, André Bauer wrote:
> Hi Bernhard,
> 
>> so in ubuntu 14.04 we can use libgfapi without recompiling qemu? 
>> it works just out of the box (as described in your first link)?
> 
> no, but you could use Louis Zuckerman's or my Gluster/Qemu PPA:
> 
> https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs
> 
> https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-qemu-glusterfs
> 
> I'm using mine in production for some months without problems now.
> 
> --
> Regards
> André Bauer
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs performance tuning

2014-12-18 Thread André Bauer
Hi Bernhard,

> so in ubuntu 14.04 we can use libgfapi without recompiling qemu? 
> it works just out of the box (as described in your first link)?

no, but you could use Louis Zuckerman's or my Gluster/Qemu PPA:

https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs

https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-qemu-glusterfs

I'm using mine in production for some months without problems now.

--
Regards
André Bauer

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs performance tuning

2014-12-18 Thread André Bauer
Hi Bernhard,

I have nearly the same setup:
- Ubuntu 14.04
- 4 nodes (replicated distributed)
- bricks are on LVM on an mdadm RAID 10

I used bonding (802.3ad / bond-mode 4) before but did not see a real
benefit in speed for VMs. Depending on what you do, 1 GbE might already be
enough. Nevertheless, we have bought 10 GbE hardware now.

For the VMs I recommend using libgfapi, which gives better performance,
and the load on your Gluster nodes will go down:

http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt

I also used these hints:
http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables

What's also important:
http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/
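The tunables from the Virt-store-usecase link can be applied per volume; here is a sketch (the volume name is a placeholder, and option availability depends on the GlusterFS version, so treat this as an illustration rather than a verified recipe):

```shell
# Options the virt-store use-case page recommends for VM image volumes.
VOL=vmimages   # placeholder volume name
gluster volume set $VOL performance.quick-read off
gluster volume set $VOL performance.read-ahead off
gluster volume set $VOL performance.io-cache off
gluster volume set $VOL performance.stat-prefetch off
gluster volume set $VOL cluster.eager-lock enable
gluster volume set $VOL network.remote-dio enable
```

The last link above covers network.ping-timeout, which keeps VMs from going read-only when a node fails.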

Regards
André

On 18.12.2014 at 12:39, Bernhard Glomm wrote:
> hi all,
> 
> I'm looking for some performance tuning suggestions for glusterfs.
> 
> The set is:
> - 2 server with several bricks each (ubuntu 14.04),
> - each brick is part of a 2way-mirror between them both 
> - both server are connected with 2 * 1GB NIC
> - the servers also mount the gluster volumes (fuse) 
> - each gluster volume hosts mainly a single large file (10-20GB) (VM-Image),
>   sometimes two or three of them
> 
> My questions:
> - Which bond mode would you recommend for this setup?
> - What are performance parameters/values for "few large files per gluster
> volume"?
> - Are there performance parameters regarding use as VM image storage
>   (the VMs do some caching by themselves)?
> 
> TIA
> 
> Bernhard
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] REMOVEXATTR warnings in client log

2014-09-07 Thread André Bauer
Anyone?

On 03.06.2014 at 17:22, André Bauer wrote:
> Hi List,
>
> after updating from Ubuntu Precise to Ubuntu Trusty, my GlusterFS 3.4.2
> clients have a lot of these warnings in the log:
>
> [2014-06-03 11:11:24.266842] W
> [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-2:
> remote operation failed: No data available
> [2014-06-03 11:11:24.266891] W
> [client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-3:
> remote operation failed: No data available
> [2014-06-03 11:11:24.267277] W [fuse-bridge.c:1172:fuse_err_cbk]
> 0-glusterfs-fuse: 49105: REMOVEXATTR() /2014-06/myfile.zip => -1 (No
> data available)
>
> The log fills with these warnings when files are created.
>
> Is this a big problem?
>
> Everything seems to work fine so far...
>
>
>

-- 
--
Regards
André Bauer

Administrator
Magix Software GmbH

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] libgfapi / qemu - disk cache?

2014-06-06 Thread André Bauer
Hi,

when using GlusterFS libgfapi with Qemu, it seems I can't use a disk cache
in libvirt, because I get I/O errors when trying it.

Is this "correct"?

Do I have to use "cache=none" (which works without problems)?
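For reference, a sketch of the corresponding qemu invocation (server, volume and image names are placeholders); cache=none bypasses the host page cache, which is the mode that worked here:

```shell
# Gluster-backed disk via libgfapi with host caching disabled.
# "storage1", "vmimages" and "vm1.qcow2" are placeholders.
qemu-system-x86_64 \
  -drive file=gluster://storage1/vmimages/vm1.qcow2,format=qcow2,if=virtio,cache=none
```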



-- 
Mit freundlichen Grüßen

André Bauer

MAGIX Software GmbH
André Bauer
Administrator
Postfach 200914
01194 Dresden

Tel. Support Deutschland: 0900/1771115 (1,24 Euro/Min.)
Tel. Support Österreich:  0900/454571 (1,56 Euro/Min.)
Tel. Support Schweiz: 0900/454571 (1,50 CHF/Min.)

Email: mailto:aba...@magix.net
Web:   http://www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Erhard Rein,
Michael Keith, Tilman Herberger
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful.

MAGIX does not warrant that any attachments are free from viruses or
other defects and accepts no liability for any losses resulting from
infected email transmissions. Please note that any views expressed in
this email may be those of the originator and do not necessarily
represent the agenda of the company.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMOVEXATTR warnings in client log

2014-06-03 Thread André Bauer
Hi List,

after updating from Ubuntu Precise to Ubuntu Trusty, my GlusterFS 3.4.2
clients have a lot of these warnings in the log:

[2014-06-03 11:11:24.266842] W
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-2:
remote operation failed: No data available
[2014-06-03 11:11:24.266891] W
[client-rpc-fops.c:1232:client3_3_removexattr_cbk] 0-gv5-client-3:
remote operation failed: No data available
[2014-06-03 11:11:24.267277] W [fuse-bridge.c:1172:fuse_err_cbk]
0-glusterfs-fuse: 49105: REMOVEXATTR() /2014-06/myfile.zip => -1 (No
data available)

The log fills with these warnings when files are created.

Is this a big problem?

Everything seems to work fine so far...
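To see which extended attributes actually exist on an affected file, they can be dumped directly on a brick (the brick path is a placeholder; getfattr is part of the attr package):

```shell
# Dump all xattrs of a file on the brick backend. Run on the gluster
# server, against the brick path rather than the client mount.
getfattr -d -m . -e hex /path/to/brick/2014-06/myfile.zip
```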



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-25 Thread André Bauer
Ok, seems I need to create a blog tomorrow :D

IMHO you could also link to the PPA... No plans to remove it...



On 25 March 2014 19:31:01 CET, John Mark Walker wrote:
>If someone wants to put this on their blog, I'll make sure to syndicate
>on gluster.org. Hint, hint... ;)
>
>-JM
>
>
>- Original Message -
>> On 03/25/2014 02:52 PM, André Bauer wrote:
>> > Am 25.03.2014 04:40, schrieb Lalatendu Mohanty:
>> >
>> >> Yes, the Gluster server and Samba server can be on different servers.
>> >> Theoretically this should work. I think I have seen some
>> >> mails/configurations from the community around it. In my test setup,
>> >> I had kept Gluster and Samba on the same server and haven't tried
>> >> Gluster and Samba on different servers yet.
>> >>
>> > Thanks. I found the needed smb.conf configuration option yesterday.
>> >
>> > I just wrote it all together in the PPA description:
>> >
>> > https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs
>> >
>> >
>> Awesome! :) Thanks.
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Regards
André Bauer___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-25 Thread André Bauer
On 25.03.2014 at 04:40, Lalatendu Mohanty wrote:

> Yes, the Gluster server and Samba server can be on different servers.
> Theoretically this should work. I think I have seen some
> mails/configurations from the community around it. In my test setup, I had
> kept Gluster and Samba on the same server and haven't tried Gluster and
> Samba on different servers yet.
> 

Thanks. I found the needed smb.conf configuration option yesterday.

I just wrote it all together in the PPA description:

https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-24 Thread André Bauer
Will put the info in the PPA description tomorrow...

On 24 March 2014 23:53:55 CET, John Mark Walker wrote:
>This sounds like it should be in a howto.
>
>Want to try to write something up? 
>
>-JM
>
>
>- Original Message -
>> Am 24.03.2014 22:35, schrieb André Bauer:
>> 
>> > 
>> > What makes me wonder is that there seems to be no way to set a
>> > glusterfs server. The log says it tries to access localhost:24007,
>> > which did not work because glusterfs is not running on the same
>> > machine as Samba.
>> 
>> Found the smb.conf option:
>> 
>> glusterfs:volfile_server = storage3.local
>> 
>> 
>> --
>> Regards
>> 
>> André Bauer
>> 

-- 
Regards
André Bauer

Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-24 Thread André Bauer
On 24.03.2014 22:35, André Bauer wrote:

> 
> What makes me wonder is that there seems to be no way to set a glusterfs
> server. The log says it tries to access localhost:24007, which did not
> work because glusterfs is not running on the same machine as Samba.

Found the smb.conf option:

glusterfs:volfile_server = storage3.local
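Putting this together with the other VFS options mentioned in this thread, a minimal share section could look like the following (share name, volume name, and log path are just examples):

```ini
[gfs-share]
vfs objects = glusterfs
glusterfs:volfile_server = storage3.local
glusterfs:volume = test
glusterfs:logfile = /var/log/glusterfs/glusterfs-test.log
kernel share modes = no
path = /
writeable = yes
```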


-- 
Regards

André Bauer


Re: [Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-24 Thread André Bauer
On 24.03.2014 14:51, John Mark Walker wrote:
> Thanks! I'd be very interested in hearing feedback.

Hi,

In the meantime I got it working :-)

The Gluster volume needs: gluster volume set volname server.allow-insecure on

In /etc/glusterfs/glusterd.vol, add "option rpc-auth-allow-insecure on".
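Both steps together, as a sketch ("test" is just the example volume name; this matches the 3.4-era CLI):

```shell
# Per volume: allow client connections from unprivileged (>1024) ports
gluster volume set test server.allow-insecure on

# For glusterd itself: in /etc/glusterfs/glusterd.vol, inside the
# "volume management" block, add:
#     option rpc-auth-allow-insecure on
# then restart glusterd:
service glusterd restart
```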


My smb.conf looks like:
[Test]
wide links = no
writeable = yes
path = /
force user = fileserver
force group = fileserver
public = yes
guest ok = yes
create mode = 660
directory mode = 770
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:logfile = /var/log/glusterfs/glusterfs-test.log
glusterfs:volume = test




What makes me wonder is that there seems to be no way to set a glusterfs
server. The log says it tries to access localhost:24007, which did not
work because glusterfs is not running on the same machine as Samba.

My solution is now to redirect localhost:24007 via xinetd to one of my
glusterfs nodes :-/
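For reference, that xinetd redirect could be sketched like this (file name and target host are examples; 24007 is glusterd's management port):

```ini
# /etc/xinetd.d/glusterd-redirect (example)
service glusterd-redirect
{
    type        = UNLISTED
    socket_type = stream
    protocol    = tcp
    port        = 24007
    bind        = 127.0.0.1
    wait        = no
    user        = root
    redirect    = storage3.local 24007
}
```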

Does anybody know a better solution?

Is this even implemented? If not, should I file a bug report?


> You might also want to consider collaborating with Semiosis on his PPA:
> 
> https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
> 
> He's usually in the IRC channel (#gluster on freenode IRC network)
> 


I know semiosis and his PPA. I'm using it for Ubuntu Precise ;-)
Do you think he would allow me to upload to his PPA, or should I just ask
him to add Samba?

> -JM
> 
> 
> - Original Message -
>> Hi List,
>>
>> If somebody is interested in Samba's VFS Gluster plugin in Ubuntu
>> Trusty... I just created a PPA:
>>
>> https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs
>>
>> Currently untested. Feedback welcome :-)
>>
>> --
>> Regards
>>
>> André Bauer
>>
>>
> 


-- 
Regards

André Bauer



[Gluster-users] Samba VFS Gluster plugin PPA for Ubuntu Trusty available

2014-03-24 Thread André Bauer
Hi List,

If somebody is interested in Samba's VFS Gluster plugin in Ubuntu
Trusty... I just created a PPA:

https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs

Currently untested. Feedback welcome :-)

-- 
Regards

André Bauer




Re: [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

2014-03-17 Thread André Bauer
Hi,

I vote for 3, 2, 1.

But I don't like the idea of having an extra node for 3, which means the
bandwidth/speed of the whole cluster is limited to the interface of the
cache node (like in Ceph).

I had a similar wish in mind, but wanted to have an SSD cache in front of
each brick. I know this means you need 4 SSDs on a 4-node cluster, but
IMHO it's better than one caching node which limits the cluster.


Kind regards

André Bauer


On 13.03.2014 12:10, Carlos Capriotti wrote:
> Hello, all.
> 
> I am a little bit impressed by the lack of action on this topic. I hate to
> be "that guy", especially being new here, but it has to be done.
> 
> If I've got this right, we have here a chance of developing Gluster even
> further, sponsored by Google, with a dedicated programmer for the summer.
> 
> In other words, if we play our cards right, we can get a free programmer
> and at least a good start/advance on this fantastic project.
> 
> Well, I've checked the Trello board, and there are a fair number of things
> there.
> 
> There are a couple of things that are not there as well.
> 
> I think it would be nice to listen to the COMMUNITY (yes, that means YOU),
> for either suggestions, or at least a vote.
> 
> My opinion, being also my vote, in order of PERSONAL preference:
> 
> 1) There is a project going on (https://forge.gluster.org/disperse) that
> consists of rewriting the stripe module in Gluster. This is especially
> important because it has a HUGE impact on Total Cost of Implementation
> (customer side), Total Cost of Ownership, and also matching what the
> competition has to offer. Among other things, it would allow gluster to
> implement a RAIDZ/RAID5 type of fault tolerance, much more efficient, and
> would, as far as I understand, allow you to use 3 nodes as a minimum
> stripe+replication. This means 25% less money in computer hardware, with
> increased data safety/resilience.
> 
> 2) We have a recurring issue with split-brain resolution. There is an entry
> on Trello asking/suggesting a mechanism that arbitrates this resolution
> automatically. I think this could come together with another feature:
> a file replication consistency check.
> 
> 3) Accelerator node project. Some storage solutions out there offer an
> "accelerator node", which is, in short, an extra node with a lot of RAM,
> possibly fast disks (SSD), that works like a proxy to the regular
> volumes. Active chunks of files are moved there, and logs (ZIL style) are
> recorded on fast media, among other things. There is NO active project for
> this, or Trello entry, because it is something I started discussing with a
> few fellows just a couple of days ago. I thought of starting to play with
> RAM disks (tmpfs) as scratch disks, but, since we have an opportunity to do
> something more efficient, or at the very least start it, why not?
> 
> Now, c'mon! Time is running out. We need all hands on deck here, for a
> simple vote!
> 
> Can you share 3 lines with your thoughts ?
> 
> Thanks
> 
> 
> 


Re: [Gluster-users] Permission problems with Apache

2014-01-17 Thread André Bauer

Hello List,

It seems I got my problem solved.

The problem is caused by this OTRS file:

https://github.com/OTRS/otrs/blob/rel-2_4/Kernel/System/Ticket/ArticleStorageFS.pm

It works if I comment out this check:

# check fs write permissions!
my $Path =
    "$Self->{ArticleDataDir}/$Self->{ArticleContentPath}/check_permissions.$$";

if ( -d $Path ) {
    File::Path::rmtree( [$Path] ) || die "Can't remove $Path: $!\n";
}
if ( mkdir( "$Self->{ArticleDataDir}/check_permissions_$$", 022 ) ) {
    if ( !rmdir("$Self->{ArticleDataDir}/check_permissions_$$") ) {
        die "Can't remove $Self->{ArticleDataDir}/check_permissions_$$: $!\n";
    }
    if ( File::Path::mkpath( [$Path], 0, 0775 ) ) {
        File::Path::rmtree( [$Path] ) || die "Can't remove $Path: $!\n";
    }
}
else {
    my $Error = $!;
    $Self->{LogObject}->Log(
        Priority => 'notice',
        Message  => "Can't create $Self->{ArticleDataDir}/check_permissions_$$: $Error, "
            . "Try: \$OTRS_HOME/bin/SetPermissions.pl !",
    );
    die "Error: Can't create $Self->{ArticleDataDir}/check_permissions_$$: $Error \n\n "
        . "Try: \$OTRS_HOME/bin/SetPermissions.pl !!!\n";
}
return 1;



After that, everything file-related works fine so far. No errors in any
logs...


Can somebody explain what's causing the error?
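One guess (an assumption on my side, not verified against the Gluster code): the mkdir() in that check passes mode 022, which means a directory with no owner bits at all (d----w--w-), and Perl's mkdir additionally applies the umask, so the result can even be mode 000. A local filesystem still lets the owner rmdir it (only the parent directory's permissions matter), but a FUSE/Gluster mount may enforce the directory's own permissions more strictly. A quick sketch of what mode 022 means:

```shell
#!/bin/sh
# Show what octal mode 022 actually grants: nothing for the owner.
# (mkdir -m applies the mode literally; Perl's mkdir also masks it
# with the umask, which can reduce it further, down to 000.)
d=$(mktemp -d)/check_permissions_demo
mkdir -m 022 "$d"
stat -c '%a %A' "$d"    # prints: 22 d----w--w-
rmdir "$d"              # still allowed locally: only the parent matters
```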

Kind regards

André Bauer


On 16.01.2014 19:03, André Bauer wrote:

Hello Gluster community,

I'm trying to move my OTRS filesystem from DRBD to GlusterFS on Ubuntu
12.04 Server, using Gluster 3.4.2 via the deb files of the semiosis PPA.

I already run my Samba and KVM on GlusterFS, and they are running fine.

I created a new GlusterFS volume and rsynced all OTRS files to it without
problems. All files are user/group otrs with UID 1001, which exists on
both server and client.

The webserver (which also runs as the otrs user) restarts without
problems, but when I try to access the OTRS site I get a 500 because of
permission problems.

Accessing the files as root or otrs user from a shell works.

Here is the Gluster client log: http://pastie.org/8639756
Here is the Gluster server log: http://pastie.org/8639760


Any hints?





[Gluster-users] Permission problems with Apache

2014-01-16 Thread André Bauer

Hello Gluster community,

I'm trying to move my OTRS filesystem from DRBD to GlusterFS on Ubuntu
12.04 Server, using Gluster 3.4.2 via the deb files of the semiosis PPA.

I already run my Samba and KVM on GlusterFS, and they are running fine.

I created a new GlusterFS volume and rsynced all OTRS files to it without
problems. All files are user/group otrs with UID 1001, which exists on
both server and client.

The webserver (which also runs as the otrs user) restarts without
problems, but when I try to access the OTRS site I get a 500 because of
permission problems.

Accessing the files as root or otrs user from a shell works.

Here is the Gluster client log: http://pastie.org/8639756
Here is the Gluster server log: http://pastie.org/8639760


Any hints?

--
Regards

André Bauer

