[Gluster-users] IMP: Release 3.10 beta1 tagging (day-to-day slip and updates)

2017-01-30 Thread Shyam

Hi,

Before the release of 3.10 beta1, we are awaiting the merge of the 
following commits on master, and a backport of the same into the 3.10 
branch. All these commits relate to the brick multiplexing feature, and 
hence I request the concerned developers' attention towards reviewing them 
and providing the required review scores.


# Patches pending review/merge and backport:
- https://review.gluster.org/14763 (core: run many bricks within one 
glusterfsd process)


- https://review.gluster.org/15645 (libglusterfs: make memory pools more 
thread-friendly)


- https://review.gluster.org/15745 (libglusterfs+transport+io-threads: 
fix 256KB stack abuse)


# Release notes pending:
We further need per-feature release notes added to the 3.10.0 release 
notes. This [1] is an initial commit for the same; I will merge it if the 
regressions pass, else I request Raghavendra Talur to nurture it in the 
IST time zone, so that others can start adding commits that update this page.


When doing so, please use the BZ as in [2] for the commit.

Please try and update the release notes for your feature ASAP, so that we 
do not hold back beta1 for the same.
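
For reference, a hedged sketch of how such a release-notes commit could be 
posted; the branch name and file path below follow the usual glusterfs 
tree layout, so adjust as needed:

# Sketch only: commit the release-notes update against the bug in [2].
git checkout release-3.10
git add doc/release-notes/3.10.0.md
git commit -s -m "doc: add release notes for <your feature>

BUG: 1417735"
./rfc.sh   # posts the change to review.gluster.org for review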


# Current beta1 target date: 01-Feb-2017

# Further schedule
Once beta1 is ready, we will need testing feedback from the community, and 
also from the maintainers, before releasing. Your assistance and 
timeliness in this phase are needed to get a quality release out, so I 
request some attention on testing in the near future.


Thanks,
Shyam

[1] Release notes: https://review.gluster.org/16481
[2] Bug to update release notes: 
https://bugzilla.redhat.com/show_bug.cgi?id=1417735

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Expected behaviour of hypervisor on Gluster node loss

2017-01-30 Thread Niklaus Hofer

Hi

I have a question concerning the 'correct' behaviour of GlusterFS:

We have a nice Gluster setup up and running, and most things are working 
nicely. Our setup is as follows:
 - Storage is a 2+1 Gluster setup (2 replicating hosts + 1 arbiter) 
with a volume for virtual machines.

 - Two virtualisation hosts running libvirt / qemu / kvm.

Now the question is: what is supposed to happen when we unplug one of 
the storage nodes (i.e. a power outage in one of our data centers)?
Initially we were hoping that the virtualisation hosts would 
automatically switch over to the second storage node and keep all VMs 
running.


However, during our tests, we have found that this is not the case. 
Instead, when we unplug one of the storage nodes, the virtual machines 
run into all sorts of problems: they become unable to read/write, 
applications crash, and filesystems even get corrupted. That is of course 
not acceptable.


Reading the documentation again, we now think that we have misunderstood 
what we're supposed to be doing. To our understanding, what should 
happen is this:
 - If the virtualisation host is connected to the storage node which is 
still running:

   - everything is fine and the VM keeps running
 - If the virtualisation host was connected to the storage node which 
is now absent:

   - qemu is supposed to 'pause' / 'freeze' the VM
   - Virtualisation host waits for ping timeout
   - Virtualisation host switches over to the other storage node
   - qemu 'unpauses' the VMs
   - The VM is fully operational again

Does my description match the 'optimal' GlusterFS behaviour?
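
As far as we understand, the freeze in the steps above is governed by the 
client-side ping timeout. A minimal sketch of how to inspect and tune it, 
assuming our VM volume is called "vmstore" (a placeholder name):

# Show the current client-side ping timeout (the GlusterFS default is 42s).
gluster volume get vmstore network.ping-timeout

# A shorter timeout shortens the VM freeze, at the cost of more spurious
# disconnects under heavy load.
gluster volume set vmstore network.ping-timeout 30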


Greets
Niklaus Hofer
--
stepping stone GmbH
Neufeldstrasse 9
CH-3012 Bern

Telefon: +41 31 332 53 63
www.stepping-stone.ch
niklaus.ho...@stepping-stone.ch
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Compile xlator separately

2017-01-30 Thread Kaushal M
On Thu, Jan 26, 2017 at 9:20 PM, David Spisla wrote:

> Hello Gluster Community,
>
> I want to make some small changes to the read-only xlator. For this I want
> to re-compile the .so-file separately.
>
> I use the source from gluster 3.8.8 and the makefile according to this
> tutorial:
>
> https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/translator-development.md#this-time-for-real
>
> But this tutorial seems to be obsolete, because I had to make some small
> changes to re-compile the read-only.so. This is my makefile:
>
> # Change these to match your source code.
> TARGET  = read-only.so
> OBJECTS = read-only.o
>
> # Change these to match your environment.
> GLFS_SRC = /srv/glusterfs-3.8.8
> GLFS_LIB = /usr/lib64
> HOST_OS  = GF_LINUX_HOST_OS
>
> # You shouldn't need to change anything below here.
>
> CFLAGS  = -fPIC -Wall -O0 -g \
>   -DHAVE_CONFIG_H -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE \
>   -D$(HOST_OS) -I$(GLFS_SRC) -I$(GLFS_SRC)/contrib/uuid \
>   -I$(GLFS_SRC)/libglusterfs/src
> LDFLAGS = -shared -nostartfiles -L$(GLFS_LIB)
> LIBS = -lpthread
>
> $(TARGET): $(OBJECTS)
> 	$(CC) $(LDFLAGS) -o $(TARGET) $(OBJECTS) $(LIBS)
>
> You see I removed the -lglusterfs from LIBS, because the linker cannot
> find this library. Is there another path for it now?
> I also removed the first $(OBJECTS), because the compiler gave me error
> messages.
>
> What is the best way to compile an xlator manually?
>

Wouldn't doing `make -C xlators/features/read-only` suffice for you?
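
Roughly, and assuming the tree has been configured once, something like 
this (a hedged sketch, using the source path from your mail):

cd /srv/glusterfs-3.8.8
./autogen.sh && ./configure   # one-time step that generates the Makefiles
make -C xlators/features/read-only
# libtool places the rebuilt shared object under src/.libs/:
ls xlators/features/read-only/src/.libs/read-only.so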


>
>
> One more question: Does glusterd bind those feature-xlators dynamically to
> one volume? Because in the volfiles I can not see an entry for them.
>

For a translator to become part of the volume graph, it needs to be added
in glusterd's volgen code. Once this is done, glusterd will add the
translator to volumes when required.
Glusterd right now cannot dynamically pick up an arbitrary translator and
add it to a volume graph. We are working on glusterd2, the next version of
glusterd, which will be able to dynamically pick up and insert translators
into a volume graph.
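
For illustration, a translator that volgen does wire in shows up in the 
generated volfiles as a stanza like the one sketched below ("myvol" and 
the subvolume name are placeholders):

# Volfiles live under /var/lib/glusterd/vols/<VOLNAME>/ on the servers.
grep -B1 -A2 'features/read-only' /var/lib/glusterd/vols/myvol/*.vol
# A wired-in xlator would appear as a stanza like:
#   volume myvol-read-only
#       type features/read-only
#       subvolumes myvol-dht
#   end-volume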


>
> Thank you for your attention!
>
> David Spisla
> Software Developer
> david.spi...@iternity.com
> www.iTernity.com
> Tel: +49 761-590 34 841
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Compile xlator separately

2017-01-30 Thread David Spisla
Hello Gluster Community,

I want to make some small changes to the read-only xlator. For this I want to 
re-compile the .so-file separately.
I use the source from gluster 3.8.8 and the makefile according to this tutorial:

https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/translator-development.md#this-time-for-real

But this tutorial seems to be obsolete, because I had to make some small 
changes to re-compile the read-only.so. This is my makefile:

# Change these to match your source code.
TARGET  = read-only.so
OBJECTS = read-only.o

# Change these to match your environment.
GLFS_SRC = /srv/glusterfs-3.8.8
GLFS_LIB = /usr/lib64
HOST_OS  = GF_LINUX_HOST_OS

# You shouldn't need to change anything below here.

CFLAGS  = -fPIC -Wall -O0 -g \
  -DHAVE_CONFIG_H -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE \
  -D$(HOST_OS) -I$(GLFS_SRC) -I$(GLFS_SRC)/contrib/uuid \
  -I$(GLFS_SRC)/libglusterfs/src
LDFLAGS = -shared -nostartfiles -L$(GLFS_LIB)
LIBS = -lpthread

$(TARGET): $(OBJECTS)
	$(CC) $(LDFLAGS) -o $(TARGET) $(OBJECTS) $(LIBS)


You see I removed the -lglusterfs from LIBS, because the linker cannot find 
this library. Is there another path for it now?
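
(My guess meanwhile: the runtime package ships only the versioned library, 
while -lglusterfs needs the unversioned dev symlink. A hedged check:)

ls /usr/lib64/libglusterfs.so*
# If only e.g. libglusterfs.so.0 is present, the -devel package (or a
# manual symlink) provides the link name the linker is looking for:
ln -s /usr/lib64/libglusterfs.so.0 /usr/lib64/libglusterfs.so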
I also removed the first $(OBJECTS), because the compiler gave me error 
messages.

What is the best way to compile an xlator manually?

One more question: does glusterd bind those feature-xlators dynamically to 
a volume? I ask because I cannot see an entry for them in the volfiles.

Thank you for your attention!

David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
you can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered at Amtsgericht Freiburg: HRB No. 701332
VAT ID de-24266431

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Location of the gluster client log with libgfapi?

2017-01-30 Thread Doug Ingham
Hi Kevin,

On 27 Jan 2017 16:20, "Kevin Lemonnier"  wrote:

On Fri, Jan 27, 2017 at 02:45:46PM -0300, Gambit15 wrote:
> Hey guys,
>  Would anyone be able to tell me the name/location of the gluster client
> log when mounting through libgfapi?
>

Nowhere, unfortunately. If you are talking about KVM (qemu), you'll get it
on the stdout of the VM, which can be annoying to capture depending on what
you are using.
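
If libvirt is managing the guests, their stdout/stderr usually lands in the
per-domain log, which is one hedged place to look ("myvm" is a placeholder
domain name):

# libvirt captures the qemu process output, libgfapi messages included.
less /var/log/libvirt/qemu/myvm.log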


I'm using oVirt 4, which, whilst backed by KVM, I'm aware has a few
defaults that differ from pure KVM.

On the gluster side, I'm running 3.8 in a (2+1)x2 setup with default
server quorum settings.

Basically, every now & then I notice random VHD images popping up in the
heal queue, and they're almost always in pairs, "healing" the same file on
2 of the 3 replicate bricks.
That already strikes me as odd: if a file is "dirty" on more than one
brick, surely that's a split-brain scenario? (Nothing is logged in "info
split-brain", though; the exact checks I'm running are sketched below.)
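
(For reference, the commands I'm watching the heal queue with, using
"datavol" as a placeholder volume name:)

gluster volume heal datavol info
gluster volume heal datavol info split-brain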

Anyway, these heal processes always hang around for a couple of hours, even
when it's just metadata on an arbiter brick.
That doesn't make sense to me; an arbiter brick holds no file data, so it
shouldn't take more than a couple of seconds to heal!?

I spoke with Joe on IRC, and he suggested I'd find more info in the
client's logs...
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users