On 06-07-2015 5:18, Niels de Vos wrote:
On Mon, Jul 06, 2015 at 02:46:52AM -0300, J. Christopher Pereira Zimmermann
wrote:
Hi Niels,
Thanks for bouncing.
I'm not sure whether you are asking how to figure out the hole/data offsets
from the underlying fs at once, or about the EC (erasure coding)
communication [...]
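For the first reading, here is a minimal sketch (mine, not from the thread)
of how the hole/data offsets can be read from the underlying fs at once:
alternate lseek(SEEK_DATA) and lseek(SEEK_HOLE) until ENXIO signals that
only a hole remains. The program layout is an assumption for illustration.

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t end = lseek(fd, 0, SEEK_END);
    off_t off = 0;
    while (off < end) {
        /* find the next data region at or after 'off' ... */
        off_t data = lseek(fd, off, SEEK_DATA);
        if (data < 0)
            break;  /* ENXIO: no more data, the rest is one hole */
        /* ... and the hole that terminates it */
        off_t hole = lseek(fd, data, SEEK_HOLE);
        if (hole < 0)
            break;
        printf("data: %lld..%lld\n", (long long)data, (long long)hole);
        off = hole;
    }
    close(fd);
    return 0;
}

Run against a sparse file on the brick, this prints one line per data
extent; everything between the printed extents is a hole.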
On 10-05-2015 6:26, Niels de Vos wrote:
On Sat, May 09, 2015 at 06:34:55AM -0300, Christopher Pereira wrote:
Core was generated by `glusterd --xlator-option *.upgrade=on -N'.
Program terminated with signal 11, Segmentation fault.
#0 0x7f489c747c3b in ?? ()
[...]
Bug reported here: [...]
On 11-05-2015 17:57, Vijay Bellur wrote:
On 05/11/2015 10:45 PM, Christopher Pereira wrote:
On 11-05-2015 15:40, Christopher Pereira wrote:
There is an arbiter feature for replica 3 volumes
(https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md)
being released in glusterfs 3.7, which would prevent files from going
into split-brains; you could try that.
On 11-05-2015 1:43, Ravishankar N wrote:
Besides, geo-replication makes it possible to replicate a replica-1
volume in order to achieve similar results as replica-2.
But since geo-rep uses rsync, I guess it's less optimal than
using "replica-n", where I guess blocks are marked as dirty to be
replicated [...]
[...] to resolve *all*
split-brains by choosing the authority brick as the winner (?).
Do we currently have an option for doing something like this?
Benefits for gluster:
- replica-n won't cause split-brains
- scalability (write performance won't be limited to the slowest
brick) [...]
On 10-05-2015 7:30, Niels de Vos wrote:
[...]
This shows two read calls over the network:
1. offset: 0; size: 131072
2. offset: 131072; size: 131072
There are no lseek() requests for SEEK_HOLE/SEEK_DATA sent over the
network. Looking into the operations (FOPs) that are available to the
GlusterFS protocol [...]
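Since no seek FOP crosses the wire, lseek(SEEK_DATA) on the client mount
can presumably only use the kernel's generic fallback, which reports the
whole file as one data extent. A small probe of mine makes the difference
visible (probe_seek_data is a made-up helper, and the path on the FUSE
mount reuses the one from the benchmark below; both are assumptions):

#define _GNU_SOURCE
#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Create an all-hole file at 'path' and ask where the first data byte is.
 * Real SEEK_DATA support answers ENXIO (there is no data at all); the
 * generic fallback answers offset 0. */
static int probe_seek_data(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, 1 << 20) < 0) {  /* 1 MiB length, no data written */
        close(fd);
        unlink(path);
        return -1;
    }
    errno = 0;
    off_t data = lseek(fd, 0, SEEK_DATA);
    close(fd);
    unlink(path);
    if (data == (off_t)-1 && errno == ENXIO)
        return 1;   /* holes are really detected */
    if (data == 0)
        return 0;   /* generic fallback: everything is reported as data */
    return -1;      /* EINVAL etc.: SEEK_DATA not supported at all */
}

int main(void)
{
    int r = probe_seek_data("/mnt/gluster-mount/.seek_probe");
    printf("hole detection: %s\n",
           r == 1 ? "real" : r == 0 ? "fallback (all data)" : "unsupported");
    return 0;
}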
On 10-05-2015 6:26, Niels de Vos wrote:
Because 'glusterd --xlator-option *.upgrade=on -N' is the command that
failed, I guess that the core was generated during an update process.
It sounds possible that the packages got updated anyway because the
failure of the glusterd command was not fatal. HTH, Niels
[...]
glusterfs-3.8dev-0.58.gitf692757.el7.centos.x86_64
glusterfs-debuginfo-3.8dev-0.58.gitf692757.el7.centos.x86_64
Best regards,
Christopher Pereira
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
Hi,
I did a local sparse-image copy benchmark, copying from a gluster mount
(of a local gluster-server) vs copying directly from the gluster brick,
to test SEEK_HOLE support and performance.
Motivation: VM image files are sparse.
Benchmark:
cp /mnt/gluster-mount/image /tmp/
=> takes [...]
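For reference, this is roughly what a sparse-aware copy has to do; a
sketch of mine, not cp's actual implementation: copy only the data extents
reported by SEEK_DATA/SEEK_HOLE, skip the holes in both files, and
ftruncate() the destination to the source length so a trailing hole
survives. The 128 KiB buffer mirrors the read size seen in the network
trace above.

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int copy_range(int in, int out, off_t off, off_t len)
{
    char buf[131072];  /* matches the 131072-byte reads in the trace */
    while (len > 0) {
        size_t want = len > (off_t)sizeof(buf) ? sizeof(buf) : (size_t)len;
        ssize_t n = pread(in, buf, want, off);
        if (n <= 0)
            return -1;
        if (pwrite(out, buf, (size_t)n, off) != n)
            return -1;
        off += n;
        len -= n;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    off_t end = lseek(in, 0, SEEK_END);
    off_t off = 0;
    while (off < end) {
        off_t data = lseek(in, off, SEEK_DATA);
        if (data < 0)
            break;  /* ENXIO: only a trailing hole is left */
        off_t hole = lseek(in, data, SEEK_HOLE);
        if (hole < 0 || copy_range(in, out, data, hole - data) < 0) {
            perror("copy");
            return 1;
        }
        off = hole;  /* skip the hole in both source and destination */
    }
    if (ftruncate(out, end) < 0) {  /* preserve length / trailing hole */
        perror("ftruncate");
        return 1;
    }
    close(in);
    close(out);
    return 0;
}

When only the fallback is available, the loop degenerates into one
full-file copy, which is exactly the difference this benchmark should show
between the mount and the brick.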