[Gluster-devel] Single brick expansion

2018-04-26 Thread Gandalf Corvotempesta
Any updates on this feature?
It was planned for v4 but seems to have been postponed...

[Gluster-devel] Health output like mdstat

2017-11-07 Thread Gandalf Corvotempesta
I think it would be useful to add a cumulative cluster health output, like
mdstat for mdadm, so that with a single command it would be possible to see:

1) how many nodes are UP and DOWN (and which nodes are DOWN)
2) any background operations running (like healing or scrubbing), with
their progress
3) any split-brain files that won't be fixed automatically

Currently, we need to run multiple commands to see cluster health.

An even better version would be something similar to "mdadm --detail /dev/md0".
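
To make the comparison concrete, this is roughly what is needed today versus the mdadm side (a sketch; the volume name gv0 is hypothetical):

  # today: several commands to get the full picture
  gluster peer status
  gluster volume status gv0
  gluster volume heal gv0 info
  gluster volume heal gv0 info split-brain

  # the single-command mdadm equivalents this request wants to mimic
  cat /proc/mdstat
  mdadm --detail /dev/md0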


Re: [Gluster-devel] [Gluster-users] BoF - Gluster for VM store use case

2017-11-03 Thread Gandalf Corvotempesta
Could you please share the fio command line used for this test?
Additionally, can you tell me the time needed to extract the kernel source?
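
(For reference, the job parameters visible in the output below suggest invocations roughly like the following. This is a reconstruction from the reported block sizes, iodepths and job counts only; the exact flags, file sizes and target paths are assumptions, not necessarily what Ramon used:)

  # sequential write: 10GB in 1024k blocks, iodepth 200
  fio --name=writefile --rw=write --bs=1024k --size=10g \
      --ioengine=libaio --iodepth=200 --direct=1

  # random read / random write: 4k blocks, 4 jobs, iodepth 128, ~30s runs
  fio --name=benchmark --rw=randread --bs=4k --numjobs=4 --iodepth=128 \
      --ioengine=libaio --direct=1 --size=4g --runtime=30 --time_based \
      --group_reporting
  fio --name=benchmark --rw=randwrite --bs=4k --numjobs=4 --iodepth=128 \
      --ioengine=libaio --direct=1 --size=4g --runtime=30 --time_based \
      --group_reporting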

On 2 Nov 2017 at 11:24 PM, "Ramon Selga" wrote:

> Hi,
>
> Just for your reference, we got some similar values in a customer setup
> with three nodes, each with a single Xeon and 4x8TB HDDs, and a dual 10GbE
> backbone.
>
> We did a simple benchmark with the fio tool on a virtual disk (virtio) of
> 1TiB in size, XFS-formatted directly (no partitions, no LVM), inside a VM
> (Debian stretch, dual core, 4GB RAM) deployed on a gluster volume, disperse 3
> redundancy 1 distributed 2, sharding enabled.
>
> We ran a sequential write test (a 10GB file in 1024k blocks), a random read
> test with 4k blocks and a random write test also with 4k blocks, several
> times, with results very similar to the following:
>
> writefile: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio,
> iodepth=200
> fio-2.16
> Starting 1 process
>
> writefile: (groupid=0, jobs=1): err= 0: pid=11515: Thu Nov  2 16:50:05 2017
>   write: io=10240MB, bw=473868KB/s, iops=462, runt= 22128msec
> slat (usec): min=20, max=98830, avg=1972.11, stdev=6612.81
> clat (msec): min=150, max=2979, avg=428.49, stdev=189.96
>  lat (msec): min=151, max=2979, avg=430.47, stdev=189.90
> clat percentiles (msec):
>  |  1.00th=[  204],  5.00th=[  249], 10.00th=[  273], 20.00th=[  293],
>  | 30.00th=[  306], 40.00th=[  318], 50.00th=[  351], 60.00th=[  502],
>  | 70.00th=[  545], 80.00th=[  578], 90.00th=[  603], 95.00th=[  627],
>  | 99.00th=[  717], 99.50th=[  775], 99.90th=[ 2966], 99.95th=[ 2966],
>  | 99.99th=[ 2966]
> lat (msec) : 250=5.09%, 500=54.65%, 750=39.64%, 1000=0.31%, 2000=0.07%
> lat (msec) : >=2000=0.24%
>   cpu  : usr=7.81%, sys=1.48%, ctx=1221, majf=0, minf=11
>   IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%,
> >=64=99.4%
>  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.1%
>  issued: total=r=0/w=10240/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>  latency   : target=0, window=0, percentile=100.00%, depth=200
>
> Run status group 0 (all jobs):
>   WRITE: io=10240MB, aggrb=473868KB/s, minb=473868KB/s, maxb=473868KB/s,
> mint=22128msec, maxt=22128msec
>
> Disk stats (read/write):
>   vdg: ios=0/10243, merge=0/0, ticks=0/2745892, in_queue=2745884,
> util=99.18
>
> benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio,
> iodepth=128
> ...
> fio-2.16
> Starting 4 processes
>
> benchmark: (groupid=0, jobs=4): err= 0: pid=11529: Thu Nov  2 16:52:40 2017
>   read : io=1123.9MB, bw=38347KB/s, iops=9586, runt= 30011msec
> slat (usec): min=1, max=228886, avg=415.40, stdev=3975.72
> clat (usec): min=482, max=328648, avg=52664.65, stdev=30216.00
>  lat (msec): min=9, max=527, avg=53.08, stdev=30.38
> clat percentiles (msec):
>  |  1.00th=[   12],  5.00th=[   22], 10.00th=[   23], 20.00th=[   25],
>  | 30.00th=[   33], 40.00th=[   38], 50.00th=[   47], 60.00th=[   55],
>  | 70.00th=[   64], 80.00th=[   76], 90.00th=[   95], 95.00th=[  111],
>  | 99.00th=[  151], 99.50th=[  163], 99.90th=[  192], 99.95th=[  196],
>  | 99.99th=[  210]
> lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
> lat (msec) : 10=0.03%, 20=3.59%, 50=52.41%, 100=36.01%, 250=7.96%
> lat (msec) : 500=0.01%
>   cpu  : usr=0.29%, sys=1.10%, ctx=10157, majf=0, minf=549
>   IO depths: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%,
> >=64=99.9%
>  submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
>  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.1%
>  issued: total=r=287705/w=0/d=0, short=r=0/w=0/d=0,
> drop=r=0/w=0/d=0
>  latency   : target=0, window=0, percentile=100.00%, depth=128
>
> Run status group 0 (all jobs):
>READ: io=1123.9MB, aggrb=38346KB/s, minb=38346KB/s, maxb=38346KB/s,
> mint=30011msec, maxt=30011msec
>
> Disk stats (read/write):
>   vdg: ios=286499/2, merge=0/0, ticks=3707064/64, in_queue=3708680,
> util=99.83%
>
> benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio,
> iodepth=128
> ...
> fio-2.16
> Starting 4 processes
>
> benchmark: (groupid=0, jobs=4): err= 0: pid=11545: Thu Nov  2 16:55:54 2017
>   write: io=422464KB, bw=14079KB/s, iops=3519, runt= 30006msec
> slat (usec): min=1, max=230620, avg=1130.75, stdev=6744.31
> clat (usec): min=643, max=540987, avg=143999.57, stdev=66693.45
>  lat (msec): min=8, max=541, avg=145.13, stdev=67.01
> clat percentiles (msec):
>  |  1.00th=[   34],  5.00th=[   75], 10.00th=[   87], 20.00th=[  100],
>  | 30.00th=[  109], 40.00th=[  116], 50.00th=[  123], 60.00th=[  135],
>  | 70.00th=[  151], 80.00th=[  182], 90.00th=[  241], 95.00th=[  289],
>  | 99.00th=[  359], 99.50th=[  416], 99.90th=[  465], 99.95th=[  

Re: [Gluster-devel] [Gluster-users] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Gandalf Corvotempesta
On 5 Jul 2017 at 11:31 AM, "Kaushal M" wrote:

> - Preliminary support for volume expansion has been added. (Note that
> rebalancing is not available yet)


What do you mean by this?
Are there any differences in volume expansion compared to the current architecture?
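
(For context, expansion in the current architecture is a two-step operation, roughly as below; volume and brick names are hypothetical. The question is whether GD2 changes this:)

  gluster volume add-brick gv0 node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
  gluster volume rebalance gv0 start
  gluster volume rebalance gv0 status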

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-03 Thread Gandalf Corvotempesta
2017-05-03 14:22 GMT+02:00 Atin Mukherjee :
> Fix is up @ https://review.gluster.org/#/c/17160/ . The only thing which
> we'd need to decide (and are debating on) is that should we bypass this
> validation with rebalance start force or not. What do others think?

This is a good way to manage bugs. The release notes were updated accordingly,
and the gluster CLI doesn't allow "stupid" operations, like a rebalance on a
sharded volume, that lead to data loss.

Well done.
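
(For reference, the bypass being debated above is the force variant of the rebalance command — an illustrative invocation, with a hypothetical volume name:)

  gluster volume rebalance gv0 start force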


[Gluster-devel] [POC] disaster recovery: reconstruct all shards

2017-02-26 Thread Gandalf Corvotempesta
Would it be possible to add a command, for use in case of disaster recovery
(where everything is broken), to recreate files from their shards?

For example, let's assume a totally down cluster: no trusted pool and
so on, but the sysadmin knows which HDD is part of each distributed replica:

hdd1 + hdd2 + hdd3 are distributed and replicated to hdd4 + hdd5 + hdd6

A CLI tool could traverse hdd1, hdd2 and hdd3 and reassemble all shards,
recreating the original, unsharded files.
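
A minimal sketch of what such a tool could do for one file, assuming the usual shard layout (the base file at its normal path on the brick, the remaining pieces under .shard/ named <GFID>.N). Brick path, file name and output path are hypothetical, and a real tool would also have to treat missing shard files as holes in sparse files:

  BRICK=/bricks/hdd1/brick         # hypothetical brick mount
  FILE=images/disk1.img            # hypothetical file to reconstruct
  OUT=/recovered/disk1.img

  # read the base file's GFID from its xattr and format it as a UUID
  hex=$(getfattr -n trusted.gfid -e hex --only-values "$BRICK/$FILE" | sed 's/^0x//')
  gfid=$(echo "$hex" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')

  # start from the base file, then append the shards in numeric order
  cp "$BRICK/$FILE" "$OUT"
  for shard in $(ls "$BRICK/.shard/" | grep "^$gfid\." | sort -t. -k2 -n); do
      cat "$BRICK/.shard/$shard" >> "$OUT"
  done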


Re: [Gluster-devel] [Gluster-users] Feature Request: Lock Volume Settings

2016-11-14 Thread Gandalf Corvotempesta
On 14 Nov 2016 at 7:28 PM, "Joe Julian" wrote:
>
> IMHO, if a command will result in data loss, fail it. Period.
>
> It should never be ok for a filesystem to lose data. If someone wanted to
> do that with ext or xfs they would have to format.
>

Exactly. I wrote something similar in another mail.
Gluster should preserve data consistency at any cost.
If you are trying to do something dangerous, it should be blocked or, AT
MINIMUM, a confirmation should be required.

Like running fsck on a mounted FS.

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Gandalf Corvotempesta
2016-11-11 16:09 GMT+01:00 Sander Eikelenboom :
> I think that could also be useful when trying to recover from a total
> disaster (where glusterfs bricks are broken down and you end up with loose
> bricks). At least you would be able to keep the filesystem data and remove
> the .glusterfs metadata dir. Then you can use low-level filesystem tools and
> rebuild your glusterfs volume and brick in place, instead of moving it out,
> which could be difficult datasize-wise.

This would be awesome.
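
A rough sketch of the cleanup step described in the quote — stripping gluster's metadata from a brick so its data can be reused (paths are hypothetical; only do this on a brick that is no longer part of any volume):

  BRICK=/bricks/b1/brick
  setfattr -x trusted.glusterfs.volume-id "$BRICK"
  setfattr -x trusted.gfid "$BRICK"
  rm -rf "$BRICK/.glusterfs"
  # a complete cleanup would also strip the trusted.gfid and trusted.afr.*
  # xattrs from every file and directory, e.g. with find + setfattr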


Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-09 Thread Gandalf Corvotempesta
On 10 Nov 2016 at 08:22, "Raghavendra" wrote:
>
> Kyle,
>
> Thanks for your response :). This really helps. From 13s to 0.23s
> seems like a huge improvement.

From 13 minutes to 23 seconds, not from 13 seconds :)
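
(For anyone landing on this thread later, the option being discussed is toggled per volume; the volume name below is hypothetical:)

  gluster volume set gv0 cluster.readdir-optimize on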

Re: [Gluster-devel] Custom Transport layers

2016-10-31 Thread Gandalf Corvotempesta
2016-10-31 12:40 GMT+01:00 Lindsay Mathieson :
> But you can broadcast with UDP - one packet of data through one nic to all
> nodes, so in theory you could broadcast 1GB *per nic* or 3GB via three nics.
> Minus overhead for acks, nacks and ordering :)
>
> But I'm not sure it would work at all in practice now through a switch.

I don't like this idea.
I still prefer properly configured bonding. There is a bonding mode
that does exactly this.
Probably balance-xor and balance-tlb could also do the trick.


Re: [Gluster-devel] Custom Transport layers

2016-10-31 Thread Gandalf Corvotempesta
On 28 Oct 2016 at 2:50 PM, "Lindsay Mathieson" wrote:
>
> I'd like to experiment with broadcast UDP to see if it's feasible in local
> networks. It would be amazing if we could write at 1GB speeds
> simultaneously to all nodes.
>

If you have replica 3 and set up a 3-NIC bonded interface with balance-alb on
the gluster client, you are able to use the 3 NICs simultaneously, writing
at 1Gb to each node.
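
A minimal sketch of that client-side setup, assuming Debian-style ifupdown with the ifenslave package; interface names and the address are hypothetical:

  # /etc/network/interfaces fragment
  auto bond0
  iface bond0 inet static
      address 10.0.0.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1 eth2
      bond-mode balance-alb
      bond-miimon 100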

[Gluster-devel] Allow ":" as brick path

2016-10-29 Thread Gandalf Corvotempesta
Would it be possible to add support for ":" in brick paths?

I'm trying to use the following command:

# gluster volume replace-brick gv0 1.2.3.4:/export/brick1/brick
1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick commit force
wrong brick type: 1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick,
use <HOSTNAME>:<export-dir-abs-path>
Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

but the new brick name is not recognized, probably because there are
too many ":" characters in the argument.

As the hostname (or the IP address) can't contain ":", you could take the
hostname/IP up to the first occurrence of ":" and leave the rest as the
brick name/path.
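
A tiny illustration of the proposed parsing rule (the real fix would live in the CLI's brick parser; this just shows the split-at-the-first-colon idea in shell):

  brick="1.2.3.4:/export/pci-:01:00.0-scsi-0:0:2:0/brick"
  host=${brick%%:*}   # everything before the first ":" -> 1.2.3.4
  path=${brick#*:}    # everything after the first ":"  -> /export/pci-:01:00.0-scsi-0:0:2:0/brick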


Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Gandalf Corvotempesta
2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri :
> No it is not completely valid. We will update it and announce the release
> sometime soon.

Thank you.
Could you also update the other roadmaps with the confirmed features and what
is being worked on?
There is a little bit of confusion in this area of gluster.


Re: [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Gandalf Corvotempesta
On 25 Oct 2016 at 12:42, "Aravinda" wrote:
>
> Hi,
>
> Since the automated test framework for Gluster is in progress, we need help
> from maintainers and developers to test the features and bug fixes to
> release Gluster 3.9.
>

Is the following roadmap still valid, or were any changes made for this
release?
https://www.gluster.org/community/roadmap/3.9/

[Gluster-devel] Progress about small files performance

2016-10-26 Thread Gandalf Corvotempesta
Any progress on the major issue with gluster: small-file
performance?

Anyone working on this?

I would really like to use gluster as storage for maildirs or web hosting,
but with the current performance this wouldn't be possible without adding
additional layers (like exporting huge files with iSCSI, or creating an NFS
VM on top of gluster).

Re: [Gluster-devel] Question on merging zfs snapshot support into the mainline glusterfs

2016-10-13 Thread Gandalf Corvotempesta
On 20 Jun 2016 at 8:08 AM, "B.K.Raghuram" wrote:
>
> We had hosted some changes to an old version of glusterfs (3.6.1) in
> order to incorporate ZFS snapshot support for gluster snapshot commands.

Sorry for the OT, but can someone explain to me the purpose of these
patches?
Are you trying to merge ZFS snapshot support into gluster by replacing the
gluster snapshot code, or to make gluster able to create ZFS snapshots when
gluster is used with ZFS bricks?

I hope what I'm asking is clear.
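
(For context, these are the two mechanisms being contrasted. Gluster's own snapshot command normally relies on bricks on thinly provisioned LVM, while a ZFS-backed brick would be snapshotted at the dataset level; names below are hypothetical:)

  # gluster-managed snapshot of a volume (LVM-thin based today)
  gluster snapshot create snap1 gv0

  # raw ZFS snapshot of the dataset backing a brick
  zfs snapshot tank/brick1@snap1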

Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-09-22 Thread Gandalf Corvotempesta
2016-09-22 18:34 GMT+02:00 Amye Scavarda :
> Nope! RHGS is the supported version and gluster.org is the open source
> version. We'd like to keep the documentation reflecting that, but good
> catch.

OK, but the features should be the same, right?


Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-09-22 Thread Gandalf Corvotempesta
2016-09-22 17:25 GMT+02:00 Rajesh Joseph :
>
> I merged our upstream documentation with the Red Hat documentation and
> removed a few Red Hat specific contents.

Which content is RH specific? Aren't gluster and RHGS the same?

Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-08-15 Thread Gandalf Corvotempesta
On 15 Aug 2016 at 18:32, "Amye Scavarda" wrote:
>
> I'm not sure what you're proposing here?

I'm proposing not to move the current docs from markdown to asciidoc (which
would mean rewriting everything), but just to replace Read the Docs with
something else.

If Read the Docs is the main issue, just change that, and not also the docs
format.

Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-08-13 Thread Gandalf Corvotempesta
On 13 Aug 2016 at 1:18 AM, "Amye Scavarda" wrote:
> Pushing this one higher again to see if anyone has objections to looking
> more into ASCIIdocs.
> Our RTD search issue is a known issue and not likely to be resolved in
> the RTD platform.
> - amye

If the main issue is the broken RTD search, why rewrite the whole docs,
moving from markdown to asciidoc?
Just replace RTD with something else and keep using the current
markdown pages.

Much faster and easier, and you don't have to rewrite everything.

Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-07-22 Thread Gandalf Corvotempesta
On 22 Jul 2016 at 07:54, "Frank Rothenstein" wrote:
>
> So 3.7.11 is the last usable version when using ZFS on bricks, afaik.

Is this issue present even with 3.8?