[Gluster-devel] Feature proposal: xlator to optimize heal and rebalance operations

2017-11-02 Thread Xavi Hernandez
Hi all,

I've created a new GitHub issue [1] to discuss an idea for optimizing
self-heal and rebalance operations by not requiring a lock to be taken
during data operations.

Any thoughts will be welcome.

Regards,

Xavi

[1] https://github.com/gluster/glusterfs/issues/347
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] RIO scope in release 4.0 (Was: Request for Comments: Upgrades from 3.x to 4.0+)

2017-11-02 Thread Shyam Ranganathan

On 11/02/2017 08:10 AM, Kotresh Hiremath Ravishankar wrote:

Hi Amudhan,

Please go through the following points, which should clarify the upgrade 
concerns regarding DHT and RIO in 4.0:


 1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
 2. DHT volumes would not be migrated to RIO. DHT volumes would still be
    using DHT code.
 3. New volume creation should specifically opt for a RIO volume once
    RIO is in place.
 4. RIO should be perceived as another volume type, chosen during volume
    creation just like replicate or EC, which would avoid most of the
    confusion.
 5. RIO will be alpha quality (in terms of features and functionality)
    when it releases with 4.0; it is a tech preview to get feedback from
    the community.
 6. RIO is not a blocker for releasing 4.0, so if the said alpha goals
    are not met, it may not be part of 4.0 at all.


Hope this clarifies volume compatibility concerns from a distribute 
layer perspective in 4.0.


Thanks,
Shyam


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Unplanned Gerrit Outage yesterday

2017-11-02 Thread Nigel Babu
Hello folks,

Yesterday, we had an unplanned Gerrit outage. We have determined that the
machine rebooted, though the reason is still unknown. Michael is continuing
to debug what led to this issue. At this point, Gerrit does not start
automatically when the VM restarts.

We are currently testing a systemd unit file for Gerrit in staging. Once
that's in place, we can ensure that we start Gerrit automatically when we
restart the server.
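
For reference, a minimal systemd unit for Gerrit could look something like
the following (a sketch only; the site path, user and Java location here are
assumptions and will differ from whatever we end up deploying):

[Unit]
Description=Gerrit Code Review
After=network.target

[Service]
Type=simple
User=gerrit
ExecStart=/usr/bin/java -jar /srv/gerrit/bin/gerrit.war daemon -d /srv/gerrit
Restart=on-failure

[Install]
WantedBy=multi-user.target

With something like this installed as /etc/systemd/system/gerrit.service,
'systemctl enable gerrit' would make Gerrit come up on every reboot.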

Timeline of events (in CET):
16:29 - I receive an alert that Gerrit is down. This goes ignored because
we're still working on Jenkins.

18:25 - I notice the alerts as we're packing up for the day and start
Gerrit.

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] BoF - Gluster for VM store use case

2017-11-02 Thread Ramon Selga

Hi,

Just for your reference, we got similar values in a customer setup with three 
nodes, each with a single Xeon and 4x 8TB HDDs, and a dual 10GbE backbone.


We did a simple benchmark with the fio tool on a 1TiB virtual disk (virtio), 
formatted directly with XFS (no partitions, no LVM), inside a VM (Debian 
stretch, dual core, 4GB RAM) deployed on a gluster volume of type disperse 3, 
redundancy 1, distributed 2, with sharding enabled.


We ran a sequential write test (a 10GB file in 1024k blocks), a random read 
test with 4k blocks and a random write test also with 4k blocks, several 
times, with results very similar to the following:


writefile: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=200
fio-2.16
Starting 1 process

writefile: (groupid=0, jobs=1): err= 0: pid=11515: Thu Nov  2 16:50:05 2017
  write: io=10240MB, bw=473868KB/s, iops=462, runt= 22128msec
    slat (usec): min=20, max=98830, avg=1972.11, stdev=6612.81
    clat (msec): min=150, max=2979, avg=428.49, stdev=189.96
 lat (msec): min=151, max=2979, avg=430.47, stdev=189.90
    clat percentiles (msec):
 |  1.00th=[  204],  5.00th=[  249], 10.00th=[ 273], 20.00th=[  293],
 | 30.00th=[  306], 40.00th=[  318], 50.00th=[ 351], 60.00th=[  502],
 | 70.00th=[  545], 80.00th=[  578], 90.00th=[ 603], 95.00th=[  627],
 | 99.00th=[  717], 99.50th=[  775], 99.90th=[ 2966], 99.95th=[ 2966],
 | 99.99th=[ 2966]
    lat (msec) : 250=5.09%, 500=54.65%, 750=39.64%, 1000=0.31%, 2000=0.07%
    lat (msec) : >=2000=0.24%
  cpu  : usr=7.81%, sys=1.48%, ctx=1221, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
 submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued    : total=r=0/w=10240/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=200

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=473868KB/s, minb=473868KB/s, maxb=473868KB/s, 
mint=22128msec, maxt=22128msec


Disk stats (read/write):
  vdg: ios=0/10243, merge=0/0, ticks=0/2745892, in_queue=2745884, util=99.18
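
(For reference, the sequential write run above corresponds roughly to a fio
invocation like the one below; this is reconstructed from the parameters shown
in the output, and the filename and the use of direct I/O are assumptions:

fio --name=writefile --filename=/data/testfile --rw=write --bs=1M --size=10G \
    --ioengine=libaio --iodepth=200 --direct=1 --numjobs=1

The random read and random write runs that follow used bs=4k, iodepth=128 and
4 jobs instead.)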

benchmark: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, 
iodepth=128
...
fio-2.16
Starting 4 processes

benchmark: (groupid=0, jobs=4): err= 0: pid=11529: Thu Nov  2 16:52:40 2017
  read : io=1123.9MB, bw=38347KB/s, iops=9586, runt= 30011msec
    slat (usec): min=1, max=228886, avg=415.40, stdev=3975.72
    clat (usec): min=482, max=328648, avg=52664.65, stdev=30216.00
 lat (msec): min=9, max=527, avg=53.08, stdev=30.38
    clat percentiles (msec):
 |  1.00th=[   12],  5.00th=[   22], 10.00th=[ 23], 20.00th=[   25],
 | 30.00th=[   33], 40.00th=[   38], 50.00th=[ 47], 60.00th=[   55],
 | 70.00th=[   64], 80.00th=[   76], 90.00th=[ 95], 95.00th=[  111],
 | 99.00th=[  151], 99.50th=[  163], 99.90th=[ 192], 99.95th=[  196],
 | 99.99th=[  210]
    lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 10=0.03%, 20=3.59%, 50=52.41%, 100=36.01%, 250=7.96%
    lat (msec) : 500=0.01%
  cpu  : usr=0.29%, sys=1.10%, ctx=10157, majf=0, minf=549
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
 submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
 issued    : total=r=287705/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: io=1123.9MB, aggrb=38346KB/s, minb=38346KB/s, maxb=38346KB/s, 
mint=30011msec, maxt=30011msec


Disk stats (read/write):
  vdg: ios=286499/2, merge=0/0, ticks=3707064/64, in_queue=3708680, util=99.83%

benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, 
iodepth=128
...
fio-2.16
Starting 4 processes

benchmark: (groupid=0, jobs=4): err= 0: pid=11545: Thu Nov  2 16:55:54 2017
  write: io=422464KB, bw=14079KB/s, iops=3519, runt= 30006msec
    slat (usec): min=1, max=230620, avg=1130.75, stdev=6744.31
    clat (usec): min=643, max=540987, avg=143999.57, stdev=66693.45
 lat (msec): min=8, max=541, avg=145.13, stdev=67.01
    clat percentiles (msec):
 |  1.00th=[   34],  5.00th=[   75], 10.00th=[   87], 20.00th=[  100],
 | 30.00th=[  109], 40.00th=[  116], 50.00th=[  123], 60.00th=[  135],
 | 70.00th=[  151], 80.00th=[  182], 90.00th=[  241], 95.00th=[  289],
 | 99.00th=[  359], 99.50th=[  416], 99.90th=[  465], 99.95th=[  490],
 | 99.99th=[  529]
    lat (usec) : 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.05%, 50=1.80%
    lat (msec) : 100=18.07%, 250=71.25%, 500=8.80%, 750=0.02%
  cpu  : usr=0.29%, sys=1.28%, ctx=115493, majf=0, minf=33
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
 submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.

Re: [Gluster-devel] String manipulation

2017-11-02 Thread Atin Mukherjee
I missed clicking "reply all" earlier :)

On Thu, Nov 2, 2017 at 9:34 PM, Xavi Hernandez  wrote:

> Hi Atin,
>
> On 2 November 2017 at 16:31, Atin Mukherjee 
> wrote:
>
>>
>>
>> On Thu, Nov 2, 2017 at 3:35 PM, Xavi Hernandez 
>> wrote:
>>
>>> Hi all,
>>>
>>> Several times I've seen issues with the way strings are handled in many
>>> parts of the code. Sometimes it's because of an incorrect use of some
>>> functions, like strncat(). Others it's because of a lack of error
>>> conditions check. Others it's a failure in allocating the right amount of
>>> memory, or even creating a big array in the stack.
>>>
>>> Maybe we should create a set of library functions to work with strings
>>> to hide all these details and make it easier (and less error prone) to
>>> manipulate strings. I've something already written some time ago that I can
>>> adapt to gluster.
>>>
>>> On top of that we could expand it by adding path manipulation functions
>>> and string parsing features.
>>>
>>
>>> Do you think it's worth it ?
>>>
>>
>> +1, one of the the major offender I see is strncpy () where it has been
>> handled differently across the code base.
>>
>>
> I've just created a GitHub issue [1] to track this.
>
> Xavi
>
> [1] https://github.com/gluster/glusterfs/issues/348
>
>
>>
>>> Xavi
>>>
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity fixes

2017-11-02 Thread Atin Mukherjee
While I appreciate folks contributing a lot of Coverity fixes over the last
few days, I have an observation: for some of the patches the Coverity issue
id(s) are *not* mentioned, which puts maintainers in a difficult situation
when trying to understand the exact complaint coming out of Coverity. From my
past experience fixing Coverity defects, the fixes might sometimes look
simple, but they are not.

May I request all the developers to include the defect id in the commit
message for all the Coverity fixes?
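
For example, something along these lines in the commit message would be
enough (the component, function name and CID number below are made up,
purely for illustration):

    core: fix NULL pointer dereference in some_function()

    Coverity flags a path where 'ptr' can be NULL when it is
    dereferenced; add the missing check before using it.

    CID: 1234567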
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] tendrl-release v1.5.4 is available

2017-11-02 Thread Rohan Kanade
Hello,

The Tendrl team is happy to present tendrl-release v1.5.4

Install docs:
https://github.com/Tendrl/documentation/wiki/Tendrl-release-v1.5.4-(install-guide)

Metrics: https://github.com/Tendrl/documentation/wiki/Metrics
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
Does RIO improve folder listing and rebalance when compared to 3.x?

If yes, do you have any performance data comparing RIO and DHT?

On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M  wrote:

> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
> > if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> > volume without any challenge?
> >
> > I am asking this because 4.0 comes with DHT2?
>
> Very short answer, yes. Your volumes will remain the same. And you
> will continue to access them the same way.
>
> RIO (as DHT2 is now known as) developers in CC can provide more
> information on this. But in short, RIO will not be replacing DHT. It
> was renamed to make this clear.
> Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
> that exist will continue to use DHT, and continue to work as they
> always have.
> You will only be able to create new RIO volumes, and will not be able
> to migrate DHT to RIO.
>
> >
> >
> >
> >
> > On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
> >>
> >> We're fast approaching the time for Gluster-4.0. And we would like to
> >> set out the expected upgrade strategy and try to polish it to be as
> >> user friendly as possible.
> >>
> >> We're getting this out here now, because there was quite a bit of
> >> concern and confusion regarding the upgrades between 3.x and 4.0+.
> >>
> >> ---
> >> ## Background
> >>
> >> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> >> which is backwards incompatible with the GlusterD (GD1) in
> >> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> >> established, rolling upgrades are not possible. This meant that
> >> upgrades from 3.x to 4.0 would require a volume downtime and possible
> >> client downtime.
> >>
> >> This was a cause of concern among many during the recently concluded
> >> Gluster Summit 2017.
> >>
> >> We would like to keep pains experienced by our users to a minimum, so
> >> we are trying to develop an upgrade strategy that avoids downtime as
> >> much as possible.
> >>
> >> ## (Expected) Upgrade strategy from 3.x to 4.0
> >>
> >> Gluster-4.0 will ship with both GD1 and GD2.
> >> For fresh installations, only GD2 will be installed and available by
> >> default.
> >> For existing installations (upgrades) GD1 will be installed and run by
> >> default. GD2 will also be installed simultaneously, but will not run
> >> automatically.
> >>
> >> GD1 will allow rolling upgrades, and allow properly setup Gluster
> >> volumes to be upgraded to 4.0 binaries, without downtime.
> >>
> >> Once the full pool is upgraded, and all bricks and other daemons are
> >> running 4.0 binaries, migration to GD2 can happen.
> >>
> >> To migrate to GD2, all GD1 processes in the cluster need to be killed,
> >> and GD2 started instead.
> >> GD2 will not automatically form a cluster. A migration script will be
> >> provided, which will form a new GD2 cluster from the existing GD1
> >> cluster information, and migrate volume information from GD1 into GD2.
> >>
> >> Once migration is complete, GD2 will pick up the running brick and
> >> other daemon processes and continue. This will only be possible if the
> >> rolling upgrade with GD1 happened successfully and all the processes
> >> are running with 4.0 binaries.
> >>
> >> During the whole migration process, the volume would still be online
> >> for existing clients, who can still continue to work. New clients will
> >> not be possible during this time.
> >>
> >> After migration, existing clients will connect back to GD2 for
> >> updates. GD2 listens on the same port as GD1 and provides the required
> >> SunRPC programs.
> >>
> >> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
> >> versions. without volume downtime, will be possible.
> >>
> >> ### FAQ and additional info
> >>
> >>  Both GD1 and GD2? What?
> >>
> >> While both GD1 and GD2 will be shipped, the GD1 shipped will
> >> essentially be the GD1 from the last 3.x series. It will not support
> >> any of the newer storage or management features being planned for 4.0.
> >> All new features will only be available from GD2.
> >>
> >>  How long will GD1 be shipped/maintained for?
> >>
> >> We plan to maintain GD1 in the 4.x series for at least a couple of
> >> releases, at least 1 LTM release. Current plan is to maintain it till
> >> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
> >> then upgrade to newer releases.
> >>
> >>  Migration script
> >>
> >> The GD1 to GD2 migration script and the required features in GD2 are
> >> being planned only for 4.1. This would technically mean most users
> >> will only be able to migrate from 3.x to 4.1. But users can still
> >> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
> >> improvements. They would only be missing any new features. Users who
> >> live on the edge, should be able to the migration manually in 4.0.
> >>
> >> ---
> >>
> >> Please note that the document above gives the expected u

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
If doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
volume without any issues?

I am asking this because 4.0 comes with DHT2.




On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:

> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, and allow properly setup Gluster
> volumes to be upgraded to 4.0 binaries, without downtime.
>
> Once the full pool is upgraded, and all bricks and other daemons are
> running 4.0 binaries, migration to GD2 can happen.
>
> To migrate to GD2, all GD1 processes in the cluster need to be killed,
> and GD2 started instead.
> GD2 will not automatically form a cluster. A migration script will be
> provided, which will form a new GD2 cluster from the existing GD1
> cluster information, and migrate volume information from GD1 into GD2.
>
> Once migration is complete, GD2 will pick up the running brick and
> other daemon processes and continue. This will only be possible if the
> rolling upgrade with GD1 happened successfully and all the processes
> are running with 4.0 binaries.
>
> During the whole migration process, the volume would still be online
> for existing clients, who can still continue to work. New clients will
> not be possible during this time.
>
> After migration, existing clients will connect back to GD2 for
> updates. GD2 listens on the same port as GD1 and provides the required
> SunRPC programs.
>
> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
> versions. without volume downtime, will be possible.
>
> ### FAQ and additional info
>
>  Both GD1 and GD2? What?
>
> While both GD1 and GD2 will be shipped, the GD1 shipped will
> essentially be the GD1 from the last 3.x series. It will not support
> any of the newer storage or management features being planned for 4.0.
> All new features will only be available from GD2.
>
>  How long will GD1 be shipped/maintained for?
>
> We plan to maintain GD1 in the 4.x series for at least a couple of
> releases, at least 1 LTM release. Current plan is to maintain it till
> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
> then upgrade to newer releases.
>
>  Migration script
>
> The GD1 to GD2 migration script and the required features in GD2 are
> being planned only for 4.1. This would technically mean most users
> will only be able to migrate from 3.x to 4.1. But users can still
> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
> improvements. They would only be missing any new features. Users who
> live on the edge, should be able to the migration manually in 4.0.
>
> ---
>
> Please note that the document above gives the expected upgrade
> strategy, and is not final, nor complete. More details will be added
> and steps will be expanded upon, as we move forward.
>
> To move forward, we need your participation. Please reply to this
> thread with any comments you have. We will try to answer and solve any
> questions or concerns. If there a good new ideas/suggestions, they
> will be integrated. If you just like it as is, let us know any way.
>
> Thanks.
>
> Kaushal and Gluster Developers.
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] BoF - Gluster for VM store use case

2017-11-02 Thread Alex K
Yes, I would be interested to hear more on the findings. Let us know once
you have them.

On Nov 1, 2017 13:10, "Shyam Ranganathan"  wrote:

> On 10/31/2017 08:36 PM, Ben Turner wrote:
>
>> * Erasure coded volumes with sharding - seen as a good fit for VM disk
>>> storage
>>>
>> I am working on this with a customer, we have been able to do 400-500 MB
>> / sec writes!  Normally things max out at ~150-250.  The trick is to use
>> multiple files, create the lvm stack and use native LVM striping.  We have
>> found that 4-6 files seems to give the best perf on our setup.  I don't
>> think we are using sharding on the EC vols, just multiple files and LVM
>> striping.  Sharding may be able to avoid the LVM striping, but I bet
>> dollars to doughnuts you won't see this level of perf:)   I am working on a
>> blog post for RHHI and RHEV + RHS performance where I am able to in some
>> cases get 2x+ the performance out of VMs / VM storage.  I'd be happy to
>> share my data / findings.
>>
>>
> Ben, we would like to hear more, so please do share your thoughts further.
> There are a fair number of users in the community who have this use-case
> and may have some interesting questions around the proposed method.
>
> Shyam
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kotresh Hiremath Ravishankar
Hi Amudhan,

Please go through the following points, which should clarify the upgrade
concerns regarding DHT and RIO in 4.0:


   1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
   2. DHT volumes would not be migrated to RIO. DHT volumes would still be
      using DHT code.
   3. New volume creation should specifically opt for a RIO volume once
      RIO is in place.
   4. RIO should be perceived as another volume type, chosen during volume
      creation just like replicate or EC, which would avoid most of the
      confusion.

Shyam,

Please add if I am missing anything.

Thanks,
Kotresh HR

On Thu, Nov 2, 2017 at 4:36 PM, Amudhan P  wrote:

> does RIO improves folder listing and rebalance, when compared to 3.x?
>
> if yes, do you have any performance data comparing RIO and DHT?
>
> On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M  wrote:
>
>> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
>> > if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
>> > volume without any challenge?
>> >
>> > I am asking this because 4.0 comes with DHT2?
>>
>> Very short answer, yes. Your volumes will remain the same. And you
>> will continue to access them the same way.
>>
>> RIO (as DHT2 is now known as) developers in CC can provide more
>> information on this. But in short, RIO will not be replacing DHT. It
>> was renamed to make this clear.
>> Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
>> that exist will continue to use DHT, and continue to work as they
>> always have.
>> You will only be able to create new RIO volumes, and will not be able
>> to migrate DHT to RIO.
>>
>> >
>> >
>> >
>> >
>> > On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>> >>
>> >> We're fast approaching the time for Gluster-4.0. And we would like to
>> >> set out the expected upgrade strategy and try to polish it to be as
>> >> user friendly as possible.
>> >>
>> >> We're getting this out here now, because there was quite a bit of
>> >> concern and confusion regarding the upgrades between 3.x and 4.0+.
>> >>
>> >> ---
>> >> ## Background
>> >>
>> >> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> >> which is backwards incompatible with the GlusterD (GD1) in
>> >> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> >> established, rolling upgrades are not possible. This meant that
>> >> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> >> client downtime.
>> >>
>> >> This was a cause of concern among many during the recently concluded
>> >> Gluster Summit 2017.
>> >>
>> >> We would like to keep pains experienced by our users to a minimum, so
>> >> we are trying to develop an upgrade strategy that avoids downtime as
>> >> much as possible.
>> >>
>> >> ## (Expected) Upgrade strategy from 3.x to 4.0
>> >>
>> >> Gluster-4.0 will ship with both GD1 and GD2.
>> >> For fresh installations, only GD2 will be installed and available by
>> >> default.
>> >> For existing installations (upgrades) GD1 will be installed and run by
>> >> default. GD2 will also be installed simultaneously, but will not run
>> >> automatically.
>> >>
>> >> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> >> volumes to be upgraded to 4.0 binaries, without downtime.
>> >>
>> >> Once the full pool is upgraded, and all bricks and other daemons are
>> >> running 4.0 binaries, migration to GD2 can happen.
>> >>
>> >> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> >> and GD2 started instead.
>> >> GD2 will not automatically form a cluster. A migration script will be
>> >> provided, which will form a new GD2 cluster from the existing GD1
>> >> cluster information, and migrate volume information from GD1 into GD2.
>> >>
>> >> Once migration is complete, GD2 will pick up the running brick and
>> >> other daemon processes and continue. This will only be possible if the
>> >> rolling upgrade with GD1 happened successfully and all the processes
>> >> are running with 4.0 binaries.
>> >>
>> >> During the whole migration process, the volume would still be online
>> >> for existing clients, who can still continue to work. New clients will
>> >> not be possible during this time.
>> >>
>> >> After migration, existing clients will connect back to GD2 for
>> >> updates. GD2 listens on the same port as GD1 and provides the required
>> >> SunRPC programs.
>> >>
>> >> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> >> versions. without volume downtime, will be possible.
>> >>
>> >> ### FAQ and additional info
>> >>
>> >>  Both GD1 and GD2? What?
>> >>
>> >> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> >> essentially be the GD1 from the last 3.x series. It will not support
>> >> any of the newer storage or management features being planned for 4.0.
>> >> All new features will only be available from GD2.
>> >>
>> >>  How long will GD1 be shipped/maintained for?
>> >>
>> >> We plan to maintain GD1 in the 4.x series for

[Gluster-devel] glusterfs 3.12.2: bricks do not start on NetBSD

2017-11-02 Thread Emmanuel Dreyfus
Hello

I have been missing updates for a while. Now I am trying to upgrade
from 3.8.9 to 3.12.2 and I hit a regression: brick processes
start, but 'gluster volume status' shows them as not started.

The relevant lines in the brick process log are:

[2017-11-02 12:32:56.867606] E [MSGID: 115092] [server-handshake.c:586:server_setvolume] 0-gfs-server: No xlator /export/wd0e is found in child status list
[2017-11-02 12:32:56.867803] I [addr.c:55:compare_addr_and_update] 0-/export/wd0e: allowed = "*", received addr = "192.0.2.109"
[2017-11-02 12:32:56.867863] I [MSGID: 115029] [server-handshake.c:793:server_setvolume] 0-gfs-server: accepted client from bidon.example.net-25092-2017/11/02-12:32:48:770637-gfs-client-0-0-0 (version: 3.12.2)
[2017-11-02 12:32:57.429885] E [MSGID: 115092] [server-handshake.c:586:server_setvolume] 0-gfs-server: No xlator /export/wd0e is found in child status list
[2017-11-02 12:32:57.430162] I [MSGID: 115091] [server-handshake.c:761:server_setvolume] 0-gfs-server: Failed to get client opversion

Any idea what is going wrong?

-- 
Emmanuel Dreyfus
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Coverity covscan for 2017-11-02-84f4f68b (master branch)

2017-11-02 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-02-84f4f68b
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amar Tumballi
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:

> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?
>
>
Thanks for bringing this up. We did hear such concerns earlier too.

Multiple things here:

   - The DHT2 name was a bit confusing, and hence we have renamed it 'RIO'
   (Relation Inherited Objects).
   - RIO is another way of distributing the data, like DHT, with a different
   backend layout format.
   - RIO and DHT will co-exist forever; they will be different volume types
   (or rather, in the future, different distribution logic types) when
   creating a volume.
   - The only change that may happen in the future is the default
   distribution type of a volume: DHT in 4.0 for sure, maybe RIO in 5.0, or
   it may be chosen based on the config (for example, if you create a volume
   with more than 128 bricks, it may be RIO, etc.).


Others closer to the development of RIO can confirm the other details if
there is any more confusion.

Regards,
Amar


>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can still continue to work. New clients will
>> not be possible during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions. without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improvements. They would only be missing any new features. Users who
>> live on the edge, should be able to the migration manually in 4.0.
>>
>> ---
>>
>> Please note that the document above gives the expected upgrade
>> strategy, and is not final, nor complete. More details will be added
>> and steps

[Gluster-devel] String manipulation

2017-11-02 Thread Xavi Hernandez
Hi all,

Several times I've seen issues with the way strings are handled in many
parts of the code. Sometimes it's because of an incorrect use of some
functions, like strncat(). Other times it's because of missing error
checks, a failure to allocate the right amount of memory, or even the
creation of a big array on the stack.

Maybe we should create a set of library functions to work with strings that
hides all these details and makes it easier (and less error prone) to
manipulate strings. I already have something written some time ago that I can
adapt to gluster.
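
To give an idea of what I mean, a minimal sketch of one such helper could
look like the code below (names and interfaces are only illustrative, not
the actual proposal; in gluster the allocations would presumably go through
the GF_*ALLOC wrappers):

#include <stdlib.h>
#include <string.h>

typedef struct {
        char   *data;   /* NUL-terminated contents */
        size_t  len;    /* current length, excluding the NUL */
        size_t  size;   /* allocated capacity */
} strbuf_t;             /* initialize with {NULL, 0, 0} */

/* Append 'str', growing the buffer as needed. Returns 0 on success and
 * -1 on allocation failure, so callers cannot silently truncate. */
static int
strbuf_append(strbuf_t *sb, const char *str)
{
        size_t add = strlen(str);

        if (sb->len + add + 1 > sb->size) {
                size_t newsize = sb->size ? sb->size * 2 : 64;
                while (newsize < sb->len + add + 1)
                        newsize *= 2;
                char *tmp = realloc(sb->data, newsize);
                if (tmp == NULL)
                        return -1;
                sb->data = tmp;
                sb->size = newsize;
        }
        memcpy(sb->data + sb->len, str, add + 1);
        sb->len += add;
        return 0;
}

Path manipulation and parsing helpers could then be built on top of the same
buffer type.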

On top of that we could expand it by adding path manipulation functions and
string parsing features.

Do you think it's worth it?

Xavi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?

Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.

The RIO (as DHT2 is now known) developers in CC can provide more
information on this. But in short, RIO will not be replacing DHT. It
was renamed to make this clear.
Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
that exist will continue to use DHT, and continue to work as they
always have.
You will only be able to create new RIO volumes, and will not be able
to migrate DHT to RIO.

>
>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can still continue to work. New clients will
>> not be possible during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions. without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improvements. They would only be missing any new features. Users who
>> live on the edge, should be able to the migration manually in 4.0.
>>
>> ---
>>
>> Please note that the document above gives the expected upgrade
>> strategy, and is not final, nor complete. More details will be added
>> and steps will be expanded upon, as we move forward.
>>
>> To move forward, we need your participation. Please reply to this
>> thread with any comments you have. We will try to answer and solve any
>> questions or concerns. If there a good new ideas/suggestions, they
>> will be integrated. If you just like it as is, let us kn

[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
We're fast approaching the time for Gluster-4.0. And we would like to
set out the expected upgrade strategy and try to polish it to be as
user friendly as possible.

We're getting this out here now, because there was quite a bit of
concern and confusion regarding the upgrades between 3.x and 4.0+.

---
## Background

Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
which is backwards incompatible with the GlusterD (GD1) in
GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
established, rolling upgrades are not possible. This meant that
upgrades from 3.x to 4.0 would require a volume downtime and possible
client downtime.

This was a cause of concern among many during the recently concluded
Gluster Summit 2017.

We would like to keep pains experienced by our users to a minimum, so
we are trying to develop an upgrade strategy that avoids downtime as
much as possible.

## (Expected) Upgrade strategy from 3.x to 4.0

Gluster-4.0 will ship with both GD1 and GD2.
For fresh installations, only GD2 will be installed and available by default.
For existing installations (upgrades) GD1 will be installed and run by
default. GD2 will also be installed simultaneously, but will not run
automatically.

GD1 will allow rolling upgrades, and allow properly setup Gluster
volumes to be upgraded to 4.0 binaries, without downtime.

Once the full pool is upgraded, and all bricks and other daemons are
running 4.0 binaries, migration to GD2 can happen.

To migrate to GD2, all GD1 processes in the cluster need to be killed,
and GD2 started instead.
GD2 will not automatically form a cluster. A migration script will be
provided, which will form a new GD2 cluster from the existing GD1
cluster information, and migrate volume information from GD1 into GD2.

Once migration is complete, GD2 will pick up the running brick and
other daemon processes and continue. This will only be possible if the
rolling upgrade with GD1 happened successfully and all the processes
are running with 4.0 binaries.

During the whole migration process, the volume would still be online
for existing clients, who can still continue to work. New clients will
not be possible during this time.

After migration, existing clients will connect back to GD2 for
updates. GD2 listens on the same port as GD1 and provides the required
SunRPC programs.

Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
versions, without volume downtime, will be possible.

### FAQ and additional info

#### Both GD1 and GD2? What?

While both GD1 and GD2 will be shipped, the GD1 shipped will
essentially be the GD1 from the last 3.x series. It will not support
any of the newer storage or management features being planned for 4.0.
All new features will only be available from GD2.

#### How long will GD1 be shipped/maintained for?

We plan to maintain GD1 in the 4.x series for at least a couple of
releases, at least 1 LTM release. Current plan is to maintain it till
4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
then upgrade to newer releases.

#### Migration script

The GD1 to GD2 migration script and the required features in GD2 are
being planned only for 4.1. This would technically mean most users
will only be able to migrate from 3.x to 4.1. But users can still
migrate from 3.x to 4.0 with GD1 and get many bug fixes and
improvements. They would only be missing the new features. Users who
live on the edge should be able to do the migration manually in 4.0.

---

Please note that the document above gives the expected upgrade
strategy, and is not final, nor complete. More details will be added
and steps will be expanded upon, as we move forward.

To move forward, we need your participation. Please reply to this
thread with any comments you have. We will try to answer and solve any
questions or concerns. If there are good new ideas/suggestions, they
will be integrated. If you just like it as is, let us know anyway.

Thanks.

Kaushal and Gluster Developers.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Maintainers' meeting: Minutes (11/01/2017 - Nov 1st, 2017)

2017-11-02 Thread Amar Tumballi
Links

   - Bridge: https://bluejeans.com/205933580
   - Download: https://bluejeans.com/s/9vElE


Attendance

   - [Sorry Note] Jose, Atin (Holiday in India)
   - jdarcy, shyam, amarts, amye, nigelb, michael (misc), ndevos (DST
   failure)


Agenda

   -

   Gluster Summit Updates
   - How to handle first time users properly, so they continue to
  contribute more?
 - AI: Report generation to identify first time contributors in the
 works [Nigel]
   - Handling patches if more than 10 revisions happen in a patchset?
 - “Patch etiquette” documentation (jdarcy)
 - AI: General agreement on this process is present, we need to
 document and share the news to the larger contributors lists
  - Feature deliverables diligence enforcement [Shyam]
  - A feature should not be committed to code without:
 - Sufficient design
 - Documentation updates
 - Relevant test cases
  - Maintainers would be responsible for ensuring all deliverables are
  in place before the merge
  - All deliverables can be tracked in the github issue for the
  feature
  - Please! No more slips of features masquerading as bugs
 - Should we extend test cases for bug patches as well, noting in
 the review if an exception is taken for a bug commit without
a test case?
 - Discussion:
- Can we keep the github issue open till all deliverables are
met and use some flags there, like BZ
    - Possibly not, as folks do not revisit the github issue
    after code submission
- Documentation versioning and handling this with that request
    - There are challenges here: repo versions, search, and such,
    in maintaining versions
- AI: Continue this on the maintainers list and decide on
further course of action [Shyam]
 - Further updates if any?
  - Infra Updates:
  - Jenkins now runs on CentOS 7. The downtime is not yet complete.
 - We’re going to also move bits.gluster.org onto a new server
 today.
 - If all goes well, this deprecates the old Jenkins server which a
 lot of people had SSH access into.
  -

   Gluster 4.0
   - mass reformat to improve consistency (jdarcy)
 - git history can get mangled (cregit tool can help maybe)
 - Ex: cregit.linuxsources.org/
 - AI: [Nigel] to check and see if history mangling can be avoided
 with the above tool
 - AI: [Shyam] add to 4.0 plans mails
 - AI: [Jeff] Provide an example that can help assess work
  - Release tasks update [Shyam]
 - Post 3.13 branching, master is open to absorb all of 4.0
 goodness!
  - We need to ensure we call out the features that are going to
  make it, as the branching deadline for 4.0 would be mid-December,
  considering the end-January release!
 - 3 months from 3.13 is end Feb, but we have decided end Jan as
 4.0 is also an STM
 - Features/Major changes:
- GD2 is a big piece and needs some decisions there
- Protocol changes
- Monitoring changes
- FB Changes
 - AI: [Shyam] to start threads on the list to get things moving
  -

   Round Table
   - Suggest adding “Decisions” section to document, that calls out
  decisions made [Shyam]
 - AI: [Amar] to try and incorporate this into the notes
  - Summit video recording to be uploaded and made public
 - AI: Do we have all the slides? [Amye]
 - AI: BoF summary mails reminder to the owners [Amye]
  - gNFS maintainers announcement [Shyam]
 - AI: To be done in a day or two, adding Shreyas and Jeff from
 Facebook to the MAINTAINERS file [Shyam]
  -

   Decisions:
   - Minor modifications to patches from maintainers is OK if below is
  properly followed-up:
  - Author information remains intact (use the ‘git commit --author’
  option; see the example after this list)
 - Original Author is notified with clear reason for edits:
- Can be “I like the idea, and would like to land it in next
release as I feel its important, would like to make minor
modifications to
some log-messages and send a patch myself on your behalf
to cut down the
time. Thanks, sincerely”
- Or “I like this idea, and don’t see much activity on this
from last few weeks. If you don’t mind I am planning to
resend it on your
behalf for the review”.
- etc, etc.
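
   As an example of keeping authorship intact when resending someone else's
   patch (a generic git invocation, not a Gluster-specific tool):

   # amend/resend the change while preserving the original authorship
   git commit --amend --author="Original Author <original@example.org>"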

---

Join us back in another 2 weeks if you have any further points to discuss.
In the meantime, feel free to discuss further in this thread if there are
thoughts you have on the meeting.


-