[Gluster-users] RIO scope in release 4.0 (Was: Request for Comments: Upgrades from 3.x to 4.0+)

2017-11-02 Thread Shyam Ranganathan

On 11/02/2017 08:10 AM, Kotresh Hiremath Ravishankar wrote:

Hi Amudhan,

Please go through the following points, which should clarify upgrade
concerns from DHT to RIO in 4.0:


 1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
 2. DHT volumes would not be migrated to RIO. DHT volumes would still be
using DHT code.
 3. New volume creation should specifically opt for a RIO volume once
RIO is in place.
 4. RIO should be perceived as another volume type, chosen during volume
creation just like replicate or EC, which avoids most of the confusion
(see the sketch after this list).
 5. RIO will be alpha quality (in terms of features and functionality)
when it releases with 4.0; it is a tech preview to get feedback from
the community.
 6. RIO is not a blocker for releasing 4.0, so if the said alpha goals
are not met, it may not be part of 4.0 at all.
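
As an illustration of point 4, here is a minimal sketch of how a volume
type is opted into at creation time. The replicate and disperse commands
use today's syntax; the 'rio' keyword is purely hypothetical, since the
actual RIO creation syntax is not finalized:

    # Existing volume types are chosen at volume creation:
    gluster volume create repvol replica 3 \
        server1:/bricks/repvol server2:/bricks/repvol server3:/bricks/repvol
    gluster volume create ecvol disperse 3 redundancy 1 \
        server1:/bricks/ecvol server2:/bricks/ecvol server3:/bricks/ecvol

    # A RIO volume would presumably be opted into the same way; 'rio'
    # below is a hypothetical keyword, not finalized syntax:
    gluster volume create riovol rio \
        server1:/bricks/riovol server2:/bricks/riovol server3:/bricks/riovol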


Hope this clarifies volume compatibility concerns from a distribute 
layer perspective in 4.0.


Thanks,
Shyam




Re: [Gluster-users] Gluster Scale Limitations

2017-11-02 Thread Mayur Dewaikar
Hi All—
Thanks for the responses. I am mainly curious about the performance impact of
metadata updates on read/write workloads as the number of nodes increases. Any
commentary on the performance impact for various read/write, random/sequential
IO scenarios as the scale increases? We are not particularly worried about the
restart/reboot condition, as that is an edge case for us.


Thanks,
Mayur



From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Wednesday, November 1, 2017 8:53 PM
To: Mayur Dewaikar ; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster Scale Limitations


On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar wrote:
Hi all,
Are there any scale limitations in terms of how many nodes can be in a single 
Gluster Cluster or how much storage capacity can be managed in a single 
cluster? What are some of the large deployments out there that you know of?

The current design of GlusterD is not capable of handling very large numbers
of nodes in the cluster, especially in node restart/reboot scenarios. We have
heard about deployments with ~100-150 nodes where things are stable, but in
node reboot scenarios some special tweaking of parameters like
network.listen-backlog is required to ensure the TCP connection backlog does
not overflow, which would cause connections between the bricks and glusterd
to fail. The GlusterD2 project will address this aspect of the problem.
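
For example, a minimal sketch of the kind of tweaking referred to above (the
values are illustrative assumptions, not tested recommendations):

    # Raise the TCP accept backlog used by Gluster listeners:
    gluster volume set <volname> network.listen-backlog 1024

    # The kernel caps the effective backlog at net.core.somaxconn,
    # so raise that on each node as well:
    sysctl -w net.core.somaxconn=1024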

Also, since the directory layouts are replicated on all the bricks of a
volume, mkdir, unlink, and other directory operations are costly, and with a
larger number of bricks this impacts latency. We're also working on a project
called RIO to address this issue.
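
One way to observe this cost on an existing volume is Gluster's built-in
profiling, which reports per-FOP latency (these commands exist today; the
workload in the middle is whatever you want to measure):

    gluster volume profile <volname> start
    # ... run a mkdir/unlink-heavy workload against the mount ...
    gluster volume profile <volname> info    # inspect MKDIR/UNLINK latencies
    gluster volume profile <volname> stop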


Thanks,
Mayur


--
- Atin (atinm)

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Darrell Budic
Will the various client packages (CentOS in my case) be able to automatically
handle the upgrade vs. new install decision, or will we be required to do
something manually to determine that?

It's a little unclear whether things will continue without interruption,
because of the way you describe the change from GD1 to GD2; it sounds like it
stops GD1. Early days, obviously, but if you could clarify whether this works
like the rolling upgrades we're used to, that would be appreciated. Also,
could you confirm that we'll be able to upgrade from 3.x (3.1x?) to 4.0,
manually or automatically?


> From: Kaushal M 
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-users@gluster.org; Gluster Devel
> [The full text of the original "Request for Comments" email was quoted
> here; see the complete message at the end of this digest.]

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amar Tumballi
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:

> If doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
> volume without any issues?
>
> I am asking this because 4.0 comes with DHT2.
>
>
Thanks for bringing this up. We did hear such concerns earlier too.

Multiple things here:

   - The DHT2 name was a bit confusing, and hence we have renamed it 'RIO'
   (Relation Inherited Objects).
   - RIO is another way of distributing the data, like DHT, but with a
   different backend layout format.
   - RIO and DHT will co-exist forever; they will be different volume types
   (or, in future, distribution logic types) chosen while creating a volume.
   - The only change that may happen in the future is the default
   distribution type of a volume: DHT in 4.0 for sure, maybe RIO in 5.0, or
   it may be chosen based on the configuration (for example, if you create a
   volume with more than 128 bricks, it may default to RIO).


Others closer to the development of RIO can confirm the other details if
there is any more confusion.

Regards,
Amar


> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>> [The full text of the original "Request for Comments" email was quoted
>> here; see the complete message at the end of this digest.]

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
> If doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
> volume without any issues?
>
> I am asking this because 4.0 comes with DHT2.

Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.

RIO (as DHT2 is now known) developers in CC can provide more
information on this. But in short, RIO will not be replacing DHT; it
was renamed to make this clear.
Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
that exist will continue to use DHT, and continue to work as they
always have.
You will only be able to create new RIO volumes, and will not be able
to migrate DHT to RIO.

> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>> [The full text of the original "Request for Comments" email was quoted
>> here; see the complete message at the end of this digest.]

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
Does RIO improve folder listing and rebalance, when compared to 3.x?

If yes, do you have any performance data comparing RIO and DHT?

On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M  wrote:

> [Kaushal's reply and the original "Request for Comments" email were
> quoted here in full; see the preceding message and the complete message
> at the end of this digest.]

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
If doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access the
volume without any issues?

I am asking this because 4.0 comes with DHT2.




On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:

> [The full text of the original "Request for Comments" email was quoted
> here; see the complete message below.]

[Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
We're fast approaching the time for Gluster-4.0, and we would like to
set out the expected upgrade strategy and try to polish it to be as
user-friendly as possible.

We're getting this out here now because there was quite a bit of
concern and confusion regarding the upgrades between 3.x and 4.0+.

---
## Background

Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
which is backwards incompatible with the GlusterD (GD1) in
GlusterFS-3.1+. As a hybrid cluster of GD1 and GD2 cannot be
established, rolling upgrades across this boundary are not possible.
This meant that upgrades from 3.x to 4.0 would have required volume
downtime and possibly client downtime.

This was a cause of concern among many during the recently concluded
Gluster Summit 2017.

We would like to keep pains experienced by our users to a minimum, so
we are trying to develop an upgrade strategy that avoids downtime as
much as possible.

## (Expected) Upgrade strategy from 3.x to 4.0

Gluster-4.0 will ship with both GD1 and GD2.
For fresh installations, only GD2 will be installed and available by default.
For existing installations (upgrades) GD1 will be installed and run by
default. GD2 will also be installed simultaneously, but will not run
automatically.

GD1 will allow rolling upgrades, allowing properly set up Gluster
volumes to be upgraded to 4.0 binaries without downtime (see the
sketch below).
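
As a rough illustration, the per-node rolling upgrade would look
something like this (a minimal sketch assuming systemd and RPM-based
packaging with replicated volumes; the exact package names and steps
for 4.0 are assumptions until the upgrade guide is published):

    # On one node at a time:
    systemctl stop glusterd
    killall glusterfsd glusterfs      # stop brick and auxiliary processes
    yum update glusterfs-server       # install the 4.0 binaries
    systemctl start glusterd
    # Wait for self-heal to finish before moving to the next node:
    gluster volume heal <volname> info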

Once the full pool is upgraded, and all bricks and other daemons are
running 4.0 binaries, migration to GD2 can happen.

To migrate to GD2, all GD1 processes in the cluster need to be killed,
and GD2 started instead.
GD2 will not automatically form a cluster. A migration script will be
provided, which will form a new GD2 cluster from the existing GD1
cluster information, and migrate volume information from GD1 into GD2.
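
Concretely, the migration step might look roughly like the sketch below.
Everything here is an assumption: the migration script does not exist
yet, and its name and GD2's service unit name are placeholders:

    # On every node, once all binaries are running 4.0:
    systemctl stop glusterd              # stop GD1; bricks keep running

    # On one node, run the (hypothetical) migration script, e.g.:
    # gd2-migrate --gd1-workdir /var/lib/glusterd

    # Then start GD2 on every node:
    systemctl start glusterd2            # GD2 picks up the running bricks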

Once migration is complete, GD2 will pick up the running brick and
other daemon processes and continue. This will only be possible if the
rolling upgrade with GD1 happened successfully and all the processes
are running with 4.0 binaries.

During the whole migration process, the volume would still be online
for existing clients, who can continue to work. New client mounts will
not be possible during this time.

After migration, existing clients will connect back to GD2 for
updates. GD2 listens on the same port as GD1 and provides the required
SunRPC programs.
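
For reference, the management port in question is 24007 by default, so a
quick sanity check after migration could be (assuming the default port):

    ss -tlnp | grep 24007    # should now show glusterd2 listening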

Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
versions, without volume downtime, will be possible.

### FAQ and additional info

#### Both GD1 and GD2? What?

While both GD1 and GD2 will be shipped, the GD1 shipped will
essentially be the GD1 from the last 3.x series. It will not support
any of the newer storage or management features being planned for 4.0.
All new features will only be available from GD2.

#### How long will GD1 be shipped/maintained for?

We plan to maintain GD1 in the 4.x series for at least a couple of
releases, including at least 1 LTM release. The current plan is to
maintain it till 4.2. Beyond 4.2, users will need to first upgrade from
3.x to 4.2, and then upgrade to newer releases.

#### Migration script

The GD1 to GD2 migration script and the required features in GD2 are
being planned only for 4.1. This would technically mean most users
will only be able to migrate from 3.x to 4.1. But users can still
migrate from 3.x to 4.0 with GD1 and get many bug fixes and
improvements; they would only be missing the new features. Users who
live on the edge should be able to do the migration manually in 4.0.

---

Please note that the document above gives the expected upgrade
strategy, and is not final, nor complete. More details will be added
and steps will be expanded upon, as we move forward.

To move forward, we need your participation. Please reply to this
thread with any comments you have. We will try to answer and resolve
any questions or concerns. If there are good new ideas/suggestions,
they will be integrated. If you just like it as is, let us know anyway.

Thanks.

Kaushal and Gluster Developers.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users