Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-07 Thread Darrell Budic
Will the various client packages (CentOS in my case) be able to automatically 
handle the upgrade vs. new-install decision, or will we be required to do 
something manually to determine that?

It’s a little unclear whether things will continue without interruption, given 
the way you describe the change from GD1 to GD2, since it sounds like GD1 gets 
stopped. Early days, obviously, but if you could clarify whether this works like 
the rolling upgrades we’re used to, or how it differs, that would be appreciated. 
Could you also clarify whether we’ll be able to upgrade from 3.x (3.1x?) to 4.0, 
manually or automatically?


> From: Kaushal M 
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-us...@gluster.org; Gluster Devel
> 
> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
> 
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
> 
> ---
> ## Background
> 
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
> 
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
> 
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
> 
> ## (Expected) Upgrade strategy from 3.x to 4.0
> 
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
> 
> GD1 will allow rolling upgrades, and allow properly set-up Gluster
> volumes to be upgraded to 4.0 binaries without downtime.
> 
> Once the full pool is upgraded, and all bricks and other daemons are
> running 4.0 binaries, migration to GD2 can happen.
> 
> To migrate to GD2, all GD1 processes in the cluster need to be killed,
> and GD2 started instead.
> GD2 will not automatically form a cluster. A migration script will be
> provided, which will form a new GD2 cluster from the existing GD1
> cluster information, and migrate volume information from GD1 into GD2.
> 
> Once migration is complete, GD2 will pick up the running brick and
> other daemon processes and continue. This will only be possible if the
> rolling upgrade with GD1 happened successfully and all the processes
> are running with 4.0 binaries.
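
As a concrete illustration of the switch-over described above, a minimal
orchestration sketch might look like the following. The node names, the
`glusterd2` systemd unit and the `gd2-migrate` tool name are assumptions made
for illustration only; the actual migration script and its invocation are not
defined in this thread.

```python
#!/usr/bin/env python3
"""Minimal sketch of the GD1 -> GD2 switch-over described above.

Assumptions (not confirmed by the thread): nodes are reachable over
passwordless SSH, GD2 ships a systemd unit named 'glusterd2', and the
promised migration tool is called 'gd2-migrate' purely as a placeholder.
"""
import subprocess

NODES = ["server1", "server2", "server3"]  # pool members (example names)

def ssh(node, command):
    """Run a command on a node and fail loudly if it errors."""
    subprocess.run(["ssh", node, command], check=True)

# 1. Stop GD1 everywhere. Bricks and clients keep running; only the
#    management plane goes away, so existing mounts stay usable.
for node in NODES:
    ssh(node, "systemctl stop glusterd")

# 2. Start GD2 everywhere. At this point the GD2 instances are
#    standalone; they have not formed a cluster yet.
for node in NODES:
    ssh(node, "systemctl start glusterd2")

# 3. Run the migration tool once (placeholder name). Per the RFC it
#    forms the GD2 cluster from the GD1 peer info, imports the volume
#    definitions, and lets GD2 adopt the already-running bricks.
subprocess.run(["ssh", NODES[0], "gd2-migrate"], check=True)
```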
> 
> During the whole migration process, the volume will remain online
> for existing clients, which can continue to work. New client mounts
> will not be possible during this time.
> 
> After migration, existing clients will connect back to GD2 for
> updates. GD2 listens on the same port as GD1 and provides the required
> SunRPC programs.
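
To make the "same port" point concrete: 24007 is the standard GlusterD
management port, so a quick post-migration sanity check that something is
again listening there could look like the sketch below (the host names are
placeholders).

```python
import socket

# 24007 is the standard GlusterD management port; GD2 is expected to
# answer on the same port so that existing clients can reconnect.
MGMT_PORT = 24007

def management_reachable(host, timeout=3.0):
    """Return True if something accepts connections on the mgmt port."""
    try:
        with socket.create_connection((host, MGMT_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for node in ["server1", "server2", "server3"]:  # example names
    print(node, "management port up:", management_reachable(node))
```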
> 
> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
> versions, without volume downtime, will be possible.
> 
> ### FAQ and additional info
> 
> #### Both GD1 and GD2? What?
> 
> While both GD1 and GD2 will be shipped, the GD1 shipped will
> essentially be the GD1 from the last 3.x series. It will not support
> any of the newer storage or management features being planned for 4.0.
> All new features will only be available from GD2.
> 
> #### How long will GD1 be shipped/maintained for?
> 
> We plan to maintain GD1 in the 4.x series for at least a couple of
> releases, including at least one LTM release. The current plan is to
> maintain it till 4.2. Beyond 4.2, users will need to first upgrade
> from 3.x to 4.2, and then upgrade to newer releases.
> 
> #### Migration script
> 
> The GD1 to GD2 migration script and the required features in GD2 are
> being planned only for 4.1. This technically means most users will
> only be able to migrate from 3.x to 4.1. But users can still migrate
> from 3.x to 4.0 with GD1 and get many bug fixes and improvements;
> they would only be missing the new features. Users who live on the
> edge should be able to do the migration manually in 4.0.
> 
> ---
> 
> Please note that the document above gives the expected upgrade
> strategy, and is neither final nor complete. More details will be
> added and steps will be expanded upon as we move forward.
> 
> To move forward, we need your participation. Please reply to this
> thread with any comments you have. We will try to answer and resolve
> any questions or concerns. If there are good new ideas or suggestions,
> they will be integrated. If you like it as is, let us know anyway.
> 
> Thanks.
> 
> Kaushal and Gluster Developers.

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-06 Thread Alastair Neil
Ahh OK I see, thanks


On 6 November 2017 at 00:54, Kaushal M  wrote:

> On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil 
> wrote:
> > Just so I am clear the upgrade process will be as follows:
> >
> > upgrade all clients to 4.0
> >
> > rolling upgrade all servers to 4.0 (with GD1)
> >
> > kill all GD1 daemons on all servers and run upgrade script (new clients
> > unable to connect at this point)
> >
> > start GD2 (is this necessary, or does the upgrade script do this?)
> >
> >
> > I assume that once the cluster had been migrated to GD2 the glusterd
> startup
> > script will be smart enough to start the correct version?
> >
>
> This should be the process, mostly.
>
> The upgrade script needs GD2 to be running on all nodes before it can
> begin the migration.
> But the nodes don't need to have formed a cluster; the script should
> take care of forming the cluster.
>
>
> > -Thanks
> >
> >
> >
> >
> >
> > On 3 November 2017 at 04:06, Kaushal M  wrote:
> >>
> >> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
> >> wrote:
> >> > Will the various client packages (centos in my case) be able to
> >> > automatically handle the upgrade vs new install decision, or will we
> be
> >> > required to do something manually to determine that?
> >>
> >> We should be able to do this with CentOS (and other RPM based distros)
> >> which have well split glusterfs packages currently.
> >> At this moment, I don't know exactly how much can be handled
> >> automatically, but I expect the amount of manual intervention to be
> >> minimal.
> >> The minimum amount of manual work needed would be enabling and
> >> starting GD2 and starting the migration script.
> >>
> >> >
> >> > It’s a little unclear that things will continue without interruption
> >> > because
> >> > of the way you describe the change from GD1 to GD2, since it sounds
> like
> >> > it
> >> > stops GD1.
> >>
> >> With the described upgrade strategy, we can ensure continuous volume
> >> access to clients during the whole process (provided volumes have been
> >> setup with replication or ec).
> >>
> >> During the migration from GD1 to GD2, any existing clients still
> >> retain access, and can continue to work without interruption.
> >> This is possible because gluster keeps the management  (glusterds) and
> >> data (bricks and clients) parts separate.
> >> So it is possible to interrupt the management parts, without
> >> interrupting data access to existing clients.
> >> Clients and the server side brick processes need GlusterD to start up.
> >> But once they're running, they can run without GlusterD. GlusterD is
> >> only required again if something goes wrong.
> >> Stopping GD1 during the migration process, will not lead to any
> >> interruptions for existing clients.
> >> The brick processes continue to run, and any connected clients continue
> >> to remain connected to the bricks.
> >> Any new clients which try to mount the volumes during this migration
> >> will fail, as a GlusterD will not be available (either GD1 or GD2).
> >>
> >> > Early days, obviously, but if you could clarify if that’s what
> >> > we’re used to as a rolling upgrade or how it works, that would be
> >> > appreciated.
> >>
> >> A Gluster rolling upgrade process, allows data access to volumes
> >> during the process, while upgrading the brick processes as well.
> >> Rolling upgrades with uninterrupted access requires that volumes have
> >> redundancy (replicate or ec).
> >> Rolling upgrades involves upgrading servers belonging to a redundancy
> >> set (replica set or ec set), one at a time.
> >> One at a time,
> >> - A server is picked from a redundancy set
> >> - All Gluster processes are killed on the server, glusterd, bricks and
> >> other daemons included.
> >> - Gluster is upgraded and restarted on the server
> >> - A heal is performed to heal new data onto the bricks.
> >> - Move onto next server after heal finishes.
> >>
> >> Clients maintain uninterrupted access, because a full redundancy set
> >> is never taken offline all at once.
> >>
> >> > Also clarification that we’ll be able to upgrade from 3.x
> >> > (3.1x?) to 4.0, manually or automatically?
> >>
> >> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe,
> >> gdeploy has playbooks to automate it.
> >> At the end of this you will be left with a 4.0 cluster, but still be
> >> running GD1.
> >> Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
> >> that automates this is planned only for 4.1.
> >>
> >> >
> >> >
> >> > 
> >> > From: Kaushal M 
> >> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to
> 4.0+
> >> > Date: November 2, 2017 at 3:56:05 AM CDT
> >> > To: gluster-us...@gluster.org; Gluster Devel
> >> >
> >> > We're fast approaching the time for Gluster-4.0. And we would like to
> >> > set out the expected upgrade strategy and try to polish it to be as

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-05 Thread Kaushal M
On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil  wrote:
> Just so I am clear the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this point)
>
> start GD2 (is this necessary, or does the upgrade script do this?)
>
>
> I assume that once the cluster had been migrated to GD2 the glusterd startup
> script will be smart enough to start the correct version?
>

This should be the process, mostly.

The upgrade script needs GD2 to be running on all nodes before it can
begin the migration.
But the nodes don't need to have formed a cluster; the script should
take care of forming the cluster. A sketch of that precondition check
follows below.
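
A minimal sketch of that precondition check, assuming passwordless SSH and a
`glusterd2` systemd unit name (an assumption; the real unit name may differ):

```python
import subprocess

NODES = ["server1", "server2", "server3"]  # example pool members

def gd2_active(node):
    """True if the (assumed) 'glusterd2' systemd unit is active on the node."""
    result = subprocess.run(
        ["ssh", node, "systemctl is-active --quiet glusterd2"])
    return result.returncode == 0

not_ready = [n for n in NODES if not gd2_active(n)]
if not_ready:
    raise SystemExit("GD2 not running on: " + ", ".join(not_ready))

# With GD2 up everywhere, the migration script can form the cluster
# itself (adding peers one by one) and then import the volume info.
```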


> -Thanks
>
>
>
>
>
> On 3 November 2017 at 04:06, Kaushal M  wrote:
>>
>> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
>> wrote:
>> > Will the various client packages (centos in my case) be able to
>> > automatically handle the upgrade vs new install decision, or will we be
>> > required to do something manually to determine that?
>>
>> We should be able to do this with CentOS (and other RPM based distros)
>> which have well split glusterfs packages currently.
>> At this moment, I don't know exactly how much can be handled
>> automatically, but I expect the amount of manual intervention to be
>> minimal.
>> The minimum amount of manual work needed would be enabling and
>> starting GD2 and starting the migration script.
>>
>> >
>> > It’s a little unclear that things will continue without interruption
>> > because
>> > of the way you describe the change from GD1 to GD2, since it sounds like
>> > it
>> > stops GD1.
>>
>> With the described upgrade strategy, we can ensure continuous volume
>> access to clients during the whole process (provided volumes have been
>> setup with replication or ec).
>>
>> During the migration from GD1 to GD2, any existing clients still
>> retain access, and can continue to work without interruption.
>> This is possible because gluster keeps the management  (glusterds) and
>> data (bricks and clients) parts separate.
>> So it is possible to interrupt the management parts, without
>> interrupting data access to existing clients.
>> Clients and the server side brick processes need GlusterD to start up.
>> But once they're running, they can run without GlusterD. GlusterD is
>> only required again if something goes wrong.
>> Stopping GD1 during the migration process, will not lead to any
>> interruptions for existing clients.
>> The brick processes continue to run, and any connected clients continue
>> to remain connected to the bricks.
>> Any new clients which try to mount the volumes during this migration
>> will fail, as a GlusterD will not be available (either GD1 or GD2).
>>
>> > Early days, obviously, but if you could clarify if that’s what
>> > we’re used to as a rolling upgrade or how it works, that would be
>> > appreciated.
>>
>> A Gluster rolling upgrade process, allows data access to volumes
>> during the process, while upgrading the brick processes as well.
>> Rolling upgrades with uninterrupted access requires that volumes have
>> redundancy (replicate or ec).
>> Rolling upgrades involves upgrading servers belonging to a redundancy
>> set (replica set or ec set), one at a time.
>> One at a time,
>> - A server is picked from a redundancy set
>> - All Gluster processes are killed on the server, glusterd, bricks and
>> other daemons included.
>> - Gluster is upgraded and restarted on the server
>> - A heal is performed to heal new data onto the bricks.
>> - Move onto next server after heal finishes.
>>
>> Clients maintain uninterrupted access, because a full redundancy set
>> is never taken offline all at once.
>>
>> > Also clarification that we’ll be able to upgrade from 3.x
>> > (3.1x?) to 4.0, manually or automatically?
>>
>> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe,
>> gdeploy has playbooks to automate it.
>> At the end of this you will be left with a 4.0 cluster, but still be
>> running GD1.
>> Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
>> that automates this is planned only for 4.1.
>>
>> >
>> >
>> > 
>> > From: Kaushal M 
>> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
>> > Date: November 2, 2017 at 3:56:05 AM CDT
>> > To: gluster-us...@gluster.org; Gluster Devel
>> >
>> > We're fast approaching the time for Gluster-4.0. And we would like to
>> > set out the expected upgrade strategy and try to polish it to be as
>> > user friendly as possible.
>> >
>> > We're getting this out here now, because there was quite a bit of
>> > concern and confusion regarding the upgrades between 3.x and 4.0+.
>> >
>> > ---
>> > ## Background
>> >
>> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> > which is backwards 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Alastair Neil
Just so I am clear the upgrade process will be as follows:

upgrade all clients to 4.0

rolling upgrade all servers to 4.0 (with GD1)

kill all GD1 daemons on all servers and run upgrade script (new clients
unable to connect at this point)

start GD2 (is this necessary, or does the upgrade script do this?)


I assume that once the cluster has been migrated to GD2, the glusterd
startup script will be smart enough to start the correct version?

-Thanks





On 3 November 2017 at 04:06, Kaushal M  wrote:

> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
> wrote:
> > Will the various client packages (centos in my case) be able to
> > automatically handle the upgrade vs new install decision, or will we be
> > required to do something manually to determine that?
>
> We should be able to do this with CentOS (and other RPM based distros)
> which have well split glusterfs packages currently.
> At this moment, I don't know exactly how much can be handled
> automatically, but I expect the amount of manual intervention to be
> minimal.
> The minimum amount of manual work needed would be enabling and
> starting GD2 and starting the migration script.
>
> >
> > It’s a little unclear that things will continue without interruption
> because
> > of the way you describe the change from GD1 to GD2, since it sounds like
> it
> > stops GD1.
>
> With the described upgrade strategy, we can ensure continuous volume
> access to clients during the whole process (provided volumes have been
> setup with replication or ec).
>
> During the migration from GD1 to GD2, any existing clients still
> retain access, and can continue to work without interruption.
> This is possible because gluster keeps the management  (glusterds) and
> data (bricks and clients) parts separate.
> So it is possible to interrupt the management parts, without
> interrupting data access to existing clients.
> Clients and the server side brick processes need GlusterD to start up.
> But once they're running, they can run without GlusterD. GlusterD is
> only required again if something goes wrong.
> Stopping GD1 during the migration process, will not lead to any
> interruptions for existing clients.
> The brick processes continue to run, and any connected clients continue
> to remain connected to the bricks.
> Any new clients which try to mount the volumes during this migration
> will fail, as a GlusterD will not be available (either GD1 or GD2).
>
> > Early days, obviously, but if you could clarify if that’s what
> > we’re used to as a rolling upgrade or how it works, that would be
> > appreciated.
>
> A Gluster rolling upgrade process, allows data access to volumes
> during the process, while upgrading the brick processes as well.
> Rolling upgrades with uninterrupted access requires that volumes have
> redundancy (replicate or ec).
> Rolling upgrades involves upgrading servers belonging to a redundancy
> set (replica set or ec set), one at a time.
> One at a time,
> - A server is picked from a redundancy set
> - All Gluster processes are killed on the server, glusterd, bricks and
> other daemons included.
> - Gluster is upgraded and restarted on the server
> - A heal is performed to heal new data onto the bricks.
> - Move onto next server after heal finishes.
>
> Clients maintain uninterrupted access, because a full redundancy set
> is never taken offline all at once.
>
> > Also clarification that we’ll be able to upgrade from 3.x
> > (3.1x?) to 4.0, manually or automatically?
>
> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe,
> gdeploy has playbooks to automate it.
> At the end of this you will be left with a 4.0 cluster, but still be
> running GD1.
> Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
> that automates this is planned only for 4.1.
>
> >
> >
> > 
> > From: Kaushal M 
> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> > Date: November 2, 2017 at 3:56:05 AM CDT
> > To: gluster-us...@gluster.org; Gluster Devel
> >
> > We're fast approaching the time for Gluster-4.0. And we would like to
> > set out the expected upgrade strategy and try to polish it to be as
> > user friendly as possible.
> >
> > We're getting this out here now, because there was quite a bit of
> > concern and confusion regarding the upgrades between 3.x and 4.0+.
> >
> > ---
> > ## Background
> >
> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> > which is backwards incompatible with the GlusterD (GD1) in
> > GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> > established, rolling upgrades are not possible. This meant that
> > upgrades from 3.x to 4.0 would require a volume downtime and possible
> > client downtime.
> >
> > This was a cause of concern among many during the recently concluded
> > Gluster Summit 2017.
> >
> > We would like to keep pains experienced by our users to a 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Kaushal M
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic  wrote:
> Will the various client packages (centos in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?

We should be able to do this with CentOS (and other RPM-based distros),
which currently have well-split glusterfs packages.
At this moment, I don't know exactly how much can be handled
automatically, but I expect the amount of manual intervention to be
minimal.
The minimum amount of manual work needed would be enabling and starting
GD2 and then running the migration script. A rough sketch of the
upgrade-vs-fresh-install decision follows below.
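
The upgrade-vs-fresh-install decision could plausibly key off whether GD1
already has on-disk state (GD1 keeps its state under /var/lib/glusterd). The
sketch below only illustrates that logic; it is not the actual packaging
scriptlet, and the `glusterd2` unit name is an assumption.

```python
import os
import subprocess

GD1_WORKDIR = "/var/lib/glusterd"  # GD1's on-disk state

def post_install():
    """Rough sketch of the upgrade-vs-fresh-install decision.

    Not an actual RPM scriptlet; the 'glusterd2' unit name is an
    assumption, and the real packaging may do this differently.
    """
    if os.path.isdir(GD1_WORKDIR) and os.listdir(GD1_WORKDIR):
        # Existing installation: keep GD1 as the running management
        # daemon, install GD2 but leave it disabled until the admin
        # migrates (enable/start GD2, then run the migration script).
        subprocess.run(["systemctl", "enable", "--now", "glusterd"],
                       check=True)
    else:
        # Fresh installation: only GD2 is enabled and started.
        subprocess.run(["systemctl", "enable", "--now", "glusterd2"],
                       check=True)
```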

>
> It’s a little unclear that things will continue without interruption because
> of the way you describe the change from GD1 to GD2, since it sounds like it
> stops GD1.

With the described upgrade strategy, we can ensure continuous volume
access for clients during the whole process (provided volumes have been
set up with replication or EC).

During the migration from GD1 to GD2, any existing clients retain
access and can continue to work without interruption.
This is possible because Gluster keeps the management (glusterds) and
data (bricks and clients) parts separate.
So it is possible to interrupt the management parts without
interrupting data access for existing clients.
Clients and the server-side brick processes need GlusterD to start up,
but once they're running, they can run without GlusterD. GlusterD is
only required again if something goes wrong.
Stopping GD1 during the migration process will not lead to any
interruptions for existing clients.
The brick processes continue to run, and any connected clients remain
connected to the bricks.
Any new clients that try to mount the volumes during this migration
will fail, as no GlusterD (either GD1 or GD2) will be available.

> Early days, obviously, but if you could clarify if that’s what
> we’re used to as a rolling upgrade or how it works, that would be
> appreciated.

A Gluster rolling upgrade allows data access to volumes to continue
while the brick processes themselves are being upgraded.
Rolling upgrades with uninterrupted access require that volumes have
redundancy (replicate or EC).
A rolling upgrade involves upgrading the servers belonging to a
redundancy set (replica set or EC set) one at a time.
For each server, one at a time:
- The server is picked from a redundancy set.
- All Gluster processes on the server are killed: glusterd, bricks and
other daemons included.
- Gluster is upgraded and restarted on the server.
- A heal is performed to heal new data onto the bricks.
- Move on to the next server after the heal finishes.
A per-server sketch of these steps follows below.
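
A per-server sketch of those steps, assuming passwordless SSH and
CentOS-style packaging (`glusterfs-server`); the heal check is a rough parse
of `gluster volume heal <vol> info` output, and real scripts should be more
careful:

```python
import subprocess
import time

def run(node, command):
    subprocess.run(["ssh", node, command], check=True)

def heal_pending(node, volume):
    """Rough check: does 'heal info' still report entries to be healed?"""
    out = subprocess.run(
        ["ssh", node, "gluster volume heal %s info" % volume],
        capture_output=True, text=True, check=True).stdout
    counts = []
    for line in out.splitlines():
        if line.startswith("Number of entries:"):
            value = line.split(":", 1)[1].strip()
            if value.isdigit():
                counts.append(int(value))
    # Be conservative: if nothing could be parsed, assume heal is pending.
    return any(counts) if counts else True

def upgrade_server(node, volumes):
    # Kill all Gluster processes on this server; clients fail over to
    # the other members of the redundancy set.
    run(node, "systemctl stop glusterd")
    run(node, "killall glusterfs glusterfsd || true")
    # Upgrade the packages and bring Gluster back up.
    run(node, "yum -y update glusterfs-server")
    run(node, "systemctl start glusterd")
    # Wait for self-heal to catch the bricks up before moving on.
    for volume in volumes:
        while heal_pending(node, volume):
            time.sleep(30)
```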

Clients maintain uninterrupted access, because a full redundancy set
is never taken offline all at once.

> Also clarification that we’ll be able to upgrade from 3.x
> (3.1x?) to 4.0, manually or automatically?

Rolling upgrades from 3.1x to 4.0 are a manual process, but I believe
gdeploy has playbooks to automate it.
At the end of this you will be left with a 4.0 cluster, but will still
be running GD1.
Upgrading from GD1 to GD2 in 4.0 will be a manual process. A script
that automates this is planned only for 4.1.

>
>
> 
> From: Kaushal M 
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-us...@gluster.org; Gluster Devel
>
> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, and allow properly setup Gluster
> volumes to be upgraded to 4.0 binaries, without downtime.
>
> Once the full pool is upgraded, and all bricks and other daemons are
> running 4.0 binaries, migration to GD2 can happen.
>
> To 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
Does RIO improve folder listing and rebalance when compared to 3.x?

If yes, do you have any performance data comparing RIO and DHT?

On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M  wrote:

> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
> > if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> > volume without any challenge?
> >
> > I am asking this because 4.0 comes with DHT2?
>
> Very short answer, yes. Your volumes will remain the same. And you
> will continue to access them the same way.
>
> RIO (as DHT2 is now known as) developers in CC can provide more
> information on this. But in short, RIO will not be replacing DHT. It
> was renamed to make this clear.
> Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
> that exist will continue to use DHT, and continue to work as they
> always have.
> You will only be able to create new RIO volumes, and will not be able
> to migrate DHT to RIO.
>
> >
> >
> >
> >
> > On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
> >>
> >> We're fast approaching the time for Gluster-4.0. And we would like to
> >> set out the expected upgrade strategy and try to polish it to be as
> >> user friendly as possible.
> >>
> >> We're getting this out here now, because there was quite a bit of
> >> concern and confusion regarding the upgrades between 3.x and 4.0+.
> >>
> >> ---
> >> ## Background
> >>
> >> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> >> which is backwards incompatible with the GlusterD (GD1) in
> >> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> >> established, rolling upgrades are not possible. This meant that
> >> upgrades from 3.x to 4.0 would require a volume downtime and possible
> >> client downtime.
> >>
> >> This was a cause of concern among many during the recently concluded
> >> Gluster Summit 2017.
> >>
> >> We would like to keep pains experienced by our users to a minimum, so
> >> we are trying to develop an upgrade strategy that avoids downtime as
> >> much as possible.
> >>
> >> ## (Expected) Upgrade strategy from 3.x to 4.0
> >>
> >> Gluster-4.0 will ship with both GD1 and GD2.
> >> For fresh installations, only GD2 will be installed and available by
> >> default.
> >> For existing installations (upgrades) GD1 will be installed and run by
> >> default. GD2 will also be installed simultaneously, but will not run
> >> automatically.
> >>
> >> GD1 will allow rolling upgrades, and allow properly setup Gluster
> >> volumes to be upgraded to 4.0 binaries, without downtime.
> >>
> >> Once the full pool is upgraded, and all bricks and other daemons are
> >> running 4.0 binaries, migration to GD2 can happen.
> >>
> >> To migrate to GD2, all GD1 processes in the cluster need to be killed,
> >> and GD2 started instead.
> >> GD2 will not automatically form a cluster. A migration script will be
> >> provided, which will form a new GD2 cluster from the existing GD1
> >> cluster information, and migrate volume information from GD1 into GD2.
> >>
> >> Once migration is complete, GD2 will pick up the running brick and
> >> other daemon processes and continue. This will only be possible if the
> >> rolling upgrade with GD1 happened successfully and all the processes
> >> are running with 4.0 binaries.
> >>
> >> During the whole migration process, the volume would still be online
> >> for existing clients, who can still continue to work. New clients will
> >> not be possible during this time.
> >>
> >> After migration, existing clients will connect back to GD2 for
> >> updates. GD2 listens on the same port as GD1 and provides the required
> >> SunRPC programs.
> >>
> >> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
> >> versions, without volume downtime, will be possible.
> >>
> >> ### FAQ and additional info
> >>
> >>  Both GD1 and GD2? What?
> >>
> >> While both GD1 and GD2 will be shipped, the GD1 shipped will
> >> essentially be the GD1 from the last 3.x series. It will not support
> >> any of the newer storage or management features being planned for 4.0.
> >> All new features will only be available from GD2.
> >>
> >>  How long will GD1 be shipped/maintained for?
> >>
> >> We plan to maintain GD1 in the 4.x series for at least a couple of
> >> releases, at least 1 LTM release. Current plan is to maintain it till
> >> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
> >> then upgrade to newer releases.
> >>
> >>  Migration script
> >>
> >> The GD1 to GD2 migration script and the required features in GD2 are
> >> being planned only for 4.1. This would technically mean most users
> >> will only be able to migrate from 3.x to 4.1. But users can still
> >> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
> >> improvements. They would only be missing any new features. Users who
> >> live on the edge should be able to do the migration manually in 4.0.
> >>
> >> ---
> 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amudhan P
If I do an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
my volumes without any challenge?

I am asking this because 4.0 comes with DHT2.




On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:

> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, and allow properly setup Gluster
> volumes to be upgraded to 4.0 binaries, without downtime.
>
> Once the full pool is upgraded, and all bricks and other daemons are
> running 4.0 binaries, migration to GD2 can happen.
>
> To migrate to GD2, all GD1 processes in the cluster need to be killed,
> and GD2 started instead.
> GD2 will not automatically form a cluster. A migration script will be
> provided, which will form a new GD2 cluster from the existing GD1
> cluster information, and migrate volume information from GD1 into GD2.
>
> Once migration is complete, GD2 will pick up the running brick and
> other daemon processes and continue. This will only be possible if the
> rolling upgrade with GD1 happened successfully and all the processes
> are running with 4.0 binaries.
>
> During the whole migration process, the volume would still be online
> for existing clients, who can still continue to work. New clients will
> not be possible during this time.
>
> After migration, existing clients will connect back to GD2 for
> updates. GD2 listens on the same port as GD1 and provides the required
> SunRPC programs.
>
> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
> versions, without volume downtime, will be possible.
>
> ### FAQ and additional info
>
>  Both GD1 and GD2? What?
>
> While both GD1 and GD2 will be shipped, the GD1 shipped will
> essentially be the GD1 from the last 3.x series. It will not support
> any of the newer storage or management features being planned for 4.0.
> All new features will only be available from GD2.
>
>  How long will GD1 be shipped/maintained for?
>
> We plan to maintain GD1 in the 4.x series for at least a couple of
> releases, at least 1 LTM release. Current plan is to maintain it till
> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
> then upgrade to newer releases.
>
>  Migration script
>
> The GD1 to GD2 migration script and the required features in GD2 are
> being planned only for 4.1. This would technically mean most users
> will only be able to migrate from 3.x to 4.1. But users can still
> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
> improvements. They would only be missing any new features. Users who
> live on the edge should be able to do the migration manually in 4.0.
>
> ---
>
> Please note that the document above gives the expected upgrade
> strategy, and is not final, nor complete. More details will be added
> and steps will be expanded upon, as we move forward.
>
> To move forward, we need your participation. Please reply to this
> thread with any comments you have. We will try to answer and solve any
> questions or concerns. If there are good new ideas or suggestions,
> they will be integrated. If you like it as is, let us know anyway.
>
> Thanks.
>
> Kaushal and Gluster Developers.
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kotresh Hiremath Ravishankar
Hi Amudhan,

Please go through the following points, which should clarify the
upgrade concerns around DHT and RIO in 4.0:


   1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
   2. DHT volumes would not be migrated to RIO. DHT volumes would still
   be using the DHT code.
   3. New volume creation would have to specifically opt for a RIO volume
   once RIO is in place.
   4. RIO should be perceived as another volume type, chosen during
   volume creation just like replicate or EC, which should avoid most of
   the confusion (see the sketch after this list).
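
For context, the sketch below shows what "choosing the volume type at
creation" already amounts to with today's gluster CLI for replicate and
disperse volumes; the `rio` branch is purely hypothetical, since this thread
does not define the syntax for creating a RIO volume.

```python
def volume_create_args(name, vol_type, bricks, **kw):
    """Build a 'gluster volume create' command line.

    'replica' and 'disperse' use the existing 3.x CLI syntax; the 'rio'
    branch is hypothetical, as the RIO creation syntax is not defined here.
    """
    args = ["gluster", "volume", "create", name]
    if vol_type == "replica":
        args += ["replica", str(kw.get("replica_count", 3))]
    elif vol_type == "disperse":
        args += ["disperse", str(kw["disperse_count"]),
                 "redundancy", str(kw["redundancy"])]
    elif vol_type == "rio":
        # Hypothetical: RIO would be opted into explicitly at creation;
        # existing DHT volumes would never be converted.
        args += ["rio"]
    return args + bricks

print(" ".join(volume_create_args(
    "vol0", "replica",
    ["s1:/bricks/b1", "s2:/bricks/b1", "s3:/bricks/b1"])))
```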

Shyam,

Please add if I am missing anything.

Thanks,
Kotresh HR

On Thu, Nov 2, 2017 at 4:36 PM, Amudhan P  wrote:

> Does RIO improve folder listing and rebalance when compared to 3.x?
>
> If yes, do you have any performance data comparing RIO and DHT?
>
> On Thu, Nov 2, 2017 at 4:12 PM, Kaushal M  wrote:
>
>> On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
>> > if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
>> > volume without any challenge?
>> >
>> > I am asking this because 4.0 comes with DHT2?
>>
>> Very short answer, yes. Your volumes will remain the same. And you
>> will continue to access them the same way.
>>
>> RIO (as DHT2 is now known as) developers in CC can provide more
>> information on this. But in short, RIO will not be replacing DHT. It
>> was renamed to make this clear.
>> Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
>> that exist will continue to use DHT, and continue to work as they
>> always have.
>> You will only be able to create new RIO volumes, and will not be able
>> to migrate DHT to RIO.
>>
>> >
>> >
>> >
>> >
>> > On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>> >>
>> >> We're fast approaching the time for Gluster-4.0. And we would like to
>> >> set out the expected upgrade strategy and try to polish it to be as
>> >> user friendly as possible.
>> >>
>> >> We're getting this out here now, because there was quite a bit of
>> >> concern and confusion regarding the upgrades between 3.x and 4.0+.
>> >>
>> >> ---
>> >> ## Background
>> >>
>> >> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> >> which is backwards incompatible with the GlusterD (GD1) in
>> >> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> >> established, rolling upgrades are not possible. This meant that
>> >> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> >> client downtime.
>> >>
>> >> This was a cause of concern among many during the recently concluded
>> >> Gluster Summit 2017.
>> >>
>> >> We would like to keep pains experienced by our users to a minimum, so
>> >> we are trying to develop an upgrade strategy that avoids downtime as
>> >> much as possible.
>> >>
>> >> ## (Expected) Upgrade strategy from 3.x to 4.0
>> >>
>> >> Gluster-4.0 will ship with both GD1 and GD2.
>> >> For fresh installations, only GD2 will be installed and available by
>> >> default.
>> >> For existing installations (upgrades) GD1 will be installed and run by
>> >> default. GD2 will also be installed simultaneously, but will not run
>> >> automatically.
>> >>
>> >> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> >> volumes to be upgraded to 4.0 binaries, without downtime.
>> >>
>> >> Once the full pool is upgraded, and all bricks and other daemons are
>> >> running 4.0 binaries, migration to GD2 can happen.
>> >>
>> >> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> >> and GD2 started instead.
>> >> GD2 will not automatically form a cluster. A migration script will be
>> >> provided, which will form a new GD2 cluster from the existing GD1
>> >> cluster information, and migrate volume information from GD1 into GD2.
>> >>
>> >> Once migration is complete, GD2 will pick up the running brick and
>> >> other daemon processes and continue. This will only be possible if the
>> >> rolling upgrade with GD1 happened successfully and all the processes
>> >> are running with 4.0 binaries.
>> >>
>> >> During the whole migration process, the volume would still be online
>> >> for existing clients, who can still continue to work. New clients will
>> >> not be possible during this time.
>> >>
>> >> After migration, existing clients will connect back to GD2 for
>> >> updates. GD2 listens on the same port as GD1 and provides the required
>> >> SunRPC programs.
>> >>
>> >> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> >> versions, without volume downtime, will be possible.
>> >>
>> >> ### FAQ and additional info
>> >>
>> >>  Both GD1 and GD2? What?
>> >>
>> >> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> >> essentially be the GD1 from the last 3.x series. It will not support
>> >> any of the newer storage or management features being planned for 4.0.
>> >> All new features will only be available from GD2.
>> >>
>> >>  How long will GD1 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Amar Tumballi
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:

> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?
>
>
Thanks for bringing this up. We did hear such concerns earlier too.

Multiple things here:

   - The DHT2 name was a bit confusing, and hence we have renamed it to
   'RIO' (Relation Inherited Objects).
   - RIO is another way of distributing the data, like DHT, but with a
   different backend layout format.
   - RIO and DHT will co-exist; they will be different volume types
   (or, in the future, distribution logic types) chosen while creating a
   volume.
   - The only change which could happen in the future is the default
   distribution type for new volumes: DHT in 4.0 for sure, maybe RIO in
   5.0, or it may be chosen based on the configuration (for example, a
   volume created with more than 128 bricks might default to RIO). A
   small sketch of that idea follows below.
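
Purely to illustrate that last point (this is the hypothetical described
above, not a committed design), the default distribution type could one day
be derived from the volume's shape:

```python
def default_distribution(brick_count, gluster_major=4):
    """Illustration of the hypothetical config-driven default above.

    Nothing here is a committed design: 4.0 defaults to DHT, and the
    128-brick threshold is just the example used in the mail.
    """
    if gluster_major <= 4:
        return "dht"            # 4.0: DHT stays the default
    if brick_count > 128:
        return "rio"            # large volumes might default to RIO
    return "dht"

print(default_distribution(200, gluster_major=5))  # -> "rio" in this sketch
```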


Others closer to the development of RIO can confirm the other details
if there is any remaining confusion.

Regards,
Amar


>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can still continue to work. New clients will
>> not be possible during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions, without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improvements. They would only be missing any new features. Users who
>> live on the edge should be able to do the migration manually in 4.0.
>>
>> ---
>>
>> Please note that the document above gives the expected upgrade
>> strategy, and is not final, nor 

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P  wrote:
> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?

Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.

The RIO (as DHT2 is now known) developers in CC can provide more
information on this. But in short, RIO will not be replacing DHT; it
was renamed to make this clear.
Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
that exist will continue to use DHT, and continue to work as they
always have.
You will only be able to create new RIO volumes, and will not be able
to migrate DHT to RIO.

>
>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M  wrote:
>>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can still continue to work. New clients will
>> not be possible during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions, without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improvements. They would only be missing any new features. Users who
>> live on the edge should be able to do the migration manually in 4.0.
>>
>> ---
>>
>> Please note that the document above gives the expected upgrade
>> strategy, and is not final, nor complete. More details will be added
>> and steps will be expanded upon, as we move forward.
>>
>> To move forward, we need your participation. Please reply to this
>> thread with any comments you have. We will try to answer and solve any
>> questions or concerns. If there are good new ideas/suggestions, they
>> will be