Re: [Gluster-users] [Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Alastair Neil
Just so I am clear, the upgrade process will be as follows:

upgrade all clients to 4.0

rolling upgrade all servers to 4.0 (with GD1)

kill all GD1 daemons on all servers and run the upgrade script (new clients
unable to connect at this point)

start GD2 (necessary, or does the upgrade script do this?), as sketched below
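
A rough sketch of what I imagine steps 3 and 4 look like, assuming
systemd packaging; the GD2 unit name and the migration script name are
guesses, since neither has been announced:

    systemctl stop glusterd             # kill GD1; bricks keep running
    systemctl enable --now glusterd2    # start GD2 (assumed unit name)
    gluster-migrate-gd1-to-gd2          # hypothetical migration script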


I assume that once the cluster has been migrated to GD2, the glusterd
startup script will be smart enough to start the correct version?

-Thanks

On 3 November 2017 at 04:06, Kaushal M  wrote:

> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
> wrote:
> > Will the various client packages (centos in my case) be able to
> > automatically handle the upgrade vs new install decision, or will we be
> > required to do something manually to determine that?
>
> We should be able to do this with CentOS (and other RPM-based distros)
> which currently have well-split glusterfs packages.
> At this moment, I don't know exactly how much can be handled
> automatically, but I expect the amount of manual intervention to be
> minimal.
> The minimum amount of manual work needed would be enabling and
> starting GD2 and running the migration script.
>
> >
> > It’s a little unclear that things will continue without interruption
> > because of the way you describe the change from GD1 to GD2, since it
> > sounds like it stops GD1.
>
> With the described upgrade strategy, we can ensure continuous volume
> access to clients during the whole process (provided volumes have been
> set up with replication or ec).
>
> During the migration from GD1 to GD2, any existing clients still
> retain access, and can continue to work without interruption.
> This is possible because gluster keeps the management (glusterds) and
> data (bricks and clients) parts separate.
> So it is possible to interrupt the management parts without
> interrupting data access for existing clients.
> Clients and the server-side brick processes need GlusterD to start up.
> But once they're running, they can run without GlusterD. GlusterD is
> only required again if something goes wrong.
> Stopping GD1 during the migration process will not lead to any
> interruptions for existing clients.
> The brick processes continue to run, and any connected clients
> remain connected to the bricks.
> Any new clients which try to mount the volumes during this migration
> will fail, as no GlusterD (either GD1 or GD2) will be available.
>
> > Early days, obviously, but if you could clarify if that’s what
> > we’re used to as a rolling upgrade or how it works, that would be
> > appreciated.
>
> A Gluster rolling upgrade allows continued data access to volumes
> while the brick processes themselves are being upgraded.
> Rolling upgrades with uninterrupted access require that volumes have
> redundancy (replicate or ec).
> A rolling upgrade works through the servers of each redundancy set
> (replica set or ec set), one at a time:
> - A server is picked from the redundancy set
> - All Gluster processes on the server are killed: glusterd, bricks and
> other daemons included
> - Gluster is upgraded and restarted on the server
> - A heal is performed to heal new data onto the bricks
> - Move on to the next server after the heal finishes
>
> Clients maintain uninterrupted access, because a full redundancy set
> is never taken offline all at once.
>
> > Also clarification that we’ll be able to upgrade from 3.x
> > (3.1x?) to 4.0, manually or automatically?
>
> Rolling upgrades from 3.1x to 4.0 are a manual process, but I believe
> gdeploy has playbooks to automate it.
> At the end of this you will be left with a 4.0 cluster, but still be
> running GD1.
> Upgrading from GD1 to GD2 in 4.0 will be a manual process. A script
> that automates this is planned only for 4.1.
>
> >
> >
> > 
> > From: Kaushal M 
> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> > Date: November 2, 2017 at 3:56:05 AM CDT
> > To: gluster-users@gluster.org; Gluster Devel
> >
> > We're fast approaching the release of Gluster-4.0, and we would like
> > to set out the expected upgrade strategy and try to polish it to be
> > as user-friendly as possible.
> >
> > We're getting this out here now, because there was quite a bit of
> > concern and confusion regarding the upgrades between 3.x and 4.0+.
> >
> > ---
> > ## Background
> >
> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> > which is backwards incompatible with the GlusterD (GD1) in
> > GlusterFS-3.1+. As a hybrid cluster of GD1 and GD2 cannot be
> > established, rolling upgrades are not possible. This meant that
> > upgrades from 3.x to 4.0 would require volume downtime and possibly
> > client downtime.
> >
> > This was a cause of concern among many during the recently concluded
> > Gluster Summit 2017.
> >
> > We would like to keep pains experienced by our users to a minimum, so
> > we are trying to develop an upgrade strategy that avoids downtime as
> > much as possible.

Re: [Gluster-users] Gluster Developer Conversations - Nov 28 at 15:00 UTC

2017-11-03 Thread Raghavendra Talur
I propose a talk

"Life of a gluster client process"

We will have a look at one complete life cycle of a client process,
which includes (see the short illustration after the list):
* mount script and parsing of args
* contacting glusterd and fetching volfile
* loading and initializing the xlators
* how glusterd sends updates of volume options
* brick disconnection/reconnection
* glusterd disconnection/reconnection
* termination of mount
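
As a short illustration of the first two items (server1 and demo are
made-up names), a FUSE mount like

    mount -t glusterfs server1:/demo /mnt/demo

invokes the mount.glusterfs helper script, which parses its arguments
and ends up exec'ing a client process that fetches the volfile from
glusterd, roughly:

    glusterfs --volfile-server=server1 --volfile-id=demo /mnt/demo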

Raghavendra Talur



On Wed, Nov 1, 2017 at 9:43 PM, Amye Scavarda  wrote:
> Hi all!
> Based on the popularity of lightning talks at Gluster Summit, we'll be
> trying something new: Gluster Developer Conversations. This will be a
> one-hour meeting on November 28th at 15:00 UTC, with five 5-minute
> lightning talks and time for discussion in between. The meeting will
> be recorded, and I'll be posting the individual talks separately in
> our community channels.
>
> What would you want to talk about?
> Respond on this thread, and if I get more than five, we'll schedule
> follow-up meetings.
> Thanks!
> - amye
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead


Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Kaushal M
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic  wrote:
> Will the various client packages (centos in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?

We should be able to do this with CentOS (and other RPM-based distros)
which currently have well-split glusterfs packages.
At this moment, I don't know exactly how much can be handled
automatically, but I expect the amount of manual intervention to be
minimal.
The minimum amount of manual work needed would be enabling and
starting GD2 and running the migration script.

>
> It’s a little unclear that things will continue without interruption because
> of the way you describe the change from GD1 to GD2, since it sounds like it
> stops GD1.

With the described upgrade strategy, we can ensure continuous volume
access to clients during the whole process (provided volumes have been
set up with replication or ec).
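
As a quick check that a volume actually has redundancy (myvol is a
placeholder name):

    gluster volume info myvol | grep -E 'Type|Number of Bricks'
    # "Type: Replicate" or "Type: Disperse" indicates redundancy;
    # a plain "Type: Distribute" volume has none.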

During the migration from GD1 to GD2, any existing clients still
retain access, and can continue to work without interruption.
This is possible because gluster keeps the management (glusterds) and
data (bricks and clients) parts separate.
So it is possible to interrupt the management parts without
interrupting data access for existing clients.
Clients and the server-side brick processes need GlusterD to start up.
But once they're running, they can run without GlusterD. GlusterD is
only required again if something goes wrong.
Stopping GD1 during the migration process will not lead to any
interruptions for existing clients.
The brick processes continue to run, and any connected clients
remain connected to the bricks.
Any new clients which try to mount the volumes during this migration
will fail, as no GlusterD (either GD1 or GD2) will be available.
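
A way to see this for yourself during the migration window (a sketch):

    pgrep -a glusterd    # nothing listed mid-migration; that is expected
    pgrep -a glusterfsd  # brick processes should still be running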

> Early days, obviously, but if you could clarify if that’s what
> we’re used to as a rolling upgrade or how it works, that would be
> appreciated.

A Gluster rolling upgrade allows continued data access to volumes
while the brick processes themselves are being upgraded.
Rolling upgrades with uninterrupted access require that volumes have
redundancy (replicate or ec).
A rolling upgrade works through the servers of each redundancy set
(replica set or ec set), one at a time (a sketch of one iteration
follows the list):
- A server is picked from the redundancy set
- All Gluster processes on the server are killed: glusterd, bricks and
other daemons included
- Gluster is upgraded and restarted on the server
- A heal is performed to heal new data onto the bricks
- Move on to the next server after the heal finishes
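
One iteration of the above, as a minimal sketch for an RPM-based server
(myvol is a placeholder; exact package and service names may differ):

    systemctl stop glusterd
    killall glusterfs glusterfsd    # stop bricks and other daemons
    yum -y update glusterfs-server  # upgrade the Gluster packages
    systemctl start glusterd        # brings bricks and daemons back up
    gluster volume heal myvol       # kick off a heal
    gluster volume heal myvol info  # repeat until no entries remain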

Clients maintain uninterrupted access, because a full redundancy set
is never taken offline all at once.

> Also clarification that we’ll be able to upgrade from 3.x
> (3.1x?) to 4.0, manually or automatically?

Rolling upgrades from 3.1x to 4.0 are a manual process, but I believe
gdeploy has playbooks to automate it.
At the end of this you will be left with a 4.0 cluster, but still be
running GD1.
Upgrading from GD1 to GD2 in 4.0 will be a manual process. A script
that automates this is planned only for 4.1.

>
>
> 
> From: Kaushal M 
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-users@gluster.org; Gluster Devel
>
> We're fast approaching the release of Gluster-4.0, and we would like
> to set out the expected upgrade strategy and try to polish it to be
> as user-friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+. As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require volume downtime and possibly
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades), GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, letting properly set up Gluster
> volumes be upgraded to 4.0 binaries without downtime.
>
> Once the full pool is upgraded, and all bricks and other daemons are
> running 4.0 binaries, migration to GD2 can happen.
>
> To 

Re: [Gluster-users] Memory Leakage in Gluster 3.10.2-1

2017-11-03 Thread Hans Henrik Happe
Hi,

I just filed this bug, which seems to be related:

https://bugzilla.redhat.com/show_bug.cgi?id=1509071
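
For anyone who wants to attach more evidence to the report, a glusterd
statedump includes memory allocation stats (a sketch, assuming the
default run directory):

    kill -USR1 $(pidof glusterd)    # ask glusterd to dump its state
    ls /var/run/gluster/*dump*      # the statedump file lands here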

Cheers,
Hans Henrik

On 27-07-2017 15:53, Mohammed Rafi K C wrote:
> Are you still facing the problem? If so, can you please provide the
> workload, cmd_log_history file, log files, etc.?
> 
> 
> Regards
> 
> Rafi KC
> 
> 
> On 06/23/2017 02:06 PM, shridhar s n wrote:
>> Hi All,
>>
>> We are using GlusterFS 3.10.2 (upgraded from 3.7.0 last week) on
>> CentOS 7.x.
>>
>> We continue to see memory utilization going up once every 3 days. The
>> memory utilization of the server daemon (glusterd) keeps
>> increasing: in about 30+ hours, the memory utilization of the glusterd
>> service alone will reach 70% of available memory. Since we have alarms
>> for this threshold, we get notified, and the only way to stop it so
>> far is to restart glusterd.
>>
>> GlusterFS is configured on the two server nodes with the replica option.
>>
>> Kindly let us know how to fix this memory leak.
>>
>> Thanks in advance,
>>
>> Shridhar S N
>>
>>