Re: [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-12-11 Thread Alastair Neil
Niels, I don't know if this is adequate, but I did run a simple smoke test
today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 freshly
installed CentOS 7 VMs,

created a 2G image file and wrote an XFS file system on it on each
system,

mounted each under /export/brick1, and created /export/brick1/test on each
node,

probed the two other systems from one node (a), and created a replica 3
volume using the bricks at /export/brick1/test on each node,

started the volume and mounted it under /mnt/gluster-test on node a, and

did some brief tests using dd into the mount point on node a; all seemed
fine - no errors, nothing unexpected.
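
For the record, the steps boiled down to something like this (host names,
volume name, image path and mount point are illustrative):

  # truncate -s 2G /var/lib/brick1.img            (on each node)
  # mkfs.xfs /var/lib/brick1.img
  # mkdir -p /export/brick1
  # mount -o loop /var/lib/brick1.img /export/brick1
  # mkdir /export/brick1/test
  # gluster peer probe node-b                     (from node a)
  # gluster peer probe node-c
  # gluster volume create testvol replica 3 \
        node-a:/export/brick1/test node-b:/export/brick1/test node-c:/export/brick1/test
  # gluster volume start testvol
  # mount -t glusterfs node-a:/testvol /mnt/gluster-test
  # dd if=/dev/zero of=/mnt/gluster-test/testfile bs=1M count=512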

On 23 October 2017 at 17:42, Niels de Vos  wrote:

> On Mon, Oct 23, 2017 at 02:12:53PM -0400, Alastair Neil wrote:
> > Any idea when these packages will be in the CentOS mirrors? There is no
> > sign of them on download.gluster.org.
>
> We're waiting for someone other than me to test the new packages at
> least a little. Installing the packages and running something on top of
> a Gluster volume is already sufficient; just describe a bit what was
> tested. Once a confirmation is sent that it works for someone, we can
> mark the packages for releasing to the mirrors.
>
> Getting the (unsigned) RPMs is easy, run this on your test environment:
>
>   # yum --enablerepo=centos-gluster312-test update glusterfs
>
> This does not restart the brick processes, so I/O is not affected by the
> installation. Make sure to restart the processes (or just reboot)
> and do whatever validation you deem sufficient.
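>
> One way to restart them, as a rough sketch (restart one node at a time if
> the volume should stay available; a plain reboot works just as well):
>
>   # systemctl stop glusterd
>   # killall glusterfsd glusterfs
>   # systemctl start glusterd
>   # gluster volume status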
>
> Thanks,
> Niels
>
>
> >
> > On 13 October 2017 at 08:45, Jiffin Tony Thottan 
> > wrote:
> >
> > > The Gluster community is pleased to announce the release of Gluster
> 3.12.2
> > > (packages available at [1,2,3]).
> > >
> > > Release notes for the release can be found at [4].
> > >
> > > We still carry the following major issues, as reported in the
> > > release notes:
> > >
> > > 1.) Expanding a gluster volume that is sharded may cause file
> > > corruption.
> > >
> > > Sharded volumes are typically used for VM images; if such volumes are
> > > expanded or possibly contracted (i.e. add/remove bricks and rebalance),
> > > there are reports of VM images getting corrupted.
> > >
> > > The last known cause for corruption (Bug #1465123) has a fix with this
> > > release. As further testing is still in progress, the issue is retained
> > > as a major issue.
> > >
> > > Status of this bug can be tracked here: #1465123
> > >
> > > 2.) Gluster volume restarts fail if the sub-directory export feature
> > > is in use. Status of this issue can be tracked here: #1501315
> > >
> > > 3.) Mounting a gluster snapshot will fail when attempting a FUSE-based
> > > mount of the snapshot. So for current users, it is recommended to only
> > > access snapshots via the ".snaps" directory on a mounted gluster
> > > volume. Status of this issue can be tracked here: #1501378
> > >
> > > Thanks,
> > >  Gluster community
> > >
> > >
> > > [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/
> > > [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> > > [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> > >
> > > [4] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
>
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-06 Thread Alastair Neil
Ahh OK I see, thanks


On 6 November 2017 at 00:54, Kaushal M  wrote:

> On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil 
> wrote:
> > Just so I am clear the upgrade process will be as follows:
> >
> > upgrade all clients to 4.0
> >
> > rolling upgrade all servers to 4.0 (with GD1)
> >
> > kill all GD1 daemons on all servers and run upgrade script (new clients
> > unable to connect at this point)
> >
> > start GD2 (necessary, or does the upgrade script do this?)
> >
> >
> > I assume that once the cluster has been migrated to GD2, the glusterd
> > startup script will be smart enough to start the correct version?
> >
>
> This should be the process, mostly.
>
> The upgrade script needs GD2 running on all nodes before it can
> begin migration.
> But they don't need to have a cluster formed; the script should take
> care of forming the cluster.
>
>
> > -Thanks
> >
> >
> >
> >
> >
> > On 3 November 2017 at 04:06, Kaushal M  wrote:
> >>
> >> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
> >> wrote:
> >> > Will the various client packages (CentOS in my case) be able to
> >> > automatically handle the upgrade vs new install decision, or will we
> >> > be required to do something manually to determine that?
> >>
> >> We should be able to do this with CentOS (and other RPM-based distros)
> >> which have well-split glusterfs packages currently.
> >> At this moment, I don't know exactly how much can be handled
> >> automatically, but I expect the amount of manual intervention to be
> >> minimal.
> >> The minimum amount of manual work needed would be enabling and
> >> starting GD2 and starting the migration script.
> >>
> >> >
> >> > It’s a little unclear that things will continue without interruption
> >> > because of the way you describe the change from GD1 to GD2, since it
> >> > sounds like it stops GD1.
> >>
> >> With the described upgrade strategy, we can ensure continuous volume
> >> access to clients during the whole process (provided volumes have been
> >> set up with replication or ec).
> >>
> >> During the migration from GD1 to GD2, any existing clients still
> >> retain access, and can continue to work without interruption.
> >> This is possible because gluster keeps the management (glusterds) and
> >> data (bricks and clients) parts separate.
> >> So it is possible to interrupt the management parts without
> >> interrupting data access to existing clients.
> >> Clients and the server-side brick processes need GlusterD to start up.
> >> But once they're running, they can run without GlusterD. GlusterD is
> >> only required again if something goes wrong.
> >> Stopping GD1 during the migration process will not lead to any
> >> interruptions for existing clients.
> >> The brick processes continue to run, and any connected clients continue
> >> to remain connected to the bricks.
> >> Any new clients which try to mount the volumes during this migration
> >> will fail, as a GlusterD will not be available (either GD1 or GD2).
> >>
> >> > Early days, obviously, but if you could clarify if that’s what
> >> > we’re used to as a rolling upgrade or how it works, that would be
> >> > appreciated.
> >>
> >> A Gluster rolling upgrade process allows data access to volumes
> >> during the process, while upgrading the brick processes as well.
> >> Rolling upgrades with uninterrupted access require that volumes have
> >> redundancy (replicate or ec).
> >> Rolling upgrades involve upgrading servers belonging to a redundancy
> >> set (replica set or ec set), one at a time:
> >> - A server is picked from a redundancy set.
> >> - All Gluster processes are killed on the server - glusterd, bricks and
> >> other daemons included.
> >> - Gluster is upgraded and restarted on the server.
> >> - A heal is performed to heal new data onto the bricks.
> >> - Move on to the next server after the heal finishes.
> >>
> >> Clients maintain uninterrupted access, because a full redundancy set
> >> is never taken offline all at once.
> >>
> >> > Also clarification that we’ll be able to upgrade from 3.x
> >> > (3.1x?) to 4.0, manually or automatically?
> >>
> >> Rolling upgrades from 3.1x to 4.0 are a manual process.

Re: [Gluster-devel] [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Alastair Neil
Just so I am clear the upgrade process will be as follows:

upgrade all clients to 4.0

rolling upgrade all servers to 4.0 (with GD1)

kill all GD1 daemons on all servers and run upgrade script (new clients
unable to connect at this point)

start GD2 (necessary, or does the upgrade script do this?)
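
In shell terms, I imagine the GD1-to-GD2 switch on each server looks roughly
like this sketch (the glusterd2 service name is an assumption on my part, and
the migration script is left as a placeholder since its name isn't given):

  # systemctl stop glusterd               (stop GD1; bricks keep running)
  # systemctl enable glusterd2            (assumed GD2 unit name)
  # systemctl start glusterd2
  (then run the 4.0 migration script once GD2 is up on every node)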


I assume that once the cluster has been migrated to GD2, the glusterd
startup script will be smart enough to start the correct version?

-Thanks

On 3 November 2017 at 04:06, Kaushal M  wrote:

> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic 
> wrote:
> > Will the various client packages (CentOS in my case) be able to
> > automatically handle the upgrade vs new install decision, or will we be
> > required to do something manually to determine that?
>
> We should be able to do this with CentOS (and other RPM-based distros)
> which have well-split glusterfs packages currently.
> At this moment, I don't know exactly how much can be handled
> automatically, but I expect the amount of manual intervention to be
> minimal.
> The minimum amount of manual work needed would be enabling and
> starting GD2 and starting the migration script.
>
> >
> > It’s a little unclear that things will continue without interruption
> > because of the way you describe the change from GD1 to GD2, since it
> > sounds like it stops GD1.
>
> With the described upgrade strategy, we can ensure continuous volume
> access to clients during the whole process (provided volumes have been
> set up with replication or ec).
>
> During the migration from GD1 to GD2, any existing clients still
> retain access, and can continue to work without interruption.
> This is possible because gluster keeps the management (glusterds) and
> data (bricks and clients) parts separate.
> So it is possible to interrupt the management parts without
> interrupting data access to existing clients.
> Clients and the server-side brick processes need GlusterD to start up.
> But once they're running, they can run without GlusterD. GlusterD is
> only required again if something goes wrong.
> Stopping GD1 during the migration process will not lead to any
> interruptions for existing clients.
> The brick processes continue to run, and any connected clients continue
> to remain connected to the bricks.
> Any new clients which try to mount the volumes during this migration
> will fail, as a GlusterD will not be available (either GD1 or GD2).
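>
> Concretely, while neither GD1 nor GD2 is running, a new mount attempt such
> as
>
>   # mount -t glusterfs server1:/myvol /mnt/myvol
>
> would fail, because the client fetches the volume file from glusterd at
> mount time (the names above are illustrative). Clients that are already
> mounted talk directly to the brick processes and keep working.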
>
> > Early days, obviously, but if you could clarify if that’s what
> > we’re used to as a rolling upgrade or how it works, that would be
> > appreciated.
>
> A Gluster rolling upgrade process allows data access to volumes
> during the process, while upgrading the brick processes as well.
> Rolling upgrades with uninterrupted access require that volumes have
> redundancy (replicate or ec).
> Rolling upgrades involve upgrading servers belonging to a redundancy
> set (replica set or ec set), one at a time:
> - A server is picked from a redundancy set.
> - All Gluster processes are killed on the server - glusterd, bricks and
> other daemons included.
> - Gluster is upgraded and restarted on the server.
> - A heal is performed to heal new data onto the bricks.
> - Move on to the next server after the heal finishes.
>
> Clients maintain uninterrupted access, because a full redundancy set
> is never taken offline all at once.
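>
> In shell terms, each server in the set goes through something like this
> sketch (the volume name is illustrative; wait for the heal info output to
> show no pending entries before moving on to the next server):
>
>   # systemctl stop glusterd
>   # killall glusterfsd glusterfs
>   # yum update glusterfs-server
>   # systemctl start glusterd
>   # gluster volume heal myvol info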
>
> > Also clarification that we’ll be able to upgrade from 3.x
> > (3.1x?) to 4.0, manually or automatically?
>
> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe
> gdeploy has playbooks to automate it.
> At the end of this you will be left with a 4.0 cluster, but still be
> running GD1.
> Upgrading from GD1 to GD2 in 4.0 will be a manual process. A script
> that automates this is planned only for 4.1.
>
> >
> >
> > 
> > From: Kaushal M 
> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> > Date: November 2, 2017 at 3:56:05 AM CDT
> > To: gluster-us...@gluster.org; Gluster Devel
> >
> > We're fast approaching the time for Gluster-4.0, and we would like to
> > set out the expected upgrade strategy and try to polish it to be as
> > user-friendly as possible.
> >
> > We're getting this out here now, because there was quite a bit of
> > concern and confusion regarding the upgrades between 3.x and 4.0+.
> >
> > ---
> > ## Background
> >
> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> > which is backwards incompatible with the GlusterD (GD1) in
> > GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> > established, rolling upgrades are not possible. This meant that
> > upgrades from 3.x to 4.0 would require a volume downtime and possible
> > client downtime.
> >
> > This was a cause of concern among many during the recently concluded
> > Gluster Summit 2017.
> >
> > We would like to keep pains experienced by our users to a minimum, so
> > we are trying to develop an upgrade strategy that avoi

Re: [Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)

2017-10-23 Thread Alastair Neil
Any idea when these packages will be in the CentOS mirrors? There is no
sign of them on download.gluster.org.

On 13 October 2017 at 08:45, Jiffin Tony Thottan 
wrote:

> The Gluster community is pleased to announce the release of Gluster 3.12.2
> (packages available at [1,2,3]).
>
> Release notes for the release can be found at [4].
>
> We still carry the following major issues, as reported in the
> release notes:
>
> 1.) Expanding a gluster volume that is sharded may cause file corruption.
>
> Sharded volumes are typically used for VM images; if such volumes are
> expanded or possibly contracted (i.e. add/remove bricks and rebalance),
> there are reports of VM images getting corrupted.
>
> The last known cause for corruption (Bug #1465123) has a fix with this
> release. As further testing is still in progress, the issue is retained as
> a major issue.
>
> Status of this bug can be tracked here: #1465123
>
>
> 2.) Gluster volume restarts fail if the sub-directory export feature is
> in use. Status of this issue can be tracked here: #1501315
>
> 3.) Mounting a gluster snapshot will fail when attempting a FUSE-based
> mount of the snapshot. So for current users, it is recommended to only
> access snapshots via the ".snaps" directory on a mounted gluster volume.
> Status of this issue can be tracked here: #1501378
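>
> As an illustration, with the uss (user-serviceable snapshots) feature
> enabled, access looks something like this (volume, mount point and
> snapshot names are made up):
>
>   # gluster volume set myvol features.uss enable
>   # ls /mnt/myvol/.snaps/
>   # ls /mnt/myvol/.snaps/snap1/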
>
> Thanks,
>  Gluster community
>
>
> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.2/
> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>
> [4] Release notes: https://gluster.readthedocs.io/en/latest/release-notes/3.12.2/
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Snapshot Scheduler

2016-07-12 Thread Alastair Neil
I don't know if I did something wrong, but I found the location where the
scheduler wanted the shared storage problematic; as I recall it was under
/run/gluster/snaps. On CentOS 7 this failed to mount on boot, so I
hacked the scheduler to use a location under /var/lib.
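
For reference, the kind of fstab entry I mean would look something like this
(the target path is illustrative, and I am assuming the standard
gluster_shared_storage volume; _netdev delays the mount until the network is
up at boot):

  localhost:/gluster_shared_storage  /var/lib/shared_storage  glusterfs  defaults,_netdev  0 0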

I also think there needs to be a way to schedule the removal of snapshots.

-Alastair


On 8 July 2016 at 06:01, Avra Sengupta  wrote:

> Hi,
>
> Snapshots in gluster have a scheduler, which relies heavily on crontab
> and the shared storage. I would like people who are using this scheduler,
> or who are willing to try it, to provide us feedback on their experience.
> We are looking for feedback on ease of use, complexity of features,
> additional feature support, etc.
>
> It will help us decide if we need to revamp the existing scheduler, or
> maybe rethink relying on crontab and write our own, thus giving us more
> flexibility. Thanks.
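>
> For anyone who has not tried it yet, usage is roughly along these lines
> (the job name, schedule and volume name are made up):
>
>   # snap_scheduler.py init
>   # snap_scheduler.py enable
>   # snap_scheduler.py add "daily-snap" "30 2 * * *" myvol
>   # snap_scheduler.py list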
>
> Regards,
> Avra
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel