[Gluster-users] Announcing Gluster release 6.1

2019-04-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
6.1 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 6.1:
https://download.gluster.org/pub/gluster/glusterfs/6/6.1/

[2] Release notes for 6.1:
https://docs.gluster.org/en/latest/release-notes/6.1/

___
maintainers mailing list
maintain...@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] v6.0 release notes fix request

2019-04-22 Thread Shyam Ranganathan
Thanks for reporting, this is fixed now.
On 4/19/19 2:57 AM, Artem Russakovskii wrote:
> Hi,
> 
> https://docs.gluster.org/en/latest/release-notes/6.0/ currently contains
> a list of fixed bugs that's run-on and should be fixed with proper line
> breaks:
> image.png
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster release 5.6

2019-04-18 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
5.6 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Release 5.x had a long-standing issue where network bandwidth usage
was much higher than in prior releases. This issue has been addressed in
this release. Bug 1673058 has more details regarding the issue [3].

Thanks,
Gluster community

[1] Packages for 5.6:
https://download.gluster.org/pub/gluster/glusterfs/5/5.6/

[2] Release notes for 5.6:
https://docs.gluster.org/en/latest/release-notes/5.6/

[3] Bandwidth usage bug: https://bugzilla.redhat.com/show_bug.cgi?id=1673058
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.1.8

2019-04-05 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.8 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 4.1.8:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.8/

[2] Release notes for 4.1.8:
https://docs.gluster.org/en/latest/release-notes/4.1.8/



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster Release 6

2019-03-25 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of 6.0, our
latest release.

This is a major release that includes a range of code improvements and
stability fixes along with a few features as noted below.

A selection of the key features and bugs addressed are documented in
this [1] page.

Announcements:

1. Releases that receive maintenance updates post release 6 are 4.1 and
5 [2]

2. Release 6 will receive maintenance updates around the 10th of every
month for the first 3 months post release (i.e. Apr'19, May'19, Jun'19).
Post the initial 3 months, it will receive maintenance updates every 2
months till EOL. [3]

A series of features/xlators have been deprecated in release 6, as
listed below. For upgrade procedures from volumes that use these
features to release 6, refer to the release 6 upgrade guide [4].

Features deprecated:
- Block device (bd) xlator
- Decompounder feature
- Crypt xlator
- Symlink-cache xlator
- Stripe feature
- Tiering support (tier xlator and changetimerecorder)

Highlights of this release are:
- Several stability fixes addressing coverity, clang-scan, address
sanitizer and valgrind reported issues
- Removal of unused and hence deprecated code and features
- Client side inode garbage collection
  - This release addresses one of the major concerns regarding FUSE
mount process memory footprint, by introducing client side inode
garbage collection
- Performance improvements
  - "--auto-invalidation" on FUSE mounts to leverage the kernel page
cache more effectively (see the mount sketch below)
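
For illustration, a minimal sketch of a FUSE mount with auto-invalidation
turned off. The server name, volume name and mount point are placeholders,
and passing the option straight to the glusterfs client binary is an
assumption here (the option string itself is the one named in the release
notes), so verify it against your installed client:

    # Mount volume "myvol" from "server1" with kernel page-cache
    # auto-invalidation disabled (all names below are placeholders)
    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --auto-invalidation=no /mnt/myvol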

Bugs addressed are provided towards the end, in the release notes [1]

Thank you,
Gluster community

References:
[1] Release notes: https://docs.gluster.org/en/latest/release-notes/6.0/

[2] Release schedule: https://www.gluster.org/release-schedule/

[3] Gluster release cadence and version changes:
https://lists.gluster.org/pipermail/announce/2018-July/000103.html

[4] Upgrade guide to release-6:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 5.5

2019-03-21 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
5.5 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Release 5.4 introduced an incompatible change that prevented rolling
upgrades, and hence was never announced to the lists. As a result we are
jumping a release version and going from 5.3 to 5.5, which does not have
the problem.

Thanks,
Gluster community

[1] Packages for 5.5:
https://download.gluster.org/pub/gluster/glusterfs/5/5.5/

[2] Release notes for 5.5:
https://docs.gluster.org/en/latest/release-notes/5.5/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 6: Tagged and ready for packaging

2019-03-19 Thread Shyam Ranganathan
Hi,

RC1 testing is complete and blockers have been addressed. The release is
now tagged for a final round of packaging and package testing before
release.

Thanks for testing out the RC builds and reporting issues that needed to
be addressed.

As packaging and final package testing finish up, we will also be
writing the upgrade guide for the release, before announcing the release
for general consumption.

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-15 Thread Shyam Ranganathan
We created a 5.5 release tag, and it is under packaging now. It should
be packaged and ready for testing early next week and should be released
close to mid-week next week.

Thanks,
Shyam
On 3/13/19 12:34 PM, Artem Russakovskii wrote:
> Wednesday now with no update :-/
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
> 
> 
> On Tue, Mar 12, 2019 at 10:28 AM Artem Russakovskii  <mailto:archon...@gmail.com>> wrote:
> 
> Hi Amar,
> 
> Any updates on this? I'm still not seeing it in OpenSUSE build
> repos. Maybe later today?
> 
> Thanks.
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
> 
> 
> On Wed, Mar 6, 2019 at 10:30 PM Amar Tumballi Suryanarayan
> mailto:atumb...@redhat.com>> wrote:
> 
> We are talking days. Not weeks. Considering already it is
> Thursday here. 1 more day for tagging, and packaging. May be ok
> to expect it on Monday.
> 
> -Amar
> 
> On Thu, Mar 7, 2019 at 11:54 AM Artem Russakovskii
> mailto:archon...@gmail.com>> wrote:
> 
> Is the next release going to be an imminent hotfix, i.e.
> something like today/tomorrow, or are we talking weeks?
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police <http://www.androidpolice.com>, APK
> Mirror <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> <http://twitter.com/ArtemR>
> 
> 
> On Tue, Mar 5, 2019 at 11:09 AM Artem Russakovskii
> mailto:archon...@gmail.com>> wrote:
> 
> Ended up downgrading to 5.3 just in case. Peer status
> and volume status are OK now.
> 
> zypper install --oldpackage glusterfs-5.3-lp150.100.1
> Loading repository data...
> Reading installed packages...
> Resolving package dependencies...
> 
> Problem: glusterfs-5.3-lp150.100.1.x86_64 requires
> libgfapi0 = 5.3, but this requirement cannot be provided
>   not installable providers:
> libgfapi0-5.3-lp150.100.1.x86_64[glusterfs]
>  Solution 1: Following actions will be done:
>   downgrade of libgfapi0-5.4-lp150.100.1.x86_64 to
> libgfapi0-5.3-lp150.100.1.x86_64
>   downgrade of libgfchangelog0-5.4-lp150.100.1.x86_64 to
> libgfchangelog0-5.3-lp150.100.1.x86_64
>   downgrade of libgfrpc0-5.4-lp150.100.1.x86_64 to
> libgfrpc0-5.3-lp150.100.1.x86_64
>   downgrade of libgfxdr0-5.4-lp150.100.1.x86_64 to
> libgfxdr0-5.3-lp150.100.1.x86_64
>   downgrade of libglusterfs0-5.4-lp150.100.1.x86_64 to
> libglusterfs0-5.3-lp150.100.1.x86_64
>  Solution 2: do not install glusterfs-5.3-lp150.100.1.x86_64
>  Solution 3: break glusterfs-5.3-lp150.100.1.x86_64 by
> ignoring some of its dependencies
> 
> Choose from above solutions by number or cancel
> [1/2/3/c] (c): 1
> Resolving dependencies...
> Resolving package dependencies...
> 
> The following 6 packages are going to be downgraded:
>   glusterfs libgfapi0 libgfchangelog0 libgfrpc0
> libgfxdr0 libglusterfs0
> 
> 6 packages to downgrade.
> 
> Sincerely,
> Artem
> 
> --
> Founder, Android Police
> <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net <http://beerpla.net/> | +ArtemRussakovskii
> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR
> 

Re: [Gluster-users] [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Shyam Ranganathan
On 3/5/19 1:17 PM, Shyam Ranganathan wrote:
> Hi,
> 
> Release-6 was to be an early March release, and due to finding bugs
> while performing upgrade testing, is now expected in the week of 18th
> March, 2019.
> 
> RC1 builds are expected this week, to contain the required fixes, next
> week would be testing our RC1 for release fitness before the release.

RC1 is tagged, and will mostly be packaged for testing by tomorrow.

Expect package details in a day or two, to aid with testing the release.

> 
> As always, request that users test the RC builds and report back issues
> they encounter, to help make the release a better quality.
> 
> Shyam
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Release 6: Release date update

2019-03-07 Thread Shyam Ranganathan
Bug fixes are always welcome, features or big ticket changes at this
point in the release cycle are not.

I checked the patch and it is a 2 liner in readdir-ahead, and hence I
would backport it (once it gets merged into master).

Thanks for checking,
Shyam
On 3/7/19 6:33 AM, Raghavendra Gowdappa wrote:
> I just found a fix for
> https://bugzilla.redhat.com/show_bug.cgi?id=1674412. Since it's a
> deadlock I am wondering whether this should be in 6.0. What do you think?
> 
> On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> Hi,
> 
> Release-6 was to be an early March release, and due to finding bugs
> while performing upgrade testing, is now expected in the week of 18th
> March, 2019.
> 
> RC1 builds are expected this week, to contain the required fixes, next
> week would be testing our RC1 for release fitness before the release.
> 
> As always, request that users test the RC builds and report back issues
> they encounter, to help make the release a better quality.
> 
> Shyam
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 6: Release date update

2019-03-05 Thread Shyam Ranganathan
Hi,

Release-6 was to be an early March release, and due to finding bugs
while performing upgrade testing, is now expected in the week of 18th
March, 2019.

RC1 builds are expected this week, to contain the required fixes, next
week would be testing our RC1 for release fitness before the release.

As always, request that users test the RC builds and report back issues
they encounter, to help make the release a better quality.

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Shyam Ranganathan
On 3/4/19 10:08 AM, Atin Mukherjee wrote:
> 
> 
> On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
> mailto:atumb...@redhat.com>> wrote:
> 
> Thanks to those who participated.
> 
> Update at present:
> 
> We found 3 blocker bugs in upgrade scenarios, and hence have marked
> release
> as pending upon them. We will keep these lists updated about progress.
> 
> 
> I’d like to clarify that upgrade testing is blocked. So just fixing
> these test blocker(s) isn’t enough to call release-6 green. We need to
> continue and finish the rest of the upgrade tests once the respective
> bugs are fixed.

Based on the upgrade fixes expected by tomorrow, we will build an RC1
candidate on Wednesday (6-Mar) (tagging early Wed. Eastern TZ). This RC
can be used for further testing.

> 
> 
> 
> -Amar
> 
> On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com > wrote:
> 
> > Hi all,
> >
> > We are calling out our users, and developers to contribute in
> validating
> > ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> > upgrade, stability, and performance.
> >
> > Some of the key highlights of the release are listed in release-notes
> > draft
> >
> 
> .
> > Please note that there are some of the features which are being
> dropped out
> > of this release, and hence making sure your setup is not going to
> have an
> > issue is critical. Also the default lru-limit option in fuse mount for
> > Inodes should help to control the memory usage of client
> processes. All the
> > good reason to give it a shot in your test setup.
> >
> > If you are developer using gfapi interface to integrate with other
> > projects, you also have some signature changes, so please make
> sure your
> > project would work with latest release. Or even if you are using a
> project
> > which depends on gfapi, report the error with new RPMs (if any).
> We will
> > help fix it.
> >
> > As part of test days, we want to focus on testing the latest upcoming
> > release i.e. GlusterFS-6, and one or the other gluster volunteers
> would be
> > there on #gluster channel on freenode to assist the people. Some
> of the key
> > things we are looking as bug reports are:
> >
> >    -
> >
> >    See if upgrade from your current version to 6.0rc is smooth,
> and works
> >    as documented.
> >    - Report bugs in process, or in documentation if you find mismatch.
> >    -
> >
> >    Functionality is all as expected for your usecase.
> >    - No issues with actual application you would run on production
> etc.
> >    -
> >
> >    Performance has not degraded in your usecase.
> >    - While we have added some performance options to the code, not
> all of
> >       them are turned on, as they have to be done based on usecases.
> >       - Make sure the default setup is at least same as your current
> >       version
> >       - Try out few options mentioned in release notes (especially,
> >       --auto-invalidation=no) and see if it helps performance.
> >    -
> >
> >    While doing all the above, check below:
> >    - see if the log files are making sense, and not flooding with some
> >       “for developer only” type of messages.
> >       - get ‘profile info’ output from old and now, and see if
> there is
> >       anything which is out of normal expectation. Check with us
> on the numbers.
> >       - get a ‘statedump’ when there are some issues. Try to make
> sense
> >       of it, and raise a bug if you don’t understand it completely.
> >
> >
> >
> 
> Process
> > expected on test days.
> >
> >    -
> >
> >    We have a tracker bug
> >    [0]
> >    - We will attach all the ‘blocker’ bugs to this bug.
> >    -
> >
> >    Use this link to report bugs, so that we have more metadata around
> >    given bugzilla.
> >    - Click Here
> >     
>  
> 
> >       [1]
> >    -
> >
> >    The test cases which are to be tested are listed here in this sheet
> >   
> 
> [2],
> >    please add, update, and keep it up-to-date to reduce duplicate
> efforts
> 
> -- 
> - Atin (atinm)
> 
> ___
> Gluster-devel mailing list

Re: [Gluster-users] [Gluster-Maintainers] glusterfs-6.0rc0 released

2019-02-25 Thread Shyam Ranganathan
Hi,

Release-6 RC0 packages are built (see mail below). This is a good time
to start testing the release bits, and reporting any issues on bugzilla.
Do post on the lists any testing done and feedback from the same.

We have about 2 weeks to GA of release-6 barring any major blockers
uncovered during the test phase. Please take this time to help make the
release effective, by testing the same.

Thanks,
Shyam

NOTE: CentOS StorageSIG packages for the same are still pending and
should be available in due course.
On 2/23/19 9:41 AM, Kaleb Keithley wrote:
> 
> GlusterFS 6.0rc0 is built in Fedora 30 and Fedora 31/rawhide.
> 
> Packages for Fedora 29, RHEL 8, RHEL 7, and RHEL 6* and Debian 9/stretch
> and Debian 10/buster are at
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/6.0rc0/
> 
> Packages are signed. The public key is at
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> 
> * RHEL 6 is client-side only. Fedora 29, RHEL 7, and RHEL 6 RPMs are
> Fedora Koji scratch builds. RHEL 7 and RHEL 6 RPMs are provided here for
> convenience only, and are independent of the RPMs in the CentOS Storage SIG.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 5.3 and 4.1.7

2019-01-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.7 and 5.3 (packages available at [1] & [2]).

Release notes for the release can be found at [3] & [4].

Major changes, features and limitations addressed in this release:

- This release fixes several security vulnerabilities as listed in the
release notes.

Thanks,
Gluster community

[1] Packages for 4.1.7:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.7/

[2] Packages for 5.3:
https://download.gluster.org/pub/gluster/glusterfs/5/5.3/

[3] Release notes for 4.1.7:
https://docs.gluster.org/en/latest/release-notes/4.1.7/

[4] Release notes for 5.3:
https://docs.gluster.org/en/latest/release-notes/5.3/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Gluster release 5.2

2018-12-13 Thread Shyam Ranganathan
On 12/13/18 2:13 PM, Lindolfo Meira wrote:
> Links for OpenSUSE Leap packages never work :/

I assume you are trying the link at [3]

IOW,
http://download.opensuse.org/repositories/home:/glusterfs:/Leap15-5/openSUSE_Leap_15/

That did not take me anywhere either, but on backtracking the right
link seems to be:

http://download.opensuse.org/repositories/home:/glusterfs:/Leap15-5/openSUSE_Leap_15.0/

@Kaleb can you confirm? And if so, we may need to modify [3] accordingly.
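
If confirmed, a hedged sketch of consuming that repository on openSUSE
Leap 15.0 follows; the repository alias is arbitrary and the URL should
be verified first:

    # Add the Leap 15.0 repository under an arbitrary alias, then install
    zypper addrepo \
      http://download.opensuse.org/repositories/home:/glusterfs:/Leap15-5/openSUSE_Leap_15.0/ \
      glusterfs-5
    zypper refresh
    zypper install glusterfs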

[3] https://download.gluster.org/pub/gluster/glusterfs/5/5.2/SUSE/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 5.2

2018-12-13 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
5.2 (packages available at [1]).

Release notes can be found at [2].

Major changes, features and limitations addressed in this release:

- Several bugs as listed in the release notes have been addressed

Thanks,
Gluster community

[1] Packages for 5.2:
https://download.gluster.org/pub/gluster/glusterfs/5/5.2/
(CentOS storage SIG packages may arrive on Monday (17th Dec-2018) or
later as per the CentOS schedules)

[2] Release notes for 5.2:
https://docs.gluster.org/en/latest/release-notes/5.2/
OR,
https://github.com/gluster/glusterfs/blob/release-5/doc/release-notes/5.2.md
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.1.6 and 5.1

2018-11-29 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.6 and 5.1 (packages available at [1] & [2]).

Release notes for the release can be found at [3] & [4].

Major changes, features and limitations addressed in this release:

- This release fixes several security vulnerabilities as listed in the
release notes.

Thanks,
Gluster community

[1] Packages for 4.1.6:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.6/

[2] Packages for 5.1:
https://download.gluster.org/pub/gluster/glusterfs/5/5.1/

[3] Release notes for 4.1.6:
https://github.com/gluster/glusterfs/blob/release-4.1/doc/release-notes/4.1.6.md

[4] Release notes for 5.1:
https://github.com/gluster/glusterfs/blob/release-5/doc/release-notes/5.1.md
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Corresponding op-version for each release?

2018-11-28 Thread Shyam Ranganathan
On 11/26/2018 02:07 PM, Gambit15 wrote:
> Hey,
>  The op-version for each release doesn't seem to be documented anywhere,
> not even in the release notes. Does anyone know where this information
> can be found?

No, there is no table in the upstream docs. I have opened an issue [1]
to get the same updated in section [2].

> 
> In this case, I've just upgraded from 3.8 to 3.12 and need to update my
> pool's compatibility version, however I'm sure it'd be useful for the
> community to have a comprehensive list somewhere...

For now, the documentation here [2] should help decide on the op-version
that you want to update to.
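
As a rough sketch of the CLI steps involved (these are standard gluster
commands; the number to set should come from the documentation in [2] or
from the max-op-version query below, not be guessed):

    # Current operating version of the cluster
    gluster volume get all cluster.op-version
    # Highest op-version the installed bits support
    gluster volume get all cluster.max-op-version
    # Once all peers are upgraded, bump the cluster op-version
    gluster volume set all cluster.op-version <number-from-above>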

[1] Issue for reflecting op-version in the docs:
https://github.com/gluster/glusterdocs/issues/439

[2] op-version considerations during upgrade:
https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Consolidating Feature Requests in github

2018-11-05 Thread Shyam Ranganathan
On 11/05/2018 08:29 AM, Vijay Bellur wrote:
> Hi All,
> 
> I am triaging the open RFEs in bugzilla [1]. Since our new(er) workflow
> involves managing RFEs as github issues, I am considering migrating
> relevant open RFEs from bugzilla to github. Once migrated, a RFE in
> bugzilla would be closed with an appropriate comment. I can also update
> the external tracker to point to the respective github issue. Once the
> migration is done, all our feature requests can be further triaged and
> tracked in github.
> 
> Any objections to doing this?

None from me; I see this as needed and the way forward.

The only thing to consider, maybe, is how we treat bugs/questions filed
using github and whether we want those moved out to bugzilla (during
regular triage of github issues) or not. IOW, what happens in the
reverse direction, from github to bugzilla.

> 
> Thanks,
> Vijay
> 
> [1] https://goo.gl/7fsgTs
> 
> 
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Gluster Release 5

2018-10-23 Thread Shyam Ranganathan
On 10/23/2018 11:13 AM, Dmitry Melekhov wrote:
> Looks like there are no major changes since 4.1...
> 
> Why major release ?

Any enhancement made post a release is not back-ported to the existing
stable release; every 4 months a release is made that includes these
enhancements. Thus a new release.

Also, not all bug fixes made to the mainline branch are back-ported to
existing stable releases, as they may either not be a problem or not be
reported against said release. Fixes that are deemed critical are
back-ported, but not all fixes. Thus every 4 months a release is made
that provides a more stable base for users of the existing features of
the project.

Hope this answers the need for a release around the 4-month mark.

For further release numbering change/context (in case that was a
question) please see this mail around the same,
https://lists.gluster.org/pipermail/announce/2018-July/000103.html

> 
> 
> 23.10.2018 17:47, Shyam Ranganathan пишет:
>> The Gluster community is pleased to announce the release of 5.0, our
>> latest release.
>>
>> This is a major release that includes a range of code improvements and
>> stability fixes with some management and standalone features as noted
>> below.
>>
>> A selection of the key features and changes are documented on this [1]
>> page.
>>
>> Announcements:
>>
>> 1. Releases that receive maintenance updates post release 5 are, 4.1 and
>> 5. (see [2])
>>
>> **NOTE:** 3.12 long term maintenance release, will reach end of life
>> (EOL) with the release of 5.0. (see [2])
>>
>> 2. Release 5 will receive maintenance updates around the 10th of every
>> month for the first 3 months post release (i.e Nov'18, Dec'18, Jan'19).
>> Post the initial 3 months, it will receive maintenance updates every 2
>> months till EOL. (see [3])
>>
>> Major changes and features:
>>
>> 1) Management:
>> GlusterD2
>>
>> IMP: GlusterD2 in Gluster-5 is still considered a preview and is
>> experimental. It should not be considered ready for production use.
>> Users should still expect some breaking changes even though all efforts
>> would be taken to ensure that these can be avoided. As GD2 is still
>> under heavy development, new features can be expected throughout the
>> Gluster 5 release.
>>
>> The following major changes have been committed to GlusterD2 since
>> v4.1.0.
>> - Volume snapshots
>> - Volume heal
>> - Tracing with Opencensus
>> - Portmap refactoring
>> - Smartvol API merged with volume create API
>> - Configure GlusterD2 with environment variables
>>
>> 2) Standalone
>> - Entry creation and handling, consistency is improved
>> - Python code in Gluster packages is Python 3 ready
>> - Quota fsck script to correct quota accounting
>> - Added noatime option in utime xlator
>> - Added ctime-invalidation option in quick-read xlator
>> - Added shard-deletion-rate option in shard xlator
>> - Removed last usage of MD5 digest in code, towards better FIPS
>> compliance
>> - Code improvements
>>
>> 3) Bugs Addressed
>> The release notes[1] also contain bugs addresses in this release.
>>
>> References:
>> [1] Release notes: https://docs.gluster.org/en/latest/release-notes/5.0/
>>
>> [2] Release schedule: https://www.gluster.org/release-schedule/
>>
>> [3] Gluster release cadence and version changes:
>> https://lists.gluster.org/pipermail/announce/2018-July/000103.html
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster Release 5

2018-10-23 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of 5.0, our
latest release.

This is a major release that includes a range of code improvements and
stability fixes with some management and standalone features as noted below.

A selection of the key features and changes are documented on this [1] page.

Announcements:

1. Releases that receive maintenance updates post release 5 are, 4.1 and
5. (see [2])

**NOTE:** 3.12 long term maintenance release, will reach end of life
(EOL) with the release of 5.0. (see [2])

2. Release 5 will receive maintenance updates around the 10th of every
month for the first 3 months post release (i.e Nov'18, Dec'18, Jan'19).
Post the initial 3 months, it will receive maintenance updates every 2
months till EOL. (see [3])

Major changes and features:

1) Management:
GlusterD2

IMP: GlusterD2 in Gluster-5 is still considered a preview and is
experimental. It should not be considered ready for production use.
Users should still expect some breaking changes even though all efforts
would be taken to ensure that these can be avoided. As GD2 is still
under heavy development, new features can be expected throughout the
Gluster 5 release.

The following major changes have been committed to GlusterD2 since v4.1.0.
- Volume snapshots
- Volume heal
- Tracing with Opencensus
- Portmap refactoring
- Smartvol API merged with volume create API
- Configure GlusterD2 with environment variables

2) Standalone
- Entry creation and handling consistency is improved
- Python code in Gluster packages is Python 3 ready
- Quota fsck script to correct quota accounting
- Added noatime option in utime xlator
- Added ctime-invalidation option in quick-read xlator
- Added shard-deletion-rate option in shard xlator (see the example below)
- Removed last usage of MD5 digest in code, towards better FIPS compliance
- Code improvements
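
For the new volume options called out above, a hedged example of
discovering and setting them via the CLI; the option namespace and the
volume name "myvol" are illustrative, so confirm the exact names with
the help output first:

    # Confirm the full option names and defaults shipped with this release
    gluster volume set help | grep -E 'noatime|ctime-invalidation|shard-deletion-rate'
    # Example: throttle shard deletion on an (illustrative) volume named myvol
    gluster volume set myvol features.shard-deletion-rate 100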

3) Bugs Addressed
The release notes [1] also contain the bugs addressed in this release.

References:
[1] Release notes: https://docs.gluster.org/en/latest/release-notes/5.0/

[2] Release schedule: https://www.gluster.org/release-schedule/

[3] Gluster release cadence and version changes:
https://lists.gluster.org/pipermail/announce/2018-July/000103.html
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-17 Thread Shyam Ranganathan
On 10/17/2018 12:21 AM, Dmitry Melekhov wrote:
> #1637989 : data-self-heal in
> arbiter volume results in stale locks.
> 
> 
> Could you tell me, please, when 4.1 with fix will be released?

Tagging the release is on the 20th of the month [1]; packages should be
available 2-3 days after that.

Also, after the first 3-4 minor releases, the release schedule was
changed to a minor release every 2 months [2]. This puts the next 4.1
release in November [3].

This bug seems critical enough to warrant an earlier release, so we may
make an out-of-band release next week (20th of October), after
discussing the same with the packaging team (we will update the list
once we make a decision).

[1] Release schedule: https://www.gluster.org/release-schedule/
[2] Release cadence announce:
https://lists.gluster.org/pipermail/announce/2018-July/000103.html
[3] Next 4.1 release in the release notes:
https://docs.gluster.org/en/latest/release-notes/4.1.5/

> 
> Thank you!
> 
> 
> 
> 
> 16.10.2018 19:41, Jiffin Tony Thottan пишет:
>>
>> The Gluster community is pleased to announce the release of Gluster
>> 3.12.15 (packages available at [1,2,3]).
>>
>> Release notes for the release can be found at [4].
>>
>> Thanks,
>> Gluster community
>>
>>
>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.15/
>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
>> 
>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>> [4] Release notes:
>> https://gluster.readthedocs.io/en/latest/release-notes/3.12.15/
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)

2018-10-17 Thread Shyam Ranganathan
On 10/17/2018 07:08 AM, Paolo Margara wrote:
> Hi,
> 
> will this release be the last of the 3.12.x branch before it reaches EOL?

Yes, that is true; this will be the last minor release, as release-5
comes out.

> 
> 
> Greetings,
> 
>     Paolo
> 
> Il 16/10/18 17:41, Jiffin Tony Thottan ha scritto:
>>
>> The Gluster community is pleased to announce the release of Gluster
>> 3.12.15 (packages available at [1,2,3]).
>>
>> Release notes for the release can be found at [4].
>>
>> Thanks,
>> Gluster community
>>
>>
>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.15/
>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
>> 
>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>> [4] Release notes:
>> https://gluster.readthedocs.io/en/latest/release-notes/3.12.15/
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS Project Update - Week 1&2 of Oct

2018-10-16 Thread Shyam Ranganathan
This is a once-in-2-weeks update on activities around the glusterfs
project [1]. It is intended to provide the community with updates on
progress around key initiatives, and also to reiterate the current goals
that the project is working towards.

This is intended to help contributors to pick and address key areas that
are in focus, and the community to help provide feedback and raise flags
that need attention.

1. Key highlights of the last 2 weeks:
- Patches merged [2]
  Key patches:
- Coverity fixes
- Python3 related fixes
- ASAN fixes (trickling in)
- Patch to handle a case of hang in arbiter
  https://review.gluster.org/21380
- Fixes in cleanup sequence
  https://review.gluster.org/21379
- Release updates:
  - Release 5 has a single blocker before GA, all other activities are
complete
- Blocker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1630804
  - Release 6 scope call out to happen this week!
- Interesting devel threads
  - “Gluster performance updates”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055484.html
  - “Update of work on fixing POSIX compliance issues in Glusterfs”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055488.html
  - “Compile Xlator manually with lib 'glusterfs'”
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055560.html

2. Bug trends in the last 2 weeks
  - Bugs and status for the last 2 weeks [3]
- 14 bugs are still in the NEW state and need assignment

3. Key focus areas for the next 2 weeks
  - Continue coverity, clang, ASAN focus
- Coverity how to participate [4]
- Clang issues that need attention [5]
- ASAN issues:
  See https://review.gluster.org/c/glusterfs/+/21300 on how to
effectively use ASAN builds, and use the same to clear up ASAN issues
appearing in your testing (a generic build sketch follows this section).

  - Improve on bug backlog reduction (details to follow)

  - Remove unsupported xlators from the code base:
https://bugzilla.redhat.com/show_bug.cgi?id=1635688

  - Prepare xlators for classification assignment, to enable selective
volume graph topology for GCS volumes
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/xlator-classification.md

  - Adapt all xlators (and options) to the new registration function as in,
https://review.gluster.org/c/glusterfs/+/19712
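
As a generic illustration only (not the project's documented procedure;
see the review linked in the ASAN item above for that), an
address-sanitized build can be produced from a source checkout with
standard compiler flags:

    # From a glusterfs source checkout; ASAN via generic GCC/Clang flags
    ./autogen.sh
    CFLAGS='-g -O1 -fsanitize=address' LDFLAGS='-fsanitize=address' ./configure
    make -j$(nproc)
    # Run your tests against the resulting binaries; ASAN reports go to stderr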

4. Next release focus areas
  - Deprecate xlators as announced in the lists
  - Complete implementation of xlator classification for all xlators
  - Cleanup sequence with brick-mux
  - Fencing infrastructure for gluster-block
  - Fuse Interrupt Syscall Support
  - Release 6 targeted enhancements [6] (Needs to be populated)

5. Longer term focus areas (possibly beyond the next release)
  - Reflink support, extended to snapshot support for gluster-block
  - Client side caching improvements

- Amar, Shyam and Xavi

Links:

[1] GlusterFS: https://github.com/gluster/glusterfs/

[2] Patches merged in the last 2 weeks:
https://review.gluster.org/q/project:glusterfs+branch:master+until:2018-10-14+since:2018-10-01+status:merged

[3] Bug status for the last 2 weeks:
https://bugzilla.redhat.com/report.cgi?x_axis_field=bug_status_axis_field=component_axis_field=_redirect=1_format=report-table_desc_type=allwordssubstr_desc==Community=GlusterFS_type=allwordssubstr=_file_loc_type=allwordssubstr_file_loc=_whiteboard_type=allwordssubstr_whiteboard=_type=allwords===_id=_id_type=anyexact=_type=greaterthaneq=substring==substring==substring==%5BBug+creation%5D==2018-10-01=2018-10-14_top=AND=component=notequals=project-infrastructure=noop=noop==table=wrap

[4] Coverity reduction and how to participate:
https://lists.gluster.org/pipermail/gluster-devel/2018-August/055155.html

[5] CLang issues needing attention:
https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/

[6] Release 6 targeted enhancements:
https://github.com/gluster/glusterfs/milestone/8
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Maintainer meeting minutes : 15th Oct, 2018

2018-10-15 Thread Shyam Ranganathan
### BJ Link
* Bridge: https://bluejeans.com/217609845
* Watch: 

### Attendance
* Nigel, Nithya, Deepshikha, Akarsha, Kaleb, Shyam, Sunny

### Agenda
* AI from previous meeting:
  - Glusto-Test completion on release-5 branch - On Glusto team
  - Vijay will take this on.
  - He will be focusing it on next week.
  - Glusto for 5 may not happen before the release; it looks like we
will do it right after the release.

- Release 6 Scope
- Will be sending out an email today/tomorrow for scope of release 6.
- Send a biweekly email with focus on glusterfs release focus areas.

- GCS scope into release-6 scope and get issues marked against the same
- For release-6 we want a thinner stack. This means we'd be removing
xlators from the code that Amar has already sent an email about.
- Locking support for gluster-block. Design still WIP. One of the
big ticket items that should make it to release 6. Includes reflink
support and enough locking support to ensure snapshots are consistent.
- GD1 vs GD2. We've been talking about it since release-4.0. We need
to call this out and understand if we will have GD2 as default. This is
a call out for a plan for when we want to make this transition.

- Round Table
- [Nigel] Minimum build and CI health for all projects (including
sub-projects).
- This was primarily driven for GCS
- But, we need this even otherwise to sustain quality of projects
- AI: Call out on lists around release 6 scope, with a possible
list of sub-projects
- [Kaleb] SELinux package status
- Waiting on testing to understand if this is done right
- Can be released when required, as it is a separate package
- Release-5 the SELinux policies are with Fedora packages
- Need to coordinate with Fedora release, as content is in 2
packages
- AI: Nigel to follow up and get updates by the next meeting

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.1.5 (Long Term Maintenance)

2018-09-26 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.5 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Release 4.1.0 notes incorrectly reported that all python code in
Gluster packages is python3 compliant; this is not the case, and the
release note is amended accordingly. (since 4.1.3 release)

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.5/

[2] Release notes: https://docs.gluster.org/en/latest/release-notes/4.1.5/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-13 Thread Shyam Ranganathan
On 09/13/2018 09:52 AM, John Strunk wrote:
> I believe the ask is also around instructions for actually obtaining the
> new bits from the distros... what repos need to be changed (if any) and
> whether old packages need to be removed.

Agreed, I was looking at that as the latter part of the question, with
the former being about 3.13 missing from the referenced guide.

I do not have enough Ubuntu fu to answer that one, so leaving that out
for now.

> 
> On Thu, Sep 13, 2018 at 9:16 AM Shyam Ranganathan  <mailto:srang...@redhat.com>> wrote:
> 
> On 09/12/2018 04:05 PM, Nicolas SCHREVEL wrote:
> > But there is no info about 3.13 : 
> > https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
> >
> > "Upgrade procedure to Gluster 4.1, from Gluster 4.0.x, 3.12.x, and
> 3.10.x
> >
> >     NOTE: Upgrade procedure remains the same as with 3.12 and 3.10
> releases"
> 
> This was because 3.13 was EOLd when 4.0 released, and hence this guide
> for 4.1 does not capture the release number 3.13 in its initial note.
> 
> The procedure to upgrade from 3.13 to 4.1 is the same as for other
> version, hence the same guide can be used to achieve the upgrade.
> 
> Shyam
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Upgrade from 3.13 ?

2018-09-13 Thread Shyam Ranganathan
On 09/12/2018 04:05 PM, Nicolas SCHREVEL wrote:
> But there is no info about 3.13 : 
> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
> 
> "Upgrade procedure to Gluster 4.1, from Gluster 4.0.x, 3.12.x, and 3.10.x
> 
>     NOTE: Upgrade procedure remains the same as with 3.12 and 3.10 releases"

This was because 3.13 was EOL'd when 4.0 was released, and hence this
guide for 4.1 does not mention the release number 3.13 in its initial
note.

The procedure to upgrade from 3.13 to 4.1 is the same as for the other
versions; hence the same guide can be used to achieve the upgrade.

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bug with hardlink limitation in 3.12.13 ?

2018-09-10 Thread Shyam Ranganathan
On 08/31/2018 01:06 PM, Reiner Keller wrote:
> Hello,
> 
> Am 31.08.2018 um 13:59 schrieb Shyam Ranganathan:
>> I suspect you have hit this:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c5
>>
>> I further suspect your older setup was 3.10 based and not 3.12 based.
>>
>> There is an additional feature added in 3.12 that stores GFID to path
>> conversion details using xattrs (see "GFID to path" in
>> https://docs.gluster.org/en/latest/release-notes/3.12.0/#major-changes-and-features
>> )
>>
>> Due to which xattr storage limit is reached/breached on ext4 based bricks.
>>
>> To check if you are facing similar issue to the one in the bug provided
>> above, I would check if the brick logs throw up the no space error on a
>> gfid2path set failure.
> 
> thanks for the hint.
> 
> From log output (= no gfid2path errors) it seems to be not the problem
> although the old
> gluster volume was setup with version 3.10.x (or even 3.8.x i think).
> 
> I wrote I could reproduce it on new ext4  and on old xfs gluster volumes
> with version
> 3.12.13 while it was running fine with ~ 3.12.8 (half year ago) without
> problems.
> 
> But just saw that my old main volume wasn't/isn't xfs but also ext4.
> Digging into logs I could see that I was running in January still 3.10.8
> / 3.10.9
> and initial switched in April to 3.12.9 / 3.12 version branch.
> 
> From entry sizes/differences your suggestion would fit:
> 
>     https://manpages.debian.org/testing/manpages/xattr.7.en.html or
>     http://man7.org/linux/man-pages/man5/attr.5.html
> 
>   In the current ext2, ext3, and ext4 filesystem implementations, the
>total bytes used by the names and values of all of a file's extended
>attributes must fit in a single filesystem block (1024, 2048 or 4096
>bytes, depending on the block size specified when the filesystem was
>created).
> 
> because I can see differences by volume setup type:



So in short, the inode size limits in ext4 impact the hard link count
that can be created in Gluster, and that is the limitation you hit;
would that be a correct summary?

> 
> 
>> To check if you are facing similar issue to the one in the bug provided
>> above, I would check if the brick logs throw up the no space error on a
>> gfid2path set failure.
> 
> Is there some parameter to get more detailed error logging ? But from
> docu it looks like it has default good settings:

The error logs posted are from the client (FUSE mount) logs; the
gfid2path log lines that I was mentioning are on the bricks.

No logging level needs to be changed to see the said errors, as these
are logged at warning severity and above.
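
For example, a quick way to look for them (a hedged sketch; the brick
log directory below is the usual default and may differ on your setup):

    # Search the brick logs for gfid2path set failures / no-space warnings
    grep -i gfid2path /var/log/glusterfs/bricks/*.log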

>>
>>> My search for documentation found only the parameter
>>> "storage.max-hardlinks" with default of 100 for version 4.0.
>>> I checked it in my gluster 3.12.13 but here the parameter is not yet
>>> implemented.
> 
> If this problem is backend filesystem related it would be good to have
> it documented also for 4.0 that the storage.max-hardlinks parameter
> would work only if the backend is e.g. xfs and has enough inode space
> for it (best with a reference/short example howto calculate it) ?

Fair point, raised a github issue around the same here [1]
(contributions welcome :) ).
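
For releases that do ship the option (4.0 and later, per this thread), a
hedged sketch of inspecting and raising the cap; the volume name is
illustrative, and on ext4 the per-inode xattr block limit discussed
above still applies regardless of this setting:

    # Query the current cap (default 100 in release 4.0)
    gluster volume get myvol storage.max-hardlinks
    # Raise it, e.g. to 200, on a volume named "myvol" (illustrative)
    gluster volume set myvol storage.max-hardlinks 200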

Regards,
Shyam

[1] Gluster documentation github issue for hardlink and ext4
limitations: https://github.com/gluster/glusterdocs/issues/418
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Bug with hardlink limitation in 3.12.13 ?

2018-08-31 Thread Shyam Ranganathan
On 08/31/2018 07:15 AM, Reiner Keller wrote:
> Hello,
> 
> I got yesterday unexpected error "No space left on device" on my new
> gluster volume caused by too many hardlinks.
> This happened while I done "rsync --aAHXxv ..." replication from old
> gluster to new gluster servers - each running latest version 3.12.13
> (for changing volume schema from 2x2 to 3x1 with quorum and a fresh
> Debian Stretch setup instead Jessie).

I suspect you have hit this:
https://bugzilla.redhat.com/show_bug.cgi?id=1602262#c5

I further suspect your older setup was 3.10 based and not 3.12 based.

There is an additional feature added in 3.12 that stores GFID to path
conversion details using xattrs (see "GFID to path" in
https://docs.gluster.org/en/latest/release-notes/3.12.0/#major-changes-and-features
)

Due to which xattr storage limit is reached/breached on ext4 based bricks.

To check if you are facing similar issue to the one in the bug provided
above, I would check if the brick logs throw up the no space error on a
gfid2path set failure.

To get around the problem, I would suggest using xfs as the backing FS
for the brick (considering you have close to 250-odd hardlinks to a
file). I would not attempt to disable the gfid2path feature, as it is
useful for getting to the real file given just a GFID and is already
part of the core on-disk Gluster metadata (it can be shut off, but I
would refrain from doing so).
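
For example, to see how much of the per-inode xattr block is already
consumed on an ext4 brick, the xattrs of an affected file can be dumped
directly on the brick (a hedged sketch; the brick path is a placeholder
and the command needs root):

    # Dump all xattrs, including the trusted.gfid2path.* entries added
    # per hardlink, for one file on the brick backend
    getfattr -d -m . -e hex /data/brick1/path/to/file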

> 
> When I deduplicated it around half a year ago with "rdfind" hardlinking
> was working fine (I think that was glusterfs around version 3.12.8 -
> 3.12.10 ?)
> 
> My search for documentation found only the parameter
> "storage.max-hardlinks" with default of 100 for version 4.0.
> I checked it in my gluster 3.12.13 but here the parameter is not yet
> implemented.
> 
> I tested/proofed it by running my small test on underlaying ext4
> filesystem brick directly and on gluster volume using same ext4
> filesystem of the brick:
> 
> Testline for it:
>             mkdir test; cd test; echo "hello" > test; for I in $(seq 1
> 100); do ln test test-$I ; done
> 
> * on ext4 fs (old brick: xfs) I could do 100 hardlinks without problems
> (from documentation I found ext has 65.000 hardlinks compiled in )
> * on actual GlusterFS (same on my old and new gluster volumes) I could
> do only up to 45 hardlinks now
> 
> But from deduplication around 6 months ago I could find e.g. a file with
> 240 hardlinks setup and there is no problem using these referenced files
> (caused by multiple languages / multiple uploads per language ,
> production/staging system cloned... ).
> 
> My actual workaround has to be using duplicated content but it would be
> great if this could be fixed in next versions ;)
> 
> (Saltstack didn't support yet successful setup of glusterfs 4.0
> peers/volumes; something in output of "gluster --xml --mode=script" call
> must be weird but I haven't seen any differences so far)
> 
> Bests
> 
> 
> Reiner
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster release 4.1.3 (Long Term Maintenance)

2018-08-29 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.3 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Bug #1601356, titled "Problem with SSL/TLS encryption", is not yet
fixed with this release. A patch to fix the same is in progress and can
be tracked here [3]

- Release 4.1.0 notes incorrectly reported that all python code in
Gluster packages is python3 compliant; this is not the case, and the
release note is amended accordingly.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.3/

[2] Release notes: https://docs.gluster.org/en/latest/release-notes/4.1.3/

[3] SSL/TLS bug and patch:
  - https://bugzilla.redhat.com/show_bug.cgi?id=1601356
  - https://review.gluster.org/c/glusterfs/+/20993
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 4.1.3?

2018-08-24 Thread Shyam Ranganathan
On 08/23/2018 07:18 PM, wkmail wrote:
> Weren't we supposed to get a new 4.1.x  this month on the 20th? 

Yes, most of the tagging preparation is done; it was staggered
considering the 3.12 release delays (note to self: should let the lists
know).

> 
> In particular I am interested in the client Memory leak fix as I have an
> VM cluster I need to put into production and don't want to immediately
> turn around and do the upgrade.
> Any word on that?

The leak in 3.12 was traced to the patches in this [1] bug, and these
patches have been part of 4.1 since 4.1.0. So, unless there is a newer
leak in 4.1, there are no fixes in the queue to address the same.

Could you provide more context on this, like mail threads or bugs that
you are looking for?

[1] Bug fixed recently in 3.12, but already in 4.1.0:
https://bugzilla.redhat.com/show_bug.cgi?id=1550078
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread Shyam Ranganathan
On 08/21/2018 09:33 AM, mabi wrote:
> Hello,
> 
> I just upgraded from 4.0.2 to 4.1.2 using the official  documentation:
> 
> https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.1/
> 
> I noticed that this documentation might be missing the following two 
> additional steps:
> 
> 1) restart the glustereventsd service

Updates to the document submitted here [1], reviews welcome.

Thanks for reporting this.

Shyam

[1] https://github.com/gluster/glusterdocs/pull/410
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.1.2 (Long Term Maintenance)

2018-07-30 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.1.2 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

- Release 4.1.0 notes incorrectly reported that all python code in
Gluster packages is python3 compliant; this is not the case, and the
release note is amended accordingly.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.2/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v4.1.2/doc/release-notes/4.1.2.md
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Release cadence and version changes

2018-07-03 Thread Shyam Ranganathan
This announcement is to publish changes to the upstream release cadence:
from quarterly (every 3 months) to every 4 months, and to have all
releases maintained (no more LTM/STM releases), based on the maintenance
and EOL schedules for the same.

Further, it is to start numbering releases with just a MAJOR version
rather than a "x.y.z" format that we currently employ.

1. Release cadence (Major releases)
- Release every 4 months
- Makes up 3 releases each year
- Each release is maintained till n+3 is released (IOW, for a year, and
n is the release version, thus retaining EOL time for a release as it
stands currently)
- Retain backward compatibility across releases, for ease of
migrations/upgrades

2. Update releases (minor update release containing bug fixes against a
major release)
- First 3 update releases will be done every month
- Further update releases will be made once every 2 months till EOL of
the major release
- Out of band releases for critical issues or vulnerabilities will be
done on demand

3. Release versions
- Releases will be versioned using a monotonically increasing number
starting at 5
- Hence future releases would be release-5, release-6, and so on
- Minor numbers are used for update releases, like 5.x or 6.x, with x
monotonically increasing every update
- RPM versions would look like .-..

4. Note on op-version
- op-versions were tied to release versions, and may undergo a change in
description to make it release agnostic

Expect the Gluster release web page to undergo an update within a week.
( https://www.gluster.org/release-schedule/ )

Thanks,
Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing GlusterFS release 4.1.0 (Long Term Maintenance)

2018-06-20 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of 4.1, our
latest long term supported release.

This is a major release that includes a range of features enhancing
management, performance, monitoring, and providing newer functionality
like thin arbiters, cloud archival, time consistency. It also contains
several bug fixes.

A selection of the important features and changes are documented on this
[1] page.

Announcements:

1. As 4.0 was a short term maintenance release, features which have been
included in that release are available with 4.1.0 as well. These
features may be of interest to users upgrading to 4.1.0 from releases
older than 4.0. The 4.0 release notes capture the list of features that
were introduced with 4.0.

NOTE: As 4.0 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.1.0. See [2]

2. Releases that receive maintenance updates post the 4.1 release are
3.12 and 4.1 (reference)

NOTE: The 3.10 long term maintenance release will reach end of life
(EOL) with the release of 4.1.0. See [2]

3. Continuing with this release, the CentOS storage SIG will not build
server packages for CentOS6. Server packages will be available for
CentOS7 only. For ease of migrations, client packages on CentOS6 will be
published and maintained. See [3].

4. Minor updates for this release would be on the 20th of every month

References:
[1] Release notes: https://docs.gluster.org/en/latest/release-notes/4.1.0/

[2] Release schedule: https://www.gluster.org/release-schedule/

[3] CentOS6 server package deprecation:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release delays!

2018-05-31 Thread Shyam Ranganathan
Hi,

The Gluster 4.1 release has been delayed due to regressions in the brick
multiplexing feature. Hence, instead of the originally planned early-June
date, 4.1 will be released by mid-June.

In preparation for 4.1, releases 4.0 and 3.10 will be EOL'd; as a result,
their expected update releases in the month of May (slated for the 20th
and 30th) were canceled.

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster release 3.12.10 (Long Term Maintenance) Canceled for 10th of May, 2018

2018-05-10 Thread Shyam Ranganathan
Hi,

Between the release of 3.12.9 and the 10th of May, 2018, only one fix has
been backported to the release branch. As a result, we are not releasing
the next minor update for the 3.12 branch, which normally falls on the
10th of every month.

The next 3.12 update would be around the 10th of June, 2018.

Thanks,
Shyam

On 04/30/2018 12:56 PM, Shyam Ranganathan wrote:
> The Gluster community is pleased to announce the release of Gluster
> 3.12.9 (packages available at [1]).
> 
> Release notes for the release can be found at [2].
> 
> This release contains fixes for CVE-2018-1088 and CVE-2018-1112, among
> other fixes. Please use the release notes to check on the fix list.
> 
> Thanks,
> Gluster community
> 
> [1] Packages:
> https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.9/
> 
> [2] Release notes:
> http://docs.gluster.org/en/latest/release-notes/3.12.9/ (link may not be
> active yet!)
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.0.2 (Short Term Maintenance)

2018-04-30 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
4.0.2 (packages available at [1]).

Release notes for the release can be found at [2].

This release contains fixes for CVE-2018-1088 and CVE-2018-1112, among
other fixes. Please use the release notes to check on the fix list.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/4.0/4.0.2/

[2] Release notes:
http://docs.gluster.org/en/latest/release-notes/4.0.2/ (link may not be
active yet!)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.12.9 (Long Term Maintenance)

2018-04-30 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.12.9 (packages available at [1]).

Release notes for the release can be found at [2].

This release contains fixes for CVE-2018-1088 and CVE-2018-1112, among
other fixes. Please use the release notes to check on the fix list.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.9/

[2] Release notes:
http://docs.gluster.org/en/latest/release-notes/3.12.9/ (link may not be
active yet!)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.12 (Long Term Maintenance)

2018-04-30 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.10.12 (packages available at [1]).

Release notes for the release can be found at [2].

This release contains fixes for CVE-2018-1088 and CVE-2018-1112, among
other fixes. Please use the release notes to check on the fix list.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.12/

[2] Release notes:
http://docs.gluster.org/en/latest/release-notes/3.10.12/ (link may not
be active yet!)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-13 Thread Shyam Ranganathan
On 04/12/2018 06:49 AM, Marco Lorenzo Crociani wrote:
> On 09/04/2018 21:36, Shyam Ranganathan wrote:
>> On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
>>> On 06/04/2018 19:33, Shyam Ranganathan wrote:
>>>> Hi,
>>>>
>>>> We postponed this and I did not announce this to the lists. The number
>>>> of bugs fixed against 3.10.12 is low, and I decided to move this to the
>>>> 30th of Apr instead.
>>>>
>>>> Is there a specific fix that you are looking for in the release?
>>>>
>>>
>>> Hi,
>>> yes, it's this: https://review.gluster.org/19730
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1442983
>>
>> We will roll out 3.10.12 including this fix in a few days, we have a
>> 3.12 build and release tomorrow, hence looking to get 3.10 done by this
>> weekend.
>>
>> Thanks for your patience!
>>
> 
> Hi,
> ok thanks, stand by for the release!

This is pushed out 1 more week, as we are still finishing up 3.12.8.

Expect this closer to end of next week (Apr 20th, 2018).

Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-09 Thread Shyam Ranganathan
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
> On 06/04/2018 19:33, Shyam Ranganathan wrote:
>> Hi,
>>
>> We postponed this and I did not announce this to the lists. The number
>> of bugs fixed against 3.10.12 is low, and I decided to move this to the
>> 30th of Apr instead.
>>
>> Is there a specific fix that you are looking for in the release?
>>
> 
> Hi,
> yes, it's this: https://review.gluster.org/19730
> https://bugzilla.redhat.com/show_bug.cgi?id=1442983

We will roll out 3.10.12 including this fix in a few days, we have a
3.12 build and release tomorrow, hence looking to get 3.10 done by this
weekend.

Thanks for your patience!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-06 Thread Shyam Ranganathan
Hi,

We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.

Is there a specific fix that you are looking for in the release?

Thanks,
Shyam

On 04/06/2018 11:47 AM, Marco Lorenzo Crociani wrote:
> Hi,
> are there any news for 3.10.12 release?
> 
> Regards,
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 4.0.0 (Short Term Maintenance)

2018-03-13 Thread Shyam Ranganathan
The Gluster community celebrates 13 years of development with this
latest release, Gluster 4.0. This release enables improved integration
with containers, an enhanced user experience, and a next-generation
management framework. The 4.0 release helps cloud-native app developers
choose Gluster as the default scale-out distributed file system.

We’re highlighting some of the announcements, major features, and changes
here; our release notes [1] have the full announcements, expanded major
changes and features, and the bugs addressed in this release.

Major enhancements:

- Management
GlusterD2 (GD2) is a new management daemon for Gluster 4.0. It is a
complete rewrite, with all-new internal core frameworks, that makes it
more scalable, easier to integrate with, and lower in maintenance
requirements. It replaces GlusterD.

A quick start guide [6] is available to get started with GD2.

Although GD2 is in tech preview for this release, it is ready to use for
forming and managing new clusters.

- Monitoring
With this release, GlusterFS enables a lightweight method to access
internal monitoring information.

- Performance
There are several enhancements to performance in the disperse translator
and in the client side metadata caching layers.

- Other enhancements of note
This release adds the ability to run Gluster on FIPS-compliant systems,
the ability to force permissions while creating files/directories, and
improved consistency in distributed volumes.

- Developer related
A new on-wire protocol version with full type encoding of internal
dictionaries on the wire, a global translator to handle per-daemon
options, and an improved translator initialization structure, among a few
other improvements, help streamline development of newer translators.

Release packages (or where to get them) are available at [2] and are
signed with [3]. The upgrade guide for this release can be found at [4]
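
For example, on an RPM-based system the signing key from [3] can be
imported before installing the packages, so that signature checks pass (a
minimal sketch; the key URL is the one listed under [3] below):

  # rpm --import https://download.gluster.org/pub/gluster/glusterfs/4.0/rsa.pub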

Related announcements:

- As 3.13 was a short term maintenance release, it will reach end of
life (EOL) with the release of 4.0.0 [5].

- Releases that receive maintenance updates post 4.0 release are, 3.10,
3.12, 4.0 [5].

- With this release, the CentOS storage SIG will not build server
packages for CentOS6. Server packages will be available for CentOS7
only. For ease of migrations, client packages on CentOS6 will be
published and maintained, as announced here [7].

References:
[1] Release notes:
https://docs.gluster.org/en/latest/release-notes/4.0.0.md/
[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/4.0/
[3] Packages signed with:
https://download.gluster.org/pub/gluster/glusterfs/4.0/rsa.pub
[4] Upgrade guide:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_4.0/
[5] Release schedule: https://www.gluster.org/release-schedule/
[6] GD2 quick start:
https://github.com/gluster/glusterd2/blob/master/doc/quick-start-user-guide.md
[7] CentOS Storage SIG CentOS6 support announcement:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033212.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster release 3.10.11 (Long Term Maintenance)

2018-03-11 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.10.11 (packages available at [1]).

Release notes for the release can be found at [2].

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.11/

[2] Release notes: http://docs.gluster.org/en/latest/release-notes/3.10.11/
___
Announce mailing list
annou...@gluster.org
http://lists.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Fwd: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0 released

2018-03-09 Thread Shyam Ranganathan
On 03/08/2018 06:17 PM, Shyam Ranganathan wrote:
> Forwarding to the devel and users groups as well.
> 
> We have tagged 4.0.0 branch as GA, and are in the process of building
> packages.
> 
> It would be a good time to run final install/upgrade tests if you get a
> chance on these packages (I am running off to do the same now).

We uncovered a backward compatibility bug during rolling upgrade testing
and are in the process of addressing the same.

For the curious, see https://bugzilla.redhat.com/show_bug.cgi?id=1551112#c3

> 
> Thanks,
> Shyam
> 
>  Forwarded Message 
> Subject: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0
> released
> Date: Thu, 8 Mar 2018 18:06:40 -0500
> From: Kaleb S. KEITHLEY <kkeit...@redhat.com>
> To: GlusterFS Maintainers <maintain...@gluster.org>, packag...@gluster.org
> 
> 
> On 03/06/2018 10:25 AM, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.sha512sum
>>
>> This release is made off jenkins-release-45
>>
> 
> Status update:
> 
> I've made some progress on Debian packaging of glusterd2. Debian golang
> packaging using dh-make-golang is strongly biased toward downloading the
> $HEAD from github and building from that. I haven't been able to find
> anything for building from a source (-vendor or not) tarball. Mostly
> it's trial and error trying to defeat the dh-helper voodoo magic. If
> anyone knows a better way, please speak up. Have I mentioned that I hate
> Debian packaging?
> 
> In the mean time——
> 
> glusterfs-4.0.0 packages for:
> 
> * Fedora 26, 27, and 28 are on download.gluster.org at [1]. Fedora 29
> packages are in the Fedora Rawhide repo. Use `dnf` to install.
> 
> * Debian Stretch/9 and Buster/10(Sid) are on download.gluster.org at [1]
> 
> * Xenial/16.04, Artful/17.10, and Bionic/18.04 are on Launchpad at [2]
> 
> * SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
> at [3].
> 
> * RHEL/CentOS el7 and el6 (el6 client-side only) in CentOS Storage SIG
> at [4].
> 
> 
> glusterd2-4.0.0 packages for:
> 
> * Fedora 26, 27, 28, and 29 are on download.gluster.org at [5].
> Eventually rpms will be available in Fedora (29 probably) pending
> completion of package review.
> 
> * RHEL/CentOS el7 in CentOS Storage SIG at [4].
> 
> * SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
> at [3]. glusterd2 rpms are in the same repos with the matching glusterfs
> rpms.
> 
> All the LATEST and STM-4.0 symlinks have been created or updated to
> point to the 4.0.0 release.
> 
> Please test the CentOS packages and give feedback so that packages can
> be tagged for release.
> 
> And of course the Debian and Ubuntu glusterfs packages are usable
> without glusterd2, so go ahead and start using them now.
> 
> [1] https://download.gluster.org/pub/gluster/glusterfs/4.0
> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-4.0
> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
> [4] https://buildlogs.centos.org/centos/7/storage/$arch/gluster-4.0
> [5] https://download.gluster.org/pub/gluster/glusterd2/4.0
> 
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0 released

2018-03-08 Thread Shyam Ranganathan
Forwarding to the devel and users groups as well.

We have tagged 4.0.0 branch as GA, and are in the process of building
packages.

It would be a good time to run final install/upgrade tests if you get a
chance on these packages (I am running off to do the same now).

Thanks,
Shyam

 Forwarded Message 
Subject: Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-4.0.0
released
Date: Thu, 8 Mar 2018 18:06:40 -0500
From: Kaleb S. KEITHLEY <kkeit...@redhat.com>
To: GlusterFS Maintainers <maintain...@gluster.org>, packag...@gluster.org


On 03/06/2018 10:25 AM, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.sha512sum
> 
> This release is made off jenkins-release-45
> 

Status update:

I've made some progress on Debian packaging of glusterd2. Debian golang
packaging using dh-make-golang is strongly biased toward downloading the
$HEAD from github and building from that. I haven't been able to find
anything for building from a source (-vendor or not) tarball. Mostly
it's trial and error trying to defeat the dh-helper voodoo magic. If
anyone knows a better way, please speak up. Have I mentioned that I hate
Debian packaging?

In the mean time——

glusterfs-4.0.0 packages for:

* Fedora 26, 27, and 28 are on download.gluster.org at [1]. Fedora 29
packages are in the Fedora Rawhide repo. Use `dnf` to install.

* Debian Stretch/9 and Buster/10(Sid) are on download.gluster.org at [1]

* Xenial/16.04, Artful/17.10, and Bionic/18.04 are on Launchpad at [2]

* SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
at [3].

* RHEL/CentOS el7 and el6 (el6 client-side only) in CentOS Storage SIG
at [4].


glusterd2-4.0.0 packages for:

* Fedora 26, 27, 28, and 29 are on download.gluster.org at [5].
Eventually rpms will be available in Fedora (29 probably) pending
completion of package review.

* RHEL/CentOS el7 in CentOS Storage SIG at [4].

* SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
at [3]. glusterd2 rpms are in the same repos with the matching glusterfs
rpms.

All the LATEST and STM-4.0 symlinks have been created or updated to
point to the 4.0.0 release.

Please test the CentOS packages and give feedback so that packages can
be tagged for release.

And of course the Debian and Ubuntu glusterfs packages are usable
without glusterd2, so go ahead and start using them now.

[1] https://download.gluster.org/pub/gluster/glusterfs/4.0
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-4.0
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] https://buildlogs.centos.org/centos/7/storage/$arch/gluster-4.0
[5] https://download.gluster.org/pub/gluster/glusterd2/4.0

___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-03-07 Thread Shyam Ranganathan
On 03/05/2018 09:05 AM, Javier Romero wrote:
>> I am about halfway through my own upgrade testing (using centOS7
>> containers), and it is patterned around this [1], in case that helps.
> Taking a look at this.
> 
> 


Thanks for confirming the install of the bits.

On the upgrade front, I did find some issues that have since been fixed.
We are in the process of rolling out the GA (general availability)
packages for 4.0.0; if you have not started on the upgrades, I would wait
till these are announced before testing them out.

We usually test the upgrades (and package sanity all over again on the
GA bits) before announcing the release.

Thanks again,
Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 4.0: GA tagging status and blockers

2018-03-02 Thread Shyam Ranganathan
Hi,

Here is the list of patches that we are waiting on before tagging GA
for Gluster 4.0 (packages and the release announcement come about 2-5
days after this).

We hope to be ready to tag the release on Monday 5th March, and will
keep the list posted on a daily basis on any slips.

1) Patches waiting regression votes before merge:
- https://review.gluster.org/#/c/19663/
- https://review.gluster.org/#/c/19654/

2) Patches awaiting backport (Kaushal)
- https://review.gluster.org/#/c/19657/

3) Packaging requirements (Packaging team, are we good here?)
- GD2 package along with Gluster packages?
- CentOS6 client only packages?

4) *All*, anything else you think we should be waiting for?

Other (core) release activities that are pending (Shyam)
- Finalize release notes
- Write/update upgrade guide

Thanks,
Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-03-01 Thread Shyam Ranganathan
On 02/28/2018 07:25 AM, Javier Romero wrote:
> This one fails 
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
> 
> # yum install -y
> https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/glusterfs-4.0.0-0.1.rc1.el7.x86_64.rpm

Thanks Javier.

Isn't this what you intended though?

yum upgrade --enablerepo=centos-gluster40-test glusterfs-server

Basically, the test repository is the one with the bits at this point in
time, and it looks like you have an older version installed, hence the
attempt to upgrade.

If you could try this and let us know what you see, it would be helpful.
Also, I guess I missed the instruction to use the test repository for
this operation, my bad.
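
For clarity, the full sequence on a fresh CentOS 7 node would look
something like the following (a sketch using the release repository
package from the RC1 announcement; exact package versions may differ):

  # yum install http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
  # yum install --enablerepo=centos-gluster40-test glusterfs-server
  # gluster --version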

I am about halfway through my own upgrade testing (using centOS7
containers), and it is patterned around this [1], in case that helps.

Thanks,
Shyam

[1] Upgrade/package testing:
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 4.0: RC1 tagged

2018-02-26 Thread Shyam Ranganathan
Hi,

RC1 is tagged in the code, and the request for packaging the same is on
its way.

We should have packages as early as today, and request the community to
test the same and return some feedback.

We have about 3-4 days (till Thursday) for any pending fixes and the
final release to happen, so shout out in case you face any blockers.

The RC1 packages should land here:
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
and like so for CentOS,
CentOS7:
  # yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
  # yum install glusterfs-server

Thanks,
Gluster community
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 4.0: RC0 packages

2018-02-05 Thread Shyam Ranganathan
Hi,

We have tagged and created RC0 packages for the 4.0 release of Gluster.
Details of the packages are given below.

We request community feedback during the RC stages so that the final
release can be better; towards this, please test and direct any feedback
to the lists for us to take a look at.

CentOS packages:

CentOS7:
  # yum install
http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
  # yum install glusterfs-server
  (only the testing repository is enabled with this c-r-gluster40)

There is no glusterd2 packaged for the Storage SIG *yet*.

CentOS6:
CentOS-6 builds will not be done until the glusterfs-server sub-package
can be prevented from getting created (BZ 1074947). CentOS-6 will *only*
get the glusterfs-client packages from 4.0 release onwards.

Other distributions:

Packages for Fedora 27 and Fedora 28/rawhide are at [1].

Packages for Debian stretch/9 and buster/10 are coming soon. They will
also be at [1].

GlusterFS 4.0 packages, including these 4.0RC0 packages, are signed with
a new signing key. The public key is at [2].

Packages for glusterd2 will be added later.

Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc0/
[2]
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc0/rsa.pub
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.10 (Long Term Maintenance)

2018-02-02 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.10.10 (packages available at [1]).

Release notes for the release can be found at [2].

The corruption issue seen when sharded volumes are rebalanced is fixed
with this release.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.10/

[2] Release notes: http://docs.gluster.org/en/latest/release-notes/3.10.10/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.13.2 (Short Term Maintenance)

2018-01-23 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.13.2 (packages available at [1]).

Release notes for the release can be found at [2].

* FIXED: Expanding a gluster volume that is sharded may cause file
corruption

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.13/3.13.2/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v3.13.2/doc/release-notes/3.13.2.md

___
maintainers mailing list
maintain...@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community NetBSD regression tests EOL'd

2018-01-11 Thread Shyam Ranganathan
Gluster Users,

The Gluster community is deprecating regression test runs for every
commit on NetBSD, and will in the future continue with only build sanity
(and the handling of any build breakages) on FreeBSD.

We lack contributors who can help us keep the *BSD infrastructure and
functionality up to date, and hence are retaining only build sanity on
FreeBSD (essentially, that gluster at least builds on this platform) and
dropping the NetBSD regression tests.

We plan to EOL the test runs around end of January, 2018 (or earlier).
Regression test runs for all currently supported gluster releases (3.10
till 3.13) will also be EOL'd as a result of this decision.

If there are interested contributors who would like to participate and
keep this going, do let us know. We can start a conversation around what
it would take to keep the lights brighter for gluster in the *BSD world.

If you have questions, send to Amye Scavarda 

Thanks,
Gluster Maintainers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] IMP: Release 4.0: CentOS 6 packages will not be made available

2018-01-11 Thread Shyam Ranganathan
Gluster Users,

This is to inform you that from the 4.0 release onward, packages for
CentOS 6 will not be built by the gluster community. This also means
that the CentOS SIG will not receive updates for 4.0 gluster packages.

Gluster release 3.12 and its predecessors will receive CentOS 6 updates
till Release 4.3 of gluster (which is slated around Dec, 2018).

The decision is due to the following:
- Glusterd2, which is golang based, meets its dependencies on CentOS 7
only, and is not built on CentOS 6 (yet)

- Gluster community regression machines and runs are going to be CentOS
7 based going forward, so the determinism of quality on CentOS 7 would
be better than on CentOS 6

If you have questions, send to Amye Scavarda 

Regards,
Gluster maintainers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.9 (Long Term Maintenance)

2018-01-08 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.10.9 (packages available at [1]).

Release notes for the release can be found at [2].

We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.


* Expanding a gluster volume that is sharded may cause file corruption
 - Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e., add/remove bricks and
rebalance), there are reports of VM images getting corrupted.
 - The fix for the last known cause of corruption (#1498081) is still
pending, and is not yet a part of this release.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.9/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v3.10.9/doc/release-notes/3.10.9.md
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] So how badly will Gluster be affected by the Intel 'fix'

2018-01-04 Thread Shyam Ranganathan
On 01/04/2018 12:58 PM, WK wrote:
> I'm reading that the new kernel will slow down context switches. That is
> of course a big deal with FUSE mounts.
> 
> Has anybody installed the new kernels yet and observed any performance
> degradation?

We are in the process of testing this out. Hopefully, later next week
we will be able to post the numbers that we observe.

Other results in the interim are welcome as well!

> 
> -wk
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] 2018 - Plans and Expectations on Gluster Community

2018-01-03 Thread Shyam Ranganathan
On 01/03/2018 08:05 AM, Kaleb S. KEITHLEY wrote:
> On 01/02/2018 11:03 PM, Vijay Bellur wrote:
>> ...     The people who were writing storhaug never finished it. Keep
>> using
>>     3.10 until storhaug gets finished.
>>
>>
>>
>> Since 3.10 will be EOL in approximately 2 months from now, what would
>> be our answer for NFS HA if storahug is not finished by then?

Correction: 3.10 will not be EOL when 4.0 is released, as 4.0 is an STM.
The oldest LTM will be EOL'd when the next LTM, which is 4.1, is released.
Hence, we have another 5 months before 3.10 is EOL'd.

>>
>>   -   Use ctdb
>>   -   Restore nfs.ganesha CLI support
>>   -   Something else?
>>
>> Have we already documented upgrade instructions for those users
>> utilizing nfs.ganesha CLI in 3.8? If not already done so, it would be
>> useful to have them listed somewhere.
>>
> 
> I have a pretty high degree of confidence that I can have storhaug
> usable by or before 4.0. The bits I have on my devel box are almost
> ready to post on github.
> 
> I'd like to abandon the github repo at
> https://github.com/linux-ha-storage/storhaug; and create a new repo
> under https://github.com/gluster/storhaug. I dare say there are other
> Linux storage solutions besides gluster+ganesha+samba that storhaug
> doesn't handle.
> 
> And upgrade instructions for what? Upgrading/switching from legacy
> glusterd to storhaug? No, not yet. Doesn't make sense since there's no
> (usable) storhaug yet.
> 
> -- 
> 
> Kaleb
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Syntax for creating arbiter volumes in gluster 4.0

2017-12-20 Thread Shyam Ranganathan
On 12/20/2017 05:14 AM, Ravishankar N wrote:
> Hi,
> 
> The existing syntax in the gluster CLI for creating arbiter volumes is
> `gluster volume create <volname> replica 3 arbiter 1 <bricks>`.
> It means (or at least is intended to mean) that out of the 3 bricks, 1
> brick is the arbiter.
> There has been some feedback while implementing arbiter support in
> glusterd2 for glusterfs-4.0 that we should change this to `replica 2
> arbiter 1` , meaning that there are 2 replica (data) bricks and the 3rd
> one is the arbiter (which only holds meta data).

I had an additional comment here:

Can we have arbiter 2/3/n? If so, the count after 'arbiter' makes sense
(i.e., retaining the '1'); if not, we should drop the number there and
just call this an arbiter (or some such) volume type. If it is retained
for future multiple-arbiter cases, then it is fine.
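
For illustration, the two candidate forms under discussion would look like
the following, written here in the familiar gluster CLI style for
comparison (the volume name and brick paths are hypothetical placeholders,
not taken from this thread):

  Existing syntax (out of the 3 bricks listed, the last one is the arbiter):
    gluster volume create testvol replica 3 arbiter 1 host1:/b1 host2:/b1 host3:/b1

  Proposed syntax (2 data/replica bricks plus 1 arbiter brick):
    gluster volume create testvol replica 2 arbiter 1 host1:/b1 host2:/b1 host3:/b1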

> 
> See [1] for some discussions. What does everyone feel is more user
> friendly and intuitive?
> 
> Thanks,
> Ravi
> 
> [1] https://github.com/gluster/glusterd2/pull/480
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrading from Gluster 3.8 to 3.12

2017-12-20 Thread Shyam Ranganathan
On 12/20/2017 01:01 AM, Hari Gowtham wrote:
> Yes Atin. I'll take a look.

Once we have a root cause and a workaround, please document this in the
upgrade procedure in our docs as well. That way, future problems have a
documented solution (outside of the lists as well).

Thanks!

> 
> On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee  wrote:
>> Looks like a bug as I see tier-enabled = 0 is an additional entry in the
>> info file in shchhv01. As per the code, this field should be written into
>> the glusterd store if the op-version is >= 30706 . What I am guessing is
>> since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles on
>> op-version bump up" in 3.8.4 while bumping up the op-version the info and
>> volfiles were not regenerated which caused the tier-enabled entry to be
>> missing in the info file.
>>
>> For now, you can copy the info file for the volumes where the mismatch
>> happened from shchhv01 to shchhv02 and restart glusterd service on shchhv02.
>> That should fix up this temporarily. Unfortunately this step might need to
>> be repeated for other nodes as well.
>>
>> @Hari - Could you help in debugging this further.
>>
>>
>>
>> On Wed, Dec 20, 2017 at 10:44 AM, Gustave Dahl 
>> wrote:
>>>
>>> I was attempting the same on a local sandbox and also have the same
>>> problem.
>>>
>>>
>>> Current: 3.8.4
>>>
>>> Volume Name: shchst01
>>> Type: Distributed-Replicate
>>> Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 4 x 3 = 12
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: shchhv01-sto:/data/brick3/shchst01
>>> Brick2: shchhv02-sto:/data/brick3/shchst01
>>> Brick3: shchhv03-sto:/data/brick3/shchst01
>>> Brick4: shchhv01-sto:/data/brick1/shchst01
>>> Brick5: shchhv02-sto:/data/brick1/shchst01
>>> Brick6: shchhv03-sto:/data/brick1/shchst01
>>> Brick7: shchhv02-sto:/data/brick2/shchst01
>>> Brick8: shchhv03-sto:/data/brick2/shchst01
>>> Brick9: shchhv04-sto:/data/brick2/shchst01
>>> Brick10: shchhv02-sto:/data/brick4/shchst01
>>> Brick11: shchhv03-sto:/data/brick4/shchst01
>>> Brick12: shchhv04-sto:/data/brick4/shchst01
>>> Options Reconfigured:
>>> cluster.data-self-heal-algorithm: full
>>> features.shard-block-size: 512MB
>>> features.shard: enable
>>> performance.readdir-ahead: on
>>> storage.owner-uid: 9869
>>> storage.owner-gid: 9869
>>> server.allow-insecure: on
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> cluster.self-heal-daemon: on
>>> nfs.disable: on
>>> performance.io-thread-count: 64
>>> performance.cache-size: 1GB
>>>
>>> Upgraded shchhv01-sto to 3.12.3, others remain at 3.8.4
>>>
>>> RESULT
>>> =
>>> Hostname: shchhv01-sto
>>> Uuid: f6205edb-a0ea-4247-9594-c4cdc0d05816
>>> State: Peer Rejected (Connected)
>>>
>>> Upgraded Server:  shchhv01-sto
>>> ==
>>> [2017-12-20 05:02:44.747313] I [MSGID: 101190]
>>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
>>> with
>>> index 1
>>> [2017-12-20 05:02:44.747387] I [MSGID: 101190]
>>> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
>>> with
>>> index 2
>>> [2017-12-20 05:02:44.749087] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk]
>>> 0-management: RPC_CLNT_PING notify failed
>>> [2017-12-20 05:02:44.749165] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk]
>>> 0-management: RPC_CLNT_PING notify failed
>>> [2017-12-20 05:02:44.749563] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk]
>>> 0-management: RPC_CLNT_PING notify failed
>>> [2017-12-20 05:02:54.676324] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received
>>> RJT
>>> from uuid: 546503ae-ba0e-40d4-843f-c5dbac22d272, host: shchhv02-sto, port:
>>> 0
>>> [2017-12-20 05:02:54.690237] I [MSGID: 106163]
>>> [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack]
>>> 0-management:
>>> using the op-version 30800
>>> [2017-12-20 05:02:54.695823] I [MSGID: 106490]
>>> [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req]
>>> 0-glusterd:
>>> Received probe from uuid: 546503ae-ba0e-40d4-843f-c5dbac22d272
>>> [2017-12-20 05:02:54.696956] E [MSGID: 106010]
>>> [glusterd-utils.c:3370:glusterd_compare_friend_volume] 0-management:
>>> Version
>>> of Cksums shchst01-sto differ. local cksum = 4218452135, remote cksum =
>>> 2747317484 on peer shchhv02-sto
>>> [2017-12-20 05:02:54.697796] I [MSGID: 106493]
>>> [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd:
>>> Responded to shchhv02-sto (0), ret: 0, op_ret: -1
>>> [2017-12-20 05:02:55.033822] I [MSGID: 106493]
>>> [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received
>>> RJT
>>> from uuid: 3de22cb5-c1c1-4041-a1e1-eb969afa9b4b, host: shchhv03-sto, port:

[Gluster-users] Announcing GlusterFS release 3.13.0 (Short Term Maintenance)

2017-12-07 Thread Shyam Ranganathan
This is a major release that includes a range of features enhancing
usability, enhancements to GFAPI for developers, and a set of bug fixes.
Notable changes (a couple of the CLI-visible additions are illustrated
with example commands after this list):

  * Addition of a summary option to the heal info CLI
  * Addition of checks for allowing lookups in AFR and removal of the
'cluster.quorum-reads' volume option
  * Support for a max-port range in glusterd.vol
  * Prevention of other processes accessing the mounted brick snapshots
  * Enabling thin client
  * Ability to reserve back-end storage space
  * Listing of all the connected clients for a brick, as well as exported
bricks/snapshots from each brick process
  * Improved write performance with the Disperse xlator
  * Disperse xlator now supports discard operations
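
As a rough illustration of two of the items above (the volume name is a
hypothetical placeholder, and the exact option/argument names should be
checked against the release notes [1]):

  Heal info summary:
    gluster volume heal testvol info summary

  Reserving back-end storage space via a volume option (assumed here to be
  'storage.reserve', expressed as a percentage):
    gluster volume set testvol storage.reserve 10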

The features, changes, and bug fixes are documented in the release notes
[1].

The packages can be downloaded from [2].

Upgrade guide for release-3.13 can be found here [3].

Releases that will continue to be updated in the future as of this
release are: 3.13, 3.12, 3.10 (see [4])

Thanks,
Gluster community

[1] Release notes: 
https://github.com/gluster/glusterfs/blob/release-3.13/doc/release-notes/3.13.0.md


[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/3.13/

[3] Upgrade guide: 
http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/


[4] Release schedule: https://www.gluster.org/release-schedule/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.8 (Long Term Maintenance)

2017-12-05 Thread Shyam Ranganathan

The Gluster community is pleased to announce the release of Gluster
3.10.8 (packages available at [1]).

Release notes for the release can be found at [2].

We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.


* Expanding a gluster volume that is sharded may cause file corruption
 - Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e., add/remove bricks and
rebalance), there are reports of VM images getting corrupted.
 - The fix for the last known cause of corruption (#1498081) is still
pending, and is not yet a part of this release.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.8/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v3.10.8/doc/release-notes/3.10.8.md

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] RIO scope in release 4.0 (Was: Request for Comments: Upgrades from 3.x to 4.0+)

2017-11-02 Thread Shyam Ranganathan

On 11/02/2017 08:10 AM, Kotresh Hiremath Ravishankar wrote:

Hi Amudhan,

Please go through the following points, which should clarify upgrade
concerns when moving from DHT to RIO in 4.0:


 1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
 2. DHT volumes would not be migrated to RIO. DHT volumes would still be
using DHT code.
 3. New volume creation should specifically opt for a RIO volume once
RIO is in place.
 4. RIO should be perceived as another volume type which is chosen during
volume creation, just like replicate or EC, which would avoid most of
the confusion.
 5. RIO will be alpha quality (in terms of features and functionality)
when it releases with 4.0; it is a tech preview to get feedback from
the community.
 6. RIO is not a blocker for releasing 4.0, so if the said alpha tasks
are not met, it may not be part of 4.0 as well.


Hope this clarifies volume compatibility concerns from a distribute 
layer perspective in 4.0.


Thanks,
Shyam


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.7 (Long Term Maintenance)

2017-11-01 Thread Shyam Ranganathan

The Gluster community is pleased to announce the release of Gluster
3.10.7 (packages available at [1]).

Release notes for the release can be found at [2].

We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.


* Expanding a gluster volume that is sharded may cause file corruption
 - Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e., add/remove bricks and
rebalance), there are reports of VM images getting corrupted.
 - The fix for the last known cause of corruption (#1498081) is still
pending, and is not yet a part of this release.

Reminder: Since GlusterFS 3.9 the Fedora RPM and Debian .deb public
signing key is, e.g., for 3.10, at
https://download.gluster.org/pub/gluster/glusterfs/3.10/rsa.pub. If you
have an old /etc/yum.repos.d/glusterfs-fedora.repo file with a link to
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub then
you need to fix your .repo
file to point to the correct location of the public key. This is a
safety feature to help prevent unintended updates from earlier versions.
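
A minimal sketch of that .repo fix, using the file name and URLs mentioned
above (adjust the path if your repo file is named differently):

  sed -i 's|glusterfs/LATEST/rsa.pub|glusterfs/3.10/rsa.pub|' \
      /etc/yum.repos.d/glusterfs-fedora.repo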

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.7/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v3.10.7/doc/release-notes/3.10.7.md
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case

2017-11-01 Thread Shyam Ranganathan

On 10/31/2017 08:36 PM, Ben Turner wrote:

* Erasure coded volumes with sharding - seen as a good fit for VM disk
storage

I am working on this with a customer, we have been able to do 400-500 MB / sec 
writes!  Normally things max out at ~150-250.  The trick is to use multiple 
files, create the lvm stack and use native LVM striping.  We have found that 
4-6 files seems to give the best perf on our setup.  I don't think we are using 
sharding on the EC vols, just multiple files and LVM striping.  Sharding may be 
able to avoid the LVM striping, but I bet dollars to doughnuts you won't see 
this level of perf:)   I am working on a blog post for RHHI and RHEV + RHS 
performance where I am able to in some cases get 2x+ the performance out of VMs 
/ VM storage.  I'd be happy to share my data / findings.



Ben, we would like to hear more, so please do share your thoughts 
further. There are a fair number of users in the community who have this 
use-case and may have some interesting questions around the proposed method.


Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] Release 3.12.2 : Scheduled for the 10th of October

2017-10-12 Thread Shyam Ranganathan

On 10/12/2017 08:09 AM, Jiffin Tony Thottan wrote:


[2] : https://review.gluster.org/#/c/18489/

[1] : https://review.gluster.org/18506
<https://review.gluster.org/18506>


Hi,

Both issues look like regressions. The master patch [2] got merged in
master, but [1] is still pending.

@Rafi: Can you get the reviews done ASAP and merge it on master?
I hope both can make it into 3.12 before the deadline. If not,
please let me know.


Yes, both issues are regressions, but the release deadline is past; the
fixes can always land in the next release, or in an async release
(3.12.2-2) if users absolutely need them earlier in the builds. Further,
these have existed since 3.12.0 (if I read this right).

I suggest the release train continue on schedule and not wait in the
future. Think about what we would have done if we had found these issues
an hour after release tagging (or the release announcement, for that
matter).


Thanks,
Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.6 (Long Term Maintenance)

2017-10-06 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 
3.10.6 (packages available at [1]).


Release notes for the release can be found at [2].

We are still working on a further fix for the corruption issue when 
sharded volumes are rebalanced, details as below.



* Expanding a gluster volume that is sharded may cause file corruption
- Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e., add/remove bricks and
rebalance), there are reports of VM images getting corrupted.
- The fix for the last known cause of corruption (#1498081) is still
pending, and is not yet a part of this release.


Reminder: Since GlusterFS 3.9 the Fedora RPM and Debian .deb public 
signing key is, e.g., for 3.10, at 
https://download.gluster.org/pub/gluster/glusterfs/3.10/rsa.pub. If you 
have an old /etc/yum.repos.d/glusterfs-fedora.repo file with a link to 
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub then 
you need to fix your .repo
file to point to the correct location of the public key. This is a 
safety feature to help prevent unintended updates from earlier versions.


Thanks,
Gluster community

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.6/


[2] Release notes: 
https://github.com/gluster/glusterfs/blob/v3.10.6/doc/release-notes/3.10.6.md

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-10-06 Thread Shyam Ranganathan

On 10/04/2017 06:17 AM, Niels de Vos wrote:

On Wed, Oct 04, 2017 at 09:44:44AM +, ismael mondiu wrote:

Hello,

I'd like to test if the 3.10.6 version fixes the problem. I'm wondering which is
the correct way to upgrade from 3.10.5 to 3.10.6.

It's hard to find upgrade guides for a minor release. Can you help me please?


Packages for GlusterFS 3.10.6 are available in the testing repository of
the CentOS Storage SIG. In order to test these packages on a CentOS 7
system, follow these steps:

   # yum install centos-release-gluster310
   # yum --enablerepo=centos-gluster310-test install 
glusterfs-server-3.10.6-1.el7

Make sure to restart any running Gluster binaries before running your
tests.

When someone reports back about the 3.10.6 release, and it is not worse
than previous versions, I'll mark the packages stable so that they get
sync'd to the CentOS mirrors the days afterwards.


I tested installing 3.10.6 and performed some basic tests as detailed in
https://hackmd.io/GYIwTADCDsDMCGBaArAUxAY0QFhBAbIgJwCMySIwJmAJvGMBvNEA#
and things work as expected.

There is nothing systemd specific that I did, but otherwise, from a
3.10.6 perspective, the packages work.
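
For reference, the kind of basic sanity check this refers to can be as
simple as the following (volume, brick, and mount paths here are
hypothetical, and not the exact steps from the document linked above):

  # gluster volume create testvol replica 3 server{1,2,3}:/bricks/testvol
  # gluster volume start testvol
  # mount -t glusterfs server1:/testvol /mnt
  # echo hello > /mnt/sanity.txt && cat /mnt/sanity.txt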




Thanks,
Niels






Thanks in advance


Ismael



From: Atin Mukherjee
Sent: Sunday, September 17, 2017 14:56
To: ismael mondiu
Cc: Niels de Vos; gluster-users@gluster.org; Gaurav Yadav
Subject: Re: [Gluster-users] Glusterd not working with systemd in redhat 7

The backport just got merged a few minutes back, and this fix should be
available in the next update of 3.10.

On Fri, Sep 15, 2017 at 2:08 PM, ismael mondiu 
> wrote:

Hello Team,

Do you know when the backport to 3.10 will be available ?

Thanks




From: Atin Mukherjee
Sent: Friday, August 18, 2017 10:53
To: Niels de Vos
Cc: ismael mondiu; gluster-users@gluster.org; Gaurav Yadav
Subject: Re: [Gluster-users] Glusterd not working with systemd in redhat 7



On Fri, Aug 18, 2017 at 2:01 PM, Niels de Vos 
> wrote:
On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote:

You're hitting a race here. By the time glusterd tries to resolve the
address of one of the remote bricks of a particular volume, the n/w
interface is not yet up. We have fixed this issue in mainline and the
3.12 branch through the following commit:


We still maintain 3.10 for at least 6 months. It probably makes sense to
backport this? I would not bother with 3.8 though, the last update for
this version has already been shipped.

Agreed. Gaurav is backporting the fix in 3.10 now.


Thanks,
Niels




commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a
Author: Gaurav Yadav >
Date:   Tue Jul 18 16:23:18 2017 +0530

 glusterd : glusterd fails to start when  peer's network interface is
down

 Problem:
 glusterd fails to start on nodes where glusterd tries to come up even
 before network is up.

 Fix:
 On startup glusterd tries to resolve brick path which is based on
 hostname/ip, but in the above scenario when network interface is not
 up, glusterd is not able to resolve the brick path using ip_address or
 hostname With this fix glusterd will use UUID to resolve brick path.

 Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710
 BUG: 1472267
 Signed-off-by: Gaurav Yadav >
 Reviewed-on: https://review.gluster.org/17813
 Smoke: Gluster Build System 
>
 Reviewed-by: Prashanth Pai >
 CentOS-regression: Gluster Build System 
>
 Reviewed-by: Atin Mukherjee 
>



Note : 3.12 release is planned by end of this month.

~Atin

On Thu, Aug 17, 2017 at 2:45 PM, ismael mondiu 
> wrote:


Hi Team,

I noticed that glusterd never starts when I reboot my Redhat 7.1
server.

The service is enabled but does not work.

I tested with gluster 3.10.4 & gluster 3.10.5 and the problem still exists.

When I start the service manually, it works.

I've also tested on a Redhat 6.6 server with gluster 3.10.4 and this works
fine.

The problem seems to be related to Redhat 7.1.

Is this a known issue? If yes, can you tell me what the workaround is?


Thanks


Some logs here


[root@~]# systemctl status  glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled;
vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-08-17 11:04:00 CEST;
2min 9s ago
   Process: 851 

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-11 Thread Shyam Ranganathan

Here are my results:

Summary: I am not able to reproduce the problem, IOW I get relatively 
equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0


Next steps:
- Could you pass along your volfiles (both the client vol file, from
/var/lib/glusterd/vols//patchy.tcp-fuse.vol, and a brick vol
file from the same place)?
  - I want to check what options are in use in your setup as compared
to mine and see if that makes a difference


- Is it possible for you to run the IOZone test as below in your setup
and report the results? (It needs more clarification in case you have
not used IOZone before, so reach out in that case.)


- Details:

Test: IOZone iozone -+m / -+h  -C -w -c -e -i 
0 -+n -r 256k -s 2g -t 4/8/16/32
  - Sequential IOZone write tests, with -t number of threads (files) 
per client, across 4 clients, and using a 256k record size

  - Essentially 8/16/32 threads per client, with 4 clients in total
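
As a concrete (hypothetical) illustration of one of those runs, assuming a
cluster-mode client list file named clients.ioz (each line naming a client
host, a working directory on the mount, and the iozone binary path; the
file name and its contents are assumptions, not from this thread):

  iozone -+m clients.ioz -C -w -c -e -i 0 -+n -r 256k -s 2g -t 16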

Volume: 6x(4+2) disperse on 36 disks; each disk is a 10k SAS JBOD. The
volume was created with the defaults, using 3.10.5 as the starting point.


Server network saturation expectation: the brick nodes will receive 1.5
times the data that the clients generate (since the volume is 4+2
disperse). As a result, per-server network utilization should be read
from the aggregate IOZone result as 1.5 x Aggregate / 4.
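
Stated as a formula (restating the expectation above; k+m = 4+2 for this
volume, and dividing by 1024 converts the KB/sec aggregate that IOZone
reports into MB/sec per server):

  per-server MB/sec = (aggregate * (k+m)/k) / (number of servers * 1024)
                    = (aggregate * 1.5) / (4 * 1024)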


Results:
Threads/client  3.10.5  3.12.0
 (KB/sec aggregate)
8   1938564 1922249
16  2044677 2082817
32  2465990 2435669

The brick nodes (which are separate from the client nodes) have a 
(greater than) 10G interface.


At best (the 32 threads/client case), I see the server link getting utilized as:
3.10.5: (2465990*1.5)/(4*1024) = 903MB/sec
3.12.0: (2435669*1.5)/(4*1024) = 892MB/sec

Shyam
On 09/07/2017 12:07 AM, Serkan Çoban wrote:

It is sequential write with file size 2GB. Same behavior observed with
3.11.3 too.

On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srang...@redhat.com> wrote:

On 09/06/2017 05:48 AM, Serkan Çoban wrote:


Hi,

Just do some ingestion tests to 40 node 16+4EC 19PB single volume.
100 clients are writing each has 5 threads total 500 threads.
With 3.10.5 each server has 800MB/s network traffic, cluster total is
32GB/s
With 3.12.0 each server has 200MB/s network traffic, cluster total is
8GB/s
I did not change any volume options in both configs.



I just performed some *basic* IOZone tests on a 6 x (4+2) disperse volume
and compared this against 3.10.5 and 3.12.0. The tests are nowhere near
your capacity, but I do not see anything alarming in the results. (4 servers,
4 clients, 4 worker threads per client)

I do notice a 6% drop in Sequential and random write performance, and gains
in the sequential and random reads.

I need to improve the test to do larger files and for a longer duration,
hence not reporting any numbers as yet.

Tests were against 3.10.5 and then a down server upgrade to 3.12.0 and
remounting on the clients (after the versions were upgraded there as well).

I guess your test can be characterized as a sequential write workload
(ingestion of data). What is the average file size being ingested? I can
mimic something equivalent to that to look at this further.

I would like to ensure there are no evident performance regressions as you
report.

Shyam

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can I use 3.7.11 server with 3.10.5 client?

2017-09-08 Thread Shyam Ranganathan

On 09/08/2017 01:32 PM, Serkan Çoban wrote:

Any suggestions?

On Thu, Sep 7, 2017 at 4:35 PM, Serkan Çoban  wrote:

Hi,

Is it safe to use 3.10.5 client with 3.7.11 server with read-only data
move operation?


The normal, and hence tested, upgrade procedure is older clients and
newer servers; IOW, upgrade servers first and then the clients. See [1].

As a result, we would not be able to provide guidance on this request.
Further, 3.7 has been EOL for some time now, so issues that appear may
also not get enough attention or suggestions.


[1] Upgrade procedure: 
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.12/



Client will have 3.10.5 glusterfs-client packages. It will mount one
volume from 3.7.11 cluster and one from 3.10.5 cluster. I will read
from 3.7.11 and write to 3.10.5.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing GlusterFS release 3.12.0 (Long Term Maintenance)

2017-09-06 Thread Shyam Ranganathan

On 09/05/2017 02:07 PM, Serkan Çoban wrote:

For rpm packages you can use [1]; it just installed without any problems.
It takes time for packages to land in the CentOS storage SIG repo...


Thank you for reporting this. The SIG does take a while to get updated 
with the latest bits. We are looking at ways to improve that in the future.




[1] https://buildlogs.centos.org/centos/6/storage/x86_64/gluster-3.12/



On Tue, Sep 5, 2017 at 8:11 PM, Shyam Ranganathan <srang...@redhat.com> wrote:

This is a major Gluster release that includes features and bug fixes.
Notable feature highlights are:

 * Ability to mount sub-directories using the Gluster native protocol
(FUSE)

 * Brick multiplexing enhancements that help scale to larger brick counts
per node

 * Enhancements to gluster get-state CLI enabling better understanding of
various bricks and nodes participation/roles in the cluster

 * Ability to resolve GFID split-brain using existing CLI

 * Easier GFID to real path mapping thus enabling diagnostics and
correction for reported GFID issues (healing among other uses where GFID is
the only available source for identifying a file)

The features and changes are documented in the release notes [1]. A full
list of bugs that have been addressed is included in the release notes as
well [1].

The packages can be downloaded from [2] and are signed with [3].

Further, as the 3.11 release is a short term maintenance release, features
included in 3.11 are available in 3.12 as well, and could be of interest
to users upgrading to 3.12 from releases older than 3.11. The 3.11 release
notes [2] capture the list of features that were introduced with 3.11.

Upgrade guide for release-3.12 can be found here [4].

Releases that will no longer receive updates (or are reaching EOL) with this
release are: 3.11, 3.8 (see [5])

Releases that will continue to be updated in the future as of this release
are: 3.12, 3.10 (see [5])

[1] 3.12.0 release notes:
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md

[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/3.12/

[3] Packages signed with:
https://download.gluster.org/pub/gluster/glusterfs/3.12/rsa.pub

[4] Upgrade guide to release 3.12:
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.12/

[5] Release schedule: https://www.gluster.org/release-schedule/
___
Announce mailing list
annou...@gluster.org
http://lists.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

___
Announce mailing list
annou...@gluster.org
http://lists.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-06 Thread Shyam Ranganathan

On 09/06/2017 05:48 AM, Serkan Çoban wrote:

Hi,

Just did some ingestion tests on a 40-node 16+4 EC, 19PB single volume.
100 clients are writing, each with 5 threads, 500 threads in total.
With 3.10.5 each server has 800MB/s network traffic, cluster total is 32GB/s
With 3.12.0 each server has 200MB/s network traffic, cluster total is 8GB/s
I did not change any volume options in both configs.


I just performed some *basic* IOZone tests on a 6 x (4+2) disperse 
volume and compared 3.10.5 against 3.12.0. The tests are nowhere 
near your capacity, but I do not see anything alarming in the 
results. (4 servers, 4 clients, 4 worker threads per client)


I do notice a 6% drop in sequential and random write performance, and 
gains in the sequential and random reads.


I need to improve the test to do larger files and for a longer duration, 
hence not reporting any numbers as yet.


Tests were run against 3.10.5, followed by an offline (downed) server upgrade to 3.12.0 and 
a remount on the clients (after the versions were upgraded there as well).


I guess your test can be characterized as a sequential write workload 
(ingestion of data). What is the average file size being ingested? I can 
mimic something equivalent to that to look at this further.
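
For illustration, this is roughly the kind of distributed IOZone run meant 
above (a sketch only; the machine file, sizes and mount path are assumptions, 
not the actual test harness used):

    # clients.ioz lists one "hostname  /mnt/glustervol  /usr/bin/iozone" entry per worker
    iozone -+m clients.ioz -t 16 -s 4g -r 1m -i 0 -i 1 -i 2 -c -e

Here -i 0/1/2 select sequential write/read and random read/write, and -t 16 
maps to 4 clients with 4 workers each.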


I would like to ensure there are no evident performance regressions as 
you report.


Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing GlusterFS release 3.12.0 (Long Term Maintenance)

2017-09-05 Thread Shyam Ranganathan
This is a major Gluster release that includes features and bug fixes. 
Notable feature highlights are,


* Ability to mount sub-directories using the Gluster native 
protocol (FUSE)


* Brick multiplexing enhancements that help scale to larger brick 
counts per node


* Enhancements to gluster get-state CLI enabling better 
understanding of various bricks and nodes participation/roles in the cluster


* Ability to resolve GFID split-brain using existing CLI

* Easier GFID to real path mapping thus enabling diagnostics and 
correction for reported GFID issues (healing among other uses where GFID 
is the only available source for identifying a file)
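
To make the first item in the list above concrete, sub-directory mounts use 
the regular native mount syntax with a path appended to the volume name; a 
minimal sketch (host, volume and path names here are made up):

    # FUSE-mount only /projects/alpha from volume vol1
    mount -t glusterfs server1:/vol1/projects/alpha /mnt/alpha

The client then sees only that sub-tree, which is useful for handing out 
slices of one volume to different consumers.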


The features and changes are documented in the release notes [1]. A full 
list of bugs that have been addressed is included in the release notes 
as well [1].


The packages can be downloaded from [2] and are signed with [3].

Further, as the 3.11 release is a short term maintenance release, features 
included in 3.11 are available in 3.12 as well, and could be of 
interest to users upgrading to 3.12 from releases older than 3.11. The 
3.11 release notes [2] capture the list of features that were 
introduced with 3.11.


Upgrade guide for release-3.12 can be found here [4].

Releases that will no longer receive updates (or are reaching EOL) with 
this release are: 3.11, 3.8 (see [5])


Releases that will continue to be updated in the future as of this 
release are: 3.12, 3.10 (see [5])


[1] 3.12.0 release notes: 
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md


[2] Packages: https://download.gluster.org/pub/gluster/glusterfs/3.12/

[3] Packages signed with: 
https://download.gluster.org/pub/gluster/glusterfs/3.12/rsa.pub


[4] Upgrade guide to release 3.12: 
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.12/


[5] Release schedule: https://www.gluster.org/release-schedule/
___
Announce mailing list
annou...@gluster.org
http://lists.gluster.org/mailman/listinfo/announce
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Gluster documentation search

2017-08-29 Thread Shyam Ranganathan

On 08/28/2017 07:14 AM, Nigel Babu wrote:

Hello folks,

I spent some time today mucking about trying to figure out how to make 
our documentation search a better experience. The short answer is: 
search kind of works now.


Awesome! thank you.



Long answer: mkdocs creates a client side file which is used for search. 
RTD overrides this by referring people to Elasticsearch. However, that 
doesn't clear out stale entries and we're plagued with a whole lot of 
stale entries. I've made some changes that other consumers of RTD have 
done to override our search to use the JS file rather than Elasticsearch.


--
nigelb


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.8 Upgrade to 3.10

2017-08-25 Thread Shyam Ranganathan

On 08/25/2017 09:17 AM, Lindsay Mathieson wrote:
Currently running 3.8.12, planning a rolling upgrade to 3.8.15 this 
weekend.


  * debian 8
  * 3 nodes
  * Replica 3
  * Sharded
  * VM Hosting only

The release notes strongly recommend upgrading to 3.10

  * Is there any downside to staying on 3.8.15 for a while longer?


3.8 will receive no further updates once 3.12 is released; if the 
cluster is stable and you are not waiting on any fixes, then staying on 
it a bit longer may not hurt.



  * I didn't see anything I had to have in 3.10, but ongoing updates are
always good :(


Yes, issues raised on 3.8 will *possibly* receive an answer which would 
be, "please upgrade and let us know if the problem persists".




This mildly concerned me:

  * Expanding a gluster volume that is sharded may cause file corruption

But I have no plans to expand or change the volume, so it shouldn't be an issue?


It should not be, given your usage. The problem exists in the 3.8 
version as well, and in the latest 3.10 releases the last known problems 
regarding this corruption are fixed; we are just awaiting 
confirmation from users to remove the note.





Upgrading

  * Can I go straight from 3.8 to 3.10?
  * Do I need to offline the volume first?


The upgrade notes [1] cover this; in short, an offline upgrade is not 
needed in your setup, as disperse is not a part of the stack.


Shyam

[1] 3.10 upgrade guide: 
https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
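
For orientation only, a rough per-node sequence for the online upgrade (the 
package source and the volume name vol1 are assumptions; the guide in [1] is 
authoritative):

    # On one node at a time:
    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd      # stop remaining brick/mount processes
    apt-get update && apt-get install glusterfs-server
    systemctl start glusterd
    gluster volume heal vol1 info          # wait for heals to drain before the next node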

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.11.3 (Short Term Maintenance)

2017-08-24 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 
3.11.3 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

***Reminder Number One***: 3.11.3 is the last release of the 3.11 STM 
series. 3.11 will be EOL when 3.12 is released in a couple of weeks.


***Reminder Number Two***: Since GlusterFS 3.9 the Fedora RPM and Debian 
.deb public signing key is, e.g., for 3.10, at 
https://download.gluster.org/pub/gluster/glusterfs/3.10/rsa.pub. If you 
have an old /etc/yum.repos.d/glusterfs-fedora.repo file with a link to 
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub then 
you need to fix your .repo file to point to the correct location. This 
is a safety feature to help prevent unintended updates, e.g., from 3.10 
to 3.11.


We still carry a major issue that is reported in the release-notes as 
follows,


- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance) there are reports of VM images getting corrupted.


The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.


Status of this bug can be tracked here, #1465123

Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.11/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.11
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.11.3/

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.5 (Long Term Maintenance)

2017-08-16 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 
3.10.5 (packages available at [1]).


Release notes for the release can be found at [2].

We are still awaiting feedback on a major issue that is reported in the 
release notes as follows; all known issues resulting in this corruption 
are fixed in this build.


* Expanding a gluster volume that is sharded may cause file corruption
  - Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance) there are reports of VM images getting corrupted.
  - The last known cause for corruption 
[#1467010](https://bugzilla.redhat.com/show_bug.cgi?id=1467010) has a 
fix with this release.


Testing feedback on this issue can be sent to the gluster lists or 
updated in the mentioned bug above.


Thanks,
Gluster community

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.5/


[2] Release notes: 
https://github.com/gluster/glusterfs/blob/v3.10.5/doc/release-notes/3.10.5.md

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.12: RC0 build is available for testing!

2017-08-09 Thread Shyam Ranganathan

Hi,

3.12 release has been tagged RC0 and the builds are available here [1] 
(signed with [2]).


3.12 comes with a set of new features as listed in the release notes [3].

We welcome any testing feedback on the release.

If you find bugs, we request a bug report for the same at [4]. If it is 
deemed as a blocker add it to the release tracker (or just drop a note 
on the bug itself) [5].


Thanks,
Jiffin and Shyam.

[1] builds available at: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.12.0rc0/


[2] Signing key for the builds: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.12.0rc0/rsa.pub


[3] Release notes for 3.12: 
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md


[4] File a bug on 3.12: 
https://bugzilla.redhat.com/enter_bug.cgi?version=3.12&product=GlusterFS


[5] Mark a bug a blocker for 3.12: 
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.0


"Releases are made better together"
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.11.2 (Short Term Maintenance)

2017-07-28 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster 
3.11.2 (packages available at [1]).


Release notes for the release can be found at [2].

We still carry a major issue that is reported in the release-notes as 
follows,


- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance) there are reports of VM images getting corrupted.


The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.


Status of this bug can be tracked here, #1465123

Thanks,
Gluster community

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.11/3.11.2/


[2] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.11.2/

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.11.1 (Short Term Maintenance)

2017-06-28 Thread Shyam
The Gluster community is pleased to announce the release of Gluster 
3.11.1 (packages available at [1]).


Major changes and features (complete release notes can be found @ [2])

- Improved disperse (EC) volume performance
- Group settings for enabling negative lookup caching are provided
- Gluster fuse now implements "-oauto_unmount" feature
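
By way of illustration (the volume name, server and mount point below are 
assumptions, not part of this announcement), the last two items can be 
exercised roughly as follows:

    # Apply the negative lookup cache group settings to a volume
    gluster volume set vol1 group nl-cache

    # FUSE mount using the auto_unmount option
    mount -t glusterfs -o auto_unmount server1:/vol1 /mnt/vol1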

We still carry a major issue that is reported in the release-notes as 
follows,


- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes 
are expanded or possibly contracted (i.e. add/remove bricks and 
rebalance) there are reports of VM images getting corrupted.


Status of this bug can be tracked here, #1465123

Thanks,
Gluster community

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.11/3.11.1/


[2] Complete release notes: 
https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.1.md

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-21 Thread Shyam

On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote:



On Tue, Jun 20, 2017 at 7:37 PM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:

Hi,

Release tagging has been postponed by a day to accommodate a fix for
a regression that has been introduced between 3.11.0 and 3.11.1 (see
[1] for details).

As a result 3.11.1 will be tagged on the 21st June as of now
(further delays will be notified to the lists appropriately).


The required patches landed upstream for review and are undergoing
review. Could we do the tagging tomorrow? We don't want to rush the
patches to make sure we don't introduce any new bugs at this time.


Agreed, considering the situation we would be tagging the release 
tomorrow (June-22nd 2017).






Thanks,
Shyam

[1] Bug awaiting fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1463250
<https://bugzilla.redhat.com/show_bug.cgi?id=1463250>

"Releases are made better together"

On 06/06/2017 09:24 AM, Shyam wrote:

Hi,

It's time to prepare the 3.11.1 release, which falls on the 20th of
each month [4], and hence would be June-20th-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to
this
mail

2) Pending reviews in the 3.11 dashboard will be part of the
release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.11 and get
these going

3) Empty release notes are posted here [3], if there are any
specific
call outs for 3.11 beyond bugs, please update the review, or leave a
comment in the review, for us to pick it up

    Thanks,
Shyam/Kaushal

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1
<https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1>

[2] 3.11 review dashboard:

https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard

<https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard>


[3] Release notes WIP: https://review.gluster.org/17480
<https://review.gluster.org/17480>

[4] Release calendar:
https://www.gluster.org/community/release-schedule/
<https://www.gluster.org/community/release-schedule/>
___
Gluster-devel mailing list
gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-devel
<http://lists.gluster.org/mailman/listinfo/gluster-devel>

___
maintainers mailing list
maintain...@gluster.org <mailto:maintain...@gluster.org>
http://lists.gluster.org/mailman/listinfo/maintainers
<http://lists.gluster.org/mailman/listinfo/maintainers>




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Release 3.11.1: Scheduled for 20th of June

2017-06-20 Thread Shyam

Hi,

Release tagging has been postponed by a day to accommodate a fix for a 
regression that has been introduced between 3.11.0 and 3.11.1 (see [1] 
for details).


As a result 3.11.1 will be tagged on the 21st June as of now (further 
delays will be notified to the lists appropriately).


Thanks,
Shyam

[1] Bug awaiting fix: https://bugzilla.redhat.com/show_bug.cgi?id=1463250

"Releases are made better together"

On 06/06/2017 09:24 AM, Shyam wrote:

Hi,

It's time to prepare the 3.11.1 release, which falls on the 20th of
each month [4], and hence would be June-20th-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.11 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.11 and get
these going

3) Empty release notes are posted here [3], if there are any specific
call outs for 3.11 beyond bugs, please update the review, or leave a
comment in the review, for us to pick it up

Thanks,
Shyam/Kaushal

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1

[2] 3.11 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard


[3] Release notes WIP: https://review.gluster.org/17480

[4] Release calendar: https://www.gluster.org/community/release-schedule/
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-20 Thread Shyam

On 06/20/2017 08:41 AM, Pranith Kumar Karampuri wrote:



On Tue, Jun 6, 2017 at 6:54 PM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:

Hi,

It's time to prepare the 3.11.1 release, which falls on the 20th of
each month [4], and hence would be June-20th-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail


I added https://bugzilla.redhat.com/show_bug.cgi?id=1463250 as blocker
just now for this release. We just completed the discussion about
solution on gluster-devel. We are hoping to get the patch in by EOD
tomorrow IST. This is a geo-rep regression we introduced because of
changing node-uuid behavior. My mistake :-(


I am postponing tagging the release till this regression is fixed, and 
from the looks of it, tagging will hence be done tomorrow.






2) Pending reviews in the 3.11 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.11 and get
these going

3) Empty release notes are posted here [3], if there are any specific
call outs for 3.11 beyond bugs, please update the review, or leave a
comment in the review, for us to pick it up

Thanks,
Shyam/Kaushal

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1
<https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1>

[2] 3.11 review dashboard:

https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard

<https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard>

[3] Release notes WIP: https://review.gluster.org/17480
<https://review.gluster.org/17480>

[4] Release calendar:
https://www.gluster.org/community/release-schedule/
<https://www.gluster.org/community/release-schedule/>
___
maintainers mailing list
maintain...@gluster.org <mailto:maintain...@gluster.org>
http://lists.gluster.org/mailman/listinfo/maintainers
<http://lists.gluster.org/mailman/listinfo/maintainers>




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.11.1: Scheduled for 20th of June

2017-06-06 Thread Shyam

Hi,

It's time to prepare the 3.11.1 release, which falls on the 20th of
each month [4], and hence would be June-20th-2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.11 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.11 and get
these going

3) Empty release notes are posted here [3], if there are any specific
call outs for 3.11 beyond bugs, please update the review, or leave a
comment in the review, for us to pick it up

Thanks,
Shyam/Kaushal

[1] Release bug tracker: 
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.1


[2] 3.11 review dashboard: 
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-11-dashboard


[3] Release notes WIP: https://review.gluster.org/17480

[4] Release calendar: https://www.gluster.org/community/release-schedule/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Backport for "Add back socket for polling of events immediately..."

2017-06-05 Thread Shyam

On 05/30/2017 08:44 PM, Zhang Huan wrote:

* Are there any existing users who need this enhancement?
https://bugzilla.redhat.com/show_bug.cgi?id=1358606#c27

Though not sure what branch Zhang Huan is on. @Zhang your inputs are
needed
here.

We are currently on 3.8. Thus the performance number is based on 3.8.
If you need more details, please let me know.


Thanks Zhang. The question was more along the lines of whether you need
a backport of the fix to 3.8.


Actually, we really need this backported to 3.8. I have seen the
backport of it to 3.8.
https://review.gluster.org/#/c/15046/
Once it gets merged, we will rebase to it and test it as a whole.


@Zhang and @list, as this is a performance improvement feature and we do 
not (as a rule) backport features into releases that are already out in 
the field, we will not be backporting this to 3.8.


Further, 3.8 will EOL (end of life) from a maintenance standpoint when 
3.12 is released (scheduled around Aug 30th).


We would be merging this into 3.11.1 to provide early access for tests 
and such (Release date of June 20th), and this feature would be made 
generally available with 3.12.


We regret any inconvenience.




Can you upgrade to recent releases (say 3.11.x or 3.10.x)?


Sorry, I am afraid not. Glusterfs is one of the key components in our
product. An upgrade alone would break the whole thing.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-06-05 Thread Shyam
Just to be clear, the release notes still carry the warning about this, 
and the code to use force when doing rebalance is still in place.


As we have received the feedback that this works, these will be removed 
in the subsequent minor release for the various streams as appropriate.
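
For context, the flow being validated by that feedback is roughly the 
following (volume name and brick paths are placeholders):

    # Expand a sharded replica 3 volume by one more replica set
    gluster volume add-brick vol1 server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1

    # As noted above, the explicit force keyword is still required here
    gluster volume rebalance vol1 start force
    gluster volume rebalance vol1 status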


Thanks,
Shyam

On 06/05/2017 07:36 AM, Gandalf Corvotempesta wrote:

Great, thanks!

Il 5 giu 2017 6:49 AM, "Krutika Dhananjay" <kdhan...@redhat.com
<mailto:kdhan...@redhat.com>> ha scritto:

The fixes are already available in 3.10.2, 3.8.12 and 3.11.0

-Krutika

On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com
<mailto:gandalf.corvotempe...@gmail.com>> wrote:

Great news.
Is this planned to be published in next release?

Il 29 mag 2017 3:27 PM, "Krutika Dhananjay" <kdhan...@redhat.com
<mailto:kdhan...@redhat.com>> ha scritto:

Thanks for that update. Very happy to hear it ran fine
without any issues. :)

Yeah so you can ignore those 'No such file or directory'
errors. They represent a transient state where DHT in the
client process is yet to figure out the new location of the
file.

-Krutika


On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan
<mahdi.ad...@outlook.com <mailto:mahdi.ad...@outlook.com>>
wrote:

Hello,


Yes, i forgot to upgrade the client as well.

I did the upgrade and created a new volume, same options
as before, with one VM running and doing lots of IOs. i
started the rebalance with force and after it completed
the process i rebooted the VM, and it did start normally
without issues.

I repeated the process and did another rebalance while
the VM running and everything went fine.

But the logs in the client throwing lots of warning
messages:


[2017-05-29 13:14:59.416382] W [MSGID: 114031]
[client-rpc-fops.c:2928:client3_3_lookup_cbk]
2-gfs_vol2-client-2: remote operation failed. Path:

/50294ed6-db7a-418d-965f-9b44c69a83fd/images/d59487fe-f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f
(93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or
directory]
[2017-05-29 13:14:59.416427] W [MSGID: 114031]
[client-rpc-fops.c:2928:client3_3_lookup_cbk]
2-gfs_vol2-client-3: remote operation failed. Path:

/50294ed6-db7a-418d-965f-9b44c69a83fd/images/d59487fe-f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f
(93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or
directory]
[2017-05-29 13:14:59.808251] W [MSGID: 114031]
[client-rpc-fops.c:2928:client3_3_lookup_cbk]
2-gfs_vol2-client-2: remote operation failed. Path:

/50294ed6-db7a-418d-965f-9b44c69a83fd/images/d59487fe-f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f
(93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or
directory]
[2017-05-29 13:14:59.808287] W [MSGID: 114031]
[client-rpc-fops.c:2928:client3_3_lookup_cbk]
2-gfs_vol2-client-3: remote operation failed. Path:

/50294ed6-db7a-418d-965f-9b44c69a83fd/images/d59487fe-f3a9-4bad-a607-3a181c871711/aa01c3a0-5aa0-432d-82ad-d1f515f1d87f
(93c403f5-c769-44b9-a087-dc51fc21412e) [No such file or
directory]



Although the process went smooth, i will run another
extensive test tomorrow just to be sure.

--

Respectfully*
**Mahdi A. Mahdi*



*From:* Krutika Dhananjay <kdhan...@redhat.com
<mailto:kdhan...@redhat.com>>
*Sent:* Monday, May 29, 2017 9:20:29 AM

*To:* Mahdi Adnan
*Cc:* gluster-user; Gandalf Corvotempesta; Lindsay
Mathieson; Kevin Lemonnier
*Subject:* Re: Rebalance + VM corruption - current
status and request for feedback

Hi,

I took a look at your logs.
It very much seems like an issue that is caused by a
mismatch in glusterfs client and server packages.
So your client (mount) seems to be still running 3.7.20,
as confirmed by the occurrence of the following log message:

[2017-05-26 08:58:23.647458] I [MSGID:

[Gluster-users] Release 3.12: Scope and calendar!

2017-06-01 Thread Shyam

Hi,

Here are some top reminders for the 3.12 release:

1) When 3.12 is released 3.8 will be EOL'd, hence users are encouraged 
to prepare for the same as per the calendar posted here.


2) 3.12 is a long term maintenance (LTM) release, and potentially the 
last in the 3.x line of Gluster!


3) From this release onward, the feature freeze date is moved ~45 days 
ahead of the release. Hence, for this one release you will 
have less time to get your features into the release.


Release calendar:

- Feature freeze, or branching date: July 17th, 2017
   - All features past this date need exceptions granted to make it into 
the 3.12 release


- Release date: August 30th, 2017

Release owners:

- Shyam
-  Any volunteers?

Features and major changes process in a nutshell:
1) Open a github issue

2) Refer the issue # in the commit messages of all changes against the 
feature (specs, code, tests, docs, release notes) (refer to the issue as 
"updates gluster/glusterfs#N" where N is the issue)


3) We will ease release-notes updates from this release onward. We are 
still thinking about how to get that done, but the intention is that a 
contributor can update release notes before/on/after completion of the 
feature and not worry about branching dates etc. IOW, you control 
when you are done, rather than the release dates controlling that for you.


Thanks,
Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing GlusterFS release 3.11.0 (Short Term Maintenance)

2017-05-30 Thread Shyam
The Gluster community is pleased to announce the release of Gluster 
3.11.0 (packages available at [1]).


This is a short term maintenance (STM) Gluster release that includes 
some substantial changes. The features revolve around improvements to 
small file workloads, SELinux support, Halo replication enhancements 
from Facebook, and some usability and performance improvements, among 
other bug fixes.


The most notable features and changes are documented in the full release 
notes.


Moving forward, Gluster versions 3.11, 3.10 and 3.8 are actively maintained.

With the release of 3.12 in the future, active maintenance of this 
(3.11) STM release will be terminated.


Major changes and features (complete release notes can be found @ [2])

- Switched to storhaug for ganesha and samba high availability
- Added SELinux support for Gluster Volumes
- Several memory leaks are fixed in gfapi during graph switches
- get-state CLI is enhanced to provide client and brick capacity related 
information

- Ability to serve negative lookups from cache has been added
- New xlator to help developers detecting resource leaks has been added
- Feature for metadata-caching/small file performance is production ready
- "Parallel Readdir" feature introduced in 3.10.0 is production ready
- Object versioning is enabled only if bitrot is enabled
- Distribute layer provides more robust transactions during directory 
namespace operations

- gfapi extended readdirplus API has been added
- Improved adoption of standard refcounting functions across the code
- Performance improvements to rebalance have been made
- Halo Replication feature in AFR has been introduced
- FALLOCATE support with EC
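
As a side note on the get-state item in the list above, the enhanced output 
can be generated with something like the following (the output directory and 
file name are arbitrary choices):

    gluster get-state glusterd odir /var/run/gluster/ file glusterd-state.txt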

[1] Packages: 
https://download.gluster.org/pub/gluster/glusterfs/3.11/3.11.0/


[2] Complete release notes: 
https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Release 3.11: Rc1 and release tagging dates

2017-05-17 Thread Shyam

Hi,

We have had a steady stream of backports to 3.11 and hence will tag RC1 
around May-22nd and will do the final release tagging around May-29th.


Post RC1 that gives exactly one week to get any critical bug fixes into 
the code base, so be aware of those dates, and also mark any blockers 
against the release tracking bug [1].


Thanks,
Shyam

[1] Tracker BZ for 3.11.0 blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Release 3.10.2: Scheduled for the 30th of April

2017-05-15 Thread Shyam

Thanks Talur!

Further, for 3.10.3 as well Talur will be leading the release management 
work and we intend to release it on time, this time :)


Shyam

On 05/14/2017 04:32 PM, Raghavendra Talur wrote:

Glusterfs 3.10.2 has been tagged.

Packages for the various distributions will be available in a few days,
and with that a more formal release announcement will be made.

- Tagged code: https://github.com/gluster/glusterfs/tree/v3.10.2
- Release notes:
https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.2.md

Thanks,
Raghavendra Talur

NOTE: Tracker bug for 3.10.2 will be closed in a couple of days and
tracker for 3.10.3 will be opened, and an announcement for 3.10.3 will
be sent with the details



On Wed, May 3, 2017 at 3:46 PM, Raghavendra Talur <rta...@redhat.com> wrote:

I had previously announced that we would be releasing 3.10.2 today.
This is to update the 3.10.2 release is now delayed. We are waiting
for a bug[1] to be fixed.
If you are waiting for 3.10.2 release for a particular bug fix, please
let us know.

I will update with expected release date by tomorrow.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1447608

Thanks,
Raghavendra Talur

___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Release 3.11: RC0 has been tagged

2017-05-15 Thread Shyam

Hi,

Packages are available for testing from the following locations,

3.11.0rc0 packages are available in Fedora Rawhide (f27)

Packages for fedora-26, fedora-25, epel-7, and epel-6 are available now 
from [5]


Packages for Stretch/9 and Jessie are at [5]

Reminder, 3.11.0rcX packages are (still) signed with 3.10 signing key.

There will be a new signing key for the 3.11.0 GA and all following 
3.11.X packages.


We request testing feedback from the community; that can help us catch any 
major issues before the release.


Tracker BZ is at [2] and release notes are at [3]

Thanks,
Shyam

[5] Packages at download.gluster.org: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.11.0rc0/


On 05/08/2017 02:40 PM, Shyam wrote:

Hi,

Pending features for 3.11 have been merged (and those that did not make
it have been moved out of the 3.11 release window). This led to creating
the 3.11 RC0 tag in the gluster repositories.

Packagers have been notified via mail, and packages for the different
distributions will be made available soon.

We would like to, at this point of the release, encourage users and the
development community, to *test 3.11* and provide feedback on the lists,
or raise bugs [1].

If any bug you raise, is a blocker for the release, please add it to the
release tracker as well [2].

The scratch version of the release notes can be found here [3], and
request all developers who added features to 3.11, to send in their
respective commits for updating the release notes with the required
information (please use the same github issue# as the feature, when
posting commits against the release-notes, that way the issue also gets
updated with a reference to the commit).

This is also a good time for developers to edit gluster documentation,
to add details regarding the features added to 3.11 [4].

Thanks,
Shyam and Kaushal

[1] File a bug: https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

[2] Tracker BZ for 3.11.0 blockers:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0

[3] Release notes:
https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md


[4] Gluster documentation repository:
https://github.com/gluster/glusterdocs
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Shyam

Talur,

Please wait for this fix before releasing 3.10.2.

We will take in the change to either prevent add-brick in 
sharded+distributed volumes, or throw a warning and force the use of 
--force to execute this.


Let's get a bug going, and not wait for someone to report it in 
bugzilla, and also mark it as blocking the 3.10.2 release tracker bug.


Thanks,
Shyam

On 05/02/2017 06:20 AM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:

Yeah it is a good idea. I asked him to raise a bug and we can move
forward with it.


+Raghavendra/Nitya who can help with the fix.



On Mon, May 1, 2017 at 9:07 PM, Joe Julian <j...@julianfamily.org
<mailto:j...@julianfamily.org>> wrote:


On 04/30/2017 01:13 AM, lemonni...@ulrar.net
<mailto:lemonni...@ulrar.net> wrote:

So I was a little bit lucky. If I had all the hardware
in place, probably I
would be fired after causing data loss by using
software marked as stable

Yes, we lost our data last year to this bug, and it wasn't a
test cluster.
We still hear from it from our clients to this day.

It is known that this feature causes data loss, and
there is no mention or
warning of it in the official docs.

I was (I believe) the first one to run into the bug; it
happens, and I knew it
was a risk when installing gluster.
But since then I didn't see any warnings anywhere except
here, and I agree
with you that it should be mentioned in big bold letters on
the site.

Might even be worth adding a warning directly on the cli
when trying to
add bricks if sharding is enabled, to make sure no-one will
destroy a
whole cluster for a known bug.


I absolutely agree - or, just disable the ability to add-brick
with sharding enabled. Losing data should never be allowed.
___
Gluster-devel mailing list
gluster-de...@gluster.org <mailto:gluster-de...@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-devel
<http://lists.gluster.org/mailman/listinfo/gluster-devel>




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>




--
Pranith


___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:55 PM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:20 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com
<mailto:gandalf.corvotempe...@gmail.com>> wrote:

2017-05-01 20:43 GMT+02:00 Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>>:
> I do agree that for the duration a brick is replaced its replication count
> is down by 1, is that your concern? In which case I do note that without 
(a)
> above, availability is at risk during the operation. Which needs other
> strategies/changes to ensure tolerance to errors/faults.

Oh, yes, i've forgot this too.

I don't know Ceph, but Lizard, when moving chunks across the cluster,
does a copy, not a movement
During the whole operation you'll end with some files/chunks
replicated more than the requirement.


Replace-brick as a command is implemented with the goal of replacing a
disk that went bad. So the availability was already less. In 2013-2014 I
proposed that we do it by adding a brick to just that replica set and
increasing its replica count just for that set; once heal is complete we
could remove this brick. But at that point I didn't see any benefit to
that approach, because availability was already down by 1. But with all
of this discussion it seems like a good time to revive this idea. I saw
that Shyam suggested the same in the PR he mentioned before.


Ah! I did not know this, thanks. Yes, in essence this is what I suggest, 
but at that time (13-14) I guess we did not have EC, so in the current 
proposal I include EC and also ways to deal with pure-distribute-only 
environments, using the same/similar scheme.
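
For readers following the thread, the operation being discussed maps roughly 
to this CLI flow (volume and brick paths are placeholders):

    # Swap a failed brick for a new one and let self-heal rebuild it
    gluster volume replace-brick vol1 server2:/bricks/old server2:/bricks/new commit force
    gluster volume heal vol1 info    # the replica set runs one copy short until this drains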






If you have a replica 3, during the movement some files get replica 4.
In Gluster the same operation will bring you down to replica 2.

IMHO, this isn't a viable/reliable solution

Any chance to change "replace-brick" to increase the replica count
during the operation?

It can be done. We just need to find time to do this.


Agreed; to add to this point, and to reiterate: we are looking at "+1 
scaling", and this discussion helps in attempting to converge on a lot of 
the why's for the same, if not necessarily the how's.


So, Gandalf, it will be part of the roadmap; just when we may be able to 
pick this up and deliver it is not clear yet (as Pranith puts it as well).





--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:47 PM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:14 AM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:

On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:07 AM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>
<mailto:srang...@redhat.com <mailto:srang...@redhat.com>>> wrote:

On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:



    On Mon, May 1, 2017 at 11:43 PM, Shyam
<srang...@redhat.com <mailto:srang...@redhat.com>
<mailto:srang...@redhat.com <mailto:srang...@redhat.com>>
<mailto:srang...@redhat.com <mailto:srang...@redhat.com>
<mailto:srang...@redhat.com <mailto:srang...@redhat.com>>>> wrote:

On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:

Splitting the bricks need not be a post factum decision, we can
start with larger brick counts, on a given node/disk count, and
hence spread these bricks to newer nodes/bricks as they are added.


Let's say we have 1 disk, we format it with say XFS and that
becomes a brick at the moment. Just curious, what will be the
relationship between brick to disk in this case (if we leave out
LVM for this example)?


I would assume the relation is brick to provided FS directory (not
brick to disk, we do not control that at the moment, other than
providing best practices around the same).


Hmmm... as per my understanding, if we do this then 'df' I guess
will report wrong values? available-size/free-size etc will be
counted more than once?


This is true even today, if anyone uses 2 bricks from the
same mount.


That is the reason why documentation is the way it is as far as
I can
remember.



I forgot a converse though, we could take a disk and partition it
(LVM thinp volumes) and use each of those partitions as bricks,
avoiding the problem of df double counting. Further, thinp will help
us expand available space to other bricks on the same disk, as we
destroy older bricks or create new ones to accommodate the moving
pieces (needs more careful thought though, but for sure is a
nightmare without thinp).

I am not so much a fan of a large number of thinp partitions, so as
long as that is reasonably in control, we can possibly still use it.
The big advantage though is, we nuke a thinp volume when the brick
that uses that partition moves out of that disk, and we get the
space back, rather than having to do something akin to rm -rf on the
backend to reclaim space.


Other way to achieve the same is to leverage the quota
functionality of
counting how much size is used under a directory.


Yes, I think this is the direction to solve the 2 bricks on a single
FS as well. Also, IMO, the weight of accounting at each directory
level that quota brings in seems/is heavyweight to solve just *this*
problem.


I saw some github issues where Sanoj is exploring XFS-quota integration.
Project Quota ideas which are a bit less heavy would be nice too.
Actually all these issues are very much interlinked.


Yes, while discussing DHT2, Quota-2 [1] was discussed and project quotas 
and how to leverage the design in gluster was also discussed. IMO 
(again), this would be the right way forward for quota (orthogonal to 
this discussion, but still).


[1] Quota-2 discussion: 
http://lists.gluster.org/pipermail/gluster-devel/2015-December/047443.html
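
For background, XFS project quotas (referenced above) account for and cap 
usage per directory tree in the filesystem itself; a hedged sketch, assuming 
a brick filesystem mounted with prjquota and an arbitrary project id 42 (all 
names and numbers here are made up):

    mount -o prjquota /dev/vg_bricks/brick1 /bricks/brick1
    # Tag the brick directory tree as project 42, then set a hard block limit
    xfs_quota -x -c 'project -s -p /bricks/brick1/vol1 42' /bricks/brick1
    xfs_quota -x -c 'limit -p bhard=100g 42' /bricks/brick1
    xfs_quota -x -c 'report -p' /bricks/brick1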




It all seems to point that we basically need to increase granularity of
brick and solve problems that come up as we go along.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:42 PM, Joe Julian wrote:



On 05/01/2017 11:36 AM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com
<mailto:gandalf.corvotempe...@gmail.com>> wrote:

2017-05-01 20:30 GMT+02:00 Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>>:
> Yes, as a matter of fact, you can do this today using the CLI
and creating
> nx2 instead of 1x2. 'n' is best decided by you, depending on the
growth
> potential of your cluster, as at some point 'n' wont be enough
if you grow
> by some nodes.
>
> But, when a brick is replaced we will fail to address "(a)
ability to retain
> replication/availability levels" as we support only homogeneous
replication
> counts across all DHT subvols. (I could be corrected on this
when using
> replace-brick though)


Yes, but this is error prone.


Why?



Because it's done by humans.


Fair point. If Gandalf concurs, we will add this to our "+1 scaling" 
feature effort (not yet on github as an issue).






I'm still thinking that saving (I don't know where, I don't know how)
a mapping between
files and bricks would solve many issues and add much more
flexibility.




--
Pranith


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:07 AM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:

On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:



On Mon, May 1, 2017 at 11:43 PM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>
<mailto:srang...@redhat.com <mailto:srang...@redhat.com>>> wrote:

On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:

Splitting the bricks need not be a post factum
decision, we can
start with larger brick counts, on a given node/disk
count, and
hence spread these bricks to newer nodes/bricks as
they are
added.


Let's say we have 1 disk, we format it with say XFS and that
becomes a
brick at the moment. Just curious, what will be the
relationship
between
brick to disk in this case(If we leave out LVM for this
example)?


I would assume the relation is brick to provided FS
directory (not
brick to disk, we do not control that at the moment, other than
providing best practices around the same).


Hmmm... as per my understanding, if we do this then 'df' I guess
will
report wrong values? available-size/free-size etc will be
counted more
than once?


This is true even today, if anyone uses 2 bricks from the same mount.


That is the reason why documentation is the way it is as far as I can
remember.



I forgot a converse though, we could take a disk and partition it
(LVM thinp volumes) and use each of those partitions as bricks,
avoiding the problem of df double counting. Further thinp will help
us expand available space to other bricks on the same disk, as we
destroy older bricks or create new ones to accommodate the moving
pieces (needs more careful thought though, but for sure is a
nightmare without thinp).

I am not so much a fan of a large number of thinp partitions, so as
long as that is reasonably in control, we can possibly still use it.
The big advantage though is, we nuke a thinp volume when the brick
that uses that partition moves out of that disk, and we get the
space back, rather than having to do something akin to rm -rf on the
backend to reclaim space.


Other way to achieve the same is to leverage the quota functionality of
counting how much size is used under a directory.


Yes, I think this is the direction to solve the 2 bricks on a single FS 
as well. Also, IMO, the weight of accounting at each directory level 
that quota brings in seems/is heavyweight to solve just *this* problem.










Today, gluster takes in a directory on host as a brick, and
assuming
we retain that, we would need to split this into multiple
sub-dirs
and use each sub-dir as a brick internally.

All these sub-dirs thus created are part of the same volume
(due to
our current snapshot mapping requirements).




--
Pranith




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:36 PM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com
<mailto:gandalf.corvotempe...@gmail.com>> wrote:

2017-05-01 20:30 GMT+02:00 Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>>:
> Yes, as a matter of fact, you can do this today using the CLI and creating
> nx2 instead of 1x2. 'n' is best decided by you, depending on the growth
> potential of your cluster, as at some point 'n' wont be enough if you grow
> by some nodes.
>
> But, when a brick is replaced we will fail to address "(a) ability to 
retain
> replication/availability levels" as we support only homogeneous 
replication
> counts across all DHT subvols. (I could be corrected on this when using
> replace-brick though)


Yes, but this is error prone.


Why?


To add to Pranith's question, (and to touch a raw nerve, my apologies) 
there is no rebalance in this situation (yet), if you notice.


I do agree that for the duration a brick is replaced its replication 
count is down by 1, is that your concern? In which case I do note that 
without (a) above, availability is at risk during the operation. Which 
needs other strategies/changes to ensure tolerance to errors/faults.






I'm still thinking that saving (I don't know where, I don't know how)
a mapping between
files and bricks would solve many issues and add much more flexibility.




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Add single server

2017-05-01 Thread Shyam

On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:



On Mon, May 1, 2017 at 11:43 PM, Shyam <srang...@redhat.com
<mailto:srang...@redhat.com>> wrote:

On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:

Splitting the bricks need not be a post factum decision, we can
start with larger brick counts, on a given node/disk count, and
hence spread these bricks to newer nodes/bricks as they are
added.


Let's say we have 1 disk, we format it with say XFS and that
becomes a
brick at the moment. Just curious, what will be the relationship
between
brick to disk in this case(If we leave out LVM for this example)?


I would assume the relation is brick to provided FS directory (not
brick to disk, we do not control that at the moment, other than
providing best practices around the same).


Hmmm... as per my understanding, if we do this then 'df' I guess will
report wrong values? available-size/free-size etc will be counted more
than once?


This is true even today, if anyone uses 2 bricks from the same mount.

I forgot a converse though, we could take a disk and partition it (LVM 
thinp volumes) and use each of those partitions as bricks, avoiding the 
problem of df double counting. Further thinp will help us expand 
available space to other bricks on the same disk, as we destroy older 
bricks or create new ones to accommodate the moving pieces (needs more 
careful thought though, but for sure is a nightmare without thinp).


I am not so much a fan of a large number of thinp partitions, so as long 
as that is reasonably in control, we can possibly still use it. The big 
advantage though is, we nuke a thinp volume when the brick that uses 
that partition moves out of that disk, and we get the space back, 
rather than having to do something akin to rm -rf on the backend to 
reclaim space.
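
To make the thinp idea concrete, a rough sketch of carving one disk into 
thin-provisioned bricks (device, sizes and names below are all assumptions):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 900G --thinpool tp_bricks vg_bricks
    lvcreate -V 300G --thin -n brick1 vg_bricks/tp_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mkdir -p /bricks/brick1 && mount /dev/vg_bricks/brick1 /bricks/brick1
    # When the brick later moves off this disk, reclaim the space in one step:
    lvremove vg_bricks/brick1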






Today, gluster takes in a directory on host as a brick, and assuming
we retain that, we would need to split this into multiple sub-dirs
and use each sub-dir as a brick internally.

All these sub-dirs thus created are part of the same volume (due to
our current snapshot mapping requirements).




--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

