On Tue, Jun 24, 2014 at 03:15:58AM -0700, Gluster Build System wrote:
> 
> 
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1.tar.gz
> 
> This release is made off jenkins-release-73

Many thanks to everyone who tested the glusterfs-3.5.1 beta releases and 
gave feedback. There were no regressions reported compared to the 3.5.0 
release.

Many bugs have been fixed, and documentation for all new features in 3.5 
should be included now. Thanks to all the reporters, developers and 
testers for improving the 3.5 stable series.

Below you will find the release notes for glusterfs-3.5.1 in Markdown format; 
they are included in the tar.gz as
doc/release-notes/3.5.1.md. The mirror repository on GitHub provides 
a nicely rendered version:
- https://github.com/gluster/glusterfs/blob/v3.5.1/doc/release-notes/3.5.1.md

Packages for different Linux distributions will follow shortly.  
Notifications are normally sent to this list when the packages are 
available for download and/or have reached the distributions' update 
infrastructure.

Changes for a new 3.5.2 release are now being accepted. The list of 
proposed fixes is already growing:
- https://bugzilla.redhat.com/showdependencytree.cgi?hide_resolved=0&id=glusterfs-3.5.2

Anyone is free to request a bugfix or backport for the 3.5.2 release. In 
order to do so, file a bug and set the 'blocked' field to 
'glusterfs-3.5.2' so that we can track the requests. Use this link to 
make it a little easier for yourself:
- https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.5.1&blocked=glusterfs-3.5.2

Cheers,
Niels



## Release Notes for GlusterFS 3.5.1

This is mostly a bugfix release. The [Release Notes for 3.5.0](3.5.0.md)
contain a listing of all the new features that were added.

There are two notable changes that are more than just bug fixes or
documentation additions:

1. A new volume option `server.manage-gids` has been added.
   This option should be used when users of a volume are in more than
   approximately 93 groups (Bug [1096425](https://bugzilla.redhat.com/1096425));
   see the example following this list.
2. The Duplicate Request Cache for NFS is now disabled by default. This may
   reduce performance for certain workloads, but it improves the overall
   stability and memory footprint for most users.
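
Both options are set through the regular `gluster volume set` interface. The
snippet below is a minimal sketch (`<volname>` is a placeholder); re-enabling
the Duplicate Request Cache with `nfs.drc` is only needed if you want the
previous behaviour back. Note that `server.manage-gids` only takes effect in
the brick processes after the volume has been restarted (see the Known Issues
below):

~~~
# resolve groups on the bricks, for users in more than ~93 groups
gluster volume set <volname> server.manage-gids on

# optionally turn the NFS Duplicate Request Cache back on (off by default in 3.5.1)
gluster volume set <volname> nfs.drc on
~~~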

### Bugs Fixed:

* [765202](https://bugzilla.redhat.com/765202): lgetxattr called with invalid 
keys on the bricks
* [833586](https://bugzilla.redhat.com/833586): inodelk hang from 
marker_rename_release_newp_lock
* [859581](https://bugzilla.redhat.com/859581): self-heal process can sometimes 
create directories instead of symlinks for the root gfid file in .glusterfs
* [986429](https://bugzilla.redhat.com/986429): Backupvolfile server option 
should work internal to GlusterFS framework
* [1039544](https://bugzilla.redhat.com/1039544): [FEAT] "gluster volume heal 
info" should list the entries that actually required to be healed.
* [1046624](https://bugzilla.redhat.com/1046624): Unable to heal symbolic Links
* [1046853](https://bugzilla.redhat.com/1046853): AFR : For every file 
self-heal there are warning messages reported in glustershd.log file
* [1063190](https://bugzilla.redhat.com/1063190): Volume was not accessible 
after server side quorum was met
* [1064096](https://bugzilla.redhat.com/1064096): The old Python Translator 
code (not Glupy) should be removed
* [1066996](https://bugzilla.redhat.com/1066996): Using sanlock on a gluster 
mount with replica 3 (quorum-type auto) leads to a split-brain
* [1071191](https://bugzilla.redhat.com/1071191): [3.5.1] Sporadic SIGBUS with 
mmap() on a sparse file created with open(), seek(), write()
* [1078061](https://bugzilla.redhat.com/1078061): Need ability to heal 
mismatching user extended attributes without any changelogs
* [1078365](https://bugzilla.redhat.com/1078365): New xlators are linked as 
versioned .so files, creating <xlator>.so.0.0.0
* [1086743](https://bugzilla.redhat.com/1086743): Add documentation for the 
Feature: RDMA-connection manager (RDMA-CM)
* [1086748](https://bugzilla.redhat.com/1086748): Add documentation for the 
Feature: AFR CLI enhancements
* [1086749](https://bugzilla.redhat.com/1086749): Add documentation for the 
Feature: Exposing Volume Capabilities
* [1086750](https://bugzilla.redhat.com/1086750): Add documentation for the 
Feature: File Snapshots in GlusterFS
* [1086751](https://bugzilla.redhat.com/1086751): Add documentation for the 
Feature: gfid-access
* [1086752](https://bugzilla.redhat.com/1086752): Add documentation for the 
Feature: On-Wire Compression/Decompression
* [1086754](https://bugzilla.redhat.com/1086754): Add documentation for the 
Feature: Quota Scalability
* [1086755](https://bugzilla.redhat.com/1086755): Add documentation for the 
Feature: readdir-ahead
* [1086756](https://bugzilla.redhat.com/1086756): Add documentation for the 
Feature: zerofill API for GlusterFS
* [1086758](https://bugzilla.redhat.com/1086758): Add documentation for the 
Feature: Changelog based parallel geo-replication
* [1086760](https://bugzilla.redhat.com/1086760): Add documentation for the 
Feature: Write Once Read Many (WORM) volume
* [1086762](https://bugzilla.redhat.com/1086762): Add documentation for the 
Feature: BD Xlator - Block Device translator
* [1086766](https://bugzilla.redhat.com/1086766): Add documentation for the 
Feature: Libgfapi
* [1086774](https://bugzilla.redhat.com/1086774): Add documentation for the 
Feature: Access Control List - Version 3 support for Gluster NFS
* [1086781](https://bugzilla.redhat.com/1086781): Add documentation for the 
Feature: Eager locking
* [1086782](https://bugzilla.redhat.com/1086782): Add documentation for the 
Feature: glusterfs and  oVirt integration
* [1086783](https://bugzilla.redhat.com/1086783): Add documentation for the 
Feature: qemu 1.3 - libgfapi integration
* [1088848](https://bugzilla.redhat.com/1088848): Spelling errors in 
rpc/rpc-transport/rdma/src/rdma.c
* [1089054](https://bugzilla.redhat.com/1089054): gf-error-codes.h is missing 
from source tarball
* [1089470](https://bugzilla.redhat.com/1089470): SMB: Crash on brick process 
during compile kernel.
* [1089934](https://bugzilla.redhat.com/1089934): list dir with more than N 
files results in Input/output error
* [1091340](https://bugzilla.redhat.com/1091340): Doc: Add glfs_fini known 
issue to release notes 3.5
* [1091392](https://bugzilla.redhat.com/1091392): glusterfs.spec.in: minor/nit 
changes to sync with Fedora spec
* [1095256](https://bugzilla.redhat.com/1095256): Excessive logging from 
self-heal daemon, and bricks
* [1095595](https://bugzilla.redhat.com/1095595): Stick to IANA standard while 
allocating brick ports
* [1095775](https://bugzilla.redhat.com/1095775): Add support in libgfapi to 
fetch volume info from glusterd.
* [1095971](https://bugzilla.redhat.com/1095971): Stopping/Starting a Gluster 
volume resets ownership
* [1096040](https://bugzilla.redhat.com/1096040): AFR : self-heal-daemon not 
clearing the change-logs of all the sources after self-heal
* [1096425](https://bugzilla.redhat.com/1096425): i/o error when one user tries 
to access RHS volume over NFS with 100+ GIDs
* [1099878](https://bugzilla.redhat.com/1099878): Need support for handle based 
Ops to fetch/modify extended attributes of a file
* [1101647](https://bugzilla.redhat.com/1101647): gluster volume heal volname 
statistics heal-count not giving desired output.
* [1102306](https://bugzilla.redhat.com/1102306): license: 
xlators/features/glupy dual license GPLv2 and LGPLv3+
* [1103413](https://bugzilla.redhat.com/1103413): Failure in gf_log_init 
reopening stderr
* [1104592](https://bugzilla.redhat.com/1104592): heal info may give Success 
instead of transport end point not connected when a brick is down.
* [1104915](https://bugzilla.redhat.com/1104915): glusterfsd crashes while 
doing stress tests
* [1104919](https://bugzilla.redhat.com/1104919): Fix memory leaks in 
gfid-access xlator.
* [1104959](https://bugzilla.redhat.com/1104959): Dist-geo-rep : some of the 
files not accessible on slave after the geo-rep sync from master to slave.
* [1105188](https://bugzilla.redhat.com/1105188): Two instances each, of brick 
processes, glusterfs-nfs and quotad seen after glusterd restart
* [1105524](https://bugzilla.redhat.com/1105524): Disable nfs.drc by default
* [1107937](https://bugzilla.redhat.com/1107937): quota-anon-fd-nfs.t fails 
spuriously
* [1109832](https://bugzilla.redhat.com/1109832): I/O fails for glusterfs 
3.4 AFR clients accessing servers upgraded to glusterfs 3.5
* [1110777](https://bugzilla.redhat.com/1110777): glusterfsd OOM - using all 
memory when quota is enabled

### Known Issues:

- The following configuration changes are necessary for qemu and samba
  integration with libgfapi to work seamlessly:

   1. `gluster volume set <volname> server.allow-insecure on`
   2. Restart the volume:
       ~~~
       gluster volume stop <volname>
       gluster volume start <volname>
       ~~~
   3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:
       ~~~
       option rpc-auth-allow-insecure on
       ~~~
   4. Restart glusterd:
       ~~~
       service glusterd restart
       ~~~

   More details are also documented in the Gluster Wiki on the [Libgfapi with 
qemu 
libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt)
 page.

- For Block Device translator based volumes, the open-behind translator on the
  client side needs to be disabled (see the example below).
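
  A minimal sketch of how this could be done, assuming the client-side
  translator is toggled via the standard `performance.open-behind` volume
  option (`<volname>` is a placeholder):
  ~~~
  gluster volume set <volname> performance.open-behind off
  ~~~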

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will
  cause the client to hang, as has been
  [reported by QEMU developers](https://bugs.launchpad.net/bugs/1308542).
  The workaround is NOT to call `glfs_fini` for error cases encountered before
  a successful `glfs_init`. Follow [Bug 1091335](https://bugzilla.redhat.com/1091335)
  to get informed when a release is made available that contains a final fix.

- After enabling `server.manage-gids`, the volume needs to be stopped and
  started again to have the option enabled in the brick processes:
  ~~~
  gluster volume stop <volname>
  gluster volume start <volname>
  ~~~

