[Gluster-users] Gluster 3.12.1 OOM issue with volume which stores large amount of files

2017-11-01 Thread Jyri Palis
Hi,

I have been struggling with this OOM issue and so far nothing has helped.
We are running a 10TB archive volume which stores a bit more than 7M
files. The problem is that due to the way we manage this archive, we are
forced to run daily "full scans" of the file system to discover new
uncompressed files. I know, I know, this is not an optimal solution, but
that is how it is right now. This scan causes glusterfsd to grow its
memory usage until the kernel's OOM protection kills the process.

Each brick host has 20GB of RAM

Volume Name: logarch
Type: Distributed-Replicate
Volume ID: f5c109a0-704b-411b-9a76-be16f5e936db
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: lfs-n1-vdc1:/data/logarch/brick1/brick
Brick2: lfs-n1-vdc2:/data/logarch/brick1/brick
Brick3: lfs-n1-vdc1:/data/logarch/brick2/brick
Brick4: lfs-n1-vdc2:/data/logarch/brick2/brick
Brick5: lfs-n1-vdc1:/data/logarch/brick3/brick
Brick6: lfs-n1-vdc2:/data/logarch/brick3/brick
Options Reconfigured:
cluster.halo-enabled: yes
auth.allow: 10.10.10.1, 10.10.10.2
network.inode-lru-limit: 100
performance.md-cache-timeout: 60
performance.cache-invalidation: on
performance.cache-samba-metadata: off
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.client-io-threads: on
cluster.shd-max-threads: 4
performance.parallel-readdir: off
transport.address-family: inet
nfs.disable: on
cluster.lookup-optimize: on
server.event-threads: 16
client.event-threads: 16
server.manage-gids: off
performance.nl-cache: on
performance.rda-cache-limit: 64MB
features.bitrot: on
features.scrub: Active
cluster.brick-multiplex: on
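
One way to narrow down where the memory goes is to compare brick
statedumps taken before and after a scan run; a minimal sketch, assuming
the default dump directory /var/run/gluster:

# dump all brick processes of the volume before the scan
gluster volume statedump logarch
# ... run the daily full scan, then dump again ...
gluster volume statedump logarch
# compare the mempool/memusage sections of the two newest dumps
ls -lt /var/run/gluster/*.dump.*
# a quick live view of brick memory usage
gluster volume status logarch mem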

PS: All affected clients use FUSE-type mounts.

J.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Monthly Newsletter, October 2017

2017-11-01 Thread Marcin Dulak
Hi,

On Wed, Nov 1, 2017 at 5:32 PM, Amye Scavarda  wrote:

> Welcome to an exceptionally busy month for Gluster!
>
> Gluster Summit
> Thanks to all who attended this year's Gluster Summit. As you can see,
> conversations are continuing to happen, with notes from the Birds of a
> Feather sessions starting to come into the mailing lists. If something
> sparked your interest, please post about it!
> We'll be posting recordings of the talks as we're able to, and slides
> as our speakers send them in.
> In the meantime, if you have photos from the event that you'd like to
> share, please add them to our Flickr group:
> https://www.flickr.com/groups/3740055@N23/pool/with/38091522691/
>
> Community Survey now open through November 28!
> http://www.gluster.org/gluster-community-survey-october-2017/
>
> Gluster Developer Conversations
> We had a lot of interest in lightning talks at Gluster Summit, so
> we're starting something new: a one-hour meeting with five 5-minute
> lightning talks and time for discussion. Our first is on November 28
> at 15:00 UTC; propose your talks on the gluster-users mailing list
> thread here:
> http://lists.gluster.org/pipermail/gluster-users/2017-November/032799.html
>
> Noteworthy threads:
> [Gluster-users] Gluster CLI Feedback
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032657.html
> [Gluster-users] FOSDEM Call for Participation: Software Defined Storage
> devroom
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032673.html
> [Gluster-users] gluster-block v0.3 is alive!
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032694.html
> [Gluster-users] Gluster Health Report tool
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032758.html
> [Gluster-users] Community Meeting: How to make it more active
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032792.html
> [Gluster-users] Documentation BoF
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032793.html
> [Gluster-users] BoF - Gluster for VM store use case
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032794.html
>
> Gluster-devel:
> [Gluster-devel] Enabling core and supporting Gluster services to start
> by default?
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053744.html
> [Gluster-devel] Report ESTALE as ENOENT
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053779.html
> [Gluster-devel] Note on github issue updates from gluster-ant
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053772.html
> [Gluster-devel] Locating blocker bugs for a release (and what is a
> blocker anyway)
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053777.html
> [Gluster-devel] New coding standard
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053806.html
> [Gluster-devel] R.Talur heketi maintainer.
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053828.html
> [Gluster-devel] Identifying tests to automate with Glusto
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053845.html
> [Gluster-devel] Quarterly Infra Updates
> http://lists.gluster.org/pipermail/gluster-devel/2017-October/053850.html
>
> Upcoming CFPs:
> DevConf
> https://devconf.cz/ - CfP closes November 17, 2017
>

I know this is not quite the right place to make this comment, but if
anybody is working with the organizers of DevConf and could influence
the quality of the videos produced there, please consider the following:

- Use at least a boom pole (http://www.rode.com/accessories/boompole)
with a microphone to pick up the questions from the audience, and always
use a microphone for the speaker. Ideally, improve the video quality as
well; the slides in many recordings are simply unreadable.

- Always give meaningful names to the videos published at
https://www.youtube.com/channel/UCmYAQDZIQGm_kPvemBc_qwg/videos
A title like "DevConf 2017 - Day 2 - 13:00 - 16:00 - Workshops - A113"
tells me nothing; I would have to play each video until the author shows
the title slide just to find out what it is about.
Could the channel also be given a proper name instead of
UCmYAQDZIQGm_kPvemBc_qwg?

Cheers,

Marcin


> Software Defined Storage devroom at FOSDEM: CfP closes November 26
> http://lists.gluster.org/pipermail/gluster-users/2017-October/032673.html
>
> Top Contributing Companies:  Red Hat,  Gluster, Inc.,  Facebook
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case

2017-11-01 Thread Alex K
Yes, I would be interested to hear more about the findings. Let us know
once you have them.

On Nov 1, 2017 13:10, "Shyam Ranganathan"  wrote:

> On 10/31/2017 08:36 PM, Ben Turner wrote:
>
>>> * Erasure coded volumes with sharding - seen as a good fit for VM disk
>>> storage
>>>
>> I am working on this with a customer, and we have been able to do
>> 400-500 MB/sec writes!  Normally things max out at ~150-250 MB/sec.
>> The trick is to use multiple disk files, create the LVM stack on top
>> of them, and use native LVM striping.  We have found that 4-6 files
>> seem to give the best perf on our setup.  I don't think we are using
>> sharding on the EC vols, just multiple files and LVM striping.
>> Sharding may be able to avoid the LVM striping, but I bet dollars to
>> doughnuts you won't see this level of perf :)  I am working on a blog
>> post on RHHI and RHEV + RHS performance where I am in some cases able
>> to get 2x+ the performance out of VMs / VM storage.  I'd be happy to
>> share my data / findings.
>>
>>
> Ben, we would like to hear more, so please do share your thoughts further.
> There are a fair number of users in the community who have this use-case
> and may have some interesting questions around the proposed method.
>
> Shyam
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Announcing Gluster release 3.10.7 (Long Term Maintenance)

2017-11-01 Thread Ludwig Gamache
fred

We are using 3.10.

Ludwig

On Wed, Nov 1, 2017 at 7:16 AM, Shyam Ranganathan 
wrote:

> The Gluster community is pleased to announce the release of Gluster
> 3.10.7 (packages available at [1]).
>
> Release notes for the release can be found at [2].
>
> We are still working on a further fix for the corruption issue that
> occurs when sharded volumes are rebalanced; details below.
>
>
> * Expanding a gluster volume that is sharded may cause file corruption
>  - Sharded volumes are typically used for VM images; if such volumes
> are expanded or possibly contracted (i.e., add/remove bricks and
> rebalance), there are reports of VM images getting corrupted.
>  - The fix for the last known cause of corruption, bug #1498081, is
> still pending and not yet part of this release.
>
> Reminder: Since GlusterFS 3.9, the Fedora RPM and Debian .deb public
> signing key lives under the release directory, e.g., for 3.10, at
> https://download.gluster.org/pub/gluster/glusterfs/3.10/rsa.pub. If you
> have an old /etc/yum.repos.d/glusterfs-fedora.repo file with a link to
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub, then
> you need to fix your .repo file to point to the correct location of the
> public key. This is a safety feature to help prevent unintended updates
> from earlier versions.
>
> Thanks,
> Gluster community
>
> [1] Packages:
> https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.7/
>
> [2] Release notes:
> https://github.com/gluster/glusterfs/blob/v3.10.7/doc/release-notes/3.10.7.md
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Ludwig Gamache
IT Director - Element AI
4200 St-Laurent, suite 1200
514-704-0564
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Monthly Newsletter, October 2017

2017-11-01 Thread Amye Scavarda
Welcome to an exceptionally busy month for Gluster!

Gluster Summit
Thanks to all who attended this year's Gluster Summit. As you can see,
conversations are continuing to happen, with notes from the Birds of a
Feather sessions starting to come into the mailing lists. If something
sparked your interest, please post about it!
We'll be posting recordings of the talks as we're able to, and slides
as our speakers send them in.
In the meantime, if you have photos from the event that you'd like to
share, please add them to our Flickr group:
https://www.flickr.com/groups/3740055@N23/pool/with/38091522691/

Community Survey now open through November 28!
http://www.gluster.org/gluster-community-survey-october-2017/

Gluster Developer Conversations
We had a lot of interest in lightning talks at Gluster Summit, so
we're starting something new: a one-hour meeting with five 5-minute
lightning talks and time for discussion. Our first is on November 28
at 15:00 UTC; propose your talks on the gluster-users mailing list
thread here:
http://lists.gluster.org/pipermail/gluster-users/2017-November/032799.html

Noteworthy threads:
[Gluster-users] Gluster CLI Feedback
http://lists.gluster.org/pipermail/gluster-users/2017-October/032657.html
[Gluster-users] FOSDEM Call for Participation: Software Defined Storage devroom
http://lists.gluster.org/pipermail/gluster-users/2017-October/032673.html
[Gluster-users] gluster-block v0.3 is alive!
http://lists.gluster.org/pipermail/gluster-users/2017-October/032694.html
[Gluster-users] Gluster Health Report tool
http://lists.gluster.org/pipermail/gluster-users/2017-October/032758.html
[Gluster-users] Community Meeting: How to make it more active
http://lists.gluster.org/pipermail/gluster-users/2017-October/032792.html
[Gluster-users] Documentation BoF
http://lists.gluster.org/pipermail/gluster-users/2017-October/032793.html
[Gluster-users] BoF - Gluster for VM store use case
http://lists.gluster.org/pipermail/gluster-users/2017-October/032794.html

Gluster-devel:
[Gluster-devel] Enabling core and supporting Gluster services to start
by default?
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053744.html
[Gluster-devel] Report ESTALE as ENOENT
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053779.html
[Gluster-devel] Note on github issue updates from gluster-ant
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053772.html
[Gluster-devel] Locating blocker bugs for a release (and what is a
blocker anyway)
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053777.html
[Gluster-devel] New coding standard
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053806.html
[Gluster-devel] R.Talur heketi maintainer.
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053828.html
[Gluster-devel] Identifying tests to automate with Glusto
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053845.html
[Gluster-devel] Quarterly Infra Updates
http://lists.gluster.org/pipermail/gluster-devel/2017-October/053850.html

Upcoming CFPs:
DevConf
https://devconf.cz/ - CfP closes November 17, 2017
Software Defined Storage devroom at FOSDEM: CfP closes November 26
http://lists.gluster.org/pipermail/gluster-users/2017-October/032673.html

Top Contributing Companies:  Red Hat,  Gluster, Inc.,  Facebook
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Developer Conversations - Nov 28 at 15:00 UTC

2017-11-01 Thread Amye Scavarda
Hi all!
Based on the interest in having more lightning talks at Gluster
Summit, we'll be trying something new: Gluster Developer
Conversations. This will be a one-hour meeting on November 28th at
15:00 UTC, with five 5-minute lightning talks and time for discussion
in between. The meeting will be recorded, and I'll be posting the
individual talks separately in our community channels.

What would you like to talk about?
Respond on this thread, and if I get more than five proposals, we'll
schedule follow-up meetings.
Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing Gluster release 3.10.7 (Long Term Maintenance)

2017-11-01 Thread Shyam Ranganathan

The Gluster community is pleased to announce the release of Gluster
3.10.7 (packages available at [1]).

Release notes for the release can be found at [2].

We are still working on a further fix for the corruption issue that
occurs when sharded volumes are rebalanced; details below.


* Expanding a gluster volume that is sharded may cause file corruption
 - Sharded volumes are typically used for VM images; if such volumes
are expanded or possibly contracted (i.e., add/remove bricks and
rebalance), there are reports of VM images getting corrupted.
 - The fix for the last known cause of corruption, bug #1498081, is
still pending and not yet part of this release.
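
For context, expanding and rebalancing refer to operations like the
following (a hypothetical sketch with placeholder host and brick names;
best avoided on sharded volumes until the fix lands):

# add a new pair of bricks to a replica-2 volume ...
gluster volume add-brick <volname> <host1>:/bricks/new/brick \
    <host2>:/bricks/new/brick
# ... then redistribute existing data onto them
gluster volume rebalance <volname> start
gluster volume rebalance <volname> status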

Reminder: Since GlusterFS 3.9, the Fedora RPM and Debian .deb public
signing key lives under the release directory, e.g., for 3.10, at
https://download.gluster.org/pub/gluster/glusterfs/3.10/rsa.pub. If you
have an old /etc/yum.repos.d/glusterfs-fedora.repo file with a link to
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub, then
you need to fix your .repo file to point to the correct location of the
public key. This is a safety feature to help prevent unintended updates
from earlier versions.
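
The change itself is a one-line edit to the gpgkey entry; something like
the following should work (a hedged sketch; verify the file name and its
current contents on your system first):

# repoint the signing key from the old LATEST location to the
# per-release location
sed -i 's|glusterfs/LATEST/rsa.pub|glusterfs/3.10/rsa.pub|' \
    /etc/yum.repos.d/glusterfs-fedora.repo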

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.7/

[2] Release notes:
https://github.com/gluster/glusterfs/blob/v3.10.7/doc/release-notes/3.10.7.md
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case

2017-11-01 Thread Shyam Ranganathan

On 10/31/2017 08:36 PM, Ben Turner wrote:

* Erasure coded volumes with sharding - seen as a good fit for VM disk
storage

I am working on this with a customer, and we have been able to do 400-500
MB/sec writes!  Normally things max out at ~150-250 MB/sec.  The trick is
to use multiple disk files, create the LVM stack on top of them, and use
native LVM striping.  We have found that 4-6 files seem to give the best
perf on our setup.  I don't think we are using sharding on the EC vols,
just multiple files and LVM striping.  Sharding may be able to avoid the
LVM striping, but I bet dollars to doughnuts you won't see this level of
perf :)  I am working on a blog post on RHHI and RHEV + RHS performance
where I am in some cases able to get 2x+ the performance out of VMs / VM
storage.  I'd be happy to share my data / findings.
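
For readers who want to try this, a minimal sketch of the in-guest LVM
striping Ben describes might look like the following (device names,
stripe count, and stripe size are illustrative assumptions, not his
actual setup):

# inside the VM, with four virtual disks attached, each backed by its
# own file on the gluster volume:
pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
vgcreate vg_data /dev/vdb /dev/vdc /dev/vdd /dev/vde
# -i 4 = stripe across all four PVs, -I 256 = 256 KiB stripe size
lvcreate -i 4 -I 256 -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data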



Ben, we would like to hear more, so please do share your thoughts 
further. There are a fair number of users in the community who have this 
use-case and may have some interesting questions around the proposed method.


Shyam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users