[Gluster-devel] Disabling use of anonymous fds in open-behind

2018-06-11 Thread Raghavendra Gowdappa
All,

There is an option in open-behind that lets fuse native mounts use
anonymous fds. The reasoning is that, since anonymous fds are stateless, the
overhead of an open is avoided and hence performance is better. However, the
bugs filed at [1][2] seem to indicate the contrary.

Also, using anonymous fds affects other xlators which rely on per-fd state
[3].

So, this brings us to the question: do anonymous fds actually improve
performance on native fuse mounts? If not, we can disable them. Maybe they are
useful for lightweight metadata operations like fstat, but then the workload
should be limited to those. Note that open-behind uses anonymous fds for
only two fops - readv and fstat. And [1] has shown that they actually
regress performance for sequential reads.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1419807
[2] https://bugzilla.redhat.com/1489513, "read-ahead underperforms
expectations"

  open-behind   without patch (MiB/s)   with patch (MiB/s)
  on            132.87                  133.51
  off           139.70                  139.77

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1084508

PS: Anonymous fds are stateless fds for which a client like the native fuse
mount doesn't do an explicit open. Instead, bricks do the open on demand
during fops which need an fd (like readv, fstat, etc.).
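
As a loose illustration (plain POSIX C, not Gluster code; the file path and
sizes are hypothetical), compare a stateful sequential read that keeps one fd
alive with an open-per-request pattern. The real brick-side implementation
caches anonymous fds rather than reopening for every call, but the point
stands: stateless requests don't carry per-fd state such as a read-ahead
window across calls.

```
/* Illustration only: a single long-lived fd versus an open-per-request
 * ("stateless") pattern on a local filesystem. CHUNK/CHUNKS and the
 * default path are arbitrary. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK  (128 * 1024)
#define CHUNKS 1024

/* Stateful: open once; per-fd state (offset, kernel readahead) survives
 * across requests. */
static void read_stateful(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return;
    }
    char *buf = malloc(CHUNK);
    for (int i = 0; i < CHUNKS; i++)
        if (pread(fd, buf, CHUNK, (off_t)i * CHUNK) <= 0)
            break;
    free(buf);
    close(fd);
}

/* Stateless: every request pays for an open+close and starts with a cold
 * readahead window. */
static void read_stateless(const char *path)
{
    char *buf = malloc(CHUNK);
    for (int i = 0; i < CHUNKS; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            break;
        }
        ssize_t ret = pread(fd, buf, CHUNK, (off_t)i * CHUNK);
        close(fd);
        if (ret <= 0)
            break;
    }
    free(buf);
}

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : "/tmp/testfile";
    read_stateful(path);
    read_stateless(path);
    return 0;
}
```

In this local sketch the stateful loop lets the kernel's readahead ramp up
across calls, while every stateless request starts cold, which is at least
consistent with the sequential-read regression seen in [1].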

regards,
Raghavendra

Re: [Gluster-devel] Any mature (better) solution (way) to handle slow performance of 'ls -l'.

2018-06-11 Thread Yanfei Wang
Thanks for your kind reply, which is very helpful for me. I use 3.11.

Thanks a lot.  :-)

- Fei
On Thu, Jun 7, 2018 at 6:59 PM Poornima Gurusiddaiah wrote:
>
> If you are not using applications that rely on 100% metadata consistency,
> like databases, Kafka, AMQ, etc., you can use the volume options below:
>
> # gluster volume set  group metadata-cache
>
> # gluster volume set  network.inode-lru-limit 20
>
> # gluster volume set  performance.readdir-ahead on
>
> # gluster volume set  performance.parallel-readdir on
>
> For more information refer to [1]
>
> Also, which version of Gluster are you using? It's preferred to use 3.11 or
> above for these perf enhancements.
> Note that parallel-readdir is going to help increase the ls -l
> performance drastically in your case, but there are a few known corner-case
> issues.
>
> Regards,
> Poornima
>
> [1] 
> https://github.com/gluster/glusterdocs/pull/342/files#diff-62f536ad33b2c2210d023b0cffec2c64
>
> On Wed, May 30, 2018, 8:29 PM Yanfei Wang wrote:
>>
>> Hi experts on glusterFS,
>>
>> In our testbed, we found that 'ls -l' performance is pretty slow.
>> Indeed, from the perspective of the GlusterFS design space, our current
>> understanding is that we need to avoid running 'ls' on directories, since
>> it traverses all bricks sequentially.
>>
>> We use generic setting for our testbed:
>>
>> ```
>> Volume Name: gv0
>> Type: Distributed-Replicate
>> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 19 x 3 = 57
>> Transport-type: tcp
>> Bricks:
>> ...
>> Options Reconfigured:
>> features.inode-quota: off
>> features.quota: off
>> cluster.quorum-reads: on
>> cluster.quorum-count: 2
>> cluster.quorum-type: fixed
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.server-quorum-ratio: 51%
>>
>> ```
>>
>> After carefully consulting the docs, the NFS client seems to be the
>> preferred client solution for better 'ls' performance. However, this better
>> performance comes from caching metadata locally, I think, and the caching
>> mechanism will come at the cost of data coherence, right?
>>
>> I want to know the best or most mature way to trade off 'ls'
>> performance against data coherence in practice. Any comments are
>> welcome.
>>
>> Thanks.
>>
>> -Fei


Re: [Gluster-devel] copy_file_range() syscall for offloading copying of files?

2018-06-11 Thread Amar Tumballi
Thanks for the email Niels!

At present we are not actively looking at this, and it would surely be great
to get some activity going on this front. Happy to re-initiate the
discussions if there are any takers for the feature. From my team, I don't
see any sponsors for the feature in the next 6 months, at least :-/

-Amar


On Thu, Jun 7, 2018 at 6:27 AM, Niels de Vos wrote:

> Hi Pranith and Amar,
>
> The copy_file_range() syscall can support reflinks on the (local)
> filesystem. This is something I'd really like to see in Gluster soonish.
> There is https://github.com/gluster/glusterfs/issues/349 which discusses
> some of the technical bits, but there has not been an update since the
> beginning of April.
>
> If we can support a copy_file_range() FOP in Gluster, support for
> reflinks can then be made transparent. The actual data copying will be
> done on the bricks, without transporting the data back and forth between
> client and server. Distribution of the data might not be optimal, but I
> think that is acceptable for many use cases where the performance of
> 'file cloning' is important. Many of these environments will not have
> distributed volumes in any case.
>
> Note that copy_file_range() does not guarantee that reflinks are used.
> This depends on the support and implementation of the backend
> filesystem. XFS in Fedora already supports reflinks (it needs special mkfs
> options), and we could really benefit from this for large files like VM
> disk-images.
>
> Please provide an updated status by replying to this email, and ideally
> adding a note to the GitHub issue.
>
> Thanks!
> Niels
>
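
For reference, a minimal sketch of the syscall under discussion, assuming a
kernel that implements copy_file_range(2) and glibc >= 2.27 for the wrapper
(the file names are placeholders; whether the copy becomes a reflink is up to
the backing filesystem, e.g. XFS formatted with reflink support):

```
/* Minimal copy_file_range(2) sketch: copy <src> to <dst>, letting the
 * kernel (and, where supported, the filesystem's reflink path) move the
 * data without bouncing it through userspace. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return EXIT_FAILURE;
    }

    int fd_in = open(argv[1], O_RDONLY);
    if (fd_in < 0) { perror("open src"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd_in, &st) < 0) { perror("fstat"); return EXIT_FAILURE; }

    int fd_out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd_out < 0) { perror("open dst"); return EXIT_FAILURE; }

    /* The kernel may copy less than requested per call, so loop until
     * the whole file has been copied. */
    off_t remaining = st.st_size;
    while (remaining > 0) {
        ssize_t copied = copy_file_range(fd_in, NULL, fd_out, NULL,
                                         remaining, 0);
        if (copied < 0) { perror("copy_file_range"); return EXIT_FAILURE; }
        if (copied == 0)
            break;
        remaining -= copied;
    }

    close(fd_in);
    close(fd_out);
    return EXIT_SUCCESS;
}
```

A Gluster-level FOP would presumably keep the same contract: the call may copy
fewer bytes than requested, so callers loop until the range is exhausted.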



-- 
Amar Tumballi (amarts)

[Gluster-devel] Coverity covscan for 2018-06-11-319aa4b0 (master branch)

2018-06-11 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-06-11-319aa4b0