[Gluster-devel] copy_file_range() syscall for offloading copying of files?

2018-06-07 Thread Niels de Vos
Hi Pranith and Amar,

The copy_file_range() syscall can support reflinks on the (local)
filesystem. This is something I'd really like to see in Gluster soonish.
There is https://github.com/gluster/glusterfs/issues/349 which discusses
some of the technical bits, but there has not been an update since the
beginning of April.

If we can support a copy_file_range() FOP in Gluster, support for
reflinks can then be made transparent. The actual data copying will be
done on the bricks, without transporting the data back and forth between
client and server. Distribution of the data might not be optimal, but I
think that is acceptable for many use-cases where the performance of
'file cloning' is important. Many of these environments will not have
distributed volumes in any case.
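
For reference, a minimal userspace sketch (not Gluster code) of what a
copy_file_range()-based copy looks like; the kernel and the backend
filesystem decide whether the range gets reflinked or physically copied.
It assumes Linux >= 4.5 and glibc >= 2.27 for the wrapper:

```c
#define _GNU_SOURCE            /* for copy_file_range() in <unistd.h> */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return EXIT_FAILURE;
    }

    int fd_in = open(argv[1], O_RDONLY);
    if (fd_in < 0) { perror("open src"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd_in, &st) < 0) { perror("fstat"); return EXIT_FAILURE; }

    int fd_out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd_out < 0) { perror("open dst"); return EXIT_FAILURE; }

    off_t remaining = st.st_size;
    while (remaining > 0) {
        /* NULL offsets: the kernel uses and advances the file offsets
         * of both descriptors. The copy happens entirely in-kernel. */
        ssize_t copied = copy_file_range(fd_in, NULL, fd_out, NULL,
                                         remaining, 0);
        if (copied < 0) { perror("copy_file_range"); return EXIT_FAILURE; }
        if (copied == 0) break;            /* unexpected end of file */
        remaining -= copied;
    }

    close(fd_in);
    close(fd_out);
    return EXIT_SUCCESS;
}
```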

Note that copy_file_range() does not guarantee that reflinks are used.
This depends on the support and implementation of the backend
filesystem. XFS in Fedora already supports reflinks (it needs special
mkfs options), and we could really benefit from this for large files
like VM disk-images.
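
By contrast, an application that insists on a reflink can try the
FICLONE ioctl directly and fall back to a plain copy when the backend
filesystem refuses. A rough sketch (the try_reflink() helper below is
hypothetical, not an existing Gluster or kernel API):

```c
#include <sys/ioctl.h>
#include <linux/fs.h>      /* FICLONE, available since Linux 4.5 headers */
#include <errno.h>
#include <stdio.h>

/* Ask the filesystem to make fd_dst share fd_src's blocks (a reflink).
 * Returns 0 on success, -1 if cloning is not possible here. */
int try_reflink(int fd_src, int fd_dst)
{
    if (ioctl(fd_dst, FICLONE, fd_src) == 0)
        return 0;                      /* dst now shares src's extents */

    if (errno == EOPNOTSUPP || errno == EINVAL || errno == EXDEV)
        fprintf(stderr, "reflink not supported, fall back to a copy\n");
    else
        perror("ioctl(FICLONE)");

    return -1;
}
```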

Please provide an updated status by replying to this email, and ideally
adding a note to the GitHub issue.

Thanks!
Niels


Re: [Gluster-devel] Any mature(better) solution(way) to handle slow performance on 'ls -l, '.

2018-06-07 Thread Poornima Gurusiddaiah
If you are not using applications that rely on 100% metadata consistency,
such as databases, Kafka, AMQ, etc., you can use the volume options
mentioned below:

# gluster volume set <volname> group metadata-cache

# gluster volume set <volname> network.inode-lru-limit 20

# gluster volume set <volname> performance.readdir-ahead on

# gluster volume set <volname> performance.parallel-readdir on

For more information refer to [1]

Also, which version of Gluster are you using? It is preferred to use 3.11 or
above for these perf enhancements.
Note that parallel-readdir will drastically improve the 'ls -l'
performance in your case, but there are a few known corner-case issues.

Regards,
Poornima

[1]
https://github.com/gluster/glusterdocs/pull/342/files#diff-62f536ad33b2c2210d023b0cffec2c64

On Wed, May 30, 2018, 8:29 PM Yanfei Wang  wrote:

> Hi experts on glusterFS,
>
> In our testbed, we found that the 'ls -l' performance is pretty slow.
> Indeed, from the perspective of the glusterFS design space, we need to
> avoid 'ls' on directories, which, to our current knowledge, will
> traverse all bricks sequentially.
>
> We use a generic setup for our testbed:
>
> ```
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 19 x 3 = 57
> Transport-type: tcp
> Bricks:
> ...
> Options Reconfigured:
> features.inode-quota: off
> features.quota: off
> cluster.quorum-reads: on
> cluster.quorum-count: 2
> cluster.quorum-type: fixed
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.server-quorum-ratio: 51%
>
> ```
>


> Carefully consulting the docs, the NFS client is the preferred client
> solution for better 'ls' performance. However, this better performance
> comes from caching metadata locally, I think, and that caching comes
> at the cost of data coherence, right?
>
> I want to know what the best or most mature way is to trade off 'ls'
> performance against data coherence in practice. Any comments are
> welcome.
>
> Thanks.
>
> -Fei

[Gluster-devel] Coverity covscan for 2018-06-07-d788cc59 (master branch)

2018-06-07 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-06-07-d788cc59