Hi,
Please provide the log for the mount process from the node on which you
have mounted the volume. This should be in /var/log/glusterfs, and the name
of the file will be the hyphenated path of the mount point. For example, if
the volume is mounted at /mnt/glustervol, the log file will be
/var/log/glusterfs/mnt-glustervol.log.
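As a quick sketch, the log name can be derived from the mount point like this
(the mount point below is just the example above):

MOUNTPOINT=/mnt/glustervol
less /var/log/glusterfs/$(echo "${MOUNTPOINT#/}" | tr '/' '-').log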
Adding gluster-users.
On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan wrote:
> Hi,
>
> here is the output from virt3 - problematic host:
>
> [root@virt3 ~]# gluster volume status
> Status of volume: data
> Gluster process TCP Port RDMA Port
Hi,
This might be because of:
https://github.com/gluster/glusterfs/blob/release-3.13/doc/release-notes/3.13.0.md#ability-to-reserve-back-end-storage-space
Please try running the following and see if it solves the problem:
gluster volume set <volname> storage.reserve 0
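For example, assuming the volume is named gv0 (substitute your own volume
name); the second command just confirms the new value:

gluster volume set gv0 storage.reserve 0
gluster volume get gv0 storage.reserve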
Regards,
Nithya
On 4 February
Thanks for the report Artem,
Looks like the issue is about the cache warming up. Specifically, I suspect
rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a
find or ls is issued, we get a 'readdirp()' request, which contains the stat
information along with the entries, which also makes
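If that is the cause, one possible workaround sketch is to warm the cache
with a readdirp-based listing before starting rsync (the mount point and
destination below are assumptions, not taken from the report):

ls -lR /mnt/glustervol > /dev/null
rsync -a /mnt/glustervol/ /backup/dest/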
Are you mounting it to the local bricks?
We are struggling with the same performance issues.
Try the volume settings from this thread:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033397.html
performance.stat-prefetch: on might be it.
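A minimal sketch of applying that setting, assuming a volume named gv0:

gluster volume set gv0 performance.stat-prefetch on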
It seems like once it gets into the cache it is fast - it's those stat
fetches which seem
Please help troubleshoot glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status:
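For context, sharding of this kind is typically enabled per volume along
these lines (a sketch only; the volume name gv0 matches the output above,
but the 64MB block size is an assumption):

gluster volume set gv0 features.shard on
gluster volume set gv0 features.shard-block-size 64MB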
I have 2 data centers in two different regions, each DC has 3 servers. I
have created a glusterfs volume with 4 replicas; this is the glusterfs
volume info output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1:
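A volume like this might have been created along these lines (a sketch only;
the server and brick names are assumptions):

gluster volume create test-halo replica 4 \
  dc1-srv1:/bricks/b1 dc1-srv2:/bricks/b1 \
  dc2-srv1:/bricks/b1 dc2-srv2:/bricks/b1
gluster volume start test-halo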
An update, and a very interesting one!
After I started stracing rsync, all I could see was lstat calls, quite slow
ones, over and over, which is expected.
For example: lstat("uploads/2016/10/nexus2cee_DSC05339_thumb-161x107.jpg",
{st_mode=S_IFREG|0664, st_size=4043, ...}) = 0
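For anyone wanting to reproduce the trace, something like this should work
(attach to the PID of the running rsync; -tt adds timestamps and -T shows
the time spent in each call):

strace -tt -T -e trace=lstat -p <rsync-pid>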
I googled around
Hi,
I have been working on setting up a 4-replica gluster volume with over a million
files (~250GB total), and I've seen some really weird stuff happen, even
after trying to optimize for small files. I've set up a 4-brick replicate
volume (gluster 3.13.2).
It took almost 2 days to rsync the data from