- Original Message -
> From: "Niklas Hambüchen"
> To: "Ben Turner"
> Cc: "Gluster Users"
> Sent: Monday, September 18, 2017 11:27:33 AM
> Subject: Re: [Gluster-users] Confusing lstat() performance
On 18/09/17 17:23, Ben Turner wrote:
> Do you want tuned or untuned? If tuned I'd like to try one of my tunings for
> metadata, but I will use yours if you want.
(Re-CC'd list)
I would be interested in both, if possible: To confirm that it's not
only my machines that exhibit this behaviour
On 18/09/17 16:51, Ben Turner wrote:
> I wouldn't mind, but I don't have your dataset.
Oh sorry, I thought I had posted that here but in fact I did so in a
different issue regarding getdents() performance (bug 1478411).
My benchmarking data set is trivial: 100k empty files.
In a directory on
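(For reference, that data set is trivial to recreate; a minimal sketch, with the directory name `bench` chosen arbitrarily:)

```python
import os

# recreate the benchmark data set: 100k empty files in one directory
os.makedirs("bench", exist_ok=True)
for i in range(100_000):
    # the equivalent of `touch bench/<i>`
    open(os.path.join("bench", str(i)), "a").close()
```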
- Original Message -
> From: "Niklas Hambüchen"
> To: "Ben Turner"
> Cc: gluster-users@gluster.org
> Sent: Sunday, September 17, 2017 9:49:10 PM
> Subject: Re: [Gluster-users] Confusing lstat() performance
>
> Hi Ben,
>
> do you know if the smallfile
Thanks Milind,
Yes I’m hanging out for CentOS’s Storage / Gluster SIG to release the packages
for 3.12.1, I can see the packages were built a week ago but they’re still not
on the repo :(
--
Sam
> On 18 Sep 2017, at 9:57 pm, Milind Changire wrote:
>
> Sam,
> You might
Sam,
You might want to give glusterfs-3.12.1 a try instead.
On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod
wrote:
> Howdy,
>
> I'm setting up several gluster 3.12 clusters running on CentOS 7 and am
> having issues with glusterd.log and glustershd.log both being
Any quick suggestions?
On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL
wrote:
> Hi Team,
>
> As you can see permission for the glusterfs logs in /var/log/glusterfs is
> 600.
>
> drwxr-xr-x 3 root root 140 Jan 1 00:00 ..
> -rw------- 1 root root    0 Jan 3
Hi Team,
As you can see permission for the glusterfs logs in /var/log/glusterfs is
600.
drwxr-xr-x 3 root root 140 Jan 1 00:00 ..
-rw------- 1 root root   0 Jan 3 20:21 cmd_history.log
drwxr-xr-x 2 root root  40 Jan 3 20:21 bricks
drwxr-xr-x 3 root root 100 Jan 3 20:21 .
-rw------- 1
On Thu, Sep 14, 2017 at 12:58 AM, Ben Werthmann wrote:
> I ran into something like this in 3.10.4 and filed two bugs for it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1491059
> https://bugzilla.redhat.com/show_bug.cgi?id=1491060
>
> Please see the above bugs for full
Hey,
How does a distributed volume work with Samba?
I've got 2 servers with GlusterFS. Each server is about 130 TB and acts
as a brick of a distributed volume.
I've got Samba up and running with the vfs module on the first server. Last night
I've added the second server as a brick and rebalanced the
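For context, exporting a Gluster volume through Samba's vfs_glusterfs module is usually configured along these lines; the share and volume names below are placeholders, not taken from this thread:

```ini
[vmshare]
    path = /
    read only = no
    ; hand I/O to libgfapi instead of going through the FUSE mount
    vfs objects = glusterfs
    glusterfs:volume = distvol
    glusterfs:logfile = /var/log/samba/glusterfs-distvol.%M.log
    ; kernel share modes don't apply to a gfapi-backed share
    kernel share modes = no
```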
Hi Ben,
do you know if the smallfile benchmark also does interleaved getdents()
and lstat, which is what I found as being the key difference that
creates the performance gap (further down this thread)?
Also, wouldn't `--threads 8` change the performance numbers by a factor
of 8 versus the plain `ls`
I attached my strace output for you to look at:
Smallfile stat:
files/sec = 2270.307299
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 84.48  272.324412        3351     81274      1141 stat
 10.20   32.880871
I did a quick test on one of my lab clusters with no tuning except for quota
being enabled:
[root@dell-per730-03 ~]# gluster v info
Volume Name: vmstore
Type: Replicate
Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
I found the reason now, at least for this set of lstat()s I was looking at.
bup first does all getdents(), obtaining all file names in the
directory, and then stat()s them.
Apparently this destroys some of gluster's caching, making stat()s ~100x
slower.
What caching could this be, and how could
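The two access patterns are easy to reproduce outside of bup; a hedged Python sketch (not bup's actual code), where `batched_walk` mimics the getdents-everything-then-stat order and `interleaved_walk` stats each entry as the directory stream is read:

```python
import os
import time

def batched_walk(d):
    # bup-style: getdents() all names first, then lstat() each one
    t0 = time.monotonic()
    for name in os.listdir(d):
        os.lstat(os.path.join(d, name))
    return time.monotonic() - t0

def interleaved_walk(d):
    # ls-style: lstat() each entry while the directory is still being read
    t0 = time.monotonic()
    for entry in os.scandir(d):
        entry.stat(follow_symlinks=False)
    return time.monotonic() - t0
```

On a local filesystem the two come out near-identical; the interesting comparison is on the FUSE mount, where the batched order apparently misses whatever metadata cache the interleaved order hits.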
On 17/09/17 18:03, Niklas Hambüchen wrote:
> So far the only difference between `ls` and `bup index` I could observe
> is that `bup index` chdir()s into the directory to index, ls doesn't.
>
> But when I `cd` into the dir and run `ls` without directory argument, it
> is still much faster than bup
On 15/09/17 03:46, Niklas Hambüchen wrote:
>> Out of interest have you tried testing performance
>> with performance.stat-prefetch enabled?
I have now tested with `performance.stat-prefetch: on` but am not
observing a difference.
So far the only difference between `ls` and `bup index` I could
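For anyone reproducing this, the option is toggled and verified with the usual volume-set commands; `myvol` is a placeholder volume name:

```
gluster volume set myvol performance.stat-prefetch on
gluster volume get myvol performance.stat-prefetch
```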
Found that this specific gfid was not pointing to any file.
Checked this with gfid resolver script
https://gist.github.com/semiosis/4392640
Moved the gfid out of gluster and all ok now.
Thanx,
Alex
On Sun, Sep 17, 2017 at 11:31 AM, Alex K wrote:
> I am using gluster
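For the curious, the linked resolver script works by following the extra hardlink Gluster keeps for every file under `.glusterfs` on the brick; a rough Python equivalent of the same idea (the brick path and gfid are whatever you pass in, not values from this thread):

```python
import os

def resolve_gfid(brick, gfid):
    # Gluster hardlinks every brick file at .glusterfs/<aa>/<bb>/<full-gfid>,
    # where <aa> and <bb> are the first two character pairs of the gfid
    g = gfid.lower()
    link = os.path.join(brick, ".glusterfs", g[:2], g[2:4], g)
    ino = os.stat(link).st_ino
    # scan the brick for the regular path sharing that inode
    for root, dirs, files in os.walk(brick):
        if ".glusterfs" in dirs:
            dirs.remove(".glusterfs")
        for f in files:
            p = os.path.join(root, f)
            if os.stat(p).st_ino == ino:
                return p
    return None  # stale gfid: the hardlink points at no regular file
```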
The backport just got merged a few minutes back and this fix should be
available in the next update of 3.10.
On Fri, Sep 15, 2017 at 2:08 PM, ismael mondiu wrote:
> Hello Team,
>
> Do you know when the backport to 3.10 will be available ?
>
> Thanks
>
>
>
>
>
I am using gluster 3.8.12, the default on CentOS 7.3
(I will update to 3.10 at some moment)
On Sun, Sep 17, 2017 at 11:30 AM, Alex K wrote:
> Hi all,
>
> I have a replica 3 with 1 arbiter.
>
> I see the last days that one file at a volume is always showing as needing
>
Hi all,
I have a replica 3 with 1 arbiter.
For the last few days I have seen that one file on a volume is always showing
as needing healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of
hi all,
I want to know more detail about GlusterFS geo-replication, specifically
about the syncdaemon: if 'file A' was mirrored to the slave volume and a
change then happens to 'file A', how does the syncdaemon act?
1. transfer the whole 'file A' to the slave
2. transfer only the changes of 'file A' to the slave
thx a lot
Hello all fellow GlusterFriends,
I would like you to comment / correct my upgrade procedure steps on replica 2
volume of 3.7.x gluster.
Then I would like to change replica 2 to replica 3 in order to correct the
quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients
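As a sanity check on the replica 2 → replica 3 step itself: once the third peer is probed, it is a single add-brick call followed by a full heal; the hostname, volume and brick paths below are placeholders, not your actual setup:

```
gluster peer probe server3
gluster volume add-brick myvol replica 3 server3:/bricks/myvol/brick
gluster volume heal myvol full
```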