Excellent description, thank you.
With performance.write-behind-trickling-writes ON (default):
## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R)
On Tue, Mar 20, 2018 at 8:57 AM, Sam McLeod wrote:
> Hi Raghavendra,
>
> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa wrote:
>
> Aggregating a large number of small writes by write-behind into large
> writes has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
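For anyone who wants to repeat the comparison, the option can be read and
flipped per volume through the normal volume-set interface. A minimal sketch,
assuming a volume named "volname" (a placeholder) and the same fio job as
above:

# gluster volume get volname performance.write-behind-trickling-writes  # "volname" is a placeholder
# gluster volume set volname performance.write-behind-trickling-writes off
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite

Rerun the same job with the option back on to get the other half of the
comparison.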
On Tue, Mar 20, 2018 at 1:55 AM, TomK wrote:
> On 3/19/2018 10:52 AM, Rik Theys wrote:
>
>> Hi,
>>
>> On 03/19/2018 03:42 PM, TomK wrote:
>>
>>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>>> Removing NFS or NFS Ganesha from the equation, not very impressed on my
>>> own setup either. For the writes it's doing, that's a lot of CPU usage
>>> in top. Seems bottle-necked via a single execution core somewhere trying
>>> to facilitate read / writes to
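If the suspicion is a single hot thread rather than overall CPU pressure, a
quick way to check is a per-thread view of the brick process. A sketch,
assuming glusterfsd is the brick daemon on this node:

# top -H -p $(pgrep -x glusterfsd | head -n 1)  # -H lists threads; one thread pinned near 100% suggests a single-core bottleneck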
Howdy all,
Sorry, I'm in Australia, so most of your replies came in overnight for me.
Note: At the end of this reply is a listing of all our volume settings
(gluster volume get volname all); see the grep sketch below for pulling out
the relevant subset.
Note 2: I really wish Gluster used Discourse for this kind of community
troubleshooting and analysis, using a
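Since that full listing runs to hundreds of lines, a sketch of pulling out
just the write-behind settings relevant to this thread (volume name is a
placeholder):

# gluster volume get volname all | grep write-behind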
Hi,
As I posted in my previous emails - glusterfs can never match NFS (especially
an async one) on small-file/latency performance. That's inherent in the
design; there's nothing you can do about it.
Ondrej
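For context on what the async part buys NFS: the server acknowledges a write
before it reaches stable storage, which a replicated gluster volume
deliberately never does. A minimal /etc/exports sketch (brick path and subnet
are placeholders):

/data/brick1 192.168.1.0/24(rw,async)  # async: ack writes before they hit disk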
Hi,
I've done some similar tests and experience similar performance issues
(see my 'gluster for home directories?' thread on the list).
If I read your mail correctly, you are comparing an NFS mount of the
brick disk against a gluster mount (using the fuse client)?
Which options do you have set
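For anyone reproducing that comparison, a sketch of the two mounts being
contrasted (hostname, brick path, and volume name are placeholders), plus the
quickest answer to the which-options question, since gluster volume info
lists only the reconfigured ones:

# mount -t nfs server:/data/brick1 /mnt/nfs-test      # NFS export of the brick disk
# mount -t glusterfs server:/volname /mnt/fuse-test   # gluster fuse client
# gluster volume info volname                         # see the "Options Reconfigured" section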
Hi Victor,
On Sun, Mar 18, 2018 at 3:47 AM, Victor T wrote:
>
> No. After bringing up one brick and before stopping the next one, you
> need to be sure that there are no damaged files. You shouldn't reboot a
> node if "gluster volume heal info" shows damaged files
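A sketch of that pre-reboot check (volume name is a placeholder); move on to
the next node only once every brick reports zero entries:

# gluster volume heal volname info | grep 'Number of entries'  # each brick should show "Number of entries: 0"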