It worries me how many threads talk about low performance. I'm about to
build out a replica 3 setup and run oVirt with a bunch of Windows VMs.
Are the issues Tony is experiencing "normal" for Gluster? Does anyone here
have a system with Windows VMs and get good performance?
*Vincent Royer*
There is the ability to notify the client already. If you developed against
libgfapi you could do it (I think).
On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:
Hey,
I thought about it a while back; I haven't actually done it, but I assume
using inotify on the brick should work, at least on replica volumes
(disperse probably wouldn't: you wouldn't get all events, or you'd need
to make sure your inotify runs on every brick). Then from there you
could notify
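The brick-side idea above can be sketched with raw Linux inotify through ctypes, so no third-party library is needed. This is only a sketch under the assumptions stated in the thread: it runs on Linux, it watches a single brick directory, and it only sees events on the brick it runs on.

```python
import ctypes
import os
import struct

# Raw Linux inotify through libc -- flag values from <sys/inotify.h>
IN_CREATE = 0x00000100       # file/dir created in the watched directory
IN_CLOSE_WRITE = 0x00000008  # a file opened for writing was closed
_HEADER = struct.Struct("iIII")  # wd, mask, cookie, name length

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def open_watch(path):
    """Start watching `path`; returns an fd that os.read() yields events from."""
    fd = libc.inotify_init()
    libc.inotify_add_watch(fd, path.encode(), IN_CREATE | IN_CLOSE_WRITE)
    return fd

def read_events(fd):
    """Block until at least one event arrives, then return the affected names."""
    data = os.read(fd, 4096)
    names, offset = [], 0
    while offset < len(data):
        wd, mask, cookie, length = _HEADER.unpack_from(data, offset)
        raw = data[offset + _HEADER.size : offset + _HEADER.size + length]
        names.append(raw.rstrip(b"\0").decode())
        offset += _HEADER.size + length
    return names
```

A small daemon per brick could forward these names to whatever needs the notification; as noted above, on disperse volumes each brick only holds fragments, so every brick would need its own watcher.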
Update:
We already have more than 20 unique votes for this change. We will keep
this open for another 2 weeks (until the next maintainers' meeting), and if
there are no concerns from anyone by then, we would prefer to merge the patch.
Regards,
Amar
On Thu, Apr 19, 2018 at 11:06 AM, Amar Tumballi
https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE
On May 3, 2018 8:33:30 AM PDT, lejeczek wrote:
hi guys
will we get inotify support in Gluster at some point, or never?
thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Hi,
We need some more information in order to debug this:
- The version of Gluster you were running before the upgrade
- The output of gluster volume info
- The brick logs for the volume from when the operation was performed
Regards,
Nithya
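For reference, that volume information can also be captured from a script. A minimal sketch, assuming the gluster CLI is on the PATH (the volume name "gv0" is hypothetical; brick logs live under /var/log/glusterfs/bricks/ by default):

```python
import shutil
import subprocess

def collect_volume_info(volume):
    """Return the output of `gluster volume info <volume>`, or None if the CLI is absent."""
    if shutil.which("gluster") is None:
        return None  # gluster CLI not installed on this host
    result = subprocess.run(
        ["gluster", "volume", "info", volume],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

# Example -- "gv0" is a hypothetical volume name:
info = collect_volume_info("gv0")
```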
On 2 May 2018 at 15:19, Hoggins! wrote:
There are also free inodes on the disks of all the machines... I don't
know where to look to solve this. Any ideas?
On 02/05/2018 at 12:39, Hoggins! wrote:
> Oh, and *there is* space on the device where the brick's data is located.
>
> /dev/mapper/fedora-home 942G 868G 74G 93% /export
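Free space and free inodes on a brick's filesystem can be checked together with os.statvfs, which covers both of the conditions discussed above. A minimal sketch (the /export mount from the df output is just an example; "/" is used below):

```python
import os

def brick_capacity(path):
    """Return (total_bytes, free_bytes, free_inodes) for the filesystem holding `path`."""
    st = os.statvfs(path)
    total_bytes = st.f_frsize * st.f_blocks
    free_bytes = st.f_frsize * st.f_bavail   # space available to unprivileged users
    free_inodes = st.f_favail                # inodes available to unprivileged users
    return total_bytes, free_bytes, free_inodes

total, free, inodes = brick_capacity("/")
print(f"{free / total:.0%} free, {inodes} inodes available")
```

Running this on each brick's mount point would show whether any one brick is out of space or inodes even when the others look healthy.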