I have received them but haven’t had a chance to look at them yet. I can only
come back on this sometime early next week, based on my schedule.
On Fri, 18 Jan 2019 at 16:52, Amudhan P wrote:
> Hi Atin,
>
> I have sent the files to your email directly in another mail. Hope you have
> received them.
>
> regards
>
I actually have 4 bricks with no arbiters. A fixed quorum count of 1 ensures
the files remain accessible even if all but one brick goes down. Performance
is good enough, though it can always be better, of course.
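For reference, the fixed-quorum setup described above can be configured with the gluster CLI roughly as follows (the volume name `myvol` is a placeholder; a sketch, not a verified transcript from this thread):

```shell
# Sketch, assuming a replica-4 volume named "myvol" (placeholder name).
# With quorum-type "fixed" and quorum-count 1, writes succeed as long as
# at least one brick in the replica set is up.
gluster volume set myvol cluster.quorum-type fixed
gluster volume set myvol cluster.quorum-count 1

# Verify the settings took effect:
gluster volume get myvol cluster.quorum-type
gluster volume get myvol cluster.quorum-count
```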
On Fri, Jan 18, 2019, 5:37 AM Andreas Davour wrote:
On Fri, 18 Jan 2019, Diego Remolina wrote:
The OP (me) has a two-node setup. I am not sure how many nodes are in Artem's
configuration (he is running 4.0.2).
It can make sense that the more bricks you have, the higher the performance
hit under certain conditions, given that supposedly one of the issues of
gluster with many small files is that
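A back-of-the-envelope model (not gluster code, and the latency numbers are made up for illustration) of why small-file workloads get slower as brick count grows: if each file access triggers a metadata lookup against every brick, the per-file metadata cost scales with the brick count while the data transfer for a tiny file stays roughly constant.

```python
# Toy model (assumptions, not gluster internals): one lookup round-trip
# per brick per file, plus a fixed transfer cost per small file.
def small_file_workload_seconds(num_files, num_bricks,
                                lookup_ms=0.5, transfer_ms=0.1):
    """Estimated wall time in seconds for a small-file workload."""
    per_file_ms = num_bricks * lookup_ms + transfer_ms
    return num_files * per_file_ms / 1000.0

# More bricks means a larger metadata share of the total time:
t4 = small_file_workload_seconds(10_000, 4)   # 4 bricks, ~21 s
t8 = small_file_workload_seconds(10_000, 8)   # 8 bricks, ~41 s
```

Under this model, doubling the brick count nearly doubles total time for small files, because the workload is dominated by per-brick lookups rather than data transfer.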
Hi Atin,
I have sent the files to your email directly in another mail. Hope you have
received them.
regards
Amudhan
On Thu, Jan 17, 2019 at 3:43 PM Atin Mukherjee wrote:
> Can you please run 'glusterd -LDEBUG' and share back the glusterd.log?
> Instead of doing too many back-and-forths, I suggest you to
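For anyone following along, collecting the debug log that Atin asks for typically looks like this (systemd service name and log path per common packaging; a sketch, not verified on every distribution):

```shell
# Stop the running management daemon first (service name may vary):
systemctl stop glusterd

# Restart glusterd with the log level raised to DEBUG:
glusterd -LDEBUG

# The log to share back is usually written here by default:
tail -f /var/log/glusterfs/glusterd.log
```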
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici wrote:
> Dear Users,
>
> I’m facing a new problem on our gluster volume (v. 3.12.14).
> Sometimes the “ls” command, executed in a particular directory,
> returns empty output.
> The “ls” output is empty, but I know that the involved
Dear Users,
I’m facing a new problem on our gluster volume (v. 3.12.14).
Sometimes the “ls” command, executed in a particular directory, returns empty
output.
The “ls” output is empty, but I know that the involved directory contains
some files and subdirectories.
In fact, if I
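The empty “ls” symptom described above is commonly investigated by comparing what the FUSE mount shows with what the bricks actually hold; a diagnostic sketch (all paths and the volume name are placeholders):

```shell
# Placeholder paths for illustration only.
MOUNT=/mnt/glusterfs/problem_dir
BRICK=/bricks/brick1/vol/problem_dir

# Compare what the client sees with what one brick actually holds:
ls -la "$MOUNT"
ls -la "$BRICK"

# A named stat on the mount forces a fresh lookup, which sometimes
# repopulates a directory listing that came back empty:
stat "$MOUNT/some_known_file"

# Check for pending self-heal entries on the volume:
gluster volume heal myvol info
```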