Hello,
we have a GlusterFS volume mounted at some /mnt/... path on a server. The
actual physical device behind this resides on another server.
Now, the requirement is to write files to this GlusterFS volume in a durable
fashion, i.e. for an officially succeeded write the contents MUST have been
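The requirement above (a write only counts once it has reached stable storage) is typically met with fsync on both the file and its parent directory. A minimal sketch in Python, assuming a POSIX-style mount; the function name `durable_write` is ours, not part of any Gluster API:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and only return once it has been flushed to stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # push the file contents through the client to the backing store
    finally:
        os.close(fd)
    # Also fsync the parent directory so the directory entry itself is durable.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Only after `durable_write` returns would the write be reported as succeeded to the caller.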
I have a question about how disperse volumes handle brick failure. I'm running
version 3.10.10 on all systems. If I have a disperse volume in a 4+2
configuration with 6 servers each serving 1 brick, and maintenance needs to be
performed on all systems, are there any general steps that need to
Thanks
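For context on the 4+2 question above: a disperse volume stays available as long as at least the data-brick count (4 of the 6 bricks) is online, so at most the redundancy count of bricks may be down at once. A hypothetical sketch of batching servers for rolling maintenance under that constraint (the helper is ours, not a Gluster tool):

```python
def maintenance_batches(servers, redundancy):
    """Split servers into batches no larger than the redundancy count,
    so a disperse volume stays online while each batch is serviced."""
    if redundancy < 1:
        raise ValueError("no brick may go down without redundancy")
    return [servers[i:i + redundancy] for i in range(0, len(servers), redundancy)]

# 4+2 disperse volume on six servers, one brick each:
# service at most two servers at a time, in three batches.
batches = maintenance_batches(["s1", "s2", "s3", "s4", "s5", "s6"], 2)
# -> [['s1', 's2'], ['s3', 's4'], ['s5', 's6']]
```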
On 2018-03-14 13:50, Karthik Subrahmanya wrote:
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev
wrote:
> Hi Karthik,
>
>
> Thanks a lot for the explanation.
>
> Does it mean the health of a distributed volume can be checked only by the
> "gluster volume status " command?
>
Yes. I am not aware of any other command which can give
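As a sketch of scripting that check: `gluster volume status` prints a table with an Online column (Y/N) per brick. A small parser over that text, assuming the tabular layout of the 3.x releases; the sample string is illustrative, not captured output:

```python
def bricks_online(status_output: str) -> bool:
    """Return True if every Brick line in `gluster volume status`
    output reports Y in the Online column."""
    for line in status_output.splitlines():
        if line.startswith("Brick"):
            fields = line.split()
            # The Online flag sits in the second-to-last column, before Pid.
            if fields[-2] != "Y":
                return False
    return True

sample = """Status of volume: testvol
Gluster process                TCP Port  RDMA Port  Online  Pid
Brick server1:/bricks/b1       49152     0          Y       1234
Brick server2:/bricks/b1       49152     0          N       N/A
"""
# bricks_online(sample) -> False, since server2's brick is offline
```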
Use DRBD then; that will give you the required redundancy.
From: Andreas Ericsson [mailto:andreas.erics...@findity.com]
Sent: Wednesday, March 14, 2018 11:32 AM
To: Ondrej Valousek
Cc: Pranith Kumar Karampuri ; Gluster-users@gluster.org
Subject: Re:
If you could replicate the problem you had and provide the volume info and
profile that the Red Hat guys requested, that would help in trying to
understand what is happening with your workload. Also, if possible, the
script you used to generate the load.
We've had our share of difficulties
Gluster offers a distributed filesystem. It will NEVER perform as well as a
local filesystem, because it can't.
I also believe NFS will always outperform Gluster in certain situations, as
it does not have to deal with distributed locks.
It's also using FUSE, which isn't great performance-wise.
O.
We can't stick to a single server because of the law. Redundancy is a legal
requirement for our business.
I'm sort of giving up on Gluster though. It would seem a pretty stupid
content-addressable store would suit our needs better.
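To illustrate the content-addressable idea mentioned above, here is a toy store where each blob is named by the SHA-256 of its contents; our own sketch, unrelated to any Gluster feature:

```python
import hashlib
import os

class ContentStore:
    """Toy content-addressable store: blobs are keyed by the SHA-256
    of their contents, so identical writes are idempotent."""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        path = os.path.join(self.root, key)
        if not os.path.exists(path):
            tmp = path + ".tmp"
            with open(tmp, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # durable before it becomes visible
            os.replace(tmp, path)  # atomic rename: readers never see a partial blob
        return key

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()
```

Because identical contents map to the same key, re-writing the same file is a no-op, which also sidesteps many of the locking problems discussed above.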
On 13 March 2018 at 10:12, Ondrej Valousek
And one more question: cluster.min-free-disk is 10% by default. What
kind of "side effects" can we face if this option is reduced to,
for
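To illustrate the side effect being asked about: when a brick drops below cluster.min-free-disk, DHT steers new files to other bricks, which costs an extra hop on lookup. A toy model of that placement decision; this is our own sketch, not Gluster's actual DHT code:

```python
def place_file(hashed_brick: int, free_pct: list, min_free_pct: float) -> int:
    """Toy model: the file goes to its hashed brick unless that brick is
    below the min-free threshold, in which case the first brick with
    enough headroom takes the data (leaving a link file behind)."""
    if free_pct[hashed_brick] >= min_free_pct:
        return hashed_brick
    for brick, free in enumerate(free_pct):
        if free >= min_free_pct:
            return brick
    raise OSError("all bricks below min-free-disk: volume reports ENOSPC")

# Brick 0 is nearly full (4% free). With the default 10% threshold the
# file spills over to brick 1; with a 1% threshold it stays on brick 0.
assert place_file(0, [4, 30, 25], 10) == 1
assert place_file(0, [4, 30, 25], 1) == 0
```

Lowering the threshold lets bricks fill further before spillover, but it leaves less headroom for in-flight writes and rebalance operations.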
That seems unlikely. I pre-create the directory layout and then write to
directories I know exist.
I don't quite understand how any settings at all can reduce performance to
1/5000 of what I get when writing straight to ramdisk, though, especially
when running on a single node instead of in a
I no longer have the volume lying around. The most interesting one was a
2GB volume created on ramdisk for a single node. If I can't get that to go
faster than 3MB/sec for writes, I figured I wouldn't bother further.
I was using the Gluster FUSE client, version 3.10.7. Everything was running on Ubuntu 16.04
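For what it's worth, figures like the 3 MB/sec above are easy to reproduce with a small benchmark. A sketch of such a measurement; block size and totals are arbitrary choices, and on a Gluster mount the fsync at the end matters:

```python
import os
import time

def write_throughput_mb_s(path: str, total_mb: int = 16, block_kb: int = 64) -> float:
    """Sequential write benchmark: stream fixed-size blocks, fsync once
    at the end, and report MB/s including the flush."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    try:
        for _ in range(blocks):
            os.write(fd, block)
        os.fsync(fd)
    finally:
        os.close(fd)
    return total_mb / (time.perf_counter() - start)
```

Running it once against a local ramdisk path and once against the Gluster mount makes the gap directly comparable.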
On 03/14/2018 07:13 AM, Shyam Ranganathan wrote:
The Gluster community celebrates 13 years of development with this
latest release, Gluster 4.0. This release enables improved integration
with containers, an enhanced user experience, and a next-generation
management framework. The 4.0 release