Restarts will go through a shutdown process. As long as the network
isn't actively unconfigured before the final kill, the TCP connection
will be shut down cleanly and there will be no wait.
On 12/28/17 20:19, Sam McLeod wrote:
Sure, if you never restart / autoscale anything and your use case isn't
bothered by up to 42 seconds of downtime. For us, 42 seconds is a really
long time for something like a patient management system to refuse file
attachment uploads, etc...
We apply a strict patching
Hi Mauro,
What version of Gluster are you running and what is your volume
configuration?
IIRC, this was seen because of mismatches in the ctime returned to the
client. I don't think there were issues with the files but I will leave it
to Ravi and Raghavendra to comment.
Regards,
Nithya
On 29
Hi Mark,
On 28 December 2017 at 23:56, Mark Connor wrote:
> I have a 10x2 distributed replica volume running Gluster 3.8.
> Each of my bricks is about 60TB in size. (6TB drives, RAID 6, 10+2)
>
> I am running out of storage so I intend to add servers with larger 8TB
>
The reason for the long (42-second) ping-timeout is that re-establishing
fds and locks can be a very expensive operation. With an average MTBF of 45,000
hours for a server, even just a replica 2 setup would result in a 42-second MTTR
every 2.6 years, or six nines of uptime.
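To make the arithmetic explicit (a rough sketch, assuming one failure per
45,000-hour MTBF per server): with two servers the combined rate is about one
failure per 22,500 hours, roughly 2.6 years. One 42-second recovery per
2.6 years (~82 million seconds) is an unavailability of about
42 / 82,000,000 ≈ 5e-7, i.e. roughly 99.99995% availability.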
On December 27, 2017
I/O is frozen, so you don't get errors, just a delay when accessing.
It's completely transparent, and for VM disks at least even 40 seconds is
fine; it's not long enough for a web server to time out, so the visitor just
thinks the site was slow for a minute.
Really hasn't been that bad here, but I guess it
Hi All,
Has anyone had the same experience?
Could you provide me with some information about this error?
It happens only on the GlusterFS file system.
Thank you,
Mauro
> On 20 Dec 2017, at 16:57, Mauro Tridici
> wrote:
>
>
> Dear Users,
>
> I’m experiencing a
10 seconds is a very long time for files to go away for applications used at
any scale; it is, however, what I've set our failover time to after being
shocked by the default of 42 seconds.
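For reference, the failover time here is the network.ping-timeout volume
option, set per volume (the volume name below is a placeholder):

# gluster volume set <VOLNAME> network.ping-timeout 10
# gluster volume get <VOLNAME> network.ping-timeout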
--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod
> On 27 Dec 2017, at 10:17 pm, Omar Kohl
I have a 10x2 distributed replica volume running Gluster 3.8.
Each of my bricks is about 60TB in size. (6TB drives, RAID 6, 10+2)
I am running out of storage so I intend to add servers with larger 8TB
drives.
My new bricks will be 80TB in size. I will make sure the replica to the
larger brick will
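For what it's worth, the usual sequence for growing a distributed-replicate
volume is to add bricks in multiples of the replica count and then rebalance;
a sketch only, with the volume name, hostnames and brick paths as placeholders:

# gluster volume add-brick <VOLNAME> newhost1:/bricks/brick80tb newhost2:/bricks/brick80tb
# gluster volume rebalance <VOLNAME> start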
Hi Paul,
A few questions:
What type of volume is this and what client protocol are you using?
What version of Gluster are you using?
Regards,
Nithya
On 28 December 2017 at 20:09, Paul wrote:
> Hi, All,
>
> If I set cluster.readdir-optimize to on, the performance of "ls" is
>
Hi, All,
If I set cluster.readdir-optimize to on, the performance of "ls" is better,
but I find one problem.
# ls
# ls
files.1 files.2 file.3
I run ls twice. The first time, ls returns nothing; the second time,
ls returns all the file names.
If I turn off cluster.readdir-optimize, I don't see
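For reference, the option is toggled per volume (the volume name below is a
placeholder):

# gluster volume set <VOLNAME> cluster.readdir-optimize on
# gluster volume set <VOLNAME> cluster.readdir-optimize off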
Can't tell you, I only use Gluster for VM disks.
The heal will hammer performance pretty badly, but that really depends on
what you do, so I'd say test it a bunch and use whatever works best.
I think they advise a high value to make sure you don't have two
nodes marked down in close succession,