maybe you could find "defaults,_netdev,noauto,x-systemd.automount" useful...
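A minimal /etc/fstab sketch using those options (server, volume and mount point names are placeholders):

  server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0

With noauto plus x-systemd.automount, systemd mounts the volume on first access instead of at boot, which avoids racing with the network coming up.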
good luck!
On 28/07/18 at 10:11, Stefan Kania wrote:
I tried both, but still the same :-(. Any other solution?
Stefan
On 26.07.18 at 16:21, Vlad Kopylov wrote:
or maybe something like fetch-attempts=5
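For example, on the mount command line (assuming your version of mount.glusterfs still accepts fetch-attempts; server and volume names are placeholders):

  mount -t glusterfs -o fetch-attempts=5,backup-volfile-servers=server2:server3 server1:/gv0 /mnt/gv0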
a distributed dispersed 3 redundancy 1 volume? You will get your
total capacity divided by 3 and multiplied by 2 (that is, 2/3 of total capacity), and this
config still tolerates one node failure at a time.
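For illustration, such a volume could be created with something along these lines (host names and brick paths are placeholders):

  gluster volume create gv0 disperse 3 redundancy 1 \
      node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

With 3 bricks and redundancy 1, two bricks' worth of space holds data fragments, which is where the 2/3 usable capacity comes from.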
Hope this helps.
*Ramon Selga*
934 76 69 10
670 25 37 05
DataLab SL <http://www.datalab
In a very few situations raw gives a little more performance, but in most
cases qcow2 is good enough; we're using qcow2 by default: it allows
snapshotting and thin provisioning.
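A quick sketch of creating and snapshotting such a thin-provisioned qcow2 image (file name and size are just examples):

  qemu-img create -f qcow2 vm-disk.qcow2 100G
  qemu-img snapshot -c clean-install vm-disk.qcow2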
If you have redundant PSUs you can use writeback or writethrough (safest). No
need for directsync with xfs in
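For reference, the cache mode is chosen per disk, e.g. when launching the guest directly with QEMU (in libvirt it is the cache attribute of the disk's driver element); writeback is shown here only as an example:

  qemu-system-x86_64 -drive file=vm-disk.qcow2,format=qcow2,if=virtio,cache=writeback ...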
Del Carlo wrote:
Thank you Ramon for your answer.
The only thing I can't understand is why use such a big shard size?
The default is 64MB. I thought about decreasing it; I wanted to run some tests
on it.
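For reference, the shard size is a per-volume option; a sketch of checking and changing it (volume name is a placeholder):

  gluster volume get gv0 features.shard-block-size
  gluster volume set gv0 features.shard-block-size 64MB

As far as I know, a changed shard-block-size only applies to files created afterwards.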
Best regards,
On Fri, Sep 6, 2019 at 23:09 Ramon Selga <mailto:ramon
Tested several times recently: upgrade 3.12.15 to 7.7 without problems.
Upgrade servers and clients first to 3.12.15 (old version, take a look at the repo
site).
If volumes are replicated you could do it online, one node at a time, watching the
self-heal process carefully.
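A rough sketch of the per-node loop for such an online upgrade (package commands depend on your distro; volume name is a placeholder):

  # on each server, one at a time:
  systemctl stop glusterd          # stop glusterd and remaining brick/self-heal processes, then upgrade packages
  systemctl start glusterd
  gluster volume heal gv0 info     # wait until pending entries reach zero before moving to the next node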
For disperse volumes you
Dismiss my first question: you have SAS 12Gbps SSDs. Sorry!
On 12/12/23 at 19:52, Ramon Selga wrote:
May I ask which kind of disks you have in this setup? Rotational, SSD
SAS/SATA, NVMe?
Is there a RAID controller with writeback caching?
It seems to me your fio test on the local brick has an unclear result due to some
caching.
Try something like the following (you might consider increasing the test file size
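A possible fio invocation along those lines, using direct I/O to keep the page cache out of the picture (all parameters are only a starting point; the brick path is a placeholder):

  fio --name=bricktest --directory=/bricks/b1 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --size=4G --numjobs=4 --iodepth=32 --group_reporting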