Excerpts from Strahil Nikolov's message of 2023-03-25 11:23:09 +:
> Why don't you mount the old system's volume on one of the new gluster
> servers and 'cp' from the first FUSE mount point to the new FUSE mount
> point ?
this is what we did on the first transition, but this time there is to
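For reference, the copy approach suggested in the quote above looks roughly like this; the hostnames, volume names, and mount points are placeholders, not taken from the original mail:

```shell
# sketch of the suggested FUSE-to-FUSE copy; old-server/new-server,
# oldvol/newvol and the mount points are hypothetical names
mkdir -p /mnt/oldvol /mnt/newvol
mount -t glusterfs old-server:/oldvol /mnt/oldvol   # old cluster, native FUSE client
mount -t glusterfs new-server:/newvol /mnt/newvol   # new cluster, native FUSE client
# -a preserves ownership, permissions and timestamps during the copy
cp -a /mnt/oldvol/. /mnt/newvol/
```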
Excerpts from Strahil Nikolov's message of 2023-03-24 21:11:28 +:
> Gluster excels when you have more servers and clients to consume that data.
you mean multiple smaller servers are better than one large server?
> LVM cache (NVMEs)
we only have a few clients. gluster is for us effectively
Excerpts from Strahil Nikolov's message of 2023-03-21 00:27:58 +:
> Generally, the recommended approach is to have 4TB disks and no more
> than 10-12 per HW RAID.
what kind of raid configuration and brick size do you recommend here?
> Of course, it's not always possible but a resync of a
hi,
our current servers are suffering from a weird hardware issue that
forces us to start over.
in short we have two servers with 15 disks at 6TB each, divided into
three raid5 arrays for three bricks per server at 22TB per brick.
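As a quick sanity check on the brick size, the numbers above work out like this (raid5 arithmetic only; nothing here beyond what the mail states):

```shell
# per-brick capacity for the layout described above:
# 15 disks x 6TB per server, split into three raid5 arrays
disks_per_array=$(( 15 / 3 ))               # 5 disks in each array
raw_tb=$(( (disks_per_array - 1) * 6 ))     # raid5 loses one disk to parity
echo "${disks_per_array} disks/array, ${raw_tb} TB raw per brick"
# raw 24 TB (decimal) is about 21.8 TiB, which matches the ~22TB
# per brick quoted above once filesystem overhead is counted
```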
each brick on one server is replicated to a brick on the second
server.

martin.
--
general manager realss.com
student mentor fossasia.org
community mentor blug.sh beijinglug.club
pike programmer pike.lysator.liu.se caudium.net societyserver.org
Martin Bähr working in china http://societyserver.org/mbaehr/