>> > - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in
>> > case I can use it later for storage)
>>
>> OS, not SO? Get an enterprise SSD for the OS (I think some recommend
>> it when colocating monitors, which can generate a lot of disk I/O).

Yes, OS. I have no option to get an SSD.
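If the mons do end up on the OS HDDs, it is easy to keep an eye on how much I/O they actually generate once the cluster is running; the paths below are just the defaults:

    iostat -x 5                     # per-disk utilisation, watch the OS device
    du -sh /var/lib/ceph/mon/*      # size of the mon store that gets written to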
Replying inline.
--
Salsa
Sent with [ProtonMail](https://protonmail.com) Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, September 20, 2019 1:34 PM, Martin Verges wrote:
> Hello Salsa,
>
>> I have tested Ceph using VMs but never got to put it to use and had a lot of
>> trouble to get it to install.
Hello Salsa,

> I have tested Ceph using VMs but never got to put it to use and had a lot
> of trouble to get it to install.

If you want to get rid of all the troubles, from installing to day-to-day
operations, you could consider using https://croit.io/croit-virtual-demo
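If you stick with a manual install on CentOS 7 instead, the ceph-deploy flow is roughly the one below; the hostnames are placeholders and this is only a sketch of the happy path, not a full runbook:

    ceph-deploy new mon1 mon2 mon3                # write the initial ceph.conf / keys
    ceph-deploy install mon1 mon2 mon3            # install the packages on the nodes
    ceph-deploy mon create-initial                # bring up the monitors
    ceph-deploy admin mon1 mon2 mon3              # push ceph.conf + admin keyring
    ceph-deploy osd create --data /dev/sdb mon1   # one OSD per data disk, per host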
> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case
>   I can use it later for storage)

OS, not SO? Get an enterprise SSD for the OS (I think some recommend it
when colocating monitors, which can generate a lot of disk I/O).

> - Install CentOS 7.7

Good choice.

> - Use 2 vLANs, one for ceph
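Regarding the two-vLAN item: that usually just maps onto the public/cluster network split in ceph.conf, something like the following (the subnets are made-up examples, use whatever your vLANs are):

    [global]
    # client and mon traffic on the front vLAN
    public network = 192.168.10.0/24
    # OSD replication / recovery traffic on the dedicated ceph vLAN
    cluster network = 192.168.20.0/24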
I have tested Ceph using VMs but never got to put it to use and had a lot of
trouble to get it to install.
Now I've been asked to do a production setup using 3 servers (Dell R740) with
12 x 4TB HDDs each.
My plan is this:
- Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can
  use it later for storage)
- Install CentOS 7.7
- Use 2 vLANs, one for ceph
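As a rough sizing sketch for that plan (assuming the 2 OS disks come out of the 12 and the default 3x replication):

    10 OSDs/node x 3 nodes x 4 TB = 120 TB raw
    120 TB / 3 replicas           = ~40 TB usable, before the free-space headroom
                                    you want to keep for recovery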
One NVMe is really limited to use as a read-only / writethrough cache (which
should of course be possible with bcache). Nobody wants to lose all data
after one disk failure...
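For what it's worth, a minimal bcache setup in writethrough mode looks roughly like this; the device names are examples and the cache-set UUID comes from bcache-super-show:

    make-bcache -C /dev/nvme0n1                    # cache device on the NVMe
    make-bcache -B /dev/sdb                        # backing device on the HDD
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # writethrough, so losing the NVMe cannot lose acknowledged writes
    echo writethrough > /sys/block/bcache0/bcache/cache_mode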
Another option is the use of bcache / flashcache.
I have experimented with bcache; it is quite easy to set up, but once you run
into performance problems it is hard to pinpoint the cause.
In the end I ended up just adding more disks to share the IOPS, and going for
the default setup (db / wal on the same disk as the data).
Placing it as a Journal / Bluestore DB/WAL will mostly help with writes; by
the sound of it you want to increase read performance? How important is the
data on this Ceph cluster?
If you place it as a Journal / DB/WAL, any failure of it will cause data loss
on every OSD using it, so I would very much advise
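For reference, putting the DB (and with it the WAL, unless you split that out separately) on a faster device is done per OSD at creation time; the device names below are examples:

    # one DB partition/LV per OSD on the fast device; add --block.wal to split the WAL
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1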
Dear everyone,
Last year I set up an experimental Ceph cluster (still single node,
failure domain = osd, MB Asus P10S-M WS, CPU Xeon E3-1235L, RAM 64 GB,
HDDs WD30EFRX, Ubuntu 18.04, now with kernel 5.3.0 from Ubuntu mainline
PPA and Ceph 14.2.4 from
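The "failure domain = osd" setup on a single node is typically just a replicated CRUSH rule that chooses osd instead of host, something like the following (the pool name is a placeholder):

    ceph osd crush rule create-replicated replicated-osd default osd
    ceph osd pool set <pool> crush_rule replicated-osd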
Hello Mike and Jason,
As described in my last mail, I converted the filesystem to ext4, set "sysctl
vm.dirty_background_ratio=0" and put the regular workload on the filesystem
(used as an NFS mount).
That seems to have prevented crashes for an entire week now (before this, the
nbd device crashed
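For reference, the vm.dirty_background_ratio setting mentioned above can be made persistent across reboots like this (the file name is just a convention):

    echo "vm.dirty_background_ratio = 0" > /etc/sysctl.d/99-rbd-nbd.conf
    sysctl --system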
OK, looks like clock skew is the problem. I thought this was caused by the
reboot, but it did not fix itself after a few minutes (mon3 was 6 seconds
ahead).
After forcing time sync from the same server, it seems to be solved now.
Kevin
On Fri, 20 Sept 2019 at 07:33, Kevin Olbrich wrote:
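For anyone hitting the same clock-skew warning, the usual checks are the following (assuming chrony as the time daemon):

    ceph health detail     # shows MON_CLOCK_SKEW and which mon is off
    chronyc tracking       # on each mon: current offset against the time source
    chronyc makestep       # force an immediate step if the offset is large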