[ceph-users] Performance impact of Heterogeneous environment

2024-01-17 Thread Tino Todino
Hi folks.

I had a quick search but found nothing concrete on this so thought I would ask.

We currently have a 4-host Ceph cluster with an NVMe pool (1 OSD per host) and 
an HDD pool (1 OSD per host).  Both OSDs use a separate NVMe for DB/WAL. The 
machines are identical (homogeneous): Ryzen 7 5800X with 64GB of DDR4-3200 RAM. 
The NVMes are 1TB Seagate IronWolf drives and the HDDs are 16TB Seagate 
IronWolf drives.

We want to add more nodes, mainly for capacity and resilience reasons.  We have 
an old 3-node cluster of Dell R740 servers that could be added to this Ceph 
cluster.  Instead of DDR4 they use DDR3 (although 1.5TB each!!), and instead of 
Ryzen 7 5800X CPUs they have old Intel Xeon E5-4657L v2 CPUs (96 cores at 
2.4GHz).

What would be the performance impact of adding these three nodes with the same 
OSD layout (i.e. 1x NVMe OSD and 1x HDD OSD per host, each with DB/WAL on a 
separate NVMe)?  Would we get better or worse performance overall?  Can 
weighting be used to mitigate any performance penalty, and if so, is it easy to 
configure?
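
From what I've read, I'm assuming the weighting would be something along these 
lines, e.g. to make an OSD on one of the slower hosts store less data and serve 
fewer reads (osd.12 is just a placeholder ID):

    ceph osd crush reweight osd.12 0.5    # reduce the CRUSH weight so it stores less data
    ceph osd primary-affinity osd.12 0.5  # make it less likely to be chosen as primary

...but I'd appreciate confirmation that this is the right approach.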

On performance, I would deem it OK for our use case currently (VM disks), as we 
are running on a 10GbE network (with dedicated NICs for the public and cluster 
networks).

Many thanks in advance

Tino
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CEPH Version choice

2023-05-15 Thread Tino Todino
Hi all,

I've been reading this list for a while now, and one thing I'm curious about is 
why a lot of installations out there aren't upgraded to the latest version of 
Ceph (Quincy).

What are the main reasons for not upgrading to the latest and greatest?
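
For context, the mechanics of upgrading look simple enough on paper, at least 
on cephadm-managed clusters; something like the following (the version number 
is just an example):

    ceph versions                                  # which release each daemon is running
    ceph orch upgrade start --ceph-version 17.2.6  # rolling upgrade to a specific release
    ceph orch upgrade status                       # watch progress

...so I'm guessing the reasons are more about stability, regressions and 
hardware/OS support than about the upgrade process itself.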

Thanks.

Tino
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] 5 host setup with NVMe's and HDDs

2023-03-29 Thread Tino Todino
Hi folks.

Just looking for some up-to-date advice, please, from the collective on how 
best to set up Ceph on 5 Proxmox hosts, each configured with the following:

AMD Ryzen 7 5800X CPU
64GB RAM
2x SSD (as ZFS boot disk for Proxmox)
1x 500GB NVMe for DB/WAL
1x 1TB NVMe as an OSD
1x 16TB SATA HDD as an OSD
2x 10GbE NICs (one for the public network and one for the cluster network)
1x 1GbE NIC for the management interface

The Ceph solution will be used primarily to store another Proxmox cluster's 
virtual machines and their data. We'd like a fast pool using the NVMes for 
critical VMs, and a slower HDD-based pool for VMs that don't require such fast 
disk access and perhaps need more storage capacity.
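
My current thinking for the two pools is device-class-based CRUSH rules, 
roughly like the following (rule/pool names and PG counts are just 
placeholders):

    # one replicated rule per device class, failure domain = host
    ceph osd crush rule create-replicated fast-nvme default host nvme
    ceph osd crush rule create-replicated slow-hdd default host hdd

    # one RBD pool per rule for the two tiers of VM disks
    ceph osd pool create vm-fast 128 128 replicated fast-nvme
    ceph osd pool create vm-slow 128 128 replicated slow-hdd
    ceph osd pool application enable vm-fast rbd
    ceph osd pool application enable vm-slow rbd

Happy to be told there is a better approach.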

To expand in the future we will probably add more hosts in the same sort of 
configuration and/or replace the NVMe/HDD OSDs with more capacious ones.
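
For creating the OSDs themselves, I'm assuming something like the following on 
each host, with the 500GB NVMe split into DB/WAL partitions (the device names 
are examples, and I presume Proxmox's pveceph wrapper can do the equivalent):

    # NVMe OSD with its DB/WAL on a partition of the 500GB NVMe
    ceph-volume lvm create --data /dev/nvme1n1 --block.db /dev/nvme0n1p1

    # HDD OSD with its DB/WAL on a second partition of the same NVMe
    ceph-volume lvm create --data /dev/sda --block.db /dev/nvme0n1p2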

Ideas for the configuration are welcome, please.

Many thanks

Tino
Coastsense Ltd


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Inherited CEPH nightmare

2022-10-11 Thread Tino Todino
step chooseleaf firstn 0 type host
step emit
}

# end crush map

ceph -s output:

root@cl1-h1-lv:~# ceph -s
  cluster:
id: 4a4b4fff-d140-4e11-a35b-cbac0e18a3ce
health: HEALTH_OK

  services:
mon: 3 daemons, quorum cl1-h3-lv,cl1-h1-lv,cl1-h4-lv (age 3d)
mgr: cl1-h3-lv(active, since 11w), standbys: cl1-h2-lv, cl1-h1-lv
mds: 1/1 daemons up, 2 standby
osd: 12 osds: 12 up (since 3d), 12 in (since 3d)

  data:
volumes: 1/1 healthy
pools:   4 pools, 305 pgs
objects: 647.02k objects, 2.4 TiB
usage:   7.2 TiB used, 3.7 TiB / 11 TiB avail
pgs: 305 active+clean

  io:
client:   96 KiB/s rd, 409 KiB/s wr, 7 op/s rd, 38 op/s wr

ceph osd df output:

root@cl1-h1-lv:~# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP      META     AVAIL    %USE   VAR   PGS  STATUS
 4    ssd  0.90970       1.0  932 GiB  635 GiB  632 GiB   1.1 MiB  2.5 GiB  297 GiB  68.12  1.03   79  up
 9    ssd  0.90970       1.0  932 GiB  643 GiB  640 GiB    62 MiB  2.1 GiB  289 GiB  68.98  1.05   81  up
12    ssd  0.90970       1.0  932 GiB  576 GiB  574 GiB  1007 KiB  2.1 GiB  355 GiB  61.87  0.94   70  up
 0    ssd  0.90970       1.0  932 GiB  643 GiB  641 GiB   1.1 MiB  2.2 GiB  288 GiB  69.05  1.05   80  up
 5    ssd  0.90970       1.0  932 GiB  595 GiB  593 GiB   1.0 MiB  2.5 GiB  336 GiB  63.91  0.97   70  up
10    ssd  0.90970       1.0  932 GiB  585 GiB  583 GiB   1.6 MiB  2.4 GiB  346 GiB  62.82  0.95   74  up
 1    ssd  0.90970       1.0  932 GiB  597 GiB  595 GiB   1.0 MiB  2.2 GiB  334 GiB  64.10  0.97   69  up
 6    ssd  0.90970       1.0  932 GiB  652 GiB  649 GiB    62 MiB  2.4 GiB  280 GiB  69.94  1.06   85  up
11    ssd  0.90970       1.0  932 GiB  587 GiB  584 GiB  1016 KiB  2.5 GiB  345 GiB  62.98  0.95   72  up
 2    ssd  0.90970       1.0  932 GiB  605 GiB  603 GiB    62 MiB  2.1 GiB  326 GiB  64.96  0.98   79  up
 3    ssd  0.90970       1.0  932 GiB  645 GiB  643 GiB   1.1 MiB  1.9 GiB  287 GiB  69.23  1.05   82  up
 7    ssd  0.90970       1.0  932 GiB  615 GiB  612 GiB   1.2 MiB  2.6 GiB  317 GiB  65.99  1.00   74  up
                   TOTAL      11 TiB  7.2 TiB  7.2 TiB   196 MiB   28 GiB  3.7 TiB  66.00
MIN/MAX VAR: 0.94/1.06  STDDEV: 2.80

-Original Message-
From: Janne Johansson  
Sent: 10 October 2022 07:52
To: Tino Todino 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Inherited CEPH nightmare

> osd_memory_target = 2147483648
>
> Based on some reading, I'm starting to understand a little about what can be 
> tweaked. For example, I think the osd_memory_target looks low.  I also think 
> the DB/WAL should be on dedicated disks or partitions, but have no idea what 
> procedure to follow to do this.  I'm actually thinking that the best bet 
> would be to copy the VM's to temporary storage (as there is only about 7TBs 
> worth) and then set-up CEPH from scratch following some kind of best practice 
> guide.

Yes, the memory target is very low; if you have RAM to spare, bumping it to 
4, 6, 8 or 10G per OSD should give some speedups.
If you can, check one of each drive type to see whether they gain or lose from 
having the write cache turned off, as per

https://medium.com/coccoc-engineering-blog/performance-impact-of-write-cache-for-hard-solid-state-disk-drives-755d01fcce61

and other guides. The Ceph usage pattern, combined with some less-than-optimal 
SSD caches, sometimes forces much more to be flushed when Ceph wants to make 
sure a small write actually hits the disk, which means you get poor IOPS rates. 
Unfortunately this is very dependent on the controllers and the drives, so 
there is no simple rule for whether on or off is "best" across all possible 
combinations, but the fio test shown on that and similar pages should quickly 
tell you whether you can get 50-100% more write IOPS out of your drives by 
having the cache in the right mode for each type of disk. The extra RAM should 
hopefully help with read performance too, so you should be able to get better 
performance from two relatively simple changes.
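
Concretely, something like this (the 8 GiB value and /dev/sdX are only 
examples; adjust for your RAM and drives):

    # raise the OSD memory target cluster-wide (8 GiB here)
    ceph config set osd osd_memory_target 8589934592

    # query / disable / enable the volatile write cache on a SATA drive
    hdparm -W /dev/sdX
    hdparm -W 0 /dev/sdX
    hdparm -W 1 /dev/sdX

then re-run the fio test from the article with the cache in each state.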

Check whether the OSDs are BlueStore; if any are still FileStore, converting 
them to BlueStore would probably give you 50% more write IOPS on those OSDs.

https://www.virtualtothecore.com/how-to-migrate-ceph-storage-volumes-from-filestore-to-bluestore/

They probably are BlueStore already, but it can't hurt to check if the cluster 
is old.
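
A quick way to check, assuming a reasonably recent release:

    # count OSDs per object store backend
    ceph osd count-metadata osd_objectstore

    # or inspect a single OSD
    ceph osd metadata 0 | grep osd_objectstore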

--
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Inherited CEPH nightmare

2022-10-07 Thread Tino Todino
 2.729
item cl1-h4-lv weight 1.819
item cl1-h1-lv weight 3.639
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map


Based on some reading, I'm starting to understand a little about what can be 
tweaked. For example, I think the osd_memory_target looks low.  I also think 
the DB/WAL should be on dedicated disks or partitions, but I have no idea what 
procedure to follow to do this.  I'm actually thinking that the best bet would 
be to copy the VMs to temporary storage (there is only about 7TB worth) and 
then set up Ceph from scratch following some kind of best-practice guide.
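
From what I can tell so far, moving the DB onto a new device can apparently be 
done per OSD with the OSD stopped, something along these lines (the OSD ID and 
target device are placeholders, and I would want to verify the procedure before 
touching anything):

    systemctl stop ceph-osd@3
    ceph-bluestore-tool bluefs-bdev-new-db \
        --path /var/lib/ceph/osd/ceph-3 \
        --dev-target /dev/nvme0n1p4
    systemctl start ceph-osd@3

...but rebuilding from scratch may still be the cleaner option.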

Anyway, any help would be gratefully received.

Thanks for reading.

Kind regards
Tino Todino


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io