Hello, and sorry about the delay.

Testing the charms on oracular is complicated by the fact that the
charms for oracular haven't been released yet. In addition, even if we
built them manually, some supporting charms (e.g. hacluster, needed by
ceph-nfs) don't support oracular yet. As such, the Ceph team decided to
design a test plan for oracular based on the one used for mantic (as
seen in this comment:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2033428/comments/3 )

To perform verification for oracular, the following steps were taken:

- First, we create a machine that runs oracular. I'm using juju here for 
simplicity:
```
juju add-model oracular
juju add-machine --base ubuntu@24.10 \
    --constraints="cores=4 mem=16G root-disk=50G virt-type=virtual-machine"
```
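
- (Optional) Before continuing, we can confirm from the host that the machine
has finished provisioning; `juju status` is just a generic check here, and
machine 0 matches the `juju ssh 0` call in the next step:
```
# Optional: wait until machine 0 is reported as "started".
juju status
```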

- After the machine has settled, we ssh into it, where we'll run all the 
following commands:
```
juju ssh 0
```

- Inside the VM, we add the PPA with the 19.2.1 point release:
```
ubuntu@juju-fd2278-0:~$ sudo add-apt-repository ppa:lmlogiudice/ceph-1921-oracular
ubuntu@juju-fd2278-0:~$ sudo apt update
```
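
- (Optional) Before installing, `apt policy` can be used to confirm that the
candidate versions of ceph and cephadm now come from the PPA:
```
# Optional: the 19.2.1 builds from the PPA should show up as candidates.
apt policy ceph cephadm
```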

- With the PPA in place, we install both Ceph and Cephadm, and make sure
that the correct version is installed:

```
ubuntu@juju-fd2278-0:~$ sudo apt install ceph cephadm
ubuntu@juju-fd2278-0:~$ ceph -v
ceph version 19.2.1 (9efac4a81335940925dd17dbf407bfd6d3860d28) squid (stable)
```
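
- (Optional) The cephadm binary can be checked the same way; it should report
the same 19.2.1 version string:
```
# Optional: confirm cephadm also comes from the 19.2.1 build.
sudo cephadm version
```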

- Next, we create a Ceph cluster with just this single node (NOTE: replace the
IP address and cluster network below with the machine's actual address and
subnet):
```
ubuntu@juju-fd2278-0:~$ sudo cephadm bootstrap --mon-ip 10.221.109.200 \
    --single-host-defaults --cluster-network=10.221.109.0/24
```
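
- (Optional) Right after bootstrap, `ceph orch ps` lists the daemons that
cephadm has deployed; a mon and a mgr should show up on this host:
```
# Optional: list the containerized daemons deployed by cephadm.
sudo ceph orch ps
```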

- To create an OSD and verify that everything is in order, we first create a
loop device to back it:
```
ubuntu@juju-fd2278-0:~$ touch loop.img
ubuntu@juju-fd2278-0:~$ truncate --size 3G ./loop.img
ubuntu@juju-fd2278-0:~$ sudo losetup -fP --show ./loop.img 
/dev/loop0
```
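
- (Optional) `lsblk` confirms that the loop device is present with the
expected 3G size before handing it to Ceph:
```
# Optional: verify the loop device exists and is sized as expected.
lsblk /dev/loop0
```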

- To add this new device to the cluster as an OSD, we run the following:
```
ubuntu@juju-fd2278-0:~$ sudo ceph orch daemon add osd `hostname`:/dev/loop0 raw
Created osd(s) 0 on host 'juju-fd2278-0'
```
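
- (Optional) The new OSD should also show up in the CRUSH tree:
```
# Optional: osd.0 should be listed as "up" under this host.
sudo ceph osd tree
```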

- Finally, we check that the OSD is up and running:
```
ubuntu@juju-fd2278-0:~$ sudo ceph -s
  cluster:
    id:     f9db617a-4184-11f0-adb8-00163e2c0cd3
    health: HEALTH_WARN
            OSD count 1 < osd_pool_default_size 2
 
  services:
    mon: 1 daemons, quorum juju-fd2278-0 (age 8m)
    mgr: juju-fd2278-0.lxfcbu(active, since 71s), standbys: juju-fd2278-0.xkpcqy
    osd: 1 osds: 1 up (since 13s), 1 in (since 3m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   426 MiB used, 2.6 GiB / 3 GiB avail
    pgs: 
```

(The HEALTH_WARN is expected: it only reflects that there is a single OSD
while osd_pool_default_size is 2, and it is harmless for this verification.)
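
If desired, the warning could be silenced by lowering the pool default size to
match the single OSD, although this isn't needed for the verification:
```
# Optional: silence the single-OSD warning; not required for verification.
sudo ceph config set global osd_pool_default_size 1
```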

** Tags removed: verification-needed-oracular
** Tags added: verification-done-oracular
