Signed-off-by: Matthias Heiserer <m.heise...@proxmox.com>
---
 hyper-converged-infrastructure.adoc | 4 ++--
 pve-storage-rbd.adoc                | 4 ++--
 pveceph.adoc                        | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/hyper-converged-infrastructure.adoc b/hyper-converged-infrastructure.adoc
index ee9f185..4616392 100644
--- a/hyper-converged-infrastructure.adoc
+++ b/hyper-converged-infrastructure.adoc
@@ -48,9 +48,9 @@ Hyper-Converged Infrastructure: Storage
 infrastructure. You can, for example, deploy and manage the following two
 storage technologies by using the web interface only:
 
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a both self-healing and self-managing shared, reliable and highly
   scalable storage system. Checkout
-  xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+  xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
 
 - *ZFS*: a combined file system and logical volume manager with extensive
   protection against data corruption, various RAID modes, fast and cheap
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 5f8619a..5fe558a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -109,9 +109,9 @@ management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/ope
 Ceph client configuration (optional)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
 client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
 the storage.
 
 The ceph.conf needs to have the same name as the storage.
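For example, a minimal sketch for a storage named `extceph` (the storage name, and the assumption that the client config sits next to the keyring under `/etc/pve/priv/ceph/`, are illustrative only; adjust to your setup):

----
# hypothetical storage "extceph": keyring and matching client config
#   /etc/pve/priv/ceph/extceph.keyring
#   /etc/pve/priv/ceph/extceph.conf
#
# example content of extceph.conf overriding a client-side option
[client]
rbd_cache = true
----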
diff --git a/pveceph.adoc b/pveceph.adoc
index 54fb214..fdd4cf6 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -636,7 +636,7 @@ pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
 ----
 
 TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
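A hedged sketch for such an external cluster (storage name, monitor addresses and keyring path are placeholders, not part of this patch):

----
pvesm add rbd ext-rbd --pool <replicated-pool> --data-pool <ec-pool> \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --keyring /root/rbd.keyring
----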
 
 Destroy Pools
 ~~~~~~~~~~~~~
@@ -761,7 +761,7 @@ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class
 [frame="none",grid="none", align="left", cols="30%,70%"]
 |===
 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which crush root it should belong to (default Ceph root "default")
 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
 |<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
 |===
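A concrete example, assuming an illustrative rule name and device class:

----
# replicate across hosts, using only SSD-backed OSDs
ceph osd crush rule create-replicated ssd-replicated default host ssd
----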
@@ -943,7 +943,7 @@ servers.
 pveceph fs destroy NAME --remove-storages --remove-pools
 ----
 +
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
 the storages from pve config.
 
 After these steps, the CephFS should be completely removed and if you have
-- 
2.30.2


