command to get the list of all the objects in the CephFS data pool, and then
learn how that mapping is performed and parse your listing.
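A rough sketch of that approach (assuming the pool name cephfs.shared.data used
later in this thread and a mount at /mnt/cephfs; CephFS data objects are named
<inode-hex>.<stripe-index>, and the inode below is a placeholder):

  # dump all object names in the data pool (slow with ~3.8M objects)
  rados ls -p cephfs.shared.data > objects.txt
  # group objects by inode prefix to find the biggest consumers
  awk -F. '{print $1}' objects.txt | sort | uniq -c | sort -rn | head
  # map a hex inode back to a path on the mounted filesystem
  find /mnt/cephfs -inum $((16#10000000abc))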
Thanks,
Igor
On 3/20/2024 1:30 AM, Thorne Lawler wrote:
Igor,
Those files are VM disk images, and they're under constant heavy use, so yes -
there /is/
On Wed, Mar 20, 2024 at 7:29 PM Thorne Lawler wrote:
Alexander,
I'm happy to create a new pool if it will help, but I don't
presently see how creating a new pool will help us to identify the
source of the 10TB discrepancy in this original cephfs pool.
Please help me to understand.
On 20/03/2024 6:35 pm, Alexander E. Patrakov wrote:
Thorne,
That's why I asked you to create a separate pool. All writes go to the
original pool, and it is possible to see object counts per-pool.
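A sketch of that experiment (the pool, filesystem, and directory names here are
placeholders):

  ceph osd pool create cephfs.shared.data2
  ceph fs add_data_pool shared cephfs.shared.data2
  # pin a test directory to the new pool via its file layout
  setfattr -n ceph.dir.layout.pool -v cephfs.shared.data2 /mnt/cephfs/leaktest
  # new writes under that directory now land in the new pool,
  # so its object count can be watched in isolation
  ceph df detail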
On Wed, Mar 20, 2024 at 6:32 AM Thorne Lawler wrote:
Alexander,
Thank you, but as I said to Igor: The 5.
On 3/18/2024 3:12 AM, Thorne Lawler wrote:
Thanks Igor,
I have tried that, and the number of objects and bytes_used took a
long time to drop, but they seem to have dropped back to almost the
original level:
* Before creating the file:
ally occurred (and
option 1. above isn't the case) it makes sense to run more experiments
with writing/removing a bunch of huge files to the volume to confirm
space leakage.
cephfs.shared.data:
  USED 41 TiB, OBJECTS 3886733, CLONES 0, COPIES 11660199,
  MISSING_ON_PRIMARY 0, UNFOUND 0, DEGRADED 0,
  RD_OPS 3249045631, RD 177 TiB, WR_OPS 9372801877, WR 231 TiB,
  USED COMPR 7.0 MiB, UNDER COMPR 12 MiB
In what sense does my CephFS filesystem include 11660199 'copies'?
Copies of what?
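(If that line is rados df output, COPIES counts object instances across all
replicas: 3,886,733 objects x 3 = 11,660,199, consistent with a size=3
replicated pool.)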
On 18/03/2024 11:12 am, Thorne Lawler wrote:
Thanks
always
other activity taking place.
What tools are there to visually inspect the object map and see how it
relates to the filesystem?
On 15/03/2024 7:18 pm, Igor Fedotov wrote:
ceph df detail --format json-pretty
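If it helps, the data pool's stats can be pulled straight out of that JSON with
jq (pool name assumed):

  ceph df detail --format json-pretty | jq '.pools[] | select(.name == "cephfs.shared.data") | .stats'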
--
Regards,
Thorne Lawler - Senior System Administrator
*DDNS* | ABN 76 088 607 265
Also, before anyone asks: I have just gone over every client attached to
this filesystem through native CephFS or NFS and checked for deleted
files. There are a total of three deleted files, amounting to about 200G.
On 15/03/2024 10:05 am, Thorne Lawler wrote:
Igor,
Yes. Just a bit.
-----Original Message-----
From: Igor Fedotov
Sent: March 14, 2024 1:37 PM
To: Thorne Lawler; ceph-users@ceph.io;
etienne.men...@ubisoft.com; vbog...@gmail.com
Subject: [ceph-users] Re: CephFS space usage
Thorne,
you might want to assess the number of files on the mounted fs by running
"du -h | wc". Does it differ drastically from the number of objects in the
pool (~3.8 M)?
And just in case - please run "rados lssnap -p cephfs.shared.data".
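One caveat: without -a, "du -h | wc" counts directories rather than files. For
a file count to compare against the object count, something like this may be
closer (mount point assumed), bearing in mind that files above the default
4 MiB object size map to several RADOS objects each:

  find /mnt/cephfs -type f | wc -l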
Thanks,
Igor
On 3/14/2024 1:42 AM, Thorne Lawler wrote:
Igor, Etienne, Bogdan,
The system is a four-node cluster.
And please give an overview of your OSD layout - number of OSDs,
shared or dedicated DB/WAL, main and DB volume sizes.
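Something like this gathers that overview (a sketch; OSD id 0 is just an
example):

  ceph osd tree                      # hosts, OSD count, device classes
  ceph osd df tree                   # per-OSD size and utilization
  ceph osd metadata 0 | grep bluefs  # shared vs dedicated DB/WAL for one OSD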
Thanks,
Igor
On 3/13/2024 5:58 AM, Thorne Lawler wrote:
Hi everyone!
My Ceph cluster (17.2.6) has a CephFS volume which is showing 41TB
usage for the data pool, bu
using that space, and
if possible, how can I reclaim that space?
Thank you.
--
Regards,
Thorne Lawler - Senior System Administrator
*DDNS* | ABN 76 088 607 265
First registrar certified ISO 27001-2013 Data Security Standard ITGOV40172
P +61 499 449 170
S) HA ever work in previous versions? Unfortunately, I had no
reason to test this in advance. Currently, only a manual installation
of HAProxy-based ingress controller(s) should help.
And, can we hope for a correction in future versions?
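In the meantime, a minimal hand-rolled haproxy TCP pass-through for NFS might
look roughly like this (a sketch; the VIP, hostnames, and the backend port
20490 from the spec later in this digest are assumptions, and keepalived would
float the VIP between san1 and san2):

  frontend nfs_in
      bind 10.0.0.100:2049
      mode tcp
      default_backend nfs_back

  backend nfs_back
      mode tcp
      server san1 san1:20490 check
      # active/passive keeps NFS client state on one head
      server san2 san2:20490 check backup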
Thanks,
Christoph
On Mon, 4 Sept 2023 at 07:43, Thorne Lawler wrote:
ginit ?
* Is the default dbus package sufficient, or does Ceph require
specific dbus plugins?
Thank you.
On 4/09/2023 9:55 am, Thorne Lawler wrote:
John,
Thanks for getting back to me. I am indeed using cephadm, and I will
dig up those configurations.
Even if Ceph Quincy is current compl
On 31/08/2023 11:18 pm, John Mulligan wrote:
On Wednesday, August 30, 2023 8:38:21 PM EDT Thorne Lawler wrote:
If there isn't any documentation for this yet, can anyone tell me:
* How do I inspect/change my NFS/haproxy/keepalived configuration?
* What is it supposed to look like? Do
service_id: xcpnfs
placement:
  hosts:
    - san1
    - san2
spec:
  port: 20490
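For comparison, a matching cephadm ingress spec (which is what deploys the
haproxy/keepalived pair in front of the NFS daemons) might look roughly like
this - the virtual_ip and ports are assumptions:

  service_type: ingress
  service_id: nfs.xcpnfs
  placement:
    hosts:
      - san1
      - san2
  spec:
    backend_service: nfs.xcpnfs
    frontend_port: 2049
    monitor_port: 9049
    virtual_ip: 10.0.0.100/24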
Am I missing something here? Is there another mailing list where I
should be asking about this?
On 31/08/2023 10:38 am, Thorne Lawler wrote:
If there isn't any documentation for this yet, can anyone tell me:
* How do I inspect/change my NFS/haproxy/keepalived configuration?
* What is it supposed to look like? Does someone have a working example?
Thank you.
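One way to inspect what cephadm actually deployed (a sketch; "xcpnfs" is the
cluster id assumed from the spec above):

  ceph orch ls --service-type nfs --export      # dump the NFS service spec as YAML
  ceph orch ls --service-type ingress --export  # dump the haproxy/keepalived ingress spec
  ceph nfs cluster info xcpnfs                  # virtual IP and backend summary
  ceph nfs export ls xcpnfs                     # exports defined on the cluster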
On 31/08/2023 9:36 am, Thorne Lawler wrote:
Sorry everyone,
Is there any more detailed documentation on the high availability NFS
functionality in current Ceph?
This is a pretty serious sticking point.
Thank you.
On 30/08/2023 9:33 am, Thorne Lawler wrote:
Fellow cephalopods,
I'm trying to get quick, seamless NFS failover happ
the NFS service to the keepalived and
haproxy services, or do I need to expand the ingress services to refer
to multiple NFS services?
Thank you.
https://github.com/ceph/ceph/pull/47098
Quoting Thorne Lawler:
Hi everyone!
I have a containerised (cephadm built) 17.2.6 cluster where I have
installed a custom commercial SSL certificate under dashboard.
Before I upgraded from 17.2 to 17.2.6, I successfully installed the
custom SSL cert every
self-signed key.
Have the commands for updating this changed between 17.2.0 and 17.2.6?
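For reference, on Quincy the commands are still along these lines (file names
here are placeholders):

  ceph dashboard set-ssl-certificate -i dashboard.crt
  ceph dashboard set-ssl-certificate-key -i dashboard.key
  # bounce the dashboard so the new certificate is picked up
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard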
Thank you.
Fingers crossed.
Thanks again for everyone's quick responses!
On 31/05/2023 12:51 am, Thorne Lawler wrote:
Hi folks!
I have a Ceph production 17.2.6 cluster with 6 machines in it - four
newer, faster machines with 4x 3.84 TB NVMe drives each, and two with
24x 1.68 TB SAS disks each.
I kno
any risks.
Thanks in advance!