Hi Jakub,

No, my setup seems to be the same as yours. Our system is mainly for archiving
large volumes of data. The data has to be stored indefinitely and remain
readable, although reads will be rare relative to the number of objects we
store.

It just seems odd that the metadata overhead for roughly 25M objects is so
high.

We have 144 OSDs on 9 storage nodes. Perhaps it makes perfect sense, but I’d
like to know why we are seeing what we are and how it all adds up.
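For what it's worth, the arithmetic does seem to add up if RAW USED is dominated by block.db space. A back-of-the-envelope sketch using only the numbers from the `ceph df` output quoted below (the per-OSD result is an inference, not a measured value):

```python
# Back-of-the-envelope: what per-OSD block.db size would explain RAW USED?
# Inputs come from the ceph df output in this thread; the ~33 GiB per-OSD
# conclusion is an inference, not a measurement from either cluster.
NUM_OSDS = 144
RAW_USED_GIB = 4.65 * 1024        # 4.65 TiB RAW USED reported by `ceph df`
DATA_GIB = 22.9 * 3               # 22.9 GiB of pool data at 3x replication

implied_db_gib = (RAW_USED_GIB - DATA_GIB) / NUM_OSDS
print(f"implied block.db per OSD: {implied_db_gib:.1f} GiB")
# → implied block.db per OSD: 32.6 GiB
```

So a block.db partition in the low tens of GiB on each OSD would account for almost all of the reported usage.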

Thanks!
Dan




On Sat, Oct 20, 2018 at 12:36 PM -0700, "Jakub Jaszewski" 
<jaszewski.ja...@gmail.com> wrote:

Hi Dan,

Did you configure block.wal/block.db as separate devices/partitions
(osd_scenario: non-collocated, or lvm for clusters installed using the
ceph-ansible playbooks)?

I run Ceph version 13.2.1 with non-collocated block.db and see the same
situation: the sum of the block.db partitions' sizes is displayed as RAW USED
in ceph df.
Perhaps it is not the case for collocated block.db/wal.
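If that is what is happening, the baseline RAW USED of an otherwise empty cluster should be predictable from the partition layout alone. A minimal sketch (the 33 GiB per-OSD block.db size is an assumption for illustration, not a value reported by either cluster):

```python
# Sketch: predicted baseline RAW USED for a fresh cluster whose block.db
# partitions are counted as used space. Partition sizes here are assumed.
def baseline_raw_used_tib(db_sizes_gib):
    """Sum the block.db partition sizes and convert GiB -> TiB."""
    return sum(db_sizes_gib) / 1024

# 144 OSDs, each with a hypothetical 33 GiB block.db partition:
print(f"{baseline_raw_used_tib([33.0] * 144):.2f} TiB")  # → 4.64 TiB
```

That lands right around the 4.65TiB figure from the `ceph df` output, which is why collocated block.db/wal clusters may not show the same baseline.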

Jakub

On Sat, Oct 20, 2018 at 8:34 PM Waterbly, Dan 
<dan.water...@sos.wa.gov> wrote:
I get that, but isn’t 4.65TiB to track 24.5M objects excessive? These numbers
seem very high to me.




On Sat, Oct 20, 2018 at 10:27 AM -0700, "Serkan Çoban" 
<cobanser...@gmail.com> wrote:


4.65TiB includes the size of the WAL and DB partitions too.
On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan  wrote:
>
> Hello,
>
>
>
> I have inserted 24.5M 1,000-byte objects into my cluster (radosgw, 3x
> replication).
>
>
>
> I am confused by the usage ceph df is reporting and am hoping someone can 
> shed some light on this. Here is what I see when I run ceph df
>
>
>
> GLOBAL:
>
>     SIZE        AVAIL       RAW USED     %RAW USED
>
>     1.02PiB     1.02PiB     4.65TiB      0.44
>
> POOLS:
>
>     NAME                    ID     USED        %USED     MAX AVAIL     OBJECTS
>
>     .rgw.root               1      3.30KiB     0         330TiB        17
>
>     .rgw.buckets.data       2      22.9GiB     0         330TiB        24550943
>
>     default.rgw.control     3      0B          0         330TiB        8
>
>     default.rgw.meta        4      373B        0         330TiB        3
>
>     default.rgw.log         5      0B          0         330TiB        0
>
>     .rgw.control            6      0B          0         330TiB        8
>
>     .rgw.meta               7      2.18KiB     0         330TiB        12
>
>     .rgw.log                8      0B          0         330TiB        194
>
>     .rgw.buckets.index     9      0B           0         330TiB        2560
>
>
>
> Why does my bucket pool report usage of 22.9GiB but my cluster as a whole is 
> reporting 4.65TiB? There is nothing else on this cluster as it was just 
> installed and configured.
>
>
>
> Thank you for your help with this.
>
>
>
> -Dan
>
>
>
> Dan Waterbly | Senior Application Developer | 509.235.7500 x225 | 
> dan.water...@sos.wa.gov
>
> WASHINGTON STATE ARCHIVES | DIGITAL ARCHIVES
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

