Hi,
>> Your first mail shows 67T (instead of 62)
I had just given an approximate number; the number given first is the
right one.
I have deleted all pools and created a fresh test pool with a PG num of
128, and now it's showing a full size of 248 TB.
Output from "ceph df":
--- RAW
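For reference, a minimal sketch of the commands that would reproduce this test; the pool name "testpool" is an assumption, not taken from the thread, and the 2x replication matches the pool discussed below:

# create a fresh test pool with 128 placement groups
ceph osd pool create testpool 128 128
# use 2x replication, as with the pool discussed in this thread
ceph osd pool set testpool size 2
# show raw capacity and per-pool STORED / MAX AVAIL
ceph df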
On 26 Oct 2020, at 22:30, Amudhan P wrote:
Hi Jane,
I agree with you, and I was trying to say that a disk which has more PGs
will fill up faster.
But my question is: even though the raw disk space is 262 TB, the
2-replica pool's max storage is showing only 132 TB in the dashboard, and
when mounting the pool using CephFS it's showing 62 TB. I could understand
On Sun, 25 Oct 2020 at 15:18, Amudhan P wrote:
> Hi,
>
> For my quick understanding, how are PGs responsible for allowing space
> allocation to a pool?
>
An object's name decides which PG (from the list of PGs in the pool) it
will end up on, so if you have very few PGs, the
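As an aside, this name-to-PG-to-OSD mapping can be inspected directly; a minimal example, where the pool name "testpool" and object name "myobject" are placeholders:

# print the PG that the object name hashes to, and the OSDs that PG maps to
ceph osd map testpool myobject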
On 2020-10-25 15:20, Amudhan P wrote:
> Hi,
>
> For my quick understanding, how are PGs responsible for allowing space
> allocation to a pool?
>
> My understanding is that PGs basically help with object placement; when the
> number of PGs on an OSD is high, there is a high possibility that a PG
>
Hi,
For my quick understanding, how are PGs responsible for allowing space
allocation to a pool?
My understanding is that PGs basically help with object placement; when the
number of PGs on an OSD is high, there is a high possibility that a PG gets
a lot more data than other PGs. In this situation,
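To see how evenly (or unevenly) PGs and data have landed on the OSDs, the per-OSD view is the usual starting point; a small sketch, assuming standard Octopus tooling:

# per-OSD PG counts (PGS column) and utilisation, grouped by the CRUSH tree;
# the summary line reports MIN/MAX VAR and STDDEV across OSDs
ceph osd df tree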
On 2020-10-25 05:33, Amudhan P wrote:
> Yes, there is an imbalance in the PGs assigned to OSDs.
> `ceph osd df` output snip
> ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP META AVAIL %USE VAR PGS STATUS
>  0 hdd   5.45799 1.0      5.5 TiB 3.6 TiB 3.6 TiB 9.7
Hi,
In Ceph, when you create an object, it cannot go to just any OSD where it fits.
An object is mapped to a placement group using a hash algorithm, and placement
groups are then mapped to OSDs. See [1] for details. So, if any of your OSDs
goes full, write operations cannot be guaranteed to succeed. Once you
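The full-OSD thresholds mentioned here can be checked against the cluster's configured ratios; a minimal sketch, where the grep pattern is just an assumption about how the Octopus "ceph osd dump" output is worded:

# show the nearfull/backfillfull/full ratios that gate client writes and backfill
ceph osd dump | grep ratio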
Hi Stefan,
I have started the balancer, but what I don't understand is that there is
enough free space on the other disks.
Why is it not showing those in the available space?
How do I reclaim the free space?
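For completeness, a minimal sketch of driving the balancer in upmap mode; these are standard commands and an assumption about how the balancer was started, not taken from this thread:

# check the current balancer state and optimisation score
ceph balancer status
# upmap mode requires all clients to be luminous or newer
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on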
On Sun, 25 Oct 2020 at 2:27 PM, Stefan Kooman wrote:
> On 2020-10-25 05:33, Amudhan P wrote:
> > Yes, there
Yes, there is an imbalance in the PGs assigned to OSDs.
`ceph osd df` output snip
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS
 0 hdd   5.45799 1.0      5.5 TiB 3.6 TiB 3.6 TiB 9.7 MiB 4.6 GiB 1.9 TiB 65.94 1.31  13 up
 1
On 2020-10-24 14:53, Amudhan P wrote:
> Hi,
>
> I have created a test Ceph cluster with Ceph Octopus using cephadm.
>
> Cluster total RAW disk capacity is 262 TB, but it is allowing the use of only
> 132 TB.
> I have not set a quota for any of the pools. What could be the issue?
Imbalance? What does
Hi Nathan,
Attached is the crushmap output.
Let me know if you find anything odd.
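For anyone following along, a crush map is typically extracted and decompiled to text with something like the commands below; the file names are placeholders:

# dump the compiled crush map from the cluster, then decompile it to readable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt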
On Sat, Oct 24, 2020 at 6:47 PM Nathan Fish wrote:
> Can you post your crush map? Perhaps some OSDs are in the wrong place.
>
> On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote:
> >
> > Hi,
> >
> > I have created a
Can you post your crush map? Perhaps some OSDs are in the wrong place.
On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote:
>
> Hi,
>
> I have created a test Ceph cluster with Ceph Octopus using cephadm.
>
> Cluster total RAW disk capacity is 262 TB, but it is allowing the use of only
> 132 TB.
> I have