Hi,
I will check and confirm the label. In the meantime, can you help me
find the root cause of this issue and how I can resolve it? Is there
any Ceph configuration issue, or anything else we should check?
Please advise.
Regards,
Munna
On Wed, Apr 20, 2022 at 7:29 PM Marc wrote:
Hi,
Those are not enterprise SSDs; they are Samsung-labelled and all 2 TB in
size. We have three nodes, with 12 disks on each node.
Regards,
Munna
On Wed, Apr 20, 2022 at 5:49 PM Stefan Kooman wrote:
> On 4/20/22 13:30, Md. Hejbul Tawhid MUNNA wrote:
> > Dear Team,
> >
> > We have tw
Dear Team,
We have two types of disk in our Ceph cluster: one is magnetic disk
(HDD), the other is SSD.
# ceph osd crush class ls
[
"hdd",
"ssd"
]
HDD is normally a bit slower, which is expected. Initially the SSDs were
faster for read/write, but recently we are facing very slow operations on
the SSDs. We need help.
[truncated 'ceph df' output; the volumes-ssd pool shows 191 GiB stored]
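A few commands that usually help narrow down slow SSD OSDs (a sketch; the
pool name is taken from the output above, adjust to your cluster):

# ceph osd perf                               # per-OSD commit/apply latency, look for outliers
# ceph osd df tree                            # utilization and PG count per OSD and device class
# ceph osd pool get volumes-ssd crush_rule    # confirm the SSD pool uses the ssd-class rule
# ceph osd crush rule dump                    # inspect the rule steps (see the fragment further down)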
Regards,
Munna
On Wed, Dec 15, 2021 at 1:16 PM Janne Johansson wrote:
> On Wed, Dec 15, 2021 at 07:45, Md. Hejbul Tawhid MUNNA
> :
> > Hi,
> > W
Hi,
We are observing that the MAX AVAIL capacity does not reflect the full
size of the cluster.
We are running the Mimic version.
Initially we installed 3 OSD hosts containing 5.5 TB x 8 each. At that
time MAX AVAIL was 39 TB. After two years we installed two more servers
with the same spec (5.5 TB x 8
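For reference, Ceph derives a pool's MAX AVAIL from the projected free
space of the fullest OSD under that pool's CRUSH rule, not from the sum of
free space, so an imbalanced distribution after adding hosts keeps it low.
A quick check (a sketch; standard commands, no cluster-specific names
assumed):

# ceph osd df tree     # look for OSDs much fuller than the average
# ceph osd crush tree  # confirm the new hosts carry the expected CRUSH weight
# ceph df              # MAX AVAIL tracks the fullest OSD, not total free space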
Hi,
If we use replica size 3, can we configure the failure domain so that 2
replicas are placed on OSDs within one host and at least 1 replica on
another host?
Regards,
Munna
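For context on the question above: a CRUSH rule along the following lines
(a sketch; the rule name and id are placeholders, and it assumes the
default root) keeps two replicas on OSDs of one host and the third replica
on a different host:

rule replicated_host_osd {
        id 5
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type host
        step chooseleaf firstn 2 type osd
        step emit
}

With pool size 3, CRUSH picks two hosts and up to two OSDs from each, so
the first host ends up with two replicas and the second host with one.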
, wrote:
> On 12/9/21 13:01, Md. Hejbul Tawhid MUNNA wrote:
> > Hi,
> >
> > This is the ceph.conf used during cluster deployment. The Ceph version is Mimic.
> >
> > osd pool default size = 3
> > osd pool default min size = 1
> > osd pool default pg num = 1024
>
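Note that these ceph.conf values are only defaults applied when a pool is
created; per-pool settings can be inspected and changed afterwards (a
sketch; 'volumes-ssd' is the pool name seen earlier in the thread):

# ceph osd pool get volumes-ssd size
# ceph osd pool get volumes-ssd min_size
# ceph osd pool set volumes-ssd min_size 2    # example value, not a recommendation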
"steps": [
    {
        "op": "take",
        "item": -21,
        "item_name": "default~ssd"
    },
    {
        "op": "chooseleaf_firstn",
        "num": 0,
        "type": "
No idea about the ceph balancer.
Regards,
Munna
On Thu, Dec 9, 2021 at 1:54 PM Stefan Kooman wrote:
> Hi,
>
> On 12/9/21 03:11, Md. Hejbul Tawhid MUNNA wrote:
> > Hi,
> >
> > Yes, we have added new OSDs. Previously we had only one type of disk, HDD;
> now
> >
lues and slowly increase the weight one by one.
>
> Also, please share the output of ‘ceph osd df’ and ‘ceph health detail’.
>
> On Wed, 8 Dec 2021 at 11:56 PM, Md. Hejbul Tawhid MUNNA <
> munnae...@gmail.com> wrote:
>
>> Hi,
>>
>> Overall status: HEALTH_ERR
>
Hi,
Overall status: HEALTH_ERR
PG_DEGRADED_FULL: Degraded data redundancy (low space): 19 pgs
backfill_toofull
OBJECT_MISPLACED: 12359314/17705640 objects misplaced (69.804%)
PG_DEGRADED: Degraded data redundancy: 1707105/17705640 objects degraded
(9.642%), 1979 pgs degraded, 1155 pgs undersized
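A sketch of the usual first checks for this state (the ratio value below
is only an example, not a recommendation):

# ceph health detail                      # lists the PGs and OSDs that are backfill_toofull
# ceph osd df                             # find OSDs above the backfillfull ratio
# ceph osd set-backfillfull-ratio 0.92    # example: temporarily raise the ratio so backfill can proceed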
Regards,
Munna
On Sun, Dec 5, 2021 at 3:09 PM Zakhar Kirpichenko wrote:
> Hi!
>
> If you use SSDs for OSDs, there's no real benefit from putting DB/WAL on a
> separate drive.
>
> Best regards,
> Z
>
> On Sun, Dec 5, 2021 at 10:15 AM Md. Hejbul Tawhid MUNNA <
Hi,
We are running an OpenStack cloud with Ceph as the storage backend.
Currently we have only HDD storage in our Ceph cluster. Now we are
planning to add a new server and OSDs with SSD disks. Currently we use a
separate SSD disk as the journal disk.
Now, if we install the new OSDs on SSD disks, do we need
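Following up on Zakhar's point above: with BlueStore, a separate DB/WAL
device mainly pays off when the data device is an HDD; on an all-SSD OSD
it adds little. A sketch with ceph-volume (device paths are hypothetical):

# HDD OSD with its RocksDB/WAL on a faster SSD partition:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

# SSD OSD with data, DB and WAL all on the same device:
# ceph-volume lvm create --data /dev/sdc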