in a failed_repair state.
Fri, 25 Jun 2021 at 00:36, Vladimir Prokofev :
> Hello.
>
> Today we've experienced a complete CEPH cluster outage - total loss of
> power in the whole infrastructure.
> 6 osd nodes and 3 monitors went down at the same time. CEPH 14.2.10
>
> This resulted in unfound objects, which were "reverted" in a hurry with
Hello.
Today we've experienced a complete CEPH cluster outage - total loss of
power in the whole infrastructure.
6 osd nodes and 3 monitors went down at the same time. CEPH 14.2.10
This resulted in unfound objects, which were "reverted" in a hurry with
ceph pg mark_unfound_lost revert
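That recovery step can be scripted. A minimal sketch: parse the `ceph health detail` text for PGs whose state includes "unfound" and emit the corresponding revert command. The line format shown in the sample is an assumption based on Nautilus-era output; verify it against your release before relying on it.

```python
import re

# Assumed `ceph health detail` line format (Nautilus-era; verify on your release):
#   pg 2.4 is active+recovery_unfound+degraded, acting [0,1,2], 1 unfound
UNFOUND_RE = re.compile(r"^\s*pg (\S+) is \S*unfound\S*", re.MULTILINE)

def unfound_pgs(health_detail):
    """Return the ids of PGs whose reported state includes 'unfound'."""
    return UNFOUND_RE.findall(health_detail)

sample = """\
HEALTH_ERR 2/1488 objects unfound (0.134%)
OBJECT_UNFOUND 2/1488 objects unfound (0.134%)
    pg 2.4 is active+recovery_unfound+degraded, acting [0,1,2], 1 unfound
    pg 2.5 is active+recovery_unfound+degraded, acting [2,0,1], 1 unfound
"""

for pgid in unfound_pgs(sample):
    print(f"ceph pg {pgid} mark_unfound_lost revert")
```

Note that `mark_unfound_lost revert` discards writes to the unfound objects, so review the PG list before running the generated commands.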
In
ere
> https://github.com/digitalocean/ceph_exporter
> useful. Note that there are multiple branches, which can be confusing.
>
> > On Jun 15, 2021, at 4:21 PM, Vladimir Prokofev wrote:
> >
> > Good day.
> >
> > I'm writing some code for parsing output data for mo
Good day.
I'm writing some code for parsing output data for monitoring purposes.
The data is that of "ceph status -f json", "ceph df -f json", "ceph osd
perf -f json" and "ceph osd pool stats -f json".
I also need to support all major CEPH releases, from Jewel through
Pacific.
What I've
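One version-tolerance wrinkle worth planning for: the JSON layout of `ceph status` changed across releases. A minimal sketch, assuming the nested `osdmap.osdmap` wrapper used by older releases and the flattened layout in newer ones; treat the exact keys and the cutover release as assumptions to verify against each version you target:

```python
def osd_counts(status):
    """Return (num_osds, num_up_osds) from a parsed `ceph status -f json` dict,
    tolerating both the nested (older) and flattened (newer) osdmap layouts."""
    osdmap = status["osdmap"]
    if "osdmap" in osdmap:  # older releases nest the map one level deeper
        osdmap = osdmap["osdmap"]
    return osdmap["num_osds"], osdmap["num_up_osds"]

# Trimmed samples of both layouts (illustrative numbers):
nautilus = {"osdmap": {"osdmap": {"num_osds": 18, "num_up_osds": 18}}}
pacific = {"osdmap": {"num_osds": 18, "num_up_osds": 17}}
```

Normalizing the layout at the parsing boundary like this keeps the rest of the monitoring code release-agnostic.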
Hello.
I'm trying to write some Python code to analyze my RBD images' storage
usage; rbd and rados package versions are 14.2.16.
Basically I want the same data that I can acquire from the shell commands
'rbd du <image>' and 'rbd info <image>', but through the Python API.
At the moment I can connect to the cluster,
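For the `rbd du` part, librbd's `Image.diff_iterate` reports which extents of an image are actually allocated, and summing those gives the used size. A sketch: the allocation math below is testable without a cluster, and the cluster-side calls are outlined in comments, with the pool and image names as placeholders.

```python
def used_bytes(extents):
    """Sum the lengths of extents that exist (are allocated), as `rbd du` does.
    Each extent is an (offset, length, exists) tuple."""
    return sum(length for (_off, length, exists) in extents if exists)

# Against a live cluster (rbd/rados 14.x Python bindings) it would look
# roughly like this:
#
#   import rados, rbd
#   cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
#   cluster.connect()
#   ioctx = cluster.open_ioctx("rbd")            # pool name is a placeholder
#   with rbd.Image(ioctx, "myimage") as image:   # image name is a placeholder
#       extents = []
#       image.diff_iterate(0, image.size(), None,
#                          lambda off, length, exists: extents.append((off, length, exists)))
#       print(image.size(), used_bytes(extents))

# Illustrative extent list: two allocated extents around a hole
sample_extents = [(0, 4194304, True), (4194304, 4194304, False), (8388608, 2097152, True)]
```

`rbd info` fields such as size, object size, and features come from `image.stat()` and related accessors on the same `Image` handle.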
Hi.
Just want to note that if you google for ceph python lib examples, it
leads to a 404:
https://www.google.ru/search?hl=ru&q=ceph+python+rbd
https://docs.ceph.com/en/latest/rbd/api/librbdpy/
Some 3rd-party sites and the Chinese version work fine though:
http://docs.ceph.org.cn/rbd/librbdpy/
Just shooting in the dark here, but you may be affected by a similar issue
I had a while back; it was discussed here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZOPBOY6XQOYOV6CQMY27XM37OC6DKWZ7/
In short - they've changed the bluefs_buffered_io setting to false in the
recent
ead make on OSD disk?
> Is it rand. read on OSD disk for both cases?
> Then how to explain the performance difference between seq. and rand.
> read inside a VM? (seq. read IOPS is 20x that of rand. read, Ceph is
> with 21 HDDs on 3 nodes, 7 on each)
>
> Thanks!
> Tony
> > ---
Not exactly. You can also tune network/software.
Network - go for lower-latency interfaces. If you have 10G, go to 25G or
100G. 40G will not do though; afaik it's just 4x10G, so its latency is
the same as 10G.
Software - it's closely tied to your network card queues and processor
cores. In
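The reason latency dominates here: at queue depth 1 a client can only issue the next op after the previous one completes, so IOPS is bounded by the reciprocal of per-op round-trip latency. A back-of-the-envelope sketch; the budget numbers are illustrative assumptions, not measurements:

```python
def qd1_iops(per_op_latency_s):
    """At queue depth 1 each op must complete before the next starts,
    so IOPS is bounded by 1 / per-op latency."""
    return 1.0 / per_op_latency_s

# Illustrative budget: 0.2 ms of network round trips + 0.8 ms OSD software path
budget = 0.0002 + 0.0008
print(qd1_iops(budget))  # ~1000 IOPS ceiling at 1 ms per op
```

Cutting 0.1 ms off the network leg in this budget raises the ceiling by roughly 10%, which is why interface latency matters more than raw bandwidth for small random I/O.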
ideas how safe that
procedure is? I suppose it should be safe since there was no change in the
actual data storage scheme?
Tue, 4 Aug 2020 at 14:33, Vladimir Prokofev :
> > What Kingston SSD model?
>
> === START OF INFORMATION SECTION ===
> Model Family: SandForce Driven SSDs
Local Time is: Tue Aug 4 14:31:36 2020 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Tue, 4 Aug 2020 at 14:17, Eneko Lacunza :
> Hi Vladimir,
>
> What Kingston SSD model?
>
> On 4/8/20 at 12:22, Vladimir Prokofev wrote:
> >
, Intel SSD journals are not that affected, though they too
experience increased load.
Nevertheless, there are now a lot of read IOPS on block.db devices after
the upgrade that were not there before.
I wonder how 600 IOPS can hurt SSD performance that badly.
Tue, 4 Aug 2020 at 12:54, Vladimir Prokofev
Good day, cephers!
We've recently upgraded our cluster from 14.2.8 to 14.2.10, also
performing a full system package upgrade (Ubuntu 18.04 LTS).
After that, performance dropped significantly, the main reason being that
journal SSDs now have no merges, huge queues, and increased latency.
want to host up to 3 levels of
> rocksdb in the SSD.
>
> Thanks,
> Orlando
>
> -----Original Message-----
> From: Igor Fedotov
> Sent: Wednesday, February 5, 2020 7:04 AM
> To: Vladimir Prokofev ; ceph-users@ceph.io
> Subject: [ceph-users] Re: Fwd: BlueFS spillover
Cluster upgraded from 12.2.12 to 14.2.5. All went smoothly, except a
BlueFS spillover warning.
We create OSDs with ceph-deploy, command goes like this:
ceph-deploy osd create --bluestore --data /dev/sdf --block-db /dev/sdb5
--block-wal /dev/sdb6 ceph-osd3
where block-db and block-wal are SSD
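For context on why spillover appears even with a block-db partition present: rocksdb keeps a level on the fast device only if the whole level fits, so useful db sizes come in steps. A rough sizing sketch, assuming the default max_bytes_for_level_base of 256 MB and level multiplier of 10; both are tunables, so treat these values as assumptions:

```python
def rocksdb_levels_bytes(levels, base=256 * 2**20, multiplier=10):
    """Approximate bytes needed to hold the first `levels` rocksdb levels
    entirely on the block-db device (L1 = base, each next level x multiplier)."""
    return sum(base * multiplier ** i for i in range(levels))

for n in (1, 2, 3):
    print(n, "levels:", rocksdb_levels_bytes(n) / 2**30, "GiB")
```

Three levels come to 111 x 256 MiB, a little under 28 GiB, which matches the commonly cited advice to size block-db at roughly 30 GB (and around 300 GB if you want a fourth level); a partition between those steps leaves the extra space unused and the next level spills to the slow device.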