We have also had this issue multiple times in 14.2.11
On Tue, Dec 8, 2020, 5:11 PM wrote:
> I have the same issue. My cluster is running 14.2.11. What is your
> Ceph version?
Marc;
As if that's not enough confusion (from the FAQ):
"Security issues will be updated in CentOS Stream after they are solved in the
current RHEL release. Obviously, embargoed security releases can not be
publicly released until after the embargo is lifted."
Thank you,
Dominic L. Hilsbos,
I am confused about that page.
"Does this mean that CentOS Stream is the RHEL BETA test platform now?"
"No, CentOS Stream will be getting fixes and features ahead of RHEL"
However, this is how Wikipedia describes a beta:
Beta version software is often useful for demonstrations and previews
Sorry, my cluster is 14.2.11.
I have the same issue. My cluster is running 14.2.11. What is your Ceph
version?
I think you should open an issue on the Ceph tracker, as it seems the cephadm
upgrade workflow doesn't support multi-arch container images.
docker.io/ceph/ceph:v15.2.7 is a manifest list [1] which, depending on the host
architecture (x86_64 or ARMv8), will provide the right container image.
As far as I know, the issue isn't specific to container deployments;
deployments using packages (rpm or deb) are also affected (at least on CentOS 8
and Ubuntu 20.04 Focal).
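For reference, you can confirm that a tag is a manifest list before upgrading;
a quick sketch (docker manifest inspect may need the experimental CLI on older
Docker versions):

  docker manifest inspect docker.io/ceph/ceph:v15.2.7
  skopeo inspect --raw docker://docker.io/ceph/ceph:v15.2.7

Both print the per-architecture manifests the tag resolves to.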
Marc,
That video may be out of date.
https://centos.org/distro-faq/#q6-will-there-be-separateparallelsimultaneous-streams-for-8-9-10-etc
--
Adam
On Tue, Dec 8, 2020 at 3:50 PM wrote:
>
> Marc;
>
> I'm not happy about this, but Red Hat is suggesting that those of us running
> CentOS in
Marc;
I'm not happy about this, but Red Hat is suggesting that those of us running
CentOS in production should move to CentOS Stream. As such, I need to
determine whether the software I'm running on top of it can run on Stream.
Thank you,
Dominic L. Hilsbos, MBA
Director - Information
For Ceph, this is fortunately not a major issue. Drive failures are
considered entirely normal, and Ceph will automatically rebuild your data
from redundancy onto a new replacement drive. If you're able to predict the
imminent failure of a drive, adding a new drive/OSD will automatically
start
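A minimal sketch of how to watch that rebuild once the replacement OSD is in
(standard ceph CLI, nothing cluster-specific assumed):

  ceph status          # recovery/backfill progress shows in the io section
  ceph osd tree        # confirm the new OSD is up and in
  ceph health detail   # per-PG detail while objects are degraded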
I did not. Thanks for the info. But if I understand this [1] explanation
correctly, CentOS Stream is some sort of trial environment for RHEL. So
who is ever going to put SDS on such an OS?
Last post on this blog "But if you read the FAQ, you also learn that
once they start work on RHEL 9,
All;
As you may or may not know, this morning Red Hat announced the end of CentOS as
a rebuild distribution [1]. "CentOS" will be retired in favor of the recently
announced "CentOS Stream."
Can Ceph be installed on CentOS Stream?
Since CentOS Stream is currently at 8, the question really is:
Destroy this OSD, replace disk, deploy OSD.
k
Sent from my iPhone
> On 8 Dec 2020, at 15:13, huxia...@horebdata.cn wrote:
>
> Hi, dear cephers,
>
> On one Ceph node I have a failing disk whose SMART information signals an
> impending failure but which is still available for reads and writes. I am
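For reference, a sketch of that destroy/replace/deploy sequence with
ceph-volume, assuming a hypothetical OSD id 12 and replacement device /dev/sdX:

  systemctl stop ceph-osd@12                   # take the failing OSD down
  ceph osd destroy 12 --yes-i-really-mean-it   # keeps the id and cephx key for reuse
  ceph-volume lvm zap /dev/sdX --destroy       # wipe the replacement device
  ceph-volume lvm create --osd-id 12 --data /dev/sdX

Reusing the id means CRUSH barely changes, so only the failed OSD's PGs move.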
We rebuilt all of the mons in one cluster so that they bind only to port 3300
with msgr2. Previously, we were binding to both 6789 and 3300. All of our
server and client components are sufficiently new (14.2.x) and we haven't
observed any disruption, but I am inquiring whether this may be
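For anyone wanting to try the same, a sketch of the msgr2-only setup
(hypothetical mon name and address; adapt to your monmap):

  ceph mon set-addrs a [v2:10.0.0.1:3300]    # advertise a v2-only endpoint
  ceph config set mon ms_bind_msgr1 false    # stop binding the legacy 6789 port

Clients then need a v2-capable mon_host list, e.g. mon_host = [v2:10.0.0.1:3300].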
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.7.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to
Hi Ken,
Thank you for the update! As per:
https://github.com/ceph/ceph-container/issues/1748
We implemented the suggested change (dropping the ulimit to 1024:4096 for the
mgr) last night, and on our test cluster of 504 OSDs, being polled by both the
internal Prometheus and our external instance, the mgrs
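For anyone wanting to apply the same limit outside that image, a sketch of
passing it to the container runtime (hypothetical invocation; adapt to your
existing mgr command line):

  podman run -d --net=host \
    --ulimit nofile=1024:4096 \
    -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
    docker.io/ceph/daemon:latest-nautilus mgr

The same --ulimit flag works with docker run.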
Hi all,
Just out of curiosity: considering that vector machines are being used in HPC
applications to accelerate certain kernels, do you think there are some
workloads in Ceph that could be good candidates to be offloaded to and
accelerated on vector machines?
Thanks in advance.
BR
Hi Eric & Matt,
I'm working on this again, and was able to reproduce with a versioned
test bucket in v14.2.11. I put a test file "passwd", then deleted it,
then let the lc trim the versions. The exact lc and resulting bi list
are at: https://stikked.web.cern.ch/stikked/view/raw/cc748686
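For anyone wanting to poke at the same state, a sketch of the reproduction and
inspection steps (hypothetical bucket name; radosgw-admin run on an RGW node):

  s3cmd put passwd s3://testbucket/passwd      # create a version
  s3cmd rm s3://testbucket/passwd              # leaves a delete marker
  radosgw-admin lc process                     # force lifecycle to run now
  radosgw-admin bi list --bucket=testbucket    # dump the raw bucket index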
Thanks a lot. I got it.
huxia...@horebdata.cn
From: Janne Johansson
Date: 2020-12-08 13:38
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: [ceph-users] How to copy an OSD from one failing disk to another
one
"ceph osd set norebalance" "ceph osd set nobackfill"
Add new OSD, set osd
"ceph osd set norebalance" "ceph osd set nobackfill"
Add new OSD, set osd weight to 0 on old OSD
unset the norebalance and nobackfill options,
and the cluster will do it all for you.
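Putting that together, a minimal sketch (osd.12 as a hypothetical id for the
failing OSD):

  ceph osd set norebalance
  ceph osd set nobackfill
  # ... create the new OSD on the same node ...
  ceph osd crush reweight osd.12 0   # or: ceph osd reweight osd.12 0
  ceph osd unset norebalance
  ceph osd unset nobackfill

The cluster then backfills the old OSD's PGs onto the rest of the cluster,
including the new disk.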
On Tue, 8 Dec 2020 at 13:13, huxia...@horebdata.cn <
huxia...@horebdata.cn> wrote:
> Hi, dear cephers,
>
> On
FOSDEM is a free software event that offers open source communities a place to
meet, share ideas and collaborate. It is well known for being highly
developer-oriented and in the past brought together 8000+ participants from all
over the world. Its home is in the city of Brussels (Belgium).
Hi, dear cephers,
On one Ceph node I have a failing disk whose SMART information signals an
impending failure but which is still available for reads and writes. I am
setting up a new disk on the same node to replace it.
What is the best procedure to migrate (or copy) data from the failing OSD to
the new
Wow! Distributed epins :) Thanks for trying it. How many
sub-directories under the distributed epin'd directory? (There are a lot
of stability problems that are to be fixed in Pacific associated with
lots of subtrees so if you have too large of a directory, things could
get ugly!)
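For context, the distributed ephemeral pin is set per directory via an xattr;
a sketch assuming the filesystem is mounted at a hypothetical /mnt/cephfs:

  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home
  ceph fs status    # watch subtrees spread across the active MDS ranks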
Yay, beta