they are still reporting as version 12.2.13 because they were not up
during either of the upgrades.
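In case it is useful to anyone else, this is roughly how I would expect those stale
daemons to pick up the new version (a rough sketch; osd.12 is just an example id and
this assumes the OSDs are managed by systemd):

    # on the host that owns the stale daemon
    systemctl restart ceph-osd@12

    # confirm what the daemon is actually running
    ceph tell osd.12 version
    ceph versions        # per-daemon-type version counts for the whole cluster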
Thank you.
Shain
On 7/29/21, 6:43 PM, "Shain Miley" wrote:
Hello,
I recently upgraded our Luminous ceph cluster to Nautilus. Everything
seemed to go well.
Today I started [...] this upgrade (48 of the 222 OSDs have been upgraded) and I
would like to continue with the upgrade, but I do not want to proceed if there is
a larger issue of some sort.
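For anyone following along, a quick sketch of how the mixed-version state can be
checked before continuing (just the standard status/version queries, nothing
specific to our setup):

    ceph -s                                    # overall health while the upgrade is in flight
    ceph osd versions                          # how many OSDs are running each release
    ceph osd dump | grep require_osd_release   # which release the OSDs are currently pinned to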
Some of the hosts are showing the correct version (5.2.13) in the Dashboard and
I am not sure why the dashboard would
-Original Message-
From: Shain Miley [mailto:smi...@npr.org]
Sent: Friday, July 23, 2021 10:48 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Luminous won't fully recover
We re
is active+undersized+degraded, acting [215,201]
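For reference, a rough sketch of how a PG in this state can be inspected (2.1ab
below is only a placeholder id, not the real one):

    ceph health detail          # lists the degraded/undersized PGs and their acting sets
    ceph pg dump_stuck unclean  # PGs that have not gone clean
    ceph pg 2.1ab query         # detailed peering/recovery state for one PG
    ceph osd tree               # check whether the missing third OSD is down or out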
--
Thanks,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
assignment.
I understand what you mean about not focusing on the OSD ids... but my OCD is
making me ask the question.
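For what it's worth, the one case where I do care about the id is when replacing a
disk; my understanding is that a destroyed id can be reused along these lines (the
id and device below are only examples, not from our cluster):

    ceph osd destroy 17 --yes-i-really-mean-it           # keeps the id and CRUSH entry, marks the OSD destroyed
    ceph-volume lvm create --osd-id 17 --data /dev/sdx   # recreate the replacement OSD with the same id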
Thanks,
Shain
On 9/11/20, 9:45 AM, "George Shuklin" wrote:
On 11/09/2020 16:11, Shain Miley wrote:
> Hello,
> I have been wondering for quite s
assignment.
I am currently using ceph-deploy to handle adding nodes to the cluster.
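As far as I can tell Ceph simply hands out the lowest unused id, so a quick way to
see which ids are already taken before adding nodes is (a rough sketch):

    ceph osd ls      # every osd id currently allocated
    ceph osd tree    # the same ids laid out by host/CRUSH bucket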
Thanks in advance,
Shain
Shain Miley | Director of Platform and Infrastructure | Digital Media |
smi...@npr.org
to bluestore and that this is really nothing
to worry about.
Thanks,
Shain
On 9/9/20, 11:16 AM, "Shain Miley" wrote:
Hi,
I recently added 3 new servers to our Ceph cluster. These servers use the
H740p mini RAID card and I had to install the HWE kernel in Ubuntu 16.04 in
order
Is this normal for deployments going forward…or did something go wrong? These
are 12 TB drives but they are showing up as 47G here instead.
We are using ceph version 12.2.13 and I installed this using ceph-deploy version
2.0.1.
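A rough sketch of a few ways to double-check what capacity the OSDs actually see
(the device name below is only an example):

    ceph osd df tree        # per-OSD size and utilisation as the cluster sees it
    ceph-volume lvm list    # run on the OSD host: the LV/device backing each OSD
    lsblk /dev/sdb          # raw size of the underlying disk (example device)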
Thanks in advance,
Shain
Shain Miley | Director of Platform
20 at 6:21 PM Shain Miley wrote:
>
> Hi,
> A few weeks ago several of our rbd images became unresponsive after a few
> of our OSDs reached a near-full state.
>
> Another member of the team rebooted the server that the rbd images are
> mounted on in an attempt
Aug 31 11:47:06 rbd1 kernel: [2159048.204440] R10: c0ed0c00 R11: 0206 R12: 02424230
Aug 31 11:47:06 rbd1 kernel: [2159048.204441] R13: 02424210 R14: R15: 0003
Any suggestions on what I can/should do next?
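In case it helps frame suggestions, this is roughly where I would start looking;
the ids and ratios below are only illustrative, and raising the full ratio is of
course just a short-term escape hatch, not a fix:

    ceph health detail              # which OSDs are nearfull/full and which PGs/pools are affected
    ceph osd df                     # per-OSD utilisation, to find the outliers
    ceph osd reweight 42 0.85       # example only: push data off one overfull OSD
    ceph osd set-full-ratio 0.96    # example only: temporary headroom so blocked I/O can drain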
Thanks in advance,
Shain
Shain Miley | D
Hi,
We are thinking about upgrading our cluster, which is currently running ceph version
12.2.12. I am wondering if we should be looking at upgrading to the latest
version of Mimic or the latest version of Nautilus.
Can anyone here please provide a suggestion… I continue to be a little bit
confused about
something that is flexible
enough for our environment going forward.
Thanks in advance,
Shain
--
NPR | Shain Miley | Manager of Infrastructure, Digital Media | smi...@npr.org |
202.513.3649