Hello,
I tried to upgrade one node in my cluster to Pacific on Ubuntu 20.04.2 with
kernel 5.8.0-49. After the upgrade the OSDs won't start, giving the error
message below. This non-cephadm cluster is running BlueStore OSDs, many of
which were created under Luminous.
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 blues
Hello,
+1. I am facing the same problem on Ubuntu after upgrading to Pacific.
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -
:07 PM Andrew Walker-Brown <andrew_jbr...@hotmail.com> wrote:
> Are the file permissions correct, and are the UID/GID for the ceph user in passwd both 167?
>
> Sent from my iPhone
>
> On 4 Apr 2021, at 12:29, Lomayani S. Laizer wrote:
>
> Hello,
>
> +1. I am facing the same problem on Ubuntu after upgrading to Pacific.
the current values and change them to 167 using usermod and groupmod.
> > > >
> > > > I had just this issue. It's partly to do with how permissions are used
> > > > within the containers, I think.
> > > >
> > > > I changed the
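To make the permission check above concrete, here is a minimal sketch of the steps Andrew describes (the OSD path ceph-29 is taken from the log earlier in the thread; adjust paths and stop the Ceph daemons before changing IDs):

```shell
# Verify that the ceph user and group are both 167, as the Ceph
# packages (and the official containers) expect.
id ceph

# If they differ, change them to 167 as suggested above.
usermod -u 167 ceph
groupmod -g 167 ceph

# Re-own the OSD data directory and the device its block symlink
# points at, e.g. for the OSD from the log above.
chown -R ceph:ceph /var/lib/ceph/osd/ceph-29
chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-29/block)"
```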
Hello,
Last week I upgraded my production cluster to Pacific. The cluster was
healthy until a few hours ago.
A scrub that ran 4 hours ago left the cluster in an inconsistent state. I
then issued the command ceph pg repair 7.182 to try to repair the PG, but
it ended up active+recovery_unfound+degraded.
) flags 0x0 phys_seg 16 prio class 0
On Thu, Apr 29, 2021 at 12:24 AM Lomayani S. Laizer
wrote:
> Hello,
> Last week I upgraded my production cluster to Pacific. The cluster was
> healthy until a few hours ago.
> A scrub that ran 4 hours ago left the cluster in an inconsistent state. T
On 4/29/21 4:58 AM, Lomayani S. Laizer wrote:
> > Hello,
> >
> > Any advice on this? I am stuck because one VM is not working now. It looks
> > like there is a read error on the primary OSD (15) for this PG. Should I
> > mark OSD 15 down or out? Is there any risk in doing this?
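For a PG stuck in active+recovery_unfound+degraded, the usual inspection steps look roughly like this (a sketch using the PG id 7.182 from the thread; the last command is destructive and should only be run once you are sure the unfound objects cannot be recovered from any OSD):

```shell
# See which PGs report unfound objects and why.
ceph health detail
ceph pg 7.182 query

# List the unfound objects and the OSDs that might still hold them.
ceph pg 7.182 list_unfound

# Last resort only: give up on the unfound objects, either reverting
# each to its previous version ("revert") or forgetting it ("delete").
ceph pg 7.182 mark_unfound_lost revert
```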
Hello,
I might be hit by the same bug. After upgrading from Octopus to Pacific my
cluster is around 2-3 times slower.
I will try 16.2.6 when it is out.
On Fri, Sep 10, 2021 at 6:58 PM Igor Fedotov wrote:
> Hi Luis,
>
> there is some chance that you're hit by https://tracker.ceph.com/issues/52089.
> What
838520] device tap33511c4d-2c left promiscuous mode
> May 13 09:35:33 compute5 kernel: [123074.838527] brqa72d845b-e9: port
> 1(tap33511c4d-2c) entered disabled state
> May 13 09:35:33 compute5 networkd-dispatcher[1614]: Failed to request
> link: No such device
Hello Jason,
I confirm this release fixes the crashes. There has not been a single crash
in the past 4 days.
On Mon, Sep 14, 2020 at 2:55 PM Jason Dillaman wrote:
> On Mon, Sep 14, 2020 at 5:13 AM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> > Last week I got time to try de
Hello,
I upgraded a Nautilus cluster to Octopus a few days ago. The cluster was
running OK, and even after the move to Octopus everything kept running OK.
The issue came when I rebooted the servers to update the kernel. On two out
of the six OSD servers the OSDs can't start. No error is reported in
ceph-volume.l
coming online because I deleted the wrong
entry and then rebooted the server. I then ran ceph-volume lvm activate
--all after the reboot, but the OSDs can't come up.
Does anyone have a way around this?
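For anyone in the same situation, the activation state can be inspected roughly like this (a sketch; the OSD id 12 is a placeholder, use the ids ceph-volume reports):

```shell
# Show the LVs ceph-volume knows about and their OSD metadata.
ceph-volume lvm list

# Re-create the tmpfs mounts and systemd units for all detected OSDs.
ceph-volume lvm activate --all

# Then check an individual OSD service (id 12 is a placeholder).
systemctl status ceph-osd@12
journalctl -u ceph-osd@12 --since "10 min ago"
```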
On Tue, Mar 31, 2020 at 5:39 PM Lomayani S. Laizer
wrote:
> Hello,
> I have upgraded nautilus cluster to o
Hello,
After upgrading our Ceph cluster to Octopus a few days ago, we are seeing VM
crashes with the error below. We are using Ceph with OpenStack (Rocky).
Everything runs Ubuntu 18.04 with kernel 5.3. We are seeing these crashes on
busy VMs. This cluster was upgraded from Nautilus.
kernel: [430751.1769
Dillaman wrote:
> On Mon, Apr 6, 2020 at 3:55 AM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> >
> > After upgrading our Ceph cluster to Octopus a few days ago, we are seeing VM
> > crashes with the error below. We are using Ceph with OpenStack (Rocky).
> > Everythin
broken somewhere, but I have not figured
out where.
On Mon, Apr 6, 2020 at 5:08 PM Eugen Block wrote:
> Hi,
>
> did you manage to get all OSDs up (you reported issues some days ago)?
> Is the cluster in a healthy state?
>
>
> Quoting "Lomayani S. Laizer":
>
I had a similar problem when I upgraded to Octopus, and the solution was to
turn off auto-balancing.
You can try turning it off if it is enabled:
ceph balancer off
On Fri, Apr 24, 2020 at 8:51 AM Eugen Block wrote:
> Hi,
> the balancer is probably running; which mode? I changed the mode to
> none in our
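The balancer state can be checked and disabled roughly as follows (a minimal sketch of the commands discussed above):

```shell
# Show whether the balancer is active and which mode it uses
# (e.g. upmap or crush-compat).
ceph balancer status

# Disable it entirely, or just neutralise the mode.
ceph balancer off
ceph balancer mode none
```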
Hello,
On my side, the logs at the point of the VM crash are below. At the moment my
debug level is set to 10. I will raise it to 20 for full debug. These crashes
are random and so far happen on very busy VMs. After downgrading the clients
on the host to Nautilus, these crashes disappear.
Qemu is not shutting down in general b
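For reference, raising the librbd client debug level as described above can be done centrally through the monitor config database (a sketch, assuming an Octopus-or-later cluster; revert afterwards, since level 20 logging is very verbose):

```shell
# Raise librbd/librados logging for all clients; a QEMU guest picks
# the new level up on its next start.
ceph config set client debug_rbd 20
ceph config set client debug_rados 20

# Revert once the crash has been captured.
ceph config rm client debug_rbd
ceph config rm client debug_rados
```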
On Fri, May 8, 2020 at 5:40 AM Brad Hubbard wrote:
> On Fri, May 8, 2020 at 12:10 PM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> > On my side, the logs at the point of the VM crash are below. At the moment my
> debug level is set to 10. I will raise it to 20 for full debug. These crashes are