Hi,
we're on v14.2.4 and nothing but that. All clients and servers run Ubuntu
18.04 LTS with kernel 5.0.0-20.
We're seeing this error:
MountVolume.WaitForAttach failed for volume
"pvc-45a86719-edb9-11e9-9f38-02000a030111" : fail to check rbd image status
with: (exit status 110), rbd output: (2019
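For what it's worth, exit status 110 is ETIMEDOUT: the "rbd status" call the
kubelet runs to check for watchers timed out against the cluster. A hedged
way to reproduce the check by hand from the affected node; the pool and image
names below are placeholders, only the PVC id comes from the error:

ceph -s                    # can this node reach the mons at all?
rbd status <pool>/<image>  # the watcher check itself; does it hang here too?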
Excuse the top-posting.
When looking at the logs it helps to filter by the actual thread that crashed.
$ grep 7f08af3b6700 ceph-osd.27.log.last.error.txt|tail -15
-1001> 2019-10-30 12:55:41.498823 7f08af3b6700 1 --
129.20.199.93:6803/977508 --> 129.20.199.7:0/2975967502 --
osd_op_reply(28304673
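In case it helps others read such logs: the thread id to grep for is
typically the one ceph prints in the crash header ("** Caught signal ... in
thread 7f08af3b6700 ..."), so the full recipe, with the file name from the
post, is roughly:

grep -m1 'in thread' ceph-osd.27.log.last.error.txt             # find the crashing thread id
grep 7f08af3b6700 ceph-osd.27.log.last.error.txt | tail -15     # its last actions before the crash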
On Wed, Oct 30, 2019 at 9:28 AM Jake Grimmett wrote:
>
> Hi Zheng,
>
> Many thanks for your helpful post, I've done the following:
>
> 1) set the threshold to 1024 * 1024:
>
> # ceph config set osd \
> osd_deep_scrub_large_omap_object_key_threshold 1048576
>
> 2) deep scrubbed all of the pgs on th
Thanks 潘东元 for the response.
The creation of a new pool works, and all the PGs corresponding to that
pool have active+clean state.
When I initially set up the 3-node Ceph cluster using juju charms (the
replication count per object was set to 3), there were issues with the
ceph-osd services. So I had to delete t
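A hedged sketch of double-checking the new pool's replication settings and
PG states with standard commands (the pool name is a placeholder):

ceph osd pool get <pool> size       # replica count, expected 3 here
ceph osd pool get <pool> min_size
ceph pg ls-by-pool <pool> | head    # should all show active+clean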
Thanks, Wido for the update.
Yeah, I have already tried a restart of ceph-mgr. But it didn't help.
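Besides a plain restart, failing the active mgr over to a standby sometimes
clears stuck state; a sketch with standard commands (the mgr name is whatever
"active_name" reports):

ceph mgr dump | grep active_name    # find the active mgr
ceph mgr fail <active-mgr>          # hand over to a standby, then re-check health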
On Wed, Oct 30, 2019 at 4:30 PM Wido den Hollander wrote:
>
>
> On 10/30/19 3:04 AM, soumya tr wrote:
> > Hi all,
> >
> > I have a 3 node ceph cluster setup using juju charms. ceph health shows
>
This is a tangent off Paul Emmerich's response to "[ceph-users] Correct
Migration Workflow Replicated -> Erasure Code". I've tried Paul's method
before to migrate between two data pools. However, I ran into some issues.
The first issue seems like a bug in RGW where the RGW for the new zone was
able to
Hi Zheng,
Many thanks for your helpful post, I've done the following:
1) set the threshold to 1024 * 1024:
# ceph config set osd \
osd_deep_scrub_large_omap_object_key_threshold 1048576
2) deep scrubbed all of the pgs on the two OSDs that reported "Large omap
object found." - these were all in p
The "best" health i was able to get was :
HEALTH_ERR norecover flag(s) set; 1733/37482459 objects misplaced (0.005%); 5
scrub errors; Possible data damage: 2 pgs inconsistent; Degraded data
redundancy: 7461/37482459 objects degraded (0.020%), 24 pgs degraded, 2 pgs
undersized
OSDMAP_FLAGS noreco
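With norecover set, the degraded/misplaced counts cannot drain, and the
inconsistent PGs need an explicit repair; a hedged sketch (pool and PG ids
are placeholders):

ceph osd unset norecover            # let recovery move the misplaced objects
rados list-inconsistent-pg <pool>   # find the PGs behind the 5 scrub errors
ceph pg repair <pgid>               # then repair each inconsistent PG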
I don't use ansible anymore. But this was my config for the host onode2:
./host_vars/onode2.yml:
lvm_volumes:
  - data: /dev/sdb
    db: '1'
    db_vg: host-2-db
  - data: /dev/sdc
    db: '2'
    db_vg: host-2-db
  - data: /dev/sde
    db: '3'
    db_vg: host-2-db
  - data: /dev/sdf
    db: '4'
    db_vg: host-2-db
Thus spake Brad Hubbard (bhubb...@redhat.com) on Wednesday 30 October 2019 at
12:50:50:
> Maybe you should set nodown and noout while you do these maneuvers?
> That will minimise peering and recovery (data movement).
As the commands don't take too long, I just had a few slow requests before
the osd
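For reference, the flags Brad mentions are set and cleared like this:

ceph osd set nodown
ceph osd set noout
# ...do the maintenance...
ceph osd unset nodown
ceph osd unset noout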
Hi Everyone,
Does anyone know how to specify the block-db and block-wal devices with ansible?
In ceph-deploy it is quite easy:
ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12 \
--block-wal /dev/sdn12 --bluestore
On my data nodes I have 12 HDDs and 2 SSDs. I use those SSDs for
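If it helps: ceph-ansible's lvm_volumes entries also take wal/wal_vg keys
next to db/db_vg, as in the host_vars example earlier in the thread. As far
as I remember, db/wal expect LV names inside the given VGs, so the SSDs have
to be prepared as LVM first. A hedged sketch; every name below is made up:

# host_vars/osd_host08.yml (hypothetical path; all LV/VG names are examples)
lvm_volumes:
  - data: /dev/sdl      # HDD used as the data device
    db: db-sdl          # LV for block.db on the SSD volume group
    db_vg: vg-ssd-db
    wal: wal-sdl        # LV for block.wal
    wal_vg: vg-ssd-wal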
Thanks a lot and sorry for the spam, I should have checked! We are on
18.04; the kernel is currently upgrading, so if you don't hear back from me
then it is fixed.
Thanks for the amazing support!
On Wed, 30 Oct 2019, 09:54 Lars Täuber wrote:
> Hi.
>
> Sounds like you use kernel clients with kerne
Hi.
Sounds like you use kernel clients with kernels from Canonical/Ubuntu.
Two kernels have a bug:
4.15.0-66
and
5.0.0-32
Updated kernels are said to have fixes.
Older kernels also work:
4.15.0-65
and
5.0.0-31
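A quick way to check what a client is running, and to pin a known-good
build (the package name is my guess at the 18.04 HWE naming):

uname -r
# if this prints 4.15.0-66 or 5.0.0-32, install the previous build, e.g.:
apt-get install linux-image-5.0.0-31-generic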
Lars
Wed, 30 Oct 2019 09:42:16 +
Bob Farrell ==> ceph-users :
> Hi. We are ex
Kernel bug due to a bad backport, see recent posts here.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, Oct 30, 2019 at 10:42 AM Bob Farrell wrote:
>
> Hi. W
Hi. We are experiencing a CephFS client issue on one of our servers.
ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus
(stable)
Trying to access, `umount`, or `umount -f` a mounted CephFS volume causes
my shell to hang indefinitely.
After a reboot I can remount the volumes
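When a kernel-client mount hangs like that, the client's debugfs files
usually show what it is blocked on; a sketch (the mountpoint is an example):

dmesg | tail                        # look for ceph/libceph timeout messages
cat /sys/kernel/debug/ceph/*/mdsc   # requests stuck at the MDS
cat /sys/kernel/debug/ceph/*/osdc   # requests stuck at the OSDs
umount -l /mnt/cephfs               # lazy detach at least frees the mountpoint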
Hi, I need to change set-require-min-compat-client to use upmap mode for the PG
balancer. Will this cause a disconnect of all clients? We're talking CephFS and
RBD images for VMs.
Or is it safe to switch that live?
Is safe.
k
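For the archives, the usual sequence checks the connected clients first; all
of these are standard CLI commands:

ceph features                                     # all clients must report luminous or newer
ceph osd set-require-min-compat-client luminous   # refuses if an older client is connected
ceph balancer mode upmap
ceph balancer on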
Hi, I need to change set-require-min-compat-client to use upmap mode for the PG
balancer. Will this cause a disconnect of all clients? We're talking CephFS and
RBD images for VMs.
Or is it safe to switch that live?
thanks
Yes, you were right: somehow there was an unusually high memory target set,
not sure where it came from. I set it back to normal now; that should fix
it, I guess.
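For reference, the knob in question is presumably osd_memory_target;
inspecting and resetting it looks like this (4 GiB is the shipped default):

ceph config get osd.0 osd_memory_target
ceph config set osd osd_memory_target 4294967296   # 4 GiB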
Thanks