[ceph-users] Re: Multipath and cephadm

2021-12-24 Thread David Caro
I did not look very deep, but from the last log it seems there are some UTF-8 characters somewhere (a Greek phi?) and the code is not handling them well when logging, trying to use ASCII. On Thu, 23 Dec 2021, 19:02 Michal Strnad wrote: > Hi all. > > We have a problem using disks accessible via multipath. We

[ceph-users] Re: 16.2.6: clients being incorrectly directed to the OSDs cluster_network address

2021-09-28 Thread David Caro
the cephadm bootstrap command and not created by hand, and > > it worked before the upgrade/reboot so I am pretty confident with it. > > > > What do you think, can this be a bug or is it more a misconfiguration on my > > side? > > > > Thanks, > > Javier
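For this kind of symptom it helps to compare the addresses the OSDs actually registered with the networks the daemons are configured for. A minimal sketch with stock commands (these read the config database; options set only in ceph.conf may come back empty):

```
# Addresses each OSD registered in the osdmap
ceph osd dump | grep 'osd\.'

# Networks the OSDs are configured with
ceph config get osd public_network
ceph config get osd cluster_network
```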

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
tag, dem 16.08.2021 um 13:52 +0200 schrieb David Caro: > > Afaik the swapping behavior is controlled by the kernel, there might be > > some tweaks on the container engine side, but > > you might want to try to tweak the default behavior by lowering the > > 'vm.sw
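For reference, a minimal sketch of the vm.swappiness tweak discussed here (10 is only an example value; the kernel default is 60):

```
# Check the current value
sysctl vm.swappiness

# Lower it at runtime
sysctl -w vm.swappiness=10

# Persist it across reboots (the file name is arbitrary)
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
```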

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
> Is that a known behavior, a bug or a configuration problem? On two hosts I > turned off swap and the OSDs are running happily > now for more than 6 weeks. > > Best, > Alex > > _______ > ceph-users mailing list -- ceph-users@ceph.io

[ceph-users] Re: All OSDs on one host down

2021-08-06 Thread David Caro
d I’ve got the systemctl command right. > > > > You are not mixing 'not container commands' with 'container commands'. As in, > if you execute this journalctl outside of the container it will not find > anything of course. > > > __
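To avoid the container/host confusion, a sketch of both ways to get at the daemon logs (osd.1 is a placeholder; the cephadm systemd unit name embeds the cluster fsid):

```
# Let cephadm find the right journal for the daemon
cephadm logs --name osd.1

# Or query the systemd unit on the host directly
journalctl -u ceph-<fsid>@osd.1.service
```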

[ceph-users] Re: ceph and openstack throttling experience

2021-06-10 Thread David Caro
phs in specific are you looking? > > Regards > > Marcel > > David Caro schreef op 2021-06-10 11:49: > > We have a similar setup, way smaller though (~120 osds right now) :) > > > > We have different capped VMs, but most have 500 write, 1000 read iops
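For context, caps like these map to Nova flavor extra specs on the OpenStack side; a sketch with a hypothetical flavor name and the numbers from the thread (these are front-end libvirt iotune limits, applied per attached disk):

```
openstack flavor set m1.capped \
  --property quota:disk_read_iops_sec=1000 \
  --property quota:disk_write_iops_sec=500
```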

[ceph-users] Re: ceph and openstack throttling experience

2021-06-10 Thread David Caro
a side question, is there an easy way to get rid of the slow ops warning > besides restarting the involved osd? Otherwise the warning seems to stay > forever > > Regards > > Marcel > ___ > ceph-users mailing list -- ceph-users@ceph.
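For inspecting (rather than just waiting out) the warning, the admin socket exposes the offending ops; a sketch with a placeholder OSD id. Marking the OSD down forces re-peering without a full process restart and often clears a stale warning, but use it with care:

```
# Ops currently stuck on the OSD
ceph daemon osd.12 dump_ops_in_flight

# Slow ops the daemon remembers
ceph daemon osd.12 dump_historic_slow_ops

# Kick the OSD so peers re-peer with it (no process restart)
ceph osd down 12
```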

[ceph-users] Re: Integration of openstack to ceph

2021-06-10 Thread David Caro
users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro SRE - Cloud Services Wikimedia Foundation <https://wikimediafoundation.org/> PGP Signature: 7180 83A2 AC8B 314F B4CE 1171 4071 C7E1 D262 69C3 "Imagine a world in which every single human being

[ceph-users] Re: How to find out why osd crashed with cephadm/podman containers?

2021-05-06 Thread David Caro
> PG_DEGRADED: Degraded data redundancy: 132518/397554 objects degraded > (33.333%), 65 pgs degraded, 65 pgs undersized > > Thank you for your hints. > > Best regards, > Mabi > ___ > ceph-users mailing list -- ceph-users@cep
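With cephadm, the crash reports the cluster collects are the quickest way to see why a daemon died; a sketch (osd.3 and the crash id are placeholders):

```
# Crash reports collected by the cluster
ceph crash ls
ceph crash info <crash-id>

# Container logs for the daemon itself
cephadm logs --name osd.3
```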

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread David Caro
objects degraded (0.635%)
> 15676 active+clean
> 285 active+undersized+degraded+remapped+backfill_wait
> 230 incomplete
> 176 active+undersized+degraded+remapped+backfilling
> 8 down
> 6 peering
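A sketch of the usual triage for PGs that will not recover (the pg id is a placeholder; `query` shows what a PG is blocked on, e.g. which OSDs it is still probing):

```
# PGs stuck in problematic states
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# Ask one problematic PG what it is waiting for
ceph pg 2.5 query
```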

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread David Caro
sic.es > ID comunicate.csic.es: @50852720l:matrix.csic.es > *** > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro SRE - Cloud Services Wikimedia Foundation <https://wikimediafoundation.org/> PGP S

[ceph-users] Re: Cannot create issue in bugtracker

2021-05-03 Thread David Caro
cker for some day or two? > > https://tracker.ceph.com/issues/new > > > Best regards > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io > ___ > ceph-us

[ceph-users] Re: what-does-nosuchkey-error-mean-while-subscribing-for-notification-in-ceph

2021-04-16 Thread David Caro
___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro SRE - Cloud Services Wikimedia Foundation <https://wikimediafoundation.org/> PGP Signature: 7180 83A2 AC8B

[ceph-users] How to handle bluestore fragmentation

2021-04-15 Thread David Caro
Reading the thread "s3 requires twice the space it should use", Boris pointed out that the fragmentation for the osds is around 0.8-0.9: > On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens wrote: >> I also checked the fragmentation on the bluestore OSDs and it is around >> 0.80 - 0.89 on most
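For anyone wanting to check their own OSDs, the allocator exposes a fragmentation score over the admin socket; a sketch with a placeholder OSD id (the score runs from 0, unfragmented, to 1, fully fragmented):

```
# Fragmentation score of the bluestore block device for one OSD
ceph daemon osd.7 bluestore allocator score block
```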

[ceph-users] Re: v14.2.19 Nautilus released

2021-03-30 Thread David Caro
Thanks for the quick release! \o/ On Tue, 30 Mar 2021, 22:30 David Galloway, wrote: > This is the 19th update to the Ceph Nautilus release series. This is a > hotfix release to prevent daemons from binding to loopback network > interfaces. All nautilus users are advised to upgrade to this

[ceph-users] Re: Ceph 14.2.17 ceph-mgr module issue

2021-03-12 Thread David Caro
out what's the issue. On 03/12 16:33, Marc wrote: > > Python3? 14.2.11 is still supporting python2, I can't imagine that a minor > update has such a change. Furthermore, wasn't el7 officially supported? > > > > > -----Original Message----- > > From: David Caro

[ceph-users] Re: Ceph 14.2.17 ceph-mgr module issue

2021-03-12 Thread David Caro
rized deployments hit this issue as well. I will find that out > somewhere next week. > > FYI, > > Gr. Stefan > > [1]: https://tracker.ceph.com/issues/49770 > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send a

[ceph-users] Re: Metadata for LibRADOS

2021-03-04 Thread David Caro
RADOS? > Thank you > ___ > ceph-users mailing list -- ceph-users@ceph.io > To u

[ceph-users] Re: Need Clarification on Maintenance Shutdown Procedure

2021-03-02 Thread David Caro
Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdh...@binghamton.edu > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro SRE - Cloud Services Wikimedia Foun
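For reference, a minimal sketch of the flag dance usually recommended for a full-cluster maintenance shutdown:

```
# Before powering the hosts down
ceph osd set noout
ceph osd set norebalance

# ...maintenance, then bring everything back up...

# Once all OSDs have rejoined
ceph osd unset norebalance
ceph osd unset noout
```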

[ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses

2021-01-20 Thread David Caro
y would be nice. > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro SRE - Cloud Services Wikimedia Foundation <https://wikimediafoundation.org/> PGP Signature: 7180 83A2 AC8B 314F B4CE 1171 4071 C7E1 D262 6

[ceph-users] Re: Increase number of objects in flight during recovery

2020-12-03 Thread David Caro
And the pool dump gets huge. > > I would take a look at iostat output for those OSD drives and see if there > are 8 iops or lots more actually. > > -- > May the most significant bit of your life be positive. > ___ > ceph-users m
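The knobs that control how many objects are in flight during recovery are per-OSD options; a sketch (the values are examples, not recommendations — the defaults are deliberately conservative):

```
# More concurrent backfill reservations per OSD
ceph config set osd osd_max_backfills 4

# More active recovery ops per OSD
ceph config set osd osd_recovery_max_active 8
```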

[ceph-users] Re: Misleading error (osd has already bound to class) when starting osd on nautilus?

2020-11-25 Thread David Caro
> Thanks > > On Wed, Nov 25, 2020 at 4:03 PM David Caro wrote: > > > > > Yep, you are right: > > > > ``` > > # cat /sys/block/sdd/queue/rotational > > 1 > > ``` > > > > I was looking at the code too but you got there before me :) >

[ceph-users] Misleading error (osd has already bound to class) when starting osd on nautilus?

2020-11-25 Thread David Caro
Does anyone know what that error is about? Thanks! -- David Caro SRE - Cloud Services Wikimedia Foundation <https://wikimediafoundation.org/> PGP Signature: 7180 83A2 AC8B 314F B4CE 1171 4071 C7E1 D262 69C3 "Imagine a world in which every single human being can freely share in the s
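For completeness: the device class is sticky once assigned, so the usual way around the error is to drop it before setting a new one (osd.5 is a placeholder; sdd comes from the thread above):

```
# 1 = rotational (hdd), 0 = ssd
cat /sys/block/sdd/queue/rotational

# The class must be removed before a different one can be set
ceph osd crush rm-device-class osd.5
ceph osd crush set-device-class ssd osd.5
```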

[ceph-users] Re: Monitor persistently out-of-quorum

2020-10-29 Thread David Caro
** Compaction Stats [default] **
> Priority  Files  Size  Score  Read(GB)  Rn(GB)  Rnp1(GB)  Write(GB)  Wnew(GB)  Moved(GB)  W-Amp  Rd(MB/s)  Wr(MB/s)  Comp(sec)  CompMergeCPU(sec)  Comp(cnt)  Avg(sec)  KeyIn  KeyDrop
> ---
> User  0/0  0.00 KB  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.4  0.00  0.00  1  0.001  0  0
> Uptime(secs): 0.0 total, 0.0 interval
> Flush(GB): cumulative 0.000, interval 0.000
> AddFile(GB): cumulative 0.000, interval 0.000
> AddFile(Total Files): cumulative 0, interval 0
> AddFile(L0 Files): cumulative 0, interval 0
> AddFile(Keys): cumulative 0, interval 0
> Cumulative compaction: 0.00 GB write, 0.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
> Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
>
> ** File Read Latency Histogram By Level [default] **
>
> 2020-10-28 17:17:13.253 7eff1f7cd1c0 0 mon.mgmt03 does not exist in monmap, will attempt to join an existing cluster
> 2020-10-28 17:17:13.254 7eff1f7cd1c0 0 using public_addr v2:10.2.1.1:0/0 -> [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0]
> 2020-10-28 17:17:13.254 7eff1f7cd1c0 0 starting mon.mgmt03 rank -1 at public addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] at bind addrs [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] mon_data /var/lib/ceph/mon/ceph-mgmt03 fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2 preinit fsid 374aed9e-5fc1-47e1-8d29-4416f7425e76
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2 initial_members mgmt01,mgmt02,mgmt03, filtering seed monmap
> 2020-10-28 17:17:13.256 7eff1f7cd1c0 1 mon.mgmt03@-1(???) e2 preinit clean up potentially inconsistent store state
> 2020-10-28 17:17:13.258 7eff1f7cd1c0 0 -- [v2:10.2.1.1:3300/0,v1:10.2.1.1:6789/0] send_to message mon_probe(probe 374aed9e-5fc1-47e1-8d29-4416f7425e76 name mgmt03 new mon_release 14) v7 with empty dest
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
-- David Caro
___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
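A quick cross-check for a mon stuck out of quorum (mon.mgmt03 comes from the log above): compare the monmap the quorum agrees on with the stray daemon's own view of itself:

```
# The monmap the quorum agrees on (mgmt03 should be listed)
ceph mon dump

# What the stray mon itself thinks, via its admin socket
ceph daemon mon.mgmt03 mon_status
```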

[ceph-users] Re: OSD down, how to reconstruct it from its main and block.db parts ?

2020-10-28 Thread David Caro
Python scripts of ceph-volume, I noticed that tmpfs is mounted > during the run of "ceph-volume lvm activate", > and "ceph-bluestore-tool prime-osd-dir" is started from the same script > afterwards. > Should I try starting "ceph-volume lvm activate"

[ceph-users] Re: OSD down, how to reconstruct it from its main and block.db parts ?

2020-10-26 Thread David Caro
under "/var/lib/ceph/osd/" is a tmpfs mount >point filled with appropriate files and symlinks, except of >"/var/lib/ceph/osd/ceph-1", >which is just an empty folder not mounted anywhere. >I tried to run > >"ceph-bluestore-tool prime-osd-dir --dev >/dev/ceph-e53b65ba-5eb0-44f5-9160-a2328f787a0f/osd-block-8c6324a3-0364-4fad-9dcb-81a1661ee202 >--path >/var/lib/ceph/osd/ceph-1" > >it created some files under /var/lib/ceph/osd/ceph-1 but without tmpfs >mount, and these files belonged to root. I changed owner of these files >into ceph.ceph , >I created appropriate symlinks for block and block.db but ceph-osd@1 >did not want to start either. Only "unable to find keyring" messages >disappeared. > >Please give any help on where to move next. >Thanks in advance for your help. >___ >ceph-users mailing list -- ceph-users@ceph.io >To unsubscribe send an email to ceph-users-le...@ceph.io -- David Caro ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-06 Thread David Caro
The hints have to be given from the client side as far as I understand, can you share the client code too? Also, it seems that there are no guarantees that it will actually do anything (best effort, I guess): https://docs.ceph.com/docs/mimic/rados/api/librados/#c.rados_set_alloc_hint Cheers On 6
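Related, and sidestepping client hints entirely: with a passive pool only writes hinted as compressible get compressed, while aggressive compresses everything not explicitly hinted against. A sketch with a hypothetical pool name:

```
# Compress all writes, not just the ones the client hints as compressible
ceph osd pool set rbox-data compression_mode aggressive

# Verify the setting
ceph osd pool get rbox-data compression_mode
```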