[ceph-users] Pool Max Avail and Ceph Dashboard Pool Usage on Nautilus giving different percentages

2019-12-10 Thread David Majchrzak, ODERLAND Webbhotell AB
from in dashboard? My guess is that it comes from calculating: 1 - Max Avail / (Used + Max Avail) = 0.93 Kind Regards, David Majchrzak
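
A quick way to check that arithmetic against real numbers (a sketch with made-up values; take USED and MAX AVAIL for the pool from "ceph df detail", expressed in the same unit):

    USED=93; MAXAVAIL=7
    echo "$USED $MAXAVAIL" | awk '{ printf "%.0f%%\n", 100 * $1 / ($1 + $2) }'
    # prints 93%, i.e. 1 - MAXAVAIL / (USED + MAXAVAIL)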

Re: [ceph-users] Tuning Nautilus for flash only

2019-11-28 Thread David Majchrzak, ODERLAND Webbhotell AB
or sysctl or things like Wido suggested with c-states would make any differences. (Thank you Wido!) Yes, running benchmarks is great, and we're already doing that ourselves. Cheers and have a nice evening! -- David Majchrzak On tor, 2019-11-28 at 17:46 +0100, Paul Emmerich wrote: > Please don't
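
For reference, one common way to act on the C-state suggestion (a sketch; it assumes the cpupower tool is installed, and the right settings depend on the CPUs in question):

    cpupower idle-set -D 0                  # disable idle states deeper than the shallowest
    cpupower frequency-set -g performance   # pin the performance governor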

[ceph-users] Tuning Nautilus for flash only

2019-11-28 Thread David Majchrzak, ODERLAND Webbhotell AB
? We have 256GB of RAM on each OSD host, 8 OSD hosts with 10 SSDs on each. 2 osd daemons on each SSD. Raise ssd bluestore cache to 8GB? Workload is about 50/50 r/w ops running qemu VMs through librbd. So mixed block size. 3 replicas. Appreciate any advice! Kind Regards, -- David Majchrzak
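
For reference, the settings usually involved here (values purely illustrative; with 2 OSD daemons per SSD and 20 daemons per host, the per-daemon figure has to fit inside the 256GB budget):

    [osd]
    # Nautilus autotunes the BlueStore cache from this per-daemon target
    osd_memory_target = 8589934592            # 8 GiB
    # fixed SSD cache size, only relevant if bluestore_cache_autotune is off
    bluestore_cache_size_ssd = 8589934592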

Re: [ceph-users] eu.ceph.com mirror out of sync?

2019-09-23 Thread David Majchrzak, ODERLAND Webbhotell AB
Hi, I'll have a look at the status of se.ceph.com tomorrow morning, it's maintained by us. Kind Regards, David On mån, 2019-09-23 at 22:41 +0200, Oliver Freyermuth wrote: > Hi together, > > the EU mirror still seems to be out-of-sync - does somebody on this > list happen to know whom to

Re: [ceph-users] Testing a hypothetical crush map

2018-08-06 Thread David Majchrzak
(link, truncated: ...crushmap-example-of-a-hierarchical-cluster-map) David Majchrzak CTO ODERLAND Webbhotell AB E // da...@oderland.se

Re: [ceph-users] Error: journal specified but not allowed by osd backend

2018-08-03 Thread David Majchrzak
was that I didn't have to backfill twice then, by reusing the osd uuid. I'll see if I can add to the docs after we have updated to Luminous or Mimic and started using ceph-volume. Kind Regards David Majchrzak On aug 3 2018, at 4:16 pm, Eugen Block wrote: > > Hi, > we have a full bluestor

Re: [ceph-users] Error: journal specified but not allowed by osd backend

2018-08-02 Thread David Majchrzak
Hm. You are right. Seems ceph-disk hardcodes id 0 in main.py. I'll have a look in my dev cluster and see if it helps things.

/usr/lib/python2.7/dist-packages/ceph_disk/main.py:

    def check_journal_reqs(args):
        _, _, allows_journal = command([
            'ceph-osd', '--check-allows-journal', '-i', '0', '--log-file',
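
In other words, the check ceph-disk runs is roughly the following, and because of the hardcoded '-i 0' its answer reflects osd.0's backend rather than the OSD actually being prepared (command reconstructed from the snippet above; the log path is just an example):

    ceph-osd --check-allows-journal -i 0 --log-file /tmp/osd-check.log --cluster ceph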

[ceph-users] Error: journal specified but not allowed by osd backend

2018-08-01 Thread David Majchrzak
Hi! Trying to replace an OSD on a Jewel cluster (filestore data on HDD + journal device on SSD). I've set noout and removed the flapping drive (read errors) and replaced it with a new one. I've taken down the osd UUID to be able to prepare the new disk with the same osd.ID. The journal device

Re: [ceph-users] PGs stuck peering (looping?) after upgrade to Luminous.

2018-07-12 Thread David Majchrzak
Hi/Hej Magnus, We had a similar issue going from latest hammer to jewel (so might not be applicable for you), with PGs stuck peering / data misplaced, right after updating all mons to the latest jewel at the time (10.2.10). Finally setting the require_jewel_osds flag put everything back in place ( we
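
For anyone searching for the same symptom: the flag mentioned here is set like this, and should only be set once every OSD in the cluster is actually running Jewel:

    ceph osd set require_jewel_osds
    ceph -s        # peering should settle shortly after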

[ceph-users] Any issues with old tunables (cluster/pool created at dumpling)?

2018-01-31 Thread David Majchrzak
ble warnings in ceph.conf. Are there any "issues" running with old tunables? Disruption of service? Kind Regards, David Majchrzak
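
A way to see where the cluster stands before deciding (note that switching to a newer tunables profile triggers significant data movement, so it is not disruption-free):

    ceph osd crush show-tunables       # current profile and individual values
    # if/when moving forward, e.g.:
    ceph osd crush tunables hammer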

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
, cheers. Kind Regards, David Majchrzak > 29 jan. 2018 kl. 23:14 skrev David Majchrzak <da...@visions.se>: > > Thanks Steve! > > So the peering won't actually move any blocks around, but will make sure that > all PGs know what state they are in? That means that

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
So when all of the peering is done, I'll unset the norecover/nobackfill flags and backfill will commence but will be less I/O intensive than peering and backfilling at the same time? Kind Regards, David Majchrzak > 29 jan. 2018 kl. 22:57 skrev Steve Taylor <steve.tay...@
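
As commands, the sequence being described (a sketch of the approach discussed in the thread, not a verbatim quote):

    ceph osd set norecover
    ceph osd set nobackfill
    # bring the OSD in and let peering finish (watch "ceph -s"), then:
    ceph osd unset nobackfill
    ceph osd unset norecover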

[ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
, and will start moving data around everywhere right? Can I use reweight the same way as weight here, slowly increasing it up to 1.0 by increments of say 0.01? Kind Regards, David Majchrzak
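
The incremental-reweight idea asked about above would look roughly like this (osd id 11 is taken from the follow-up mail; step size and pauses are up to you):

    ceph osd reweight 11 0.1
    # wait for backfill to settle ("ceph -s"), then repeat with a larger value:
    ceph osd reweight 11 0.2
    # ... and so on, up to 1.0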

Re: [ceph-users] Reweight 0 - best way to backfill slowly?

2018-01-29 Thread David Majchrzak
0 osd.11 > 29 jan. 2018 kl. 22:40 skrev David Majchrzak <da...@visions.se>: > > Hi! > > Cluster: 5 HW nodes, 10 HDDs with SSD journals, filestore, 0.94.9 hammer, > debian wheezy (scheduled to upgrade once this is fixed). > > I have a replaced HDD that a

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-26 Thread David Majchrzak
skrev Wido den Hollander <w...@42on.com>: > > > > On 01/26/2018 07:09 PM, David Majchrzak wrote: >> destroy did remove the auth key, however create didn't add the auth, I had to >> do it manually. >> Then I tried to start the osd.0 again and it failed because osd
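
The manual step mentioned here (adding the OSD's key by hand) is typically something like the following; the keyring path assumes the default cluster name and OSD data dir:

    ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring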

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-26 Thread David Majchrzak
d then run the create command without issues. Kind Regards, David Majchrzak > 26 jan. 2018 kl. 18:56 skrev Wido den Hollander <w...@42on.com>: > > > > On 01/26/2018 06:53 PM, David Majchrzak wrote: >> I did do that. >> It didn't add the auth key to ceph, so I

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-26 Thread David Majchrzak
> On 01/26/2018 06:37 PM, David Majchrzak wrote: >> Ran: >> ceph auth del osd.0 >> ceph auth del osd.6 >> ceph auth del osd.7 >> ceph osd rm osd.0 >> ceph osd rm osd.6 >> ceph osd rm osd.7 >> which seems to have removed them. > > Did you destroy
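
The destroy-based flow being asked about, which keeps the OSD id for reuse, looks roughly like this (device path only an example; --osd-id reuses the destroyed id):

    ceph osd destroy 0 --yes-i-really-mean-it
    ceph-volume lvm create --bluestore --data /dev/sdb --osd-id 0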

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-26 Thread David Majchrzak
Ran: ceph auth del osd.0 ceph auth del osd.6 ceph auth del osd.7 ceph osd rm osd.0 ceph osd rm osd.6 ceph osd rm osd.7 which seems to have removed them. Thanks for the help Reed! Kind Regards, David Majchrzak > 26 jan. 2018 kl. 18:32 skrev David Majchrzak <da...@visions.se>: >

Re: [ceph-users] Migrating filestore to bluestore using ceph-volume

2018-01-26 Thread David Majchrzak
0 00 0 0 0 00 0 osd.0
6 00 0 0 0 00 0 osd.6
7 00 0 0 0 00 0 osd.7

I guess I can just remove them from crush, auth and rm them? Kind Regards, David Majchrzak >