Where does that percentage come from in the dashboard?
My guess is that it comes from calculating:
1 - MAX AVAIL / (USED + MAX AVAIL) = 0.93
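Plugging in made-up numbers shows the shape of it (the real USED / MAX AVAIL values aren't in this thread):

echo "1 - 10/(130+10)" | bc -l    # -> .92857..., i.e. ~0.93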
Kind Regards,
David Majchrzak
We also wonder whether sysctl tweaks or things like the C-state changes Wido
suggested would make any difference.
(Thank you Wido!)
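For the archives, the kind of C-state change meant here is keeping the CPUs out of deep idle states. A sketch, assuming that is what was suggested; verify against your hardware with cpupower idle-info:

cpupower idle-set -D 0    # keep cores out of deep C-states
# or at boot, via the kernel cmdline:
#   intel_idle.max_cstate=1 processor.max_cstate=1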
Yes, running benchmarks is great, and we're already doing that
ourselves.
Cheers and have a nice evening!
--
David Majchrzak
On Thu, 2019-11-28 at 17:46 +0100, Paul Emmerich wrote:
> Please don't
We have 256GB of RAM on each OSD host, 8 OSD hosts with 10 SSDs each, and
2 OSD daemons on each SSD. Should we raise the bluestore SSD cache to 8GB?
The workload is about 50/50 r/w ops running qemu VMs through librbd, so
mixed block sizes.
3 replicas.
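For concreteness, the setting I have in mind (a sketch; 8 GiB expressed in bytes):

[osd]
bluestore_cache_size_ssd = 8589934592    # 8 GiB per OSD daemon

With 20 OSD daemons per host that would budget roughly 160GB of the 256GB for caches.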
Appreciate any advice!
Kind Regards,
--
David Majchrzak
Hi,
I'll have a look at the status of se.ceph.com tomorrow morning; it's
maintained by us.
Kind Regards,
David
On Mon, 2019-09-23 at 22:41 +0200, Oliver Freyermuth wrote:
> Hi together,
>
> the EU mirror still seems to be out-of-sync - does somebody on this
> list happen to know whom to contact?
David Majchrzak
CTO
ODERLAND Webbhotell AB
E // da...@oderland.se
The benefit was that I didn't have to backfill twice, by reusing the osd
uuid.
I'll see if I can add to the docs after we have updated to Luminous or Mimic
and started using ceph-volume.
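For the archives, the id-reuse step itself, roughly (a sketch for Jewel/filestore; the fsid path, osd id, and device names are placeholders, and the flags should be checked against your version's ceph-disk prepare --help):

UUID=$(cat /var/lib/ceph/osd/ceph-11/fsid)     # noted before pulling the old disk
ceph osd create $UUID 11                       # re-registers the same osd.ID
ceph-disk prepare --osd-uuid $UUID /dev/sdX /dev/sdY1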
Kind Regards
David Majchrzak
On Aug 3 2018, at 4:16 PM, Eugen Block wrote:
>
> Hi,
> we have a full bluestore
Hm, you are right. It seems ceph-disk invokes ceph-osd with id 0 in main.py.
I'll have a look in my dev cluster and see if it helps things.
/usr/lib/python2.7/dist-packages/ceph_disk/main.py

def check_journal_reqs(args):
    # note the hard-coded '-i 0': the probe always runs as osd id 0
    _, _, allows_journal = command([
        'ceph-osd', '--check-allows-journal',
        '-i', '0',
        '--log-file',
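You can run the same probe by hand on an OSD node to see what ceph-disk checks (flags taken straight from the snippet above):

ceph-osd --check-allows-journal -i 0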
Hi!
Trying to replace an OSD on a Jewel cluster (filestore data on HDD + journal
device on SSD).
I've set noout and removed the flapping drive (read errors) and replaced it
with a new one.
I've noted down the osd UUID to be able to prepare the new disk with the same
osd.ID. The journal device
Hi Magnus,
We had a similar issue going from latest hammer to jewel (so it might not be
applicable for you), with PGs stuck peering / data misplaced, right after
updating all mons to the latest jewel at the time, 10.2.10.
Finally, setting the require_jewel_osds flag put everything back in place (we
also had to disable the legacy tunables warnings in ceph.conf).
Are there any "issues" running with old tunables? Disruption of service?
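For reference, the flag and the warning toggle in question (the command is real on jewel; the ceph.conf option name is from the jewel-era docs, so verify it for your release):

ceph osd set require_jewel_osds
# in ceph.conf under [mon], silences the legacy-tunables health warning:
#   mon warn on legacy crush tunables = false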
Kind Regards,
David Majchrzak
Cheers.
Kind Regards,
David Majchrzak
> On 29 Jan 2018, at 23:14, David Majchrzak <da...@visions.se> wrote:
>
> Thanks Steve!
>
> So the peering won't actually move any blocks around, but will make sure that
> all PGs know what state they are in? That means that
So when all of the peering is done, I'll unset the norecover/nobackfill flags
and backfill will commence but will be less I/O intensive than peering and
backfilling at the same time?
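A sketch of that unset step, once peering has settled (real commands):

ceph osd unset norecover
ceph osd unset nobackfill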
Kind Regards,
David Majchrzak
> On 29 Jan 2018, at 22:57, Steve Taylor <steve.tay...@
and that will start moving data around everywhere, right?
Can I use reweight the same way as weight here, slowly increasing it up to 1.0
in increments of, say, 0.01?
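For clarity, the two knobs being compared (real commands; osd 11 is the one from the output quoted below):

ceph osd crush reweight osd.11 0.05   # CRUSH weight ("weight" above)
ceph osd reweight 11 0.5              # override weight, range 0.0-1.0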
Kind Regards,
David Majchrzak
0 osd.11
> On 29 Jan 2018, at 22:40, David Majchrzak <da...@visions.se> wrote:
>
> Hi!
>
> Cluster: 5 HW nodes, 10 HDDs with SSD journals, filestore, 0.94.9 hammer,
> debian wheezy (scheduled to upgrade once this is fixed).
>
> I have a replaced HDD that a
Wido den Hollander <w...@42on.com> wrote:
>
>
>
> On 01/26/2018 07:09 PM, David Majchrzak wrote:
>> destroy did remove the auth key; however, create didn't add the auth, so I
>> had to do it manually.
>> Then I tried to start the osd.0 again and it failed because osd
I could then run the create command without issues.
Kind Regards,
David Majchrzak
> On 26 Jan 2018, at 18:56, Wido den Hollander <w...@42on.com> wrote:
>
>
>
> On 01/26/2018 06:53 PM, David Majchrzak wrote:
>> I did do that.
>> It didn't add the auth key to ceph, so I
> On 01/26/2018 06:37 PM, David Majchrzak wrote:
>> Ran:
>> ceph auth del osd.0
>> ceph auth del osd.6
>> ceph auth del osd.7
>> ceph osd rm osd.0
>> ceph osd rm osd.6
>> ceph osd rm osd.7
>> which seems to have removed them.
>
> Did you destroy
Ran:
ceph auth del osd.0
ceph auth del osd.6
ceph auth del osd.7
ceph osd rm osd.0
ceph osd rm osd.6
ceph osd rm osd.7
which seems to have removed them.
Thanks for the help, Reed!
Kind Regards,
David Majchrzak
> On 26 Jan 2018, at 18:32, David Majchrzak <da...@visions.se> wrote:
0   0  0   0   0   0   0  0   0   osd.0
6   0  0   0   0   0   0  0   0   osd.6
7   0  0   0   0   0   0  0   0   osd.7
I guess I can just remove them from crush and auth, and then rm them?
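The usual per-OSD sequence on hammer would be, for reference:

ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm osd.0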
Kind Regards,
David Majchrzak