[ceph-users] Librados in openstack

2019-10-15 Thread solarflow99
I was wondering if this is provided somehow? All I see is rbd and radosgw mentioned. If you have applications built with librados, surely OpenStack must have a way to provide it?

[ceph-users] Re: Ceph deployment tool suggestions

2019-09-17 Thread solarflow99
Can you just do a kickstart and use ceph-ansible? On Tue, Sep 17, 2019 at 9:59 AM Paul Emmerich wrote: > The best tool to automate both OS and Ceph deployment is ours: > https://croit.io/ > > Check out our demo: https://croit.io/croit-virtual-demo > > Paul > > -- > Paul Emmerich > > Looking
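As a rough sketch of the kickstart-plus-ceph-ansible route suggested above, assuming the stock ceph-ansible layout and a hand-written inventory file named hosts (both assumptions here, not details from the thread):

$ git clone https://github.com/ceph/ceph-ansible.git
$ cd ceph-ansible
$ cp site.yml.sample site.yml          # sample playbook shipped with ceph-ansible
$ # populate group_vars/ and the hosts inventory for the target nodes, then:
$ ansible-playbook -i hosts site.yml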

[ceph-users] Re: dashboard not working

2019-09-17 Thread solarflow99
with the following commands: $ ceph config set mgr mgr/dashboard/$name/server_addr $IP $ ceph config set mgr mgr/dashboard/$name/server_port $PORT https://docs.ceph.com/docs/mimic/mgr/dashboard/ On Tue, Sep 17, 2019 at 1:59 AM Lenz Grimmer wrote: > On 9/17/19 9:21 AM, solarflow99 wr
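Putting the commands quoted above together, a minimal sketch of binding the Mimic dashboard to a specific address and port; the manager name, address, and port below are placeholders, not values from the thread:

$ ceph config set mgr mgr/dashboard/mgr-node1/server_addr 192.168.10.5
$ ceph config set mgr mgr/dashboard/mgr-node1/server_port 8080
$ ceph mgr module disable dashboard    # restart the module so it picks up the new settings
$ ceph mgr module enable dashboard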

[ceph-users] Re: dashboard not working

2019-09-17 Thread solarflow99
https://docs.ceph.com/docs/mimic/mgr/dashboard/ (To > get the dashboard up and running quickly, you can generate and install a > self-signed certificate using the following built-in command). > > Regards > Thomas > > On 17.09.2019 at 09:12, Robert Sander wrote: > > Hi, > &
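The built-in command being referred to is presumably the dashboard's self-signed certificate generator; a sketch, assuming Mimic or later:

$ ceph dashboard create-self-signed-cert
$ ceph mgr module disable dashboard    # reload the dashboard so it serves the new certificate
$ ceph mgr module enable dashboard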

[ceph-users] dashboard not working

2019-09-16 Thread solarflow99
I have Mimic installed and for some reason the dashboard isn't showing up. I can see which mon is listed as active for "mgr" and the module is enabled, but nothing is listening on port 8080: # ceph mgr module ls { "enabled_modules": [ "dashboard", "iostat", "status" tcp
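A few checks that usually narrow this down, run against the active manager host (the hostname below is a placeholder):

$ ceph mgr services                  # should report the dashboard URL if it is actually serving
$ ss -tlnp | grep ceph-mgr           # on the active mgr host: confirm what port, if any, it listens on
$ journalctl -u ceph-mgr@mgr-node1   # dashboard startup errors (e.g. a missing certificate) land here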

[ceph-users] Re: disk failure

2019-09-05 Thread solarflow99
Disks are expected to fail, and every once in a while I'll lose one, so that was expected and didn't come as any surprise to me. Are you suggesting failed drives almost always stay down and out? On Thu, Sep 5, 2019 at 11:13 AM Ashley Merrick wrote: > I would suggest checking the logs and
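As a rough sketch of the log checking suggested in the reply, run on the host carrying the suspect drive (the device name and OSD id are placeholders):

$ ceph osd tree | grep down          # which OSDs are currently down
$ smartctl -a /dev/sdX               # SMART health of the drive backing the flapping OSD
$ journalctl -u ceph-osd@12          # the OSD's own log around the time of the failure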

[ceph-users] Re: disk failure

2019-09-05 Thread solarflow99
No, I mean Ceph sees it as a failure and marks it out for a while. On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick wrote: > Is your HD actually failing and vanishing from the OS and then coming back > shortly? > > Or do you just mean your OSD is crashing and then restarting itself > shortly
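How long that "for a while" lasts is governed by the monitors' down-out interval (600 seconds by default); a sketch of inspecting and raising it with the Mimic-style centralized config, where 1800 is only an illustrative value:

$ ceph config get mon mon_osd_down_out_interval
$ ceph config set mon mon_osd_down_out_interval 1800   # seconds a down OSD waits before being marked out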

[ceph-users] disk failure

2019-09-05 Thread solarflow99
One of the things I've come to notice is that when HDDs fail, they often recover after a short time and get added back to the cluster. This causes the data to rebalance back and forth, and if I set the noout flag I get a health warning. Is there a better way to avoid this?
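One possibility, instead of the cluster-wide noout flag, is the per-OSD flag available since Luminous, which limits the effect to the drive in question (the OSD id below is a placeholder):

$ ceph osd add-noout osd.12    # keep only this OSD from being marked out while you investigate
$ ceph osd rm-noout osd.12     # clear the flag once the drive is replaced or confirmed healthy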

[ceph-users] Re: ceph cluster warning after adding disk to cluster

2019-09-04 Thread solarflow99
How about also increasing osd_recovery_threads? On Wed, Sep 4, 2019 at 10:47 AM Guilherme Geronimo < guilherme.geron...@gmail.com> wrote: > Hey hey, > > First of all: 10GBps connection. > > Then, some magic commands: > > # ceph tell 'osd.*' injectargs '--osd-max-backfills 32' > # ceph tell
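For context, a sketch of the injectargs-style tuning this thread is discussing; the values are only illustrative and are usually wound back once recovery finishes:

# ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
# ceph tell 'osd.*' injectargs '--osd-recovery-max-active 8'
# ceph tell 'osd.*' injectargs '--osd-recovery-sleep 0'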

[ceph-users] Re: forcing an osd down

2019-09-03 Thread solarflow99
on a specific > OSD, which is much safer. > > Best regards, > > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: ceph-users on behalf of > solarflow99 > Sent: 03 September 2019 19:
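The quoted reply is cut off here, but per-OSD state flags are one way to force and hold a single OSD down without touching the rest of the cluster; a sketch, with the OSD id as a placeholder:

$ ceph osd down 12             # mark the OSD down; it will normally rejoin on its next beacon
$ ceph osd add-noup osd.12     # prevent just this OSD from coming back up
$ ceph osd rm-noup osd.12      # allow it back up when ready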