On 10.07.19 20:46, Reed Dier wrote:
> It does not appear that that page has been updated in a while.
Addressed that already - someone needs to merge it
https://github.com/ceph/ceph/pull/28643
--
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)
Hi Marc,
let me add Danny so he's aware of your request.
Kai
On 23.05.19 12:13, Wido den Hollander wrote:
>
> On 5/23/19 12:02 PM, Marc Roos wrote:
>> Sorry for not waiting until it is published on the ceph website, but has
>> anyone attended this talk? Is it production ready?
>>
> Danny from
Hi all,
I think this change, really late in the game, just results in confusion.
I would be in favor of making the ceph-mgr-dashboard package a dependency
of ceph-mgr, so that people just need to enable the dashboard without
having to install another package separately. This way we could also
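For reference, once the ceph-mgr-dashboard bits are installed, turning it on
is just a couple of commands (a rough sketch; the credentials command follows
the Nautilus-era docs, so adjust to your release):

  ceph mgr module enable dashboard
  ceph dashboard create-self-signed-cert            # if you want SSL
  ceph dashboard set-login-credentials <user> <pw>  # placeholders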
Hi all,
just a friendly reminder to use this pad for CfP coordination.
Right now it seems like I'm the only one who has submitted something to
Cephalocon, and I can't believe that ;-)
https://pad.ceph.com/p/cfp-coordination
Thanks,
Kai
On 5/31/18 1:17 AM, Gregory Farnum wrote:
> Short version:
Congrats to everyone.
Seems like we're getting closer to ponies, rainbows and ice cream for
everyone! ;-)
On 12/11/18 12:15 AM, Mike Perez wrote:
> Hey all,
>
> Great news, the Rook team has declared Ceph to be stable in v0.9! Great work
> from both communities in collaborating to make this
On 22.08.2018 20:57, David Turner wrote:
> does it remove any functionality of the previous dashboard?
No it doesn't. All dashboard_v1 features are integrated and part of
dashboard_v2 as well.
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
On 28.06.2018 23:25, Eric Jackson wrote:
> Recently, I learned that this is not necessary when both are on the same
> device. The wal for the Bluestore OSD will use the db device when set to 0.
That's good to know. Thanks for the input on this, Eric.
--
SUSE Linux GmbH, GF: Felix Imendörffer,
I'm also not 100% sure but I think that the first one is the right way
to go. The second command only specifies the db partition but no
dedicated WAL partition. The first one should do the trick.
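For reference, the two variants would look roughly like this with ceph-volume
(placeholder device names, just a sketch, not the original commands):

  # variant 1: dedicated DB and WAL partitions
  ceph-volume lvm create --bluestore --data /dev/sdc \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
  # variant 2: DB partition only; the WAL is then placed on the DB device
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1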
On 28.06.2018 22:58, Igor Fedotov wrote:
>
> I think the second variant is what you need. But I'm
On 20.06.2018 17:39, Dan van der Ster wrote:
> And BTW, if you can't make it to this event we're in the early days of
> planning a dedicated Ceph + OpenStack Days at CERN around May/June
> 2019.
> More news on that later...
Will that be during a CERN maintenance window?
*that would raise my
e
> ceph osd pool set $pool pgp_num $num
> while sleep 10; do
> ceph osd health | grep -q 'peering\|stale\|activating\|creating\|inactive' || break
> done
> done
> for flag in $flags; do
> ceph osd unset $flag
> done
>
> On Thu, May 17, 2018 at 9:27
Hi Oliver,
a good value is 100-150 PGs per OSD, so in your case between 20k and 30k.
You can increase your PGs, but keep in mind that this will keep the
cluster quite busy for a while. That said, I would rather increase in
smaller steps than in one large move.
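As a rough sanity check, the usual rule of thumb (not numbers taken from your
cluster) is

  total pg_num across pools ≈ (number of OSDs * target PGs per OSD) / replica size

rounded to a power of two. For the stepwise increase, something like this,
repeated until you reach the target, keeps the impact manageable (pool name
and step size are placeholders):

  ceph osd pool set <pool> pg_num 8192    # next step towards the target
  ceph osd pool set <pool> pgp_num 8192   # start the actual data movement
  ceph -s                                 # wait for active+clean before the next step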
Kai
On 17.05.2018 01:29,
Looks very good. Is it possible to display the reason why a cluster is in
an error or warning state? I'm thinking about the output from ceph -s and
whether this could be shown in case there's a failure. I think this is not
provided by default, but I'm wondering if it's possible to add.
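For comparison, the CLI already exposes the reasons, e.g. (commands only,
output omitted; where the dashboard would pull this from is just my guess):

  ceph health detail           # lists each failing health check with its reason
  ceph status --format json    # the same details under health -> checks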
Kai
On
Hi all,
indeed it was a lot of fun again, and what I really liked most were the
open discussions afterwards.
Big thanks go to Wido for organizing this, and we should not forget to
thank all the sponsors who made this happen as well.
Kai
On 20.04.2018 10:32, Sean Purdy wrote:
> Just a
Is this just from one server or from all servers? Just wondering why VD
0 is using WriteThrough compared to the others. If that's the setup for
the OSDs, you already have a cache setup problem.
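If it really is the controller setting, something along these lines should
switch the policy (from memory, so please double-check the syntax against
your MegaCli version before running it):

  megacli -LDSetProp WB -Lall -a0       # write-back on all virtual drives of adapter 0
  megacli -LDGetProp -cache -Lall -a0   # verify the new policy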
On 10.04.2018 13:44, Mohamad Gebai wrote:
> megacli -LDGetProp -cache -Lall -a0
>
> Adapter 0-VD
Hi all,
we've created a new #ceph-dashboard channel on OFTC to talk about all the
dashboard-related functionality and development. This means that the old
"openattic" channel on Freenode is just for openATTIC, and everything new
regarding the mgr module will now be discussed in the new channel.
Hi Robert,
thanks, I will forward it to the community list as well.
Kai
On 03/26/2018 11:03 AM, Robert Sander wrote:
> Hi Kai,
>
> On 22.03.2018 18:04, Kai Wagner wrote:
>> don't know if this is the right place to discuss this but I was just
>> wondering if there's any specif
Hi all,
I don't know if this is the right place to discuss this, but I was just
wondering if there's any specific mailing list or web site where upcoming
events (Ceph/Open Source/Storage) and conferences are discussed and
generally tracked?
Also, I would like to sync upfront on topics that could be
Hi,
given that we don't have Ubuntu or CentOS packages, you could
install directly from our sources.
http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2
Our docs are hosted at: http://docs.openattic.org/en/latest/
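Roughly (the extracted directory name is an assumption; the exact steps are in
the install guide linked above):

  wget http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2
  tar xjf openattic-3.6.2.tar.bz2
  cd openattic-3.6.2    # then follow the installation chapter for your distribution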
Kai
On 03/02/2018 04:39 PM, Budai Laszlo wrote:
> Hi,
>
>
I totally understand and see your frustration here, but you have to keep
in mind that this is an open source project with a lot of volunteers.
If you have a really urgent need, you have the possibility to develop
such a feature on your own, or you have to pay someone who can do the
work for you.
Hey,
yes, there are plans to add management functionality to the dashboard as
well. As soon as we've covered all the existing functionality to create
the initial PR, we'll start with the management stuff. The big benefit
here is that we can profit from what we've already done within openATTIC.
If
Hi,
maybe it's worth looking at this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html
Kai
On 02/14/2018 11:06 AM, Götz Reinicke wrote:
> Hi,
>
> We have some work to do on our power lines for all buildings and we have to
> shut down all systems. So there is also no
Hi Wido,
how do you know about that beforehand? There's no official upcoming
event on the ceph.com page.
Just because I'm curious :)
Thanks
Kai
On 12.02.2018 10:39, Wido den Hollander wrote:
> The next one is in London on April 19th
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane
Sometimes I'm just blind. I read way too little of the ML :D
Thanks!
On 12.02.2018 10:51, Wido den Hollander wrote:
> Because I'm co-organizing it! :) I sent out a Call for Papers last
> week to this list.
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
On 12.02.2018 00:33, c...@elchaka.de wrote:
> I absolutely agree, too. This was really great! It would be fantastic if the
> Ceph Days happen again in Darmstadt - or Düsseldorf ;)
>
> Btw. will the slides and perhaps videos of the presentations be available
> online?
AFAIK Danny is working on
Hi and welcome,
On 09.02.2018 15:46, ST Wong (ITSC) wrote:
>
> Hi, I'm new to Ceph and got a task to set up Ceph with a kind of DR
> feature. We have two 10Gb-connected data centers in the same campus. I
> wonder if it's possible to set up a Ceph cluster with the following
> components in each data
Hi all,
I had the idea to use an RBD device as the SBD device for a Pacemaker
cluster, so I don't have to fiddle with multipathing and all that stuff.
Has someone already tested this somewhere and can tell how the cluster
reacts to this?
I think this shouldn't be a problem, but I'm just wondering
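What I have in mind is roughly the following (pool and image names are just
placeholders, and the sbd commands are from memory, so double-check them
against the sbd man page):

  rbd create sbd/sbd0 --size 16     # tiny dedicated image, size in MB
  rbd map sbd/sbd0                  # udev also creates /dev/rbd/sbd/sbd0
  sbd -d /dev/rbd/sbd/sbd0 create   # initialize the SBD header
  sbd -d /dev/rbd/sbd/sbd0 dump     # verify it's readable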
Just for those of you who are not subscribed to ceph-users.
Forwarded Message
Subject:Ceph team involvement in Rook (Deploying Ceph in Kubernetes)
Date: Fri, 19 Jan 2018 11:49:05 +0100
From: Sebastien Han
To: ceph-users
Ditto, see you in Darmstadt!
On 01/16/2018 08:47 AM, Wido den Hollander wrote:
> Yes! Looking forward :-) I'll be there :)
>
> Wido
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)