https://goo.gl/PGE1Bx
On Fri, 15 Nov 2019 at 20:46, Janne Johansson <icepic...@gmail.com> wrote:
On Fri, 15 Nov 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:
So would you recommend doing an entire node at the same time or per-osd?
You should be able to do it pe
From: Janne Johansson
Date: Friday, November 15, 2019 at 11:46 AM
To: Cave Mike
Cc: Paul Emmerich, ceph-users
Subject: Re: [ceph-users] Migrating from block to lvm
On Fri, 15 Nov 2019 at 19:40, Mike Cave <mc...@uvic.ca> wrote:
So would you recommend
On Fri, Nov 15, 2019 at 6:04 PM Mike Cave wrote:
>
> Greetings all!
>
>
>
> I am looking at upgrading to Nautilus in the near future (currently on
> Mimic). We have a cluster built on 480 OSDs all using multipath and simple
> block devices. I see that the ceph …
I’m looking for opinions on best practices to complete this as I’d like to
minimize impact to our clients.
Cheers,
Mike Cave
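
For reference, a minimal sketch of what a per-OSD cycle could look like, driving the standard ceph and ceph-volume CLIs from Python. The OSD id and the multipath device path are illustrative assumptions, not a tested recipe; each OSD has to backfill from its peers after the wipe, so the cluster should be back to HEALTH_OK before the next one is touched:

    #!/usr/bin/env python3
    """Sketch: re-create one 'simple' block OSD as an LVM OSD, keeping its id."""
    import subprocess
    import time

    def run(*cmd):
        # Run a command on the OSD host and fail loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def wait_for_clean():
        # Wait until recovery/backfill has finished before touching anything else.
        while "HEALTH_OK" not in subprocess.run(
                ["ceph", "health"], capture_output=True, text=True).stdout:
            time.sleep(60)

    def migrate_osd(osd_id, device):
        # Abort if taking this OSD out would leave PGs without enough copies.
        run("ceph", "osd", "safe-to-destroy", str(osd_id))
        run("systemctl", "stop", f"ceph-osd@{osd_id}")
        # 'destroy' keeps the id and CRUSH position so the new OSD slots back in.
        run("ceph", "osd", "destroy", str(osd_id), "--yes-i-really-mean-it")
        # Wipe the old simple block device and re-create it as an LVM OSD.
        run("ceph-volume", "lvm", "zap", device, "--destroy")
        run("ceph-volume", "lvm", "create", "--osd-id", str(osd_id), "--data", device)

    if __name__ == "__main__":
        migrate_osd(216, "/dev/mapper/mpatha")   # example id and device only
        wait_for_clean()

Going per OSD keeps roughly one OSD's worth of data in flight at a time; wiping a whole node at once finishes sooner but backfills all of that node's OSDs simultaneously.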
Good afternoon,
I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per
node) and am trying to determine the best way to go about this task.
I deployed the cluster with ceph-ansible and everything worked well, so I’d
like to add the new nodes with ceph-ansible as well.
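
Whichever way the playbook is run, 100 new OSDs will kick off a large rebalance, so it can help to stage it. A rough sketch, assuming a ceph-ansible inventory at ./hosts, invented host names, and a Mimic-or-later cluster for ceph config set (older releases would use injectargs instead):

    #!/usr/bin/env python3
    """Sketch: add new OSD nodes with ceph-ansible while staging the rebalance."""
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Hold off data movement and keep backfill gentle while the new OSDs come up.
    run("ceph", "osd", "set", "norebalance")
    run("ceph", "config", "set", "osd", "osd_max_backfills", "1")

    # Deploy only the new nodes with the existing ceph-ansible setup
    # (inventory path and host names are examples).
    run("ansible-playbook", "-i", "./hosts", "site.yml",
        "--limit", "newnode1,newnode2,newnode3,newnode4,newnode5")

    # Once the new OSDs are up and in, let the remapped PGs move,
    # and drop the throttle again when the cluster has settled.
    run("ceph", "osd", "unset", "norebalance")
    run("ceph", "config", "rm", "osd", "osd_max_backfills")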
-Original Message-
From: Serkan Çoban
Date: Thursday, September 20, 2018 at 12:28 PM
To: Cave Mike
Cc: ceph-users
Subject: Re: [ceph-users] total_used statistic incorrect
did you read my answer?
On Thu, Sep 20, 2018 at 8:21 PM Mike Cave wrote:
>
> I'll bump this one more time in case someo
> total_space 269 TiB
> [root@ceph01 ~]#
>
>
> Regards
> Jakub
>
> On Wed, Sep 19, 2018 at 2:09 PM wrote:
>>
>> The cluster needs time to remove those objects in the previous pools. What
>> you can do is to wait.
>>
Greetings,
I’ve recently run into an issue with my new Mimic deploy.
I created some pools and created volumes and did some general testing. In
total, there was about 21 TiB used. Once testing was completed, I deleted the
pools and thus thought I deleted the data.
However, the ‘total_used’ statistic still shows that space as in use.
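
After a pool is deleted the OSDs remove its objects in the background, so total_used only falls as that cleanup proceeds. If it helps to watch it settle, here is a small librados sketch that polls the raw usage; the threshold and the polling interval are arbitrary:

    #!/usr/bin/env python3
    """Sketch: watch raw cluster usage fall after deleting test pools."""
    import time
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        while True:
            stats = cluster.get_cluster_stats()        # kb, kb_used, kb_avail, num_objects
            used_tib = stats["kb_used"] / (1024 ** 3)  # KiB -> TiB
            print(f"total_used: {used_tib:.2f} TiB, objects: {stats['num_objects']}")
            if used_tib < 1.0:                         # "close enough to empty" for this test
                break
            time.sleep(300)
    finally:
        cluster.shutdown()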
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Erasure Coded Pools and OpenStack
On Fri, Mar 23, 2018 at 8:08 AM, Mike Cave <mc...@uvic.ca> wrote:
> Greetings all!
>
>
>
> I’m currently attempting to create an EC pool for my glance images, however
> when I save an i
Greetings all!
I’m currently attempting to create an EC pool for my glance images, however
when I save an image through the OpenStack command line, the data is not ending
up in the EC pool.
So a little information on what I’ve done so far.
The way that I understand things to work is that you …
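
One relevant detail here: an RBD image's header and omap metadata cannot live in an erasure-coded pool, so the usual layout is a replicated pool that holds the image plus an EC data pool, and for Glance, which never passes a data pool itself, that is typically wired up with "rbd default data pool" in the Glance client's ceph.conf section. A hedged python-rbd sketch of the same split; the pool names are examples, the EC pool needs allow_ec_overwrites enabled, and the data_pool argument is assumed from the Luminous-era bindings:

    #!/usr/bin/env python3
    """Sketch: RBD image whose header lives in a replicated pool and whose
    data objects land in an erasure-coded pool (pool names are examples)."""
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # 'images' is replicated and holds the header/omap metadata;
        # 'images-data' is the EC pool that stores the actual data objects.
        ioctx = cluster.open_ioctx("images")
        rbd.RBD().create(ioctx, "demo-image", 1 * 1024**3, data_pool="images-data")
        ioctx.close()
    finally:
        cluster.shutdown()

With "rbd default data pool" set for the Glance client, an image uploaded through the OpenStack CLI should end up with its data objects in the EC pool even though Glance itself only knows about the replicated pool.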
move a host from the root bucket into the correct rack without draining
it and then refilling it, or do I need to reweight the host to 0, move the host
to the correct bucket, and then reweight it back to its correct value?
Any insights here will be appreciated.
Thank you for your time,
Mike Cave
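
For what it's worth, "ceph osd crush move" relocates the bucket in one step and CRUSH then remaps the affected placement groups, so a drain-and-refill is not strictly required; whether to stage the movement behind the norebalance flag is a judgment call. A minimal sketch, with the host and rack names invented:

    #!/usr/bin/env python3
    """Sketch: move a host bucket under a rack, staging the rebalance."""
    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "set", "norebalance")                  # hold data movement while editing the map
    ceph("osd", "crush", "move", "somehost", "rack=rack1")
    ceph("osd", "unset", "norebalance")                # let the remapped PGs backfill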
Hi Anthony,
When the OSDs were added, it appears they were added with a crush weight of 1, so
I believe we need to change the weighting, as we are getting a lot of very full
OSDs.
-21 20.0 host somehost
216 1.0 osd.216 up 1.0 1.0
217
to 2 gradually?
Any and all suggestions are welcome.
Cheers,
Mike Cave
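
One hedged way to script the ramp: raise the CRUSH weight of every OSD on the host a little at a time and wait for the cluster to settle between steps. The OSD ids, the 2.0 target, and the 0.25 step size below are assumptions read off the numbers above, not recommendations:

    #!/usr/bin/env python3
    """Sketch: raise CRUSH weights from 1.0 toward the intended weight in steps."""
    import subprocess
    import time

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    def wait_for_clean():
        # Crude settling check: wait for HEALTH_OK before the next step.
        while "HEALTH_OK" not in ceph("health"):
            time.sleep(120)

    osds = range(216, 236)            # the 20 OSDs on the mis-weighted host (example ids)
    target, step, weight = 2.0, 0.25, 1.0
    while weight < target:
        weight = min(weight + step, target)
        for osd in osds:
            ceph("osd", "crush", "reweight", f"osd.{osd}", str(weight))
        wait_for_clean()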