On Tue, Dec 20, 2016 at 3:18 PM, Marius Vaitiekunas <
mariusvaitieku...@gmail.com> wrote:
> Hi Cephers,
>
> Could anybody explain how rgw works with pools? I don't understand
> how the .rgw.control, .rgw.gc, and .rgw.buckets.index pools can show 0 size but
> still contain some objects?
>
> # ceph df
Hello,
On Wed, 21 Dec 2016 04:12:56 + Adrian Saul wrote:
>
> I found the other day even though I had 0 weighted OSDs, there was still
> weight in the containing buckets which triggered some rebalancing.
>
> Maybe it is something similar, there was weight added to the bucket even
>
I found the other day that even though I had 0-weighted OSDs, there was still weight
in the containing buckets, which triggered some rebalancing.
Maybe it is something similar: there was weight added to the bucket even though
the OSD underneath was 0.
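For what it's worth, this is roughly how I'd check where the weight is coming from and zero it explicitly (a sketch; osd.12 and the host bucket name are placeholders):

# ceph osd tree                                  # shows the CRUSH weight of every bucket and OSD
# ceph osd crush reweight osd.12 0               # zero the item's own CRUSH weight
# ceph osd crush reweight-subtree somehost 0     # or, if available, zero a whole bucket at once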
> -----Original Message-----
> From:
I have upgraded 3 jewel clusters on jessie to the latest 10.2.5; works fine.
----- Original Message -----
From: "Chad William Seys"
To: "ceph-users"
Sent: Tuesday, 20 December 2016 17:31:49
Subject: [ceph-users] 10.2.5 on Jessie?
Hi all,
Has anyone had
Hello,
I just (manually) added 1 OSD each to my 2 cache-tier nodes.
The plan was/is to actually do the data migration on the least busy day
in Japan, New Year's (the actual holiday is January 2nd this year).
So I was going to have everything up and in but at weight 0 initially.
Alas at the
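In case it's useful, one way to get new OSDs up and in without triggering any data movement is to have them join the CRUSH map with weight 0 and raise it later (a sketch; osd.12 and the target weight are placeholders, and the option must be set before the OSDs are created):

[osd]
osd crush initial weight = 0     # new OSDs join CRUSH with weight 0 instead of size-based weight

# later, during the quiet window, ramp the weight up gradually:
# ceph osd crush reweight osd.12 1.0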
Hi,
I have used the Calamari CentOS 6.6 version (on a separate system) for a couple of
months to monitor a Ceph cluster that consisted of 1 Ceph monitor and 3 Ceph
OSD servers, without any problems.
Recently, the Ceph cluster was switched to a CentOS 7.2 system. I decided to install
CentOS 7.0 calamari
I am trying to set up kraken from source and I get an import error when using the
ceph command:
Traceback (most recent call last):
File "/home/ssd/src/vanilla-ceph/ceph-install/bin/ceph", line 112, in
from ceph_argparse import \
ImportError: cannot import name descsort_key
The python path
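In case it helps: this usually means the ceph script is importing an older ceph_argparse module (one that predates descsort_key) from elsewhere on the Python path. A sketch of how to check and override it; the site-packages path below is only a guess based on the install prefix in the traceback:

# python -c 'import ceph_argparse; print(ceph_argparse.__file__)'    # which copy is actually imported?
# export PYTHONPATH=/home/ssd/src/vanilla-ceph/ceph-install/lib/python2.7/site-packages:$PYTHONPATH
# ceph --version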
Hi cephers,
Any ideas on how to proceed on the inconsistencies below? At the moment
our ceph setup has 5 of these - in all cases it seems like some zero-length
objects that match across the three replicas, but do not match
the object info size. I tried running pg repair on one of them, but
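For reference, this is roughly how such PGs can be inspected before deciding on a repair (a sketch; 4.2f is a placeholder PG id):

# rados list-inconsistent-obj 4.2f --format=json-pretty   # shows which shards and attributes disagree
# ceph pg deep-scrub 4.2f                                 # re-scrub after any manual intervention
# ceph pg repair 4.2f                                     # ask the primary to repair the PG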
> On 20 December 2016 at 17:13, Francois Lafont
> wrote:
>
>
> On 12/20/2016 10:02 AM, Wido den Hollander wrote:
>
> > I think it is commit 0cdf3bc875447c87fdc0fed29831554277a3774b:
> >
10.2.4 is buggy (high CPU usage)
10.2.5 fixes that, no issue found on my cluster
You're free to go :)
On 20/12/2016 17:31, Chad William Seys wrote:
> Hi all,
> Has anyone had success/problems with 10.2.5 on Jessie? I'm being a
> little cautious before updating. ;)
>
> Thanks!
> Chad.
>
On Tue, Dec 20, 2016 at 5:39 PM, Wido den Hollander wrote:
>
>> On 15 December 2016 at 17:10, Orit Wasserman wrote:
>>
>>
>> Hi Wido,
>>
>> This looks like you are hitting http://tracker.ceph.com/issues/17364
>> The fix is being backported to jewel:
> On 20 December 2016 at 17:31, Nathan Cutler wrote:
>
>
> > Looks like it was trying to send mail over IPv6 and failing.
> >
> > I switched back to postfix, disabled IPv6, and see a message was
> > recently queued for delivery to you. Please confirm you got it.
>
> Got
Hi all,
Has anyone had success/problems with 10.2.5 on Jessie? I'm being a
little cautious before updating. ;)
Thanks!
Chad.
Hi Jeldrik,
You are right. In this situation, you are better off collocating the journal on
the new SSD OSDs and recycling your journal SSD as an OSD (if its wear level allows
it) once all its attached HDD OSDs are replaced.
As a side note, make sure to monitor the write endurance/wear level on the
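For what it's worth, a minimal sketch of such monitoring with smartctl (attribute names vary by vendor; /dev/sdX is a placeholder):

# smartctl -A /dev/sdX | egrep -i 'wear|wearout|written'
#   e.g. Intel DC SSDs expose 233 Media_Wearout_Indicator,
#   Samsung drives typically expose 177 Wear_Leveling_Count.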
Hi Cephers,
Could anybody explain how rgw works with pools? I don't understand
how the .rgw.control, .rgw.gc, and .rgw.buckets.index pools can show 0 size but
still contain some objects?
# ceph df detail
GLOBAL:
SIZE AVAIL RAW USED %RAW USED OBJECTS
507T 190T 316T
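Not an authoritative answer, but the usual explanation is that the objects in those pools keep their payload in omap keys and xattrs rather than in object data, and as far as I understand `ceph df` in this release does not account for omap, so the pools show ~0 bytes while still listing objects. A quick way to see it (a sketch; the .dir.default.12345.1 index object name is a placeholder):

# rados -p .rgw.buckets.index ls | head                            # the index objects exist...
# rados -p .rgw.buckets.index stat .dir.default.12345.1            # ...but report size 0
# rados -p .rgw.buckets.index listomapkeys .dir.default.12345.1 | wc -l   # the real content is omap keys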
> On 20 December 2016 at 11:50, Kees Meijs wrote:
>
>
> Hi Wido,
>
> Thanks again! Good to hear, it saves us a lot of upgrade trouble in advance.
>
> If I'm not mistaken, we haven't done anything with CRUSH tunables. Any
> pointers on how to make sure we really didn't?
>
If
Hi Wido,
Thanks again! Good to hear, it saves us a lot of upgrade trouble in advance.
If I'm not mistaken, we haven't done anything with CRUSH tunables. Any
pointers on how to make sure we really didn't?
Regards,
Kees
On 20-12-16 10:14, Wido den Hollander wrote:
> No, you don't. A Hammer/Jewel
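For what it's worth, the tunables currently in effect can be inspected directly (a sketch; "profile" should read something like "default" or "hammer", while "unknown" usually indicates hand-tuned values):

# ceph osd crush show-tunables
# ceph osd getcrushmap -o /tmp/crushmap && crushtool -d /tmp/crushmap | head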
Hi,
I was playing with oVirt/Cinder integration and ran into this issue. At the same
time, virsh on CentOS 7.3 was working fine with RBD images. So, as a workaround,
the following procedure can be used to permanently set the secret on the libvirt host:
# vi /tmp/secret.xml
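The rest of the snippet is cut off here; for completeness, the usual shape of that procedure looks like the following (a sketch with a placeholder UUID and a client.cinder user, not necessarily the author's exact steps):

# cat /tmp/secret.xml
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
# virsh secret-define --file /tmp/secret.xml
# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(ceph auth get-key client.cinder)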
> On 20 December 2016 at 9:50, Kees Meijs wrote:
>
>
> Hi Wido,
>
> At the moment, we're running Ubuntu 14.04 LTS using the Ubuntu Cloud
> Archive. To be precise again, it's QEMU/KVM 2.3+dfsg-5ubuntu9.4~cloud2
> linked to Ceph 0.94.8-0ubuntu0.15.10.1~cloud0.
>
> So yes, it's
> On 20 December 2016 at 3:24, Gerald Spencer wrote:
>
>
> Hello all,
>
> We're currently waiting on a delivery of equipment for a small 50TB proof
> of concept cluster, and I've been lurking/learning a ton from you. Thanks
> for how active everyone is.
>
>
> On 20 December 2016 at 0:52, Francois Lafont
> wrote:
>
>
> Hi,
>
> On 12/19/2016 09:58 PM, Ken Dreyer wrote:
>
> > I looked into this again on a Trusty VM today. I set up a single
> > mon+osd cluster on v10.2.3, with the following:
> >
> > # status
Hi Wido,
At the moment, we're running Ubuntu 14.04 LTS using the Ubuntu Cloud
Archive. To be precise again, it's QEMU/KVM 2.3+dfsg-5ubuntu9.4~cloud2
linked to Ceph 0.94.8-0ubuntu0.15.10.1~cloud0.
So yes, it's all about running a newer QEMU/KVM on a not so new version
of Ubuntu.
Question is, are
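One way to double-check which librbd a given QEMU binary actually ends up using (a sketch; package names as on Ubuntu with the Cloud Archive, and on builds where the rbd driver is a separate module the ldd check may need to point at that module instead):

# ldd $(which qemu-system-x86_64) | grep -i rbd
# dpkg -l librbd1 qemu-system-x86 | grep '^ii'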