I would try 'mv /etc/ceph/osd{,.old}' then run 'ceph-volume simple scan'
again. We had some problems upgrading due to OSDs (perhaps initially
installed as firefly?) missing the 'type' attribute, and IIRC the
'ceph-volume simple scan' command refused to overwrite existing json files
after I made
Does 'ceph-volume lvm list' show it? If so you can try to activate it with
'ceph-volume lvm activate 122 74b01ec2-124d-427d-9812-e437f90261d4'
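To put the two suggestions together, roughly (the fsid below is the one
from your earlier output; 'ceph-volume lvm list' should confirm it):

# if it was a 'simple' (ceph-disk style) OSD:
mv /etc/ceph/osd{,.old}
ceph-volume simple scan
ceph-volume simple activate --all

# if it's an LVM OSD:
ceph-volume lvm list
ceph-volume lvm activate 122 74b01ec2-124d-427d-9812-e437f90261d4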
Bob
On Tue, May 14, 2019 at 7:35 AM Tarek Zegar wrote:
> Someone nuked an OSD that had 1 replica PGs. They accidentally did echo 1
> >
I'd recommend running through these steps and posting the output as well
http://docs.ceph.com/docs/master/rados/troubleshooting/memory-profiling/
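From memory the gist of those steps is something like this (see the doc
for the exact sequence and for feeding the dumps into pprof):

ceph tell osd.0 heap start_profiler
ceph tell osd.0 heap dump
ceph tell osd.0 heap stats
ceph tell osd.0 heap stop_profiler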
Bob
On Sat, Apr 15, 2017 at 5:39 AM, Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:
> How many PGs do you have? And did you change any
You can operate without the default pools without issue.
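If you want to drop them, the syntax is roughly this (double-check the
names with 'ceph osd lspools' first):

ceph osd lspools
ceph osd pool delete data data --yes-i-really-really-mean-it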
On Fri, Mar 24, 2017 at 1:23 PM, mj wrote:
> Hi,
>
> On the docs on pools
> http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ it says:
>
> The default pools are:
>
> * data
> * metadata
> * rbd
>
Blair,
Please do follow up with your findings. I've built Samba v4.4.3 packages
for CentOS 7, updated the kernel to v4.5.4, and tried a number of different
configurations including kernel mounting at /cephfs and sharing /cephfs/dir
without using vfs_ceph, and using vfs_ceph and targeting /dir in
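For reference, the vfs_ceph share stanza I was testing looked roughly like
this (the share name is a placeholder):

[share]
    path = /dir
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    read only = no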
George,
Check the instructions here which should allow you to test your crush rules
without applying them to your cluster.
http://dachary.org/?p=3189
Also, FWIW, we are not using an 'emit' after each choose (note these rules
are not implementing what you're trying to) -
# rules
rule
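The workflow from that link boils down to decompiling the crushmap,
editing it, and test-mapping it offline, roughly:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt, then recompile and test the rule of interest
crushtool -c crushmap.txt -o crushmap.new
crushtool --test -i crushmap.new --rule 0 --num-rep 3 --show-mappings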
Kees,
See http://dachary.org/?p=3189 for some simple instructions on testing your
crush rule logic.
Bob
On Wed, Jul 6, 2016 at 7:07 AM, Kees Meijs wrote:
> Hi Micha,
>
> Thank you very much for your prompt response. In an earlier process, I
> already ran:
> > $ ceph tell osd.*
Yang,
We've got some Proxmox hosts which are still running firefly and appear to
be working fine with Jewel. We did have a problem where the firefly clients
wouldn't communicate with the ceph cluster due to mismatched capabilities
flags, but this was resolved by setting "ceph osd crush tunables
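i.e. something along these lines (the profile name below is just an
example; pick the newest profile your oldest clients support):

ceph osd crush show-tunables
ceph osd crush tunables hammer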
I'd guess you previously removed an osd.0 but forgot to perform 'ceph auth
del osd.0'
'ceph auth list' might show some other stray keys.
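e.g.:

ceph auth list
ceph auth del osd.0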
Bob
On Mon, Apr 4, 2016 at 9:52 PM, wrote:
> Hi,
>
>
>
> I keep getting this error while trying to activate:
>
>
>
> [root@mon01 ceph]#
Check your firewall rules
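If you're on a firewalld-based distro, opening the ceph ports looks
something like this (6789 for the mons, 6800-7300 for the OSDs by default):

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload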
On Fri, Apr 1, 2016 at 10:28 AM, Nate Curry wrote:
> I am having some issues with my newly setup cluster. I am able to get all
> of my 32 OSDs to start after setting up udev rules for my journal
> partitions but they keep going down. It did seem
Calvin,
What does your crushmap look like?
ceph osd tree
I find it strange that 1023 PGs are undersized when only one OSD failed.
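To see exactly which PGs are affected and where they currently map:

ceph health detail
ceph pg dump_stuck unclean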
Bob
On Thu, Mar 31, 2016 at 9:27 AM, Calvin Morrow
wrote:
>
>
> On Wed, Mar 30, 2016 at 5:24 PM Christian Balzer wrote:
>
Mike,
Recovery would be based on placement groups and those degraded groups would
only exist on the storage pool(s) rather than the cache tier in this
scenario.
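You can confirm which pool the degraded PGs belong to from the PG ids;
the number before the dot is the pool id:

ceph osd lspools
ceph pg dump_stuck degraded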
Bob
On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller
wrote:
> Hi,
>
> in case of a failure in the storage tier,
Cullen,
We operate a cluster with 4 nodes, each with 2x E5-2630, 64GB RAM, and
10x 4TB spinners. We've recently replaced 2x M550 journals with a single
P3700 NVMe drive per server and didn't see the performance gains we were hoping for.
After making the changes below we're now seeing significantly better
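For context, the commonly-tuned journal/filestore settings in that era
looked something like this (illustrative values only, not necessarily the
exact changes referenced above):

[osd]
journal_max_write_bytes = 104857600
journal_max_write_entries = 10000
filestore_max_sync_interval = 10
filestore_queue_max_ops = 5000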
Bryan,
Once the rest of the cluster was updated to v0.94.5, the OSDs on the one
host running infernalis v9.2.0 now appear to be booting.
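To verify which version each OSD daemon is actually running:

ceph tell osd.* version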
Bob
On Fri, Dec 18, 2015 at 3:44 PM, Bob R <b...@drinksbeer.org> wrote:
> Bryan,
>
> I rebooted another host which wasn't updated to CentOS 7.2
map e25905 with expected crc
> 2015-12-18 16:09:50.983355 7fb5c7f39700 0 log_channel(cluster) log [WRN]
> : failed to encode map e25905 with expected crc
>
> I'm running this on Ubuntu 14.04.3 with the linux-image-generic-lts-wily
> kernel (4.2.0-21.25~14.04.1).
>
> Are you runn
Alex,
It looks like you might have an old repo in there with priority=1 so it's
not trying to install hammer. Try 'mv /etc/yum.repos.d/ceph.repo
/etc/yum.repos.d/ceph.repo.old && mv /etc/yum.repos.d/ceph.repo.rpmnew
/etc/yum.repos.d/ceph.repo' then re-run ceph-deploy.
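You can sanity-check which repo and version would win with:

grep -H priority /etc/yum.repos.d/*.repo
yum list ceph --showduplicates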
Bob
On Thu, Dec 10, 2015 at
We've been operating a cluster relatively incident-free since 0.86. On
Monday I did a yum update on one node, ceph00, and after rebooting we're
seeing every OSD stuck in 'booting' state. I've tried removing all of the
OSDs and recreating them with ceph-deploy (ceph-disk required modification
to
ng some information to
> > monitors making them look like IN?
> >
> > 2015-11-29 2:10 GMT+08:00 Bob R <b...@drinksbeer.org>:
> >> Vasiliy,
> >>
> >> Your OSDs are marked as 'down' but 'in'.
> >>
> >> "Ceph OSDs have two
Vasiliy,
Your OSDs are marked as 'down' but 'in'.
"Ceph OSDs have two known states that can be combined. *Up* and *Down* only
tells you whether the OSD is actively involved in the cluster. OSD states
also are expressed in terms of cluster replication: *In* and *Out*. Only
when a Ceph OSD is
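The commands to inspect and change those states look like this:

ceph osd tree                # shows up/down
ceph osd dump | grep '^osd'  # shows in/out and weights
ceph osd out 3               # manually mark osd.3 out so its PGs remap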
Hello,
We've got two problems trying to update our cluster to infernalis-
ceph-deploy install --release infernalis neb-kvm00
[neb-kvm00][INFO ] Running command: sudo rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[neb-kvm00][INFO ] Running command: sudo rpm -Uvh