Hi,
I am also using CephFS with Docker for the same reason you mentioned, also
on Ubuntu 18.04. I used the kernel client before Nautilus, but now FUSE,
because the kernel client is too old (it might work now with the newest HWE
kernel).
I don't have any problems at all, neither in Portainer nor any other
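For reference, a rough sketch of the two mount approaches; the monitor
address, client name, and paths below are placeholders:

# FUSE client, independent of the kernel version:
ceph-fuse --id docker -m mon1.example.com:6789 /mnt/cephfs

# Kernel client, needs a recent kernel for current CephFS features:
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=docker,secretfile=/etc/ceph/docker.secret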
Thanks for your reply.
We don't maintain it frequently.
My confusion is whether having more monitors gives clients (OSDs, RBD
clients, ...) an advantage when fetching the cluster map.
Do all clients communicate with one monitor of the cluster at the same time?
If not, how does a client decide which monitor to communicate with?
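For context, clients learn the monitor addresses from ceph.conf (or an
explicit monmap); a minimal sketch, assuming three placeholder monitor IPs:

[global]
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3

A client opens a session with a single monitor chosen from that list and
fails over to another one only if the session times out, so adding monitors
is mainly about redundancy and quorum rather than cluster-map fan-out.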
On Mon, Sep 23, 2019 at 6:50 AM Josh Haft wrote:
>
> Hi,
>
> I've been migrating data from one EC pool to another EC pool: two
> directories are mounted with the ceph.dir.layout.pool file attribute set
> appropriately, then rsync from old to new and, finally, delete the old
> files. I'm using the
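The migration pattern described above would look roughly like this; the
pool and directory names are placeholders:

# Point the new directory's layout at the new EC pool:
setfattr -n ceph.dir.layout.pool -v new_ec_pool /mnt/cephfs/new_dir

# Copy the data across, then delete the originals:
rsync -a /mnt/cephfs/old_dir/ /mnt/cephfs/new_dir/
rm -rf /mnt/cephfs/old_dir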
Hi all,
I'm happy to announce that on Oct 16th we will have Ceph Day Argentina in
Buenos Aires. The event will be held at the Museo de Informatica de
Argentina, so apart from hearing about the latest features from core
developers, real use cases from our users, and usage experiences from
customers
On 9/25/19 3:22 PM, nalexand...@innologica.com wrote:
> Hi everyone,
>
> We are running Nautilus 14.2.2 with 6 nodes and a total of 44 OSDs, all
> of them 2TB spinning disks.
> # ceph osd count-metadata osd_objectstore
> "bluestore": 44
> # ceph osd pool get one size
> size: 3
> # ceph df
>
Hi,
I need your advice about the following setup.
Currently, we have a Ceph Nautilus cluster used by OpenStack Cinder, with a
single 10Gbps NIC on the OSD hosts.
We will upgrade the cluster by adding 7 new hosts dedicated to
Nova/Glance, and we would like to add a cluster network to isolate
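The usual way to express that split, assuming the standard
public_network/cluster_network options and placeholder subnets:

[global]
public_network  = 192.168.10.0/24   # client and monitor traffic
cluster_network = 192.168.20.0/24   # OSD replication/recovery traffic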
Hi everyone,
We are running Nautilus 14.2.2 with 6 nodes and a total of 44 OSDs, all of
them 2TB spinning disks.
# ceph osd count-metadata osd_objectstore
"bluestore": 44
# ceph osd pool get one size
size: 3
# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW
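As a rough sanity check on those numbers: 44 OSDs x 2 TB gives about 88 TB
raw, and with size=3 replication that is roughly 88 / 3 = ~29 TB of usable
space, before BlueStore overhead and nearfull headroom are taken into
account.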
Hi Reed,
I'm not sure what is meant by the grouping / chassis and "set your
failure domain to chassis", respectively.
This is my current crush map:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable
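For reference, moving to a chassis failure domain generally means adding
chassis buckets between the root and the hosts; a sketch with placeholder
bucket and host names:

# Create a chassis bucket and place it under the default root:
ceph osd crush add-bucket chassis1 chassis
ceph osd crush move chassis1 root=default

# Move hosts under the chassis:
ceph osd crush move host1 chassis=chassis1
ceph osd crush move host2 chassis=chassis1

A replicated rule can then use "step chooseleaf firstn 0 type chassis" so
that each copy lands in a different chassis.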
After updating the CRUSH rule from
rule cephfs_ec {
	id 1
	type erasure
	min_size 8
	max_size 8
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take default
	step choose indep 4 type host
	step choose indep 2 type osd
	step emit
}
to
rule cephfs_ec {
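Rule edits like this are normally applied by decompiling, editing, and
re-injecting the CRUSH map; a sketch of that workflow, with placeholder
file names:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt, e.g. the cephfs_ec rule above
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new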