There is no need to run two clusters.
You can run a single cluster with separate pools and CRUSH rules so that
the RBD pool and the CephFS pools are kept apart and use different disks / PGs.
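Something along these lines should do it (the rule/pool names, PG counts
and device classes below are only placeholders, adjust them to your
hardware):
# ceph osd crush rule create-replicated rbd-hdd default host hdd
# ceph osd crush rule create-replicated cephfs-ssd default host ssd
# ceph osd pool create rbd 128 128 replicated rbd-hdd
# ceph osd pool create cephfs_data 128 128 replicated cephfs-ssd
# ceph osd pool create cephfs_metadata 32 32 replicated cephfs-ssd
# ceph osd pool application enable rbd rbd
# ceph fs new cephfs cephfs_metadata cephfs_data
That way the two workloads never share OSDs, but you still only have one
set of mons/mgrs to manage.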
,Ash
On Thu, Mar 28, 2019 at 2:25 PM wrote:
> Hi,
>
> I'd like to configure a cluster with two Ceph clients (RBD +
Hi,
I'd like to configure a cluster with two Ceph clients (RBD + CephFS), each
using separate hard disks but running on the same hardware, with the Ceph
Mimic version.
Can anyone advise me on running multiple Ceph clusters on the same hardware?
Do you have any tools to suggest instead of the ceph-deploy --cluster
command, or any other suggestions?
On Thu, Mar 28, 2019 at 8:33 AM solarflow99 wrote:
>
> Yes, but nothing seems to happen. I don't understand why it lists OSD 7 in
> the "recovery_state" when I'm only using 3 replicas and it seems to use
> 41, 38, 8.
Well, osd 8's state is listed as
"active+undersized+degraded+remapped+wait_bac
Yes, but nothing seems to happen. I don't understand why it lists OSD 7
in the "recovery_state" when I'm only using 3 replicas and it seems to
use 41, 38, 8.
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 47 scrub errors
pg 10.2a is active+clean+inconsistent, acting [41,38,8]
47 scrub errors
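For what it's worth, the usual sequence for an inconsistent PG like this
one is roughly the following (10.2a taken from the output above):
# rados list-inconsistent-obj 10.2a --format=json-pretty
# ceph pg repair 10.2a
# ceph pg 10.2a query
The last command lets you watch "recovery_state" while the repair runs.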
Thanks, I resized using the base tier pool and the RBD is back online
without a problem.
Regarding the deprecation, our initial design and setup was well suited to
cache tiering for the expected workload. Today (3-4 releases later), with
CephFS and our real-world experience, we would
Anecdotally, I see the same behaviour, but there seem to be no negative
side-effects. The “jewel” clients below are more than likely the (Linux)
kernel client:
[cinder] root@aurae-dashboard:~# ceph features
{
    "mon": [
        {
            "features": "0x3ffddff8ffac",
            "rele
For upstream, "deprecated" might be too strong of a word; however, it
is strongly cautioned against using [1]. There is ongoing work to
replace cache tiering with a new implementation that hopefully works
better and avoids lots of the internal edge cases that the cache
tiering v1 design required.
Sure
# ceph versions
{
    "mon": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)": 2
    },
    "osd": {
        "ceph version 14.2.0 (3a54
We recently updated a cluster to the Nautilus release by updating Debian
packages from the Ceph site, then rebooted all servers.
ceph features still reports older releases, for example for the osd:
    "osd": [
        {
            "features": "0x3ffddff8ffac",
            "release": "luminous",
March 27, 2019 1:09 PM, "Fyodor Ustinov" <u...@ufm.su> wrote:
> Tiering - deprecated? Where can I read more about this?
Looks like it was deprecated in Red Hat Ceph Storage in 2016:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/thread.html#13867
Hi!
When using cache pools (which are essentially deprecated functionality
BTW), you should always reference the base tier pool. The fact that a
cache tier sits in front of a slower base tier is handled transparently.
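In practice that just means pointing rbd at the base pool, e.g. (pool and
image names below are made up):
# rbd resize --size 200G rbd-base/myimage
# rbd info rbd-base/myimage
The I/O is redirected through the cache tier for you.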
Tiering - deprecated? Where can I read more about this?
WBR,
Fyodor.
Hello,
On Wed, Mar 27, 2019 at 1:19 AM Jason Dillaman wrote:
> When using cache pools (which are essentially deprecated functionality
> BTW), you should always reference the base tier pool.
Could you point to more details about the plan to deprecate cache tiers?
AFAIK, and as far as the documentat