Re: [ceph-users] ceph balancer do not start

2019-10-24 Thread Jan Peters
Hi Konstantin,

The connections are coming from qemu VM clients.

Best regards
 
 

Sent: Thursday, 24 October 2019 at 09:47
From: "Konstantin Shalygin" 
To: ceph-users@lists.ceph.com, "Jan Peters" 
Subject: Re: [ceph-users] ceph balancer do not start
 
Hi,

ceph features
{
"mon": {
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 3
}
},
"osd": {
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 40
}
},
"client": {
"group": {
"features": "0x27fddff8ee8cbffb",
"release": "jewel",
"num": 813
},
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 3
}
}
}
Yes, 0x27fddff8ee8cbffb does not support upmap. Are these kernel clients or 
qemu VMs?
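As an aside, whether upmap is usable hinges entirely on those client feature groups. A small illustrative sketch, not from the thread, that flags client groups older than luminous; the JSON shape is an assumption modelled on `ceph features -f json`-style output, with the values taken from the dump above:

```python
import json

# Sample input shaped like `ceph features -f json` output (schema assumed;
# it may differ between Ceph releases). Values are from the thread above.
FEATURES_JSON = """
{
  "client": {
    "group": [
      {"features": "0x27fddff8ee8cbffb", "release": "jewel", "num": 813},
      {"features": "0x3ffddff8eeacfffb", "release": "luminous", "num": 3}
    ]
  }
}
"""

def old_client_groups(features_doc, minimum="luminous"):
    """Return client groups whose release predates `minimum`."""
    order = ["jewel", "kraken", "luminous"]  # releases relevant here
    floor = order.index(minimum)
    groups = features_doc.get("client", {}).get("group", [])
    if isinstance(groups, dict):  # tolerate a single group object
        groups = [groups]
    return [g for g in groups
            if g["release"] in order and order.index(g["release"]) < floor]

doc = json.loads(FEATURES_JSON)
for g in old_client_groups(doc):
    print(f'{g["num"]} client(s) on {g["release"]} ({g["features"]})')
# -> 813 client(s) on jewel (0x27fddff8ee8cbffb)
```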
 
 
 
k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph balancer do not start

2019-10-24 Thread Jan Peters
Hi,

ceph features
{
"mon": {
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 3
}
},
"osd": {
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 40
}
},
"client": {
"group": {
"features": "0x27fddff8ee8cbffb",
"release": "jewel",
"num": 813
},
"group": {
"features": "0x3ffddff8eeacfffb",
"release": "luminous",
"num": 3
}
}
}

 
 

Sent: Thursday, 24 October 2019 at 06:03
From: "Konstantin Shalygin" 
To: ceph-users@lists.ceph.com, "Jan Peters" 
Subject: Re: [ceph-users] ceph balancer do not start
 
root@ceph-mgr:~# ceph balancer mode upmap
root@ceph-mgr:~# ceph balancer optimize myplan
root@ceph-mgr:~# ceph balancer show myplan
# starting osdmap epoch 409753
# starting crush version 84
# mode upmap
ceph osd pg-upmap-items 4.18e 34 13
ceph osd pg-upmap-items 4.36d 24 20
ceph osd pg-upmap-items 7.2 10 15
ceph osd pg-upmap-items 7.3 24 20 4 17
ceph osd pg-upmap-items 7.4 0 16 4 25
ceph osd pg-upmap-items 7.5 19 2 8 13
ceph osd pg-upmap-items 7.7 8 21
root@ceph-mgr:~# ceph balancer execute myplan
Error EPERM: min_compat_client jewel < luminous, which is required for
pg-upmap. Try 'ceph osd set-require-min-compat-client luminous' before using
the new interface
root@ceph-mgr:~# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 811 connected
client(s) look like jewel (missing 0x820); add
--yes-i-really-mean-it to do it anyway
root@ceph-mgr:~#
 
What is your `ceph features`?
 
 
k



Re: [ceph-users] ceph balancer do not start

2019-10-23 Thread Jan Peters

 Hi David,

thank you. Unfortunately, I can't use the upmap mode at the moment.
 

root@ceph-mgr:~# ceph balancer mode upmap
root@ceph-mgr:~# ceph balancer optimize myplan
root@ceph-mgr:~# ceph balancer show myplan
# starting osdmap epoch 409753
# starting crush version 84
# mode upmap
ceph osd pg-upmap-items 4.18e 34 13
ceph osd pg-upmap-items 4.36d 24 20
ceph osd pg-upmap-items 7.2 10 15
ceph osd pg-upmap-items 7.3 24 20 4 17
ceph osd pg-upmap-items 7.4 0 16 4 25
ceph osd pg-upmap-items 7.5 19 2 8 13
ceph osd pg-upmap-items 7.7 8 21
root@ceph-mgr:~# ceph balancer execute myplan
Error EPERM: min_compat_client jewel < luminous, which is required for 
pg-upmap. Try 'ceph osd set-require-min-compat-client luminous' before using 
the new interface
root@ceph-mgr:~# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 811 connected 
client(s) look like jewel (missing 0x820); add 
--yes-i-really-mean-it to do it anyway
root@ceph-mgr:~#
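For reference, once the jewel-feature clients have been upgraded (or confirmed to be false positives), the usual way past the EPERM above is roughly the following sketch, assembled from the commands already shown in the transcript; the override flag forcibly skips the connected-client check, so verify `ceph features` first:

```shell
# Re-check which client feature groups are still connected.
ceph features

# Raise the compat floor. The override skips the jewel-client check;
# only do this once you know those clients actually understand upmap.
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it

# Now an upmap plan can be built and executed.
ceph balancer mode upmap
ceph balancer optimize myplan
ceph balancer execute myplan
```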


Best regards

Jan

Sent: Wednesday, 23 October 2019 at 05:42
From: "David Turner" 
To: "Jan Peters" 
Cc: ceph-users 
Subject: Re: [ceph-users] ceph balancer do not start

Off the top of my head, I'd say your cluster might have the wrong tunables for 
crush-compat. I know I ran into that when I first set up the balancer, and 
nothing obviously said that was the problem; only researching found it for me.
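The tunables check mentioned above can be sketched with the stock CLI (profile names are the standard Ceph ones; note that changing tunables triggers data movement):

```shell
# Show the CRUSH tunables currently in effect.
ceph osd crush show-tunables

# If the profile is very old, crush-compat may be unable to improve the
# distribution. Raising it helps, but WARNING: it causes rebalancing:
#   ceph osd crush tunables optimal
```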
 
My real question, though, is why aren't you using upmap? It is significantly 
better than crush-compat. Unless you have clients on really old kernels, or on 
pre-luminous Ceph versions, that can't be updated, there's really no reason 
not to use upmap.

On Mon, Oct 21, 2019, 8:08 AM Jan Peters <haseni...@gmx.de> wrote:
Hello,

I use ceph 12.2.12 and would like to activate the ceph balancer.

Unfortunately, no redistribution of the PGs has started:

ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "crush-compat"
}

ceph balancer eval
current cluster score 0.023776 (lower is better)


ceph config-key dump
{
    "initial_mon_keyring":
"AQBLchlbABAA+5CuVU+8MB69xfc3xAXkjQ==",
    "mgr/balancer/active": "1",
    "mgr/balancer/max_misplaced:": "0.01",
    "mgr/balancer/mode": "crush-compat"
}


What am I not doing correctly?

best regards


[ceph-users] ceph balancer do not start

2019-10-21 Thread Jan Peters
Hello,

I use ceph 12.2.12 and would like to activate the ceph balancer.

Unfortunately, no redistribution of the PGs has started:

ceph balancer status
{
"active": true,
"plans": [],
"mode": "crush-compat"
}

ceph balancer eval
current cluster score 0.023776 (lower is better)


ceph config-key dump
{
"initial_mon_keyring":
"AQBLchlbABAA+5CuVU+8MB69xfc3xAXkjQ==",
"mgr/balancer/active": "1",
"mgr/balancer/max_misplaced:": "0.01",
"mgr/balancer/mode": "crush-compat"
}
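An editorial aside on the dump above: the key is stored as `mgr/balancer/max_misplaced:` with a trailing colon, so the balancer module presumably falls back to its default instead of the intended 0.01. A hedged sketch of re-setting it under the expected name:

```shell
# The mistyped key (trailing colon) is ignored by the balancer module;
# set the value under the expected key name instead.
ceph config-key set mgr/balancer/max_misplaced 0.01

# Toggle the balancer so the mgr re-reads its configuration.
ceph balancer off
ceph balancer on
```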


What am I not doing correctly?

best regards


Re: [ceph-users] Bluestore Hardwaresetup

2018-02-16 Thread Jan Peters

Hi,
 
thank you.
 
The network setup is like this:
 
2 x 10 GBit LACP for public
2 x 10 GBit LACP for clusternetwork
1 x 1 GBit for management 
Yes Joe, the sizing for block.db and block.wal would be interesting!
 
Is there other advice for SSDs, like the blog post from Sébastien Han?:
 
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
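The linked post measures synchronous write performance; a roughly equivalent test with fio is sketched below (the device path is a placeholder, and the test overwrites data on the target):

```shell
# 4k O_DSYNC writes at queue depth 1, approximating journal/WAL traffic.
# WARNING: destructive to /dev/sdX - use a scratch device or a file.
fio --name=ssd-sync-test --filename=/dev/sdX \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting
```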
 
Best regards
 
Peter
 
 

Sent: Friday, 16 February 2018 at 19:09
From: "Joe Comeau" <joe.com...@hli.ubc.ca>
To: "Michel Raabe" <rmic...@devnu11.net>, "Jan Peters" <haseni...@gmx.de>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Bluestore Hardwaresetup

I have a question about block.db and block.wal
 
How big should they be?
Relative to drive size or SSD size?
 
Thanks Joe

>>> Michel Raabe <rmic...@devnu11.net> 2/16/2018 9:12 AM >>>
Hi Peter,

On 02/15/18 @ 19:44, Jan Peters wrote:
> I want to evaluate ceph with bluestore, so I need some hardware/configuration 
> advice from you.
>
> My Setup should be:
>
> 3 Nodes Cluster, on each with:
>
> - Intel Gold Processor SP 5118, 12 core / 2.30Ghz
> - 64GB RAM
> - 6 x 7,2k, 4 TB SAS
> - 2 x SSDs, 480GB

Network?

> On the POSIX FS you have to put your journal on SSDs. What is the best way 
> for bluestore?
>
> Should I configure separate SSDs for block.db and block.wal?

Yes.

Regards,
Michel



[ceph-users] Bluestore Hardwaresetup

2018-02-15 Thread Jan Peters
Hi everybody,

I want to evaluate ceph with bluestore, so I need some hardware/configuration 
advice from you. 

My Setup should be:

3 Nodes Cluster, on each with:

- Intel Gold Processor SP 5118, 12 core / 2.30Ghz
- 64GB RAM
- 6 x 7,2k, 4 TB SAS
- 2 x SSDs, 480GB

On the POSIX FS you have to put your journal on SSDs. What is the best way for 
bluestore? 

Should I configure separate SSDs for block.db and block.wal?

Is there a way to use cache tiering or a cache pool? 
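For the block.db/block.wal question above, the placement itself can be sketched with ceph-volume (luminous-era syntax; the device names are placeholders and the SSD partitions must already exist):

```shell
# One HDD-backed bluestore OSD with DB and WAL on SSD partitions.
# If --block.wal is omitted, the WAL is kept inside block.db.
ceph-volume lvm create --bluestore \
    --data /dev/sdc \
    --block.db /dev/sdb1 \
    --block.wal /dev/sdb2
```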

Thanks in advance

Peter

