Re: [ceph-users] Slow performances on our Ceph Cluster

2017-02-14 Thread Beard Lionel (BOSTON-STORAGE)
Hi,

> On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason wrote:
> > Any idea how we could increase performance? This really impacts our
> > OpenStack MOS 9.0 Mitaka infrastructure; VM spawning can take up to
> > 15 minutes...
>
> Have you configured the Glance RBD store properly? The Mitaka release
> changed the configuration options needed to thinly provision VMs from
> backing Glance images [1].
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#for-mitaka-only

Also, be sure your images in Glance are in RAW format, as copy-on-write
cloning is not supported by Ceph when using the QCOW2 format.
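
For example, an existing QCOW2 image can be converted and re-uploaded
roughly like this (a sketch only; the image file name is hypothetical):

    # convert the image from QCOW2 to RAW
    qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw

    # upload the RAW image to Glance
    openstack image create --disk-format raw --container-format bare \
        --file myimage.raw myimage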

Lionel




Re: [ceph-users] Slow performances on our Ceph Cluster

2017-02-14 Thread Jason Dillaman
On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason  wrote:
> Any idea how we could increase performance? This really impacts our
> OpenStack MOS 9.0 Mitaka infrastructure; VM spawning can take up to 15
> minutes...

Have you configured the Glance RBD store properly? The Mitaka release
changed the configuration options needed to thinly provision VMs from
backing Glance images [1].

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#for-mitaka-only
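
For reference, the RBD settings in glance-api.conf described on that page
look roughly like this (a sketch only: the pool and user names below are
the usual defaults and may differ in your deployment, and the linked page
has the Mitaka-specific details):

    [DEFAULT]
    # expose the RBD location so clients can do copy-on-write clones
    show_image_direct_url = True

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8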

-- 
Jason


[ceph-users] Slow performances on our Ceph Cluster

2017-02-14 Thread David Ramahefason
Hi,

we're having big performance issues on our OCP Ceph rack.
It is designed around 3 storage nodes:

- 2 * Haswell-EP (E5-2620v3)
- 128GB DDR4
- 4 * 240GB SSD
- 2 x 10G Mellanox X3

Each node serves 30 * 4 TB SAS drives (JBOD) via 2 mini-SAS connectors.

So, to summarize:

3 nodes
90 OSDs
18 pools

The Ceph setup was done using Fuel's default configuration mode.

Any idea how we could increase performance? This really impacts our
OpenStack MOS 9.0 Mitaka infrastructure; VM spawning can take up to 15
minutes...

These are the pools we have:

pool 0 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 '.rgw' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 2 flags hashpspool
stripe_width 0
pool 2 'images' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 1021 flags hashpspool
stripe_width 0
removed_snaps [1~6,8~6,f~2,12~1,15~8,1f~7,27~1,29~2,2c~8,35~a]
pool 3 'volumes' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 4096 pgp_num 4096 last_change 1006 flags hashpspool
stripe_width 0
removed_snaps [1~b]
pool 4 'backups' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 516 flags hashpspool
stripe_width 0
pool 5 'compute' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 2048 pgp_num 2048 last_change 1018 flags hashpspool
stripe_width 0
removed_snaps [1~27]
pool 6 '.rgw.root' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 520 flags hashpspool stripe_width
0
pool 7 '.rgw.control' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 64 pgp_num 64 last_change 522 owner
18446744073709551615 flags hashpspool stripe_width 0
pool 8 '.rgw.gc' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 524 flags hashpspool stripe_width
0
pool 9 '.users.uid' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 64 pgp_num 64 last_change 526 owner
18446744073709551615 flags hashpspool stripe_width 0
pool 10 '.users' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 528 flags hashpspool stripe_width
0
pool 12 '.rgw.buckets.index' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 64 pgp_num 64 last_change 568 flags hashpspool
stripe_width 0
pool 13 '.rgw.buckets' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 64 pgp_num 64 last_change 570 owner
18446744073709551615 flags hashpspool stripe_width 0
pool 14 '.rgw.buckets.extra' replicated size 3 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 64 pgp_num 64 last_change 608 flags hashpspool
stripe_width 0
pool 15 '.log' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 614 flags hashpspool stripe_width
0
pool 17 'scbench' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 100 pgp_num 100 last_change 631 flags hashpspool
stripe_width 0
pool 18 'test' replicated size 3 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 813 flags hashpspool
stripe_width 0
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com