Re: [ceph-users] Placement rule not resolved

2015-10-20 Thread ghislain.chevalier
Hi Robert,

Sorry for the late reply.

We finally used a 'step take' at the root on the production platform, even
though I had tested a rule on the sandbox platform with a 'step take' at a
non-root level ... and it worked.
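
For illustration, a minimal sketch of the two variants discussed here, in
decompiled crushmap syntax. The bucket names, ruleset ids and the 'host' type
are assumptions made up for the sketch, not taken from the attached production
map; only 'tier-ssd' echoes the bucket names quoted later in the thread.

# Variant tested on the sandbox: 'step take' on a bucket that is NOT a root,
# e.g. tier-ssd nested under tier-root.
rule ssd_nonroot_take {
        ruleset 10                          # illustrative id
        type replicated
        min_size 1
        max_size 10
        step take tier-ssd                  # non-root bucket
        step chooseleaf firstn 0 type host  # one osd per distinct host
        step emit
}

# Variant kept in production: the ssd tree is declared as its own root
# bucket and the rule takes that root.
rule ssd_root_take {
        ruleset 11                          # illustrative id
        type replicated
        min_size 1
        max_size 10
        step take ssd-root                  # bucket of type 'root'
        step chooseleaf firstn 0 type host
        step emit
}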

Brgds

-----Original Message-----
From: Robert LeBlanc [mailto:rob...@leblancnet.us]
Sent: Tuesday, October 6, 2015 17:55
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-users
Subject: Re: [ceph-users] Placement rule not resolved


I've only done a 'step take <bucket>' where <bucket> is a root entry. I haven't
tried it with the bucket being under the root. I would suspect it would work,
but you can try to put your tiers in a root section and test it there.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Tue, Oct 6, 2015 at 6:18 AM, ghislain.cheval...@orange.com wrote:
> Hi,
>
>
>
> Context:
>
> Firefly 0.80.9
>
> 8 storage nodes
>
> 176 OSDs: 14×8 SAS and 8×8 SSD
>
> 3 monitors
>
>
>
> I created an alternate crushmap in order to fulfill a tiering requirement,
> i.e. to select SSD or SAS.
>
> I created specific buckets “host-ssd” and “host-sas” and regrouped them
> in “tier-ssd” and “tier-sas” under a “tier-root”.
>
> E.g. I want to select 1 SSD in 3 distinct hosts.
>
>
>
> I don’t understand why the placement rule for SAS is working but not
> the one for SSD.
>
> SAS disks are selected even though, according to the crushmap, they are not
> in the right tree.
>
> When 3 SSDs are sometimes selected, the pgs stay stuck but active.
>
>
>
> I attached the crushmap and ceph osd tree.
>
>
>
> Can someone have a look and tell me where the fault is?
>
>
>
> Bgrds
>
> - - - - - - - - - - - - - - - - -
> Ghislain Chevalier
> ORANGE/IMT/OLPS/ASE/DAPI/CSE
>
> Storage infrastructure services architect
>
> Software-Defined Storage Architect
> +33299124432
>
> +33788624370
> ghislain.cheval...@orange.com
>
> Please consider the environment before printing this message!
>
>
>


Re: [ceph-users] Placement rule not resolved

2015-10-12 Thread ghislain.chevalier
Hi all,

After the cluster was installed, all the disks (SAS and SSD) were mixed under a
host, so the calculated reweight was related to the entire capacity.

That still doesn't explain why SAS disks were selected when using a specific
SSD-driven rule.

Brgds

From: CHEVALIER Ghislain IMT/OLPS
Sent: Thursday, October 8, 2015 10:47
To: ceph-users; ceph-de...@vger.kernel.org
Subject: RE: [ceph-users] Placement rule not resolved

Hi all,

I hadn't noticed that the osd reweight for the SSDs was curiously set to a low
value.
I don't know how and when these values were set so low.
Our environment is Mirantis-driven and the installation was powered by Fuel and
Puppet.
(The installation was run by the OpenStack team and I checked the Ceph cluster
configuration afterwards.)

After reweighting them to 1, the Ceph cluster is working properly.
Thanks to the object lookup module of inkscope, I checked that the osd
allocation was OK.

What is not normal is that CRUSH tried to allocate osds that are not targeted
by the rule, in this case SAS disks instead of SSD disks.
Shouldn't the cluster's normal behavior in that case, i.e. the pg allocation,
be to freeze?
I can say that because when I analyzed the stuck pgs (inkscope module), I
noticed that the osd allocation for these pgs was either incorrect (acting
list) or incomplete.

Best regards

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
ghislain.cheval...@orange.com
Sent: Tuesday, October 6, 2015 14:18
To: ceph-users
Subject: [ceph-users] Placement rule not resolved

Hi,

Context:
Firefly 0.80.9
8 storage nodes
176 OSDs: 14×8 SAS and 8×8 SSD
3 monitors

I created an alternate crushmap in order to fulfill a tiering requirement, i.e.
to select SSD or SAS.
I created specific buckets "host-ssd" and "host-sas" and regrouped them in
"tier-ssd" and "tier-sas" under a "tier-root".
E.g. I want to select 1 SSD in 3 distinct hosts.

I don't understand why the placement rule for SAS is working but not the one
for SSD.
SAS disks are selected even though, according to the crushmap, they are not in
the right tree.
When 3 SSDs are sometimes selected, the pgs stay stuck but active.

I attached the crushmap and ceph osd tree.

Can someone have a look and tell me where the fault is?

Bgrds
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
ORANGE/IMT/OLPS/ASE/DAPI/CSE
Storage infrastructure services architect
Software-Defined Storage Architect
+33299124432
+33788624370
ghislain.cheval...@orange.com
Please consider the environment before printing this message!





Re: [ceph-users] Placement rule not resolved

2015-10-08 Thread ghislain.chevalier
Hi all,

I hadn't noticed that the osd reweight for the SSDs was curiously set to a low
value.
I don't know how and when these values were set so low.
Our environment is Mirantis-driven and the installation was powered by Fuel and
Puppet.
(The installation was run by the OpenStack team and I checked the Ceph cluster
configuration afterwards.)

After reweighting them to 1, the Ceph cluster is working properly.
Thanks to the object lookup module of inkscope, I checked that the osd
allocation was OK.
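
For reference, a minimal sketch of how the override reweight can be inspected
and reset with the standard ceph CLI; the osd ids in the loop are illustrative,
the real ones come from the REWEIGHT column of 'ceph osd tree'.

# The REWEIGHT column (0..1) is the override reweight, separate from the
# CRUSH weight shown in the WEIGHT column.
ceph osd tree

# Reset the override reweight of an SSD osd back to 1.0 (repeat per osd,
# or loop over the affected ids -- 112..175 is only an example range).
ceph osd reweight 112 1.0
for id in $(seq 112 175); do ceph osd reweight "$id" 1.0; done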

What is not normal is that CRUSH tried to allocate osds that are not targeted
by the rule, in this case SAS disks instead of SSD disks.
Shouldn't the cluster's normal behavior in that case, i.e. the pg allocation,
be to freeze?
I can say that because when I analyzed the stuck pgs (inkscope module), I
noticed that the osd allocation for these pgs was either incorrect (acting
list) or incomplete.
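
A sketch of how the stuck pgs and their acting sets can be cross-checked from
the CLI; the pg id used below is made up for the example.

# List pgs stuck in an unclean state.
ceph pg dump_stuck unclean

# Map a single pg to its up/acting osd sets and compare them with the
# osds that the ssd rule should be allowed to pick.
ceph pg map 4.3a
ceph pg 4.3a query     # full peering info, including the acting list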

Best regards

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
ghislain.cheval...@orange.com
Sent: Tuesday, October 6, 2015 14:18
To: ceph-users
Subject: [ceph-users] Placement rule not resolved

Hi,

Context:
Firefly 0.80.9
8 storage nodes
176 OSDs: 14×8 SAS and 8×8 SSD
3 monitors

I created an alternate crushmap in order to fulfill a tiering requirement, i.e.
to select SSD or SAS.
I created specific buckets "host-ssd" and "host-sas" and regrouped them in
"tier-ssd" and "tier-sas" under a "tier-root".
E.g. I want to select 1 SSD in 3 distinct hosts.

I don't understand why the placement rule for SAS is working but not the one
for SSD.
SAS disks are selected even though, according to the crushmap, they are not in
the right tree.
When 3 SSDs are sometimes selected, the pgs stay stuck but active.

I attached the crushmap and ceph osd tree.

Can someone have a look and tell me where the fault is?

Bgrds
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
ORANGE/IMT/OLPS/ASE/DAPI/CSE
Storage infrastructure services architect
Software-Defined Storage Architect
+33299124432
+33788624370
ghislain.cheval...@orange.com
Please consider the environment before printing this message!





Re: [ceph-users] Placement rule not resolved

2015-10-06 Thread Robert LeBlanc

I've only done a 'step take <bucket>' where <bucket> is a root entry. I haven't
tried it with the bucket being under the root. I would suspect it would work,
but you can try to put your tiers in a root section and test it there.
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Tue, Oct 6, 2015 at 6:18 AM, ghislain.cheval...@orange.com wrote:
> Hi,
>
>
>
> Context:
>
> Firefly 0.80.9
>
> 8 storage nodes
>
> 176 OSDs: 14×8 SAS and 8×8 SSD
>
> 3 monitors
>
>
>
> I created an alternate crushmap in order to fulfill a tiering requirement,
> i.e. to select SSD or SAS.
>
> I created specific buckets “host-ssd” and “host-sas” and regrouped them in
> “tier-ssd” and “tier-sas” under a “tier-root”.
>
> E.g. I want to select 1 SSD in 3 distinct hosts.
>
>
>
> I don’t understand why the placement rule for SAS is working but not the one
> for SSD.
>
> SAS disks are selected even though, according to the crushmap, they are not
> in the right tree.
>
> When 3 SSDs are sometimes selected, the pgs stay stuck but active.
>
>
>
> I attached the crushmap and ceph osd tree.
>
>
>
> Can someone have a look and tell me where the fault is?
>
>
>
> Bgrds
>
> - - - - - - - - - - - - - - - - -
> Ghislain Chevalier
> ORANGE/IMT/OLPS/ASE/DAPI/CSE
>
> Storage infrastructure services architect
>
> Software-Defined Storage Architect
> +33299124432
>
> +33788624370
> ghislain.cheval...@orange.com
>
> Please consider the environment before printing this message!
>
>
>



[ceph-users] Placement rule not resolved

2015-10-06 Thread ghislain.chevalier
Hi,

Context:
Firefly 0.80.9
8 storage nodes
176 OSDs: 14×8 SAS and 8×8 SSD
3 monitors

I created an alternate crushmap in order to fulfill a tiering requirement, i.e.
to select SSD or SAS.
I created specific buckets "host-ssd" and "host-sas" and regrouped them in
"tier-ssd" and "tier-sas" under a "tier-root".
E.g. I want to select 1 SSD in 3 distinct hosts.
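
For illustration, a minimal sketch of such a hierarchy and the ssd rule in
decompiled crushmap form. The bucket names in quotes above come from the
thread; every id, weight, the 'tier' type name and the ruleset number are
assumptions made up for the sketch.

host host-ssd-1 {
        id -21                  # illustrative id
        alg straw
        hash 0
        item osd.112 weight 0.400
        item osd.113 weight 0.400
}

tier tier-ssd {
        id -31
        alg straw
        hash 0
        item host-ssd-1 weight 0.800
        # ... one host-ssd bucket per storage node
}

root tier-root {
        id -41
        alg straw
        hash 0
        item tier-ssd weight 6.400
        item tier-sas weight 80.000
}

# "select 1 ssd in 3 distinct hosts": enter the ssd tier and pick one
# leaf (osd) per distinct host, up to the pool size.
rule ssd_rule {
        ruleset 3               # illustrative id
        type replicated
        min_size 1
        max_size 10
        step take tier-ssd
        step chooseleaf firstn 0 type host
        step emit
}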

I don't understand why the placement rule for SAS is working but not the one
for SSD.
SAS disks are selected even though, according to the crushmap, they are not in
the right tree.
When 3 SSDs are sometimes selected, the pgs stay stuck but active.

I attached the crushmap and ceph osd tree.
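
One way to narrow this down offline is to recompile the map and simulate the
rule with crushtool; the file names and the ruleset id 3 below are assumptions
for the example.

# Compile the edited text map, then simulate placements for the ssd rule.
crushtool -c crushmap.txt -o crushmap.bin
crushtool --test -i crushmap.bin --rule 3 --num-rep 3 --show-mappings

# Only report inputs that do not get the requested number of replicas.
crushtool --test -i crushmap.bin --rule 3 --num-rep 3 --show-bad-mappings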

Can someone have a look and tell me where the fault is?

Bgrds
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
ORANGE/IMT/OLPS/ASE/DAPI/CSE
Storage infrastructure services architect
Software-Defined Storage Architect
+33299124432
+33788624370
ghislain.cheval...@orange.com
Please consider the environment before printing this message!



# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46
device 47 osd.47
device 48 osd.48
device 49 osd.49
device 50 osd.50
device 51 osd.51
device 52 osd.52
device 53 osd.53
device 54 osd.54
device 55 osd.55
device 56 osd.56
device 57 osd.57
device 58 osd.58
device 59 osd.59
device 60 osd.60
device 61 osd.61
device 62 osd.62
device 63 osd.63
device 64 osd.64
device 65 osd.65
device 66 osd.66
device 67 osd.67
device 68 osd.68
device 69 osd.69
device 70 osd.70
device 71 osd.71
device 72 osd.72
device 73 osd.73
device 74 osd.74
device 75 osd.75
device 76 osd.76
device 77 osd.77
device 78 osd.78
device 79 osd.79
device 80 osd.80
device 81 osd.81
device 82 osd.82
device 83 osd.83
device 84 osd.84
device 85 osd.85
device 86 osd.86
device 87 osd.87
device 88 osd.88
device 89 osd.89
device 90 osd.90
device 91 osd.91
device 92 osd.92
device 93 osd.93
device 94 osd.94
device 95 osd.95
device 96 osd.96
device 97 osd.97
device 98 osd.98
device 99 osd.99
device 100 osd.100
device 101 osd.101
device 102 osd.102
device 103 osd.103
device 104 osd.104
device 105 osd.105
device 106 osd.106
device 107 osd.107
device 108 osd.108
device 109 osd.109
device 110 osd.110
device 111 osd.111
device 112 osd.112
device 113 osd.113
device 114 osd.114
device 115 osd.115
device 116 osd.116
device 117 osd.117
device 118 osd.118
device 119 osd.119
device 120 osd.120
device 121 osd.121
device 122 osd.122
device 123 osd.123
device 124 osd.124
device 125 osd.125
device 126 osd.126
device 127 osd.127
device 128 osd.128
device 129 osd.129
device 130 osd.130
device 131 osd.131
device 132 osd.132
device 133 osd.133
device 134 osd.134
device 135 osd.135
device 136 osd.136
device 137 osd.137
device 138 osd.138
device 139 osd.139
device 140 osd.140
device 141 osd.141
device 142 osd.142
device 143 osd.143
device 144 osd.144
device 145 osd.145
device 146 osd.146
device 147 osd.147
device 148 osd.148
device 149 osd.149
device 150 osd.150
device 151 osd.151
device 152 osd.152
device 153 osd.153
device 154 osd.154
device 155 osd.155
device 156 osd.156
device 157 osd.157
device 15