Re: [ceph-users] Placement rule not resolved

2015-10-20 Thread ghislain.chevalier
Hi Robert,

Sorry for replying late

We finally used a 'step take' at a root bucket on the production platform, even though I had tested a rule with a 'step take' at a non-root level on the sandbox platform... and it worked there.
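
For reference, a rule change like this can also be dry-run offline before pushing it to a production map. A minimal sketch, assuming the rule number and file names below (they are illustrative, not taken from the actual map):

    # compile the edited text map, then test the rule's mappings
    crushtool -c crushmap.txt -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 3 --num-rep 3 --show-bad-mappings

An empty result from --show-bad-mappings means every input mapped to the requested number of OSDs.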

Brgds

-----Original Message-----
From: Robert LeBlanc [mailto:rob...@leblancnet.us]
Sent: Tuesday, October 6, 2015 17:55
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-users
Subject: Re: [ceph-users] Placement rule not resolved


I've only done a 'step take <bucket>' where <bucket> is a root entry. I haven't
tried it with a bucket under the root. I suspect it would work, but you can try
to put your tiers in a root section and test it there.
--
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1





Re: [ceph-users] Placement rule not resolved

2015-10-12 Thread ghislain.chevalier
Hi all,

After the cluster was installed, all the disks (SAS and SSD) were mixed under
each host, so the calculated reweight was relative to the entire mixed capacity.

That still doesn't explain why SAS disks were selected by a rule that
specifically targets SSDs.
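
One way to spot such unexpectedly low override values is the REWEIGHT column of the tree; a sketch (the jq variant assumes jq is installed and a numeric reweight field in the JSON output):

    # the REWEIGHT column shows the override values next to the CRUSH weights
    ceph osd tree
    # json variant: list only osds whose override reweight is below 1.0
    ceph osd tree -f json | jq '.nodes[] | select(.type == "osd" and .reweight < 1)'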

Brgds





Re: [ceph-users] Placement rule not resolved

2015-10-08 Thread ghislain.chevalier
Hi all,

I hadn't noticed that the osd reweight for the SSDs was curiously set to a low value.
I don't know how or when these values were set so low.
Our environment is Mirantis-driven and the installation was powered by Fuel and Puppet.
(The installation was run by the OpenStack team; I checked the Ceph cluster
configuration afterwards.)

After reweighting them to 1, the Ceph cluster is working properly.
Thanks to the object lookup module of Inkscope, I checked that the OSD
allocation was correct.
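
A minimal sketch of that fix (the osd id is illustrative):

    # reset the override reweight of an affected ssd osd back to 1.0
    ceph osd reweight 112 1.0
    # then verify the REWEIGHT column
    ceph osd tree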

What still seems abnormal is that CRUSH tried to allocate OSDs that are not
targeted by the rule, in this case SAS disks instead of SSD disks.
Shouldn't the cluster's normal behavior in that case be for the PG allocation
to simply freeze?
I say that because when I analyzed the stuck PGs (Inkscope module), I noticed
that the OSD allocation for these PGs was either incorrect (acting list) or
incomplete.
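
That inspection can also be done from the CLI; a sketch (the pg id is illustrative):

    # list the stuck pgs, then query one of them for its up and acting sets
    ceph pg dump_stuck unclean
    ceph pg 3.7f query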

Best regards





Re: [ceph-users] Placement rule not resolved

2015-10-06 Thread Robert LeBlanc

I've only done a 'step take <bucket>' where <bucket> is a root entry. I haven't
tried it with a bucket under the root. I suspect it would work, but you can try
to put your tiers in a root section and test it there.
--
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
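
A sketch of what "putting the tiers in a root section" might look like (bucket names, ids, and weights are illustrative; the real map was attached to the original mail and is not reproduced here):

    # declare the ssd tier as a top-level root bucket
    root ssd-root {
            id -20                  # bucket ids are negative
            alg straw
            hash 0                  # rjenkins1
            item host-ssd-1 weight 3.640
            item host-ssd-2 weight 3.640
            item host-ssd-3 weight 3.640
    }

    # and point the ssd rule at that root
    rule ssd {
            ruleset 3
            type replicated
            min_size 1
            max_size 10
            step take ssd-root
            step chooseleaf firstn 0 type host
            step emit
    }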


On Tue, Oct 6, 2015 at 6:18 AM, ghislain.cheval...@orange.com wrote:
> Hi,
>
>
>
> Context:
>
> Firefly 0.80.9
>
> 8 storage nodes
>
> 176 OSDs: 14*8 SAS and 8*8 SSD
>
> 3 monitors
>
>
>
> I created an alternate crushmap in order to fulfill a tiering requirement,
> i.e. selecting SSD or SAS.
>
> I created specific buckets “host-ssd” and “host-sas” and grouped them into
> “tier-ssd” and “tier-sas” under a “tier-root”.
>
> E.g. I want to select 1 SSD on each of 3 distinct hosts.
>
>
>
> I don’t understand why the placement rule for SAS is working but the one for
> SSD is not.
>
> SAS disks are selected even though, according to the crushmap, they are not
> in the right tree.
>
> When 3 SSDs do occasionally get selected, the PGs stay stuck but active.
>
>
>
> I attached the crushmap and ceph osd tree.
>
>
>
> Can someone have a look and tell me where the fault is?
>
>
>
> Bgrds
>
> - - - - - - - - - - - - - - - - -
> Ghislain Chevalier
> ORANGE/IMT/OLPS/ASE/DAPI/CSE
>
> Storage Infrastructure Services Architect
> Software-Defined Storage Architect
> +33299124432
>
> +33788624370
> ghislain.cheval...@orange.com
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com