[ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread John Molefe
Hi everyone,

I currently have a Ceph cluster running on SUSE, and I have an expansion project 
that I will be starting around June.
Has anybody here deployed (from scratch) or expanded their Ceph cluster via 
Ansible? I would appreciate it if you'd share your experiences, challenges, 
topology, etc.
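[For reference, the usual ceph-ansible flow is roughly the following. This is a sketch, not an official procedure: the repo URL is the upstream project, the hostnames and device settings are placeholders, and on SUSE the distribution's own DeepSea tooling may be the more natural fit.]

# Fetch ceph-ansible and activate the sample playbook:
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
cp site.yml.sample site.yml

# Inventory sketch; the mons/osds group names follow ceph-ansible conventions,
# the hostnames are placeholders:
cat > inventory <<'EOF'
[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3
EOF

# After filling in group_vars/all.yml (and osds.yml for the device list),
# the same playbook both deploys a new cluster and, re-run against an
# inventory with extra hosts, expands an existing one:
ansible-playbook -i inventory site.yml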

Many thanks
John.




Re: [ceph-users] Placing replaced disks to correct buckets.

2019-02-18 Thread John Molefe
Hi David

Removal process/commands ran as follows (<id> stands for the OSD number):

#ceph osd crush reweight osd.<id> 0
#ceph osd out <id>
#systemctl stop ceph-osd@<id>
#umount /var/lib/ceph/osd/ceph-<id>

#ceph osd crush remove osd.<id>
#ceph auth del osd.<id>
#ceph osd rm <id>
#ceph-disk zap /dev/sd??
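[One step worth making explicit between the reweight-to-0 and the destructive commands, an addition of mine rather than something from the original mail: let the cluster drain and settle before removing the OSD, e.g.:]

# After "ceph osd crush reweight osd.<id> 0", wait for backfill to finish:
while ! ceph health | grep -q HEALTH_OK; do
    sleep 60
done
ceph osd tree    # the OSD should show weight 0 before you remove it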

Adding them back:

We skipped stage 1 and replaced the UUIDs of the old disks with the new
ones in policy.cfg.
We ran salt '*' pillar.items and confirmed that the output was correct:
it showed the new UUIDs in the correct places.
Next we ran salt-run state.orch ceph.stage.3.
PS: All of the above ran successfully.
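[A sanity check on top of pillar.items, with the caveat that the grep pattern is just an illustration: in DeepSea, stage 2 (configure) is what refreshes the pillar after policy.cfg is edited, so if it was skipped the pillar may be stale. That is my speculation, not something confirmed in this thread.]

# Refresh the pillar from the edited policy.cfg, then deploy:
salt-run state.orch ceph.stage.2
salt '*' pillar.items | grep -i -B2 -A2 '<new-disk-uuid>'
salt-run state.orch ceph.stage.3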

The output of ceph osd tree showed that these new disks are currently
in a ghost bucket, not even under root=default and without a weight.

The first step I then tried was to reweight them, but I got the errors
below:
Error ENOENT: device osd.<id> does not appear in the crush map
Error ENOENT: unable to set item id 39 name 'osd.39' weight 5.45599 at
location
{host=veeam-mk2-rack1-osd3,rack=veeam-mk2-rack1,room=veeam-mk2,root=veeam}:
does not exist

But when I run the command ceph osd find <id>, the OSD is found:
v-cph-admin:/testing # ceph osd find 39
{
"osd": 39,
"ip": "143.160.78.97:6870\/24436",
"crush_location": {}
}
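
[Given that osd.39 exists but reports an empty crush_location, a likely fix is to add the item to the CRUSH map rather than reweight it. This is my suggestion rather than something confirmed in the thread; the weight and location are taken from the error message above:]

ceph osd crush add osd.39 5.45599 \
    root=veeam room=veeam-mk2 rack=veeam-mk2-rack1 host=veeam-mk2-rack1-osd3
ceph osd tree    # osd.39 should now sit under veeam-mk2-rack1-osd3 with its weight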

Please let me know if there's any other info that you may need to
assist

Regards
J.
>>> David Turner  2019/02/18 17:08 >>>
Also, what commands did you run to remove the failed HDDs, and what
commands have you run so far to add their replacements back in?

On Sat, Feb 16, 2019 at 9:55 PM Konstantin Shalygin wrote:

> I recently replaced failed HDDs and removed them from their respective
> buckets as per procedure. But I’m now facing an issue when trying to
> place new ones back into the buckets. I’m getting an error of ‘osd nr
> not found’ OR ‘file or directory not found’ OR a command syntax error.
> I have been using the commands below:
>
> ceph osd crush set <osd-id> <weight> <bucket>
> ceph osd crush <osd-id> set <weight> <bucket>
>
> I do however find the OSD number when I run the command:
> ceph osd find <osd-id>
>
> Your assistance/response to this will be highly appreciated. Regards, John.
Please paste your `ceph osd tree` output, your Ceph version, and the exact
error you get, including the OSD number.
Less obfuscation is better in this, perhaps simple, case.

 
k


[ceph-users] Placing replaced disks to correct buckets.

2019-02-16 Thread John Molefe
Hi Everyone,

I recently replaced failed HDDs and removed them from their respective
buckets as per procedure.

But I’m now facing an issue when trying to place the new ones back into the
buckets. I’m getting an error of ‘osd nr not found’ OR ‘file or
directory not found’ OR a command syntax error.

I have been using the commands below:

ceph osd crush set <osd-id> <weight> <bucket>
ceph osd crush <osd-id> set <weight> <bucket>

I do, however, find the OSD number when I run the command:

ceph osd find <osd-id>
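
[For reference, the full syntax I believe these commands expect looks like the following; the weight and hostname are illustrative values, not taken from this thread:]

# "set" requires the OSD to already exist in the CRUSH map:
ceph osd crush set osd.39 5.456 root=default host=osd-host-1
# If it is not in the map yet, "add" it instead:
ceph osd crush add osd.39 5.456 root=default host=osd-host-1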

Your assistance/response to this will be highly appreciated.

Regards
John.

Sent from my iPhone


[ceph-users] Ceph snapshots

2018-06-27 Thread John Molefe
Hi everyone

I would like some advice and insight into how Ceph snapshots work and how 
they can be set up.
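
[For concreteness, the most common case is RBD image snapshots; a minimal workflow looks like this, where the pool and image names are placeholders:]

rbd snap create rbd/myimage@before-change    # take a point-in-time snapshot
rbd snap ls rbd/myimage                      # list snapshots of the image
rbd snap rollback rbd/myimage@before-change  # roll the image back to it
rbd snap rm rbd/myimage@before-change        # delete it when no longer needed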

Responses will be much appreciated.

Thanks
John




[ceph-users] Adding additional disks to the production cluster without performance impact on the existing cluster

2018-06-06 Thread John Molefe
Hi everyone

We have completed all phases, and the only remaining part is adding the 
disks to the current cluster, but I am afraid of impacting performance, as 
it is in production.
Any guides or advice on how this can be achieved with the least impact on 
production?
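
[A common approach, sketched here from general practice rather than any SUSE-specific procedure, is to throttle backfill and bring the new disks in gradually:]

# Keep recovery traffic from starving client I/O while the new OSDs fill:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# Optionally add the new OSDs with a low CRUSH weight and raise it in steps,
# waiting for HEALTH_OK between increments:
ceph osd crush reweight osd.<id> 0.5
# ...wait for HEALTH_OK, then repeat with a higher weight until the target
# weight is reached, and finally restore your normal recovery settings.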

Thanks in advance
John

