[ceph-users] strange rados df output

2013-11-09 Thread Kevin Weiler
...bytes. Am I reading this incorrectly?
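
A quick way to sanity-check the units (a sketch; it assumes a dumpling-era cluster, where the size columns are labeled KB, which may differ in other releases):

    rados df
    # the size columns ("KB", "rd KB", "wr KB") are kilobytes in this era,
    # so multiply by 1024 before comparing against byte counts elsewhere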

Re: [ceph-users] near full osd

2013-11-08 Thread Kevin Weiler
Thanks again Gregory! One more quick question: if I raise the number of PGs for a pool, will this REMOVE any data from the full OSD? Or will I have to take the OSD out and put it back in to realize this benefit? Thanks!
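
A minimal sketch of the commands involved (the pool name "volumes" and the target of 3000 are hypothetical); the new PGs only start moving data once pgp_num is raised to match pg_num:

    # increase the PG count for one pool
    ceph osd pool set volumes pg_num 3000
    # let placement use the new PGs, which triggers the rebalance
    ceph osd pool set volumes pgp_num 3000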

Re: [ceph-users] near full osd

2013-11-08 Thread Kevin Weiler
Thanks Gregory, One point that was a bit unclear in the documentation is whether this equation for PGs applies to a single pool or to all pools combined. Meaning, if I calculate 3000 PGs, should each pool have 3000 PGs, or should all the pools ADD UP to 3000 PGs? Thanks!
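
For reference, the rule of thumb in the docs of that era treats the figure as a cluster-wide total to be divided among pools, not a per-pool value; a rough worked example (the 90-OSD, 3-replica numbers are illustrative):

    # total PGs ~= (OSDs * 100) / replica count, rounded up to a power of two
    # e.g. 90 OSDs * 100 / 3 replicas = 3000  ->  round up to 4096
    # then split that total across the pools, weighted by expected data per pool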

Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
...; 20. I assume this is in bytes.

Re: [ceph-users] near full osd

2013-11-05 Thread Kevin Weiler
All of the disks in my cluster are identical and therefore all have the same weight (each drive is 2TB and the automatically generated weight is 1.82 for each one). Would the procedure here be to reduce the weight, let it rebalance, and then put the weight back to where it was?
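
A sketch of that procedure (the osd id 12 and the interim weight are hypothetical); ceph osd reweight sets a temporary 0-1 override weight without touching the CRUSH weight:

    # push data off the overfull OSD
    ceph osd reweight 12 0.8
    # once the cluster settles (HEALTH_OK), restore the override
    ceph osd reweight 12 1.0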

[ceph-users] near full osd

2013-11-05 Thread Kevin Weiler
Hi guys, I have an OSD in my cluster that is near full at 90%, but we're using a little less than half the available storage in the cluster. Shouldn't this be balanced out?
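
A couple of commands for confirming which OSDs are over-filled (a sketch; output formats vary by release):

    # CRUSH weights and tree layout
    ceph osd tree
    # per-OSD used/available space in this era
    ceph pg dump osds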

Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
Thanks Kyle, What's the unit for osd recovery max chunk? Also, how do I find out what my current values are for these osd options?
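
A minimal way to read the running values off a daemon (a sketch; the socket path assumes the default location and osd.0):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_recovery
    # osd_recovery_max_chunk is a byte count (8 MB by default in this era)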

[ceph-users] ceph recovery killing vms

2013-10-28 Thread Kevin Weiler
...so that our VMs don't go down when there is a problem with the cluster?
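
The usual knobs for making recovery yield to client traffic can be injected at runtime; a sketch with illustrative values:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'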

Re: [ceph-users] mounting RBD in linux containers

2013-10-28 Thread Kevin Weiler
Hi Josh, We did map it directly to the host, and it seems to work just fine. I think this is a problem with how the container is accessing the rbd module.

Re: [ceph-users] mounting RBD in linux containers

2013-10-18 Thread Kevin Weiler
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did, however, try a map with an RBD that was format 2. I got the same error.

[ceph-users] mounting RBD in linux containers

2013-10-17 Thread Kevin Weiler
...messages on either the container or the host box. Any ideas on how to troubleshoot this? Thanks!
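
A basic troubleshooting sketch on the host side (the image name "test" in pool "rbd" is hypothetical):

    lsmod | grep rbd       # is the rbd kernel module loaded on the host?
    modprobe rbd           # load it if not
    rbd map rbd/test       # try the map on the host to rule out cluster-side problems
    dmesg | tail           # kernel messages usually explain a failed map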

Re: [ceph-users] ceph-deploy pushy dependency problem

2013-08-28 Thread Kevin Weiler
...: NOKEY /usr/bin/env gdisk or pushy >= 0.5.3 python(abi) = 2.7 python-argparse python-distribute python-pushy >= 0.5.3 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1. It seems to require both pushy AND python-pushy.
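
For reference, the package's declared dependencies can be checked directly against the rpm (a sketch; the exact filename is hypothetical):

    rpm -qpR ceph-deploy-1.2.2-0.noarch.rpm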

Re: [ceph-users] ceph-deploy pushy dependency problem

2013-08-28 Thread Kevin Weiler
...k=0 proxy=_none_ metadata_expire=0

[ceph-users] ceph-deploy pushy dependency problem

2013-08-27 Thread Kevin Weiler
...correct version). The spec file looks fine in the ceph-deploy git repo, maybe you just need to rerun the package/repo generation? Thanks!

[ceph-users] adding osds manually

2013-08-06 Thread Kevin Weiler
Hi again Ceph devs, I'm trying to deploy ceph using puppet and I'm hoping to add my osds non-sequentially. I spoke with dmick on #ceph about this and we both agreed it doesn't seem possible given the documentation. However, I have an example of a ceph cluster that was deployed using ceph-deploy...
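
For comparison, the manual "short form" OSD add sequence from the docs of that era; ceph osd create hands back the lowest free id, which is what makes non-sequential ids awkward (the id 7, weight, and host name below are hypothetical):

    ceph osd create                     # returns the next available id, e.g. 7
    mkdir /var/lib/ceph/osd/ceph-7
    ceph-osd -i 7 --mkfs --mkkey
    ceph auth add osd.7 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-7/keyring
    ceph osd crush add osd.7 1.82 host=mynode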

Re: [ceph-users] trouble authenticating after bootstrapping monitors

2013-08-05 Thread Kevin Weiler
...when creating the client.admin key so it doesn't need capabilities? Thanks again!

[ceph-users] trouble authenticating after bootstrapping monitors

2013-08-02 Thread Kevin Weiler
    ...elot on camelot...
    === mds.camelot ===
    Starting Ceph mds.camelot on camelot...
    starting mds.camelot at :/0
    [root@camelot ~]# ceph auth get mon.
    access denied
If someone could tell me what I'm doing wrong it would be greatly appreciated. Thanks!
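
For comparison, the manual-bootstrap documentation of that era generates client.admin with explicit caps before starting the monitors, roughly like this (a sketch of the documented command, not a verified fix for this cluster):

    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin --set-uid=0 \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'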