Hello,

On Wed, 15 Mar 2017 21:36:00 +0000 James Okken wrote:

> Thanks gentlemen,
> 
> I hope to add more OSDs, since we will need a good deal more than 2.3TB and I 
> do want to leave free space / margins.
> 
> I am also thinking of reducing the replication to 2.
> I am sure I can google how to do that, but most of my results are going to 
> be people telling me not to do it.

Mostly for good reasons, but that risk is much diminished with your RAID'ed OSDs.

> Can you direct me to a good tutorial on how to do so.
> 
No such thing, but you must already have changed your configuration, as
your pools are min_size 1, which is not the default.
Changing them to size=2 should do the trick.
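
Untested against your setup, but something along these lines should do it
(with "rbd" standing in for whatever your pools are actually called):

  # list pools and check the current replication settings
  ceph osd lspools
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size

  # drop the replica count to 2 (min_size stays at 1)
  ceph osd pool set rbd size 2

  # watch the data re-shuffle and verify afterwards
  ceph -s
  ceph osd dump | grep 'replicated size'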

Christian
> 
> And you're right, I am a beginner.
> 
> James Okken
> Lab Manager
> Dialogic Research Inc.
> 4 Gatehall Drive
> Parsippany
> NJ 07054
> USA
> 
> Tel:       973 967 5179
> Email:   james.ok...@dialogic.com
> Web:    www.dialogic.com – The Network Fuel Company
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Maxime Guyot
> Sent: Tuesday, March 14, 2017 7:29 AM
> To: Christian Balzer; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] total storage size available in my CEPH setup?
> 
> Hi,
> 
> >> My question is how much total CEPH storage does this allow me? Only 2.3TB? 
> >> or does the way CEPH duplicates data enable more than 1/3 of the storage?  
> > 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.
> >  
> 
> To expand on this, you probably want to keep some margins and not run your 
> cluster at 100% :) (especially if you are running RBD with thin provisioning). 
> By default, “ceph status” will issue a warning at 85% full (osd nearfull 
> ratio). You should also consider that you need some free space for auto 
> healing to work (if you plan to use more than 3 OSDs on a size=3 pool).
> 
> Cheers,
> Maxime 
> 


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
