What sort of values for min/max/stddev represent problems where
reweighting an OSD (or other action) is advisable? Is that the purpose
of nearfull, or does one need to monitor individual OSDs too?
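(For context, these are presumably the numbers "ceph osd df" prints in
its summary line, and over-full OSDs can be reweighted in bulk; a
sketch, where the 120% threshold is only an example, not a
recommendation:

  # per-OSD utilization; the summary includes MIN/MAX VAR and STDDEV
  ceph osd df

  # dry-run, then apply, reweighting of OSDs above 120% of mean usage
  ceph osd test-reweight-by-utilization 120
  ceph osd reweight-by-utilization 120
)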
--
Adam Carheden
DC S3710: 20.00 TBW/GB of drive capacity
Strangely enough there seems to be quite a bit more variance by drive
size (larger drives being better) in the better drives. Possibly that's
just due to rounding of the number presented on the data sheet though.
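(The figure is just the endurance rating divided by the capacity, so a
little rounding in either number shifts it; worked here with made-up
numbers for illustration only, pull the real ones from the data sheet:

  # endurance (TBW) / capacity (GB) = TB written per GB of capacity
  # e.g. a hypothetical 1.2 TB drive rated for 24,300 TBW:
  echo "scale=2; 24300 / 1200" | bc    # => 20.25 TBW per GB
)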
Thanks
--
Adam Carheden
Systems Adm
On 04/27/2017 12:46 PM, Alexandre DERUMIER wrote:
>
>>> Also, 4 x Intel DC S3520 costs as much as 1 x Intel DC S3610. Obviously
>>> the single drive leaves more bays free for OSD disks, but is there any
>>> other reason a single S3610 is preferable to 4 S3520s? Wouldn't 4xS3520s
>>> mean:
>
> wh
ce it would be expensive in
time and $$ to test for such things.
Any thoughts on multiple Intel 35XX vs a single 36XX/37XX? All have "DC"
prefixes and are listed in the Data Center section of their marketing
pages, so I assume they'll all have the same quality underlying NAND.
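On the performance side of the comparison (NAND quality is harder to
test), the usual quick check is a single-job synchronous 4k write run,
since that is roughly what a journal does. Illustrative fio invocation
only; adjust the device path, and note it writes to the raw device, so
use a scratch drive:

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --runtime=60 --time_based --group_reporting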
--
Adam
g that the S3610 isn't 4 times
faster than the S3520)
c) load spread across 4 SATA channels (I suppose this doesn't really
matter since the drives can't saturate the SATA bus).
--
Adam Carheden
On 04/26/2017 01:55 AM, Eneko Lacunza wrote:
> Adam,
>
> What David said before
On 04/25/2017 11:57 AM, David wrote:
> On 19 Apr 2017 18:01, "Adam Carheden" <carhe...@ucar.edu> wrote:
>
> Does anyone know if XFS uses a single thread to write to its journal?
>
>
> You probably know this but just to avoid any confusion
erformance, but having as many OSDs as I
can cram into a chassis sharing the SK Hynix drive would get me great
performance for a fraction of the cost.
Anyone have any related advice or experience to share regarding journal
SSD selection?
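(Rough back-of-the-envelope for the sharing idea, with made-up numbers;
substitute measured figures for your own drives:

  # if the shared journal SSD sustains ~450 MB/s of sync writes and each
  # HDD-backed OSD can ingest ~100 MB/s, the SSD becomes the bottleneck at:
  echo $(( 450 / 100 ))    # => ~4 OSD journals per SSD
)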
--
Adam Carheden
Thanks for your replies.
I think the short version is "guaranteed": CEPH will always either store
'size' copies of your data or set health to a WARN and/or ERR state to
let you know that it can't. I think that's probably the most desirable
answer.
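(For the archives, the relevant knobs and checks; 'rbd' is just the
example pool name:

  ceph osd pool get rbd size       # replicas CEPH tries to keep
  ceph osd pool get rbd min_size   # replicas required before I/O is allowed
  ceph health detail               # shows degraded/undersized PGs when it can't
)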
--
Adam Carheden
move host bucket from the crushmap but that that's a really bad idea
because it would trigger a rebuild and the extra disk activity increases
the likelihood of additional drive failures, correct?
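(If it helps, you can see how much data a given crushmap edit would move
before committing it; a sketch, where the rule number and replica count
are only examples:

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt (e.g. drop the host bucket), recompile, then simulate:
  crushtool -c crush.txt -o crush-new.bin
  crushtool --test -i crush.bin     --rule 0 --num-rep 3 --show-mappings > before.txt
  crushtool --test -i crush-new.bin --rule 0 --num-rep 3 --show-mappings > after.txt
  diff before.txt after.txt | wc -l   # rough count of PG mappings that change
)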
--
Adam Carheden
f data to the target location (i.e.
crush map including new OSD) but you still have a copy of the data in
the old location (i.e. crushmap before adding the new OSD). Is CEPH
smart enough to pull the data from the old location, or do you lose data?
--
Adam Carheden
a single-client with all the recommended bells and whistles (ssd
journal, 10GbE)? I assume it depends on both the total number of OSDs
and possibly OSDs per node if one had enough to saturate the network,
correct?
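(For measuring the single-client side of that, a rados bench run from
one client is the easy sanity check; pool name and thread count below
are only examples:

  rados bench -p rbd 60 write -t 16 --no-cleanup   # 60 s write test, 16 concurrent ops
  rados bench -p rbd 60 rand -t 16                 # random reads against the same objects
  rados -p rbd cleanup                             # remove the benchmark objects
)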
--
Adam Carheden
On 04/06/2017 12:29 PM, Mark Nelson wrote:
> With filestore on
OSDs (medium speed pool)
...but the rebuild would impact performance on the "fast" 600G drive
pool if an 8T drive failed since the medium speed pool would be
rebuilding across all drives, correct?
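(If that's a concern, the usual mitigation is to throttle recovery so a
failed 8T drive rebuilds slowly instead of flattening client I/O; the
values below are illustrative, not recommendations:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-op-priority 1'   # deprioritize recovery vs client ops
)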
Thanks
--
Adam Carheden
nufacturer's websites (at least not from Seagate).
Is there any way to tell? Is there a rule of thumb, such as "4T+ is
probably SMR" or "enterprise usually means PMR"?
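(One partial check from the OS, for what it's worth: host-aware and
host-managed SMR drives show up via the kernel's zoned attribute on
recent kernels, but drive-managed SMR does not, so the data sheet is
still the only reliable answer:

  cat /sys/block/sda/queue/zoned    # "none", "host-aware" or "host-managed" (kernel 4.10+)
  smartctl -i /dev/sda              # model, rotation rate, form factor
)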
Thanks
--
Adam Carheden
On Tue, Mar 21, 2017 at 1:54 PM, Kjetil Jørgensen wrote:
>> c. Reads can continue from the single online OSD even in pgs that
>> happened to have two of 3 osds offline.
>>
>
> Hypothetically (This is partially informed guessing on my part):
> If the survivor happens to be the acting primary and i
ack online with size=3 and
min_size=2 but only 2 hosts online, I could remove the host bucket from
the crushmap. CRUSH would then rebalance, but some PGs would likely end
up with 3 OSDs all on the same host. (This is theory. I promise not to
do any such thing to a production system ;)
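(Whether that actually happens depends on the failure domain in the
CRUSH rule, which is easy to check:

  ceph osd crush rule dump
  # in the rule's steps, look for something like:
  #   "op": "chooseleaf_firstn", "type": "host"
  # with type=host and only two hosts up, PGs stay undersized at 2 copies
  # rather than doubling up; with type=osd, replicas can share a host
)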
Thanks
--
Adam Carheden
ere I only created a single object!
--
Adam Carheden
On 03/20/2017 08:24 PM, Wes Dillingham wrote:
> This is because of the min_size specification. I would bet you have it
> set at 2 (which is good).
>
> ceph osd pool get rbd min_size
>
> With 4 hosts, and a size of 3, removi
 -5 0.80737     host host4
 12 0.40369         osd.12      up  1.00000          1.00000
 13 0.40369         osd.13      up  1.00000          1.00000
--
Adam Carheden
On Thu, Mar 16, 2017 at 11:55 AM, Jason Dillaman wrote:
> On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden wrote:
>> Ceph can mirror data between clusters
>> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
>> mirror data between pools in the same cluster?
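(For a one-off copy between pools inside one cluster, as opposed to
rbd-mirror's cluster-to-cluster design, a plain image copy is one
option; pool and image names below are placeholders:

  rbd cp rbd/myimage backup/myimage           # copies image contents (no snapshot history)
  # or stream it via export/import:
  rbd export rbd/myimage - | rbd import - backup/myimage
)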
at node failure becomes more of a risk than room
failure.
(And yes, I do have a 3rd small room with monitors running so if one
of the primary rooms goes down, monitors in the remaining room + 3rd
room have a quorum)
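(If the goal is a copy in each room, the crushmap can say so explicitly;
bucket and host names below are placeholders:

  ceph osd crush add-bucket room1 room
  ceph osd crush add-bucket room2 room
  ceph osd crush move room1 root=default
  ceph osd crush move room2 root=default
  ceph osd crush move host1 room=room1
  ceph osd crush move host2 room=room2
  ceph osd crush rule create-simple rep-per-room default room
)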
Thanks
--
Adam Carheden
>
> > On 25 February 2017 at 15:45, Adam Carheden <
> adam.carhe...@gmail.com> wrote:
> >
> >
> > I spoke with the cloud stack guys on IRC yesterday and the only risk is
> > when libvirtd starts. Ceph is supported only with libvirt. Cloudstack can
> >
that initial one, just as you say. If you have to reboot libvirtd when that
monitor is down, that's a problem. But with RR DNS, just restarting
libvirtd again would probably fix it.
On Feb 25, 2017 6:56 AM, "Wido den Hollander" wrote:
> On 24 February 2017 at 19:48,
s load balancing, not HA. I guess it depends on when
Cloudstack does DNS lookup and if there's some minimum unavailable delay
before it flags primary storage as offline, but it seems like substituting
RRDNS for whatever CEPH's internal "find an available monitor
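(Either way, listing every monitor explicitly in the libvirt disk
definition sidesteps the single-hostname dependency at libvirtd start.
A minimal sketch; hostnames, image name and secret are placeholders:

  <disk type='network' device='disk'>
    <source protocol='rbd' name='rbd/vm-disk-1'>
      <host name='mon1.example.com' port='6789'/>
      <host name='mon2.example.com' port='6789'/>
      <host name='mon3.example.com' port='6789'/>
    </source>
    <auth username='libvirt'>
      <secret type='ceph' uuid='...'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>
)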