Slot 19 is inside the chassis? Do you check chassis temperature? I
sometimes see higher failure rates in HDDs inside the chassis than in
those at the front. In our case it was related to the temperature
difference.
On Tue, Jan 10, 2023 at 1:28 PM Frank Schilder wrote:
>
> Following up on my previous post,
Currently the biggest HDD is 20TB. One exabyte means a 50,000-OSD
cluster (without replication or EC).
AFAIK CERN did some tests using 5,000 OSDs. I don't know of any
clusters larger than CERN's.
So I am not saying it is impossible, but it is very unlikely that a
single Ceph cluster can grow to that size.
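The back-of-envelope arithmetic behind that 50,000-OSD figure (the replication and 16+3 EC overhead lines are my own illustrative additions, not from the post):

```python
# Back-of-envelope: how many 20 TB OSDs does 1 EB of raw capacity need?
HDD_TB = 20
EXABYTE_TB = 1_000_000            # 1 EB = 1,000,000 TB (decimal units)

osds_raw = EXABYTE_TB // HDD_TB
print(osds_raw)                   # 50000 OSDs before any redundancy

# With redundancy the raw need grows further:
print(osds_raw * 3)               # 150000 OSDs for 3x replication
print(int(osds_raw * 19 / 16))    # 59375 OSDs for a 16+3 EC profile
```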
Maybe
Ceph cannot scale like HDFS. There are 10K-20K node HDFS clusters in production.
There is no data-locality concept if you use Ceph; every IO will be
served over the network.
On Thu, Aug 26, 2021 at 12:04 PM zhang listar wrote:
>
> Hi, all.
>
> I want to use ceph instead of HDFS in big data
> fibers per node for the public
> network
>
Without using FRR you are still L2 below ToR. Do you have any
experience with FRR?
You should think about multiple uplinks and how you handle them with FRR.
Other than that it should work; we have been running BGP above the ToR
in production for years without any problems.
On Tue, Jul 6, 2021 at 4:10 PM
You can use Clay codes [1]; they read less data during reconstruction.
[1] https://docs.ceph.com/en/latest/rados/operations/erasure-code-clay/
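A rough sketch of the saving, assuming my reading of the linked page is right: repairing one lost chunk of a k+m code with d helper chunks reads about d/(d-k+1) chunks instead of the k chunks a classic Reed-Solomon style code needs. The helper functions below are illustrative arithmetic, not Ceph API:

```python
# Sketch: chunks read to repair one lost chunk, Clay vs. a standard
# MDS code (e.g. jerasure). The d/(d-k+1) formula is my reading of the
# linked Ceph docs -- verify there before relying on it.
def repair_chunks_standard(k: int) -> float:
    # A classic Reed-Solomon style code rebuilds one lost chunk by
    # reading k full chunks.
    return float(k)

def repair_chunks_clay(k: int, d: int) -> float:
    # Clay codes contact d helpers, each sending 1/(d-k+1) of a chunk
    # (d ranges between k+1 and k+m-1, if I read the docs correctly).
    return d / (d - k + 1)

# Example: an 8+4 profile with d = k + m - 1 = 11
print(repair_chunks_standard(8))   # 8.0 chunks read
print(repair_chunks_clay(8, 11))   # 2.75 chunks read
```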
On Fri, Jun 25, 2021 at 2:50 PM Andrej Filipcic wrote:
>
>
> Hi,
>
> on a large cluster with ~1600 OSDs, 60 servers and using 16+3 erasure
> coded pools,
caveating the above with - I’m relatively new to Ceph myself
>
>
> From: huxia...@horebdata.cn
> Sent: 15 June 2021 17:52
> To: Serkan Çoban
Do you observe the same behaviour when you pull a cable?
Maybe a flapping port might cause this kind of behaviour; other than
that you shouldn't see any network disconnects.
Are you sure about the LACP configuration? What is the output of 'cat
/proc/net/bonding/bond0'?
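For what it's worth, a sketch of the two things I'd check in that output: that the bond is actually in 802.3ad (LACP) mode, and that every slave's link is up. The sample text and the check_bond helper are illustrative; real bonding-driver output has more fields:

```python
# Minimal check of /proc/net/bonding/bond0 content: LACP mode and all
# slave links up. The sample text is illustrative, not from a real host.
sample = """\
Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up

Slave Interface: eth0
MII Status: up

Slave Interface: eth1
MII Status: down
"""

def check_bond(text: str) -> tuple[bool, list[str]]:
    lacp = "802.3ad" in text            # bond is in LACP mode?
    down = []                           # slaves whose link is not up
    slave = None
    for line in text.splitlines():
        if line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave is not None:
            # Only per-slave MII status; the first (bond-level) status
            # appears before any Slave Interface line and is skipped.
            if line.split(":", 1)[1].strip() != "up":
                down.append(slave)
    return lacp, down

is_lacp, down_slaves = check_bond(sample)
print(is_lacp)       # True -> bond is in 802.3ad (LACP) mode
print(down_slaves)   # ['eth1'] -> a flapping/dead port shows up here
```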
On Tue, Jun 15, 2021 at 7:19 PM
to this set.
On Wed, Feb 17, 2021 at 11:15 PM Loïc Dachary wrote:
Why not put all the data in a ZFS pool with a 3-4 level deep directory
structure, each directory named with a 2-digit hex byte in range 00-FF?
Four levels deep you get 256^4 ≈ 4.3B folders with 3-4 objects per
folder, or three levels deep you get 256^3 ≈ 16.8M folders with ~1000
objects each.
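The fanout arithmetic, using the 256 names per level that 00-FF gives (the 16-billion-object total is an assumed figure for illustration, implied by the per-folder counts):

```python
# Directory fanout for hex-named levels 00-FF: 256 names per level.
objects = 16_000_000_000          # assumed total, ~16B objects

three_levels = 256 ** 3           # 16,777,216 folders
four_levels = 256 ** 4            # 4,294,967,296 folders

print(objects / three_levels)     # ~954 objects per folder
print(objects / four_levels)      # ~3.7 objects per folder
```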
On Wed, Feb 17, 2021 at
The disk is not OK; look at the output below:
SMART Health Status: HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE
You should replace the disk.
On Wed, May 20, 2020 at 5:11 PM Thomas <74cmo...@gmail.com> wrote:
>
> Hello,
>
> I have a pool of +300 OSDs that are identical model (Seagate model:
>
You do not want to mix Ceph with Hadoop, because you'll lose data
locality, which is the main point of Hadoop systems.
Every read/write request will go over the network, which is not optimal.
On Fri, Apr 24, 2020 at 9:04 AM wrote:
>
> Hi
>
> We have an 3 year old Hadoop cluster - up for refresh -
Maybe the following link helps:
https://www.spinics.net/lists/dev-ceph/msg00795.html
On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu wrote:
>
> I thought of that but it doesn't make much sense. AFAICT min_size should
> block IO when I lose 3 OSDs, but it shouldn't affect the amount of the stored
>