Thanks, Anthony, for sharing your knowledge. I am very happy.
On Sat, Jan 13, 2024 at 23:36, Anthony D'Atri <
anthony.da...@gmail.com> wrote:
> There are nuances, but in general the higher the sum of m+k, the lower the
> performance, because *every* operation has to hit that many drives, which
Dear Frank,
"For production systems I would recommend using EC profiles with at least
m=3" -> Can I set min_size=4 for EC 4+2, and is that OK for production? My
data is video from a camera system; it is hot data, written and then deleted
after some days (10-15 days, for example)... Read and write availability is
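A minimal sketch of how min_size relates to a 4+2 profile, assuming a
hypothetical pool named video_ec; the usual guidance is min_size = k+1 = 5,
because min_size=4 would let the pool keep accepting I/O with no remaining
redundancy:

```bash
# Hypothetical pool name; sketch only, not advice for this specific cluster.
ceph osd pool get video_ec min_size      # the EC default is k+1, i.e. 5 here
ceph osd pool set video_ec min_size 5    # k+1 for a 4+2 profile
```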
by “RBD for cloud”, do you mean VM / container general-purpose volumes on
which a filesystem is usually built? Or large archive / backup volumes that
are read and written sequentially without much concern for latency or
throughput?
How many of those ultra-dense chassis are in a cluster? Are all
On 12/1/24 22:32, Drew Weaver wrote:
So we were going to replace a Ceph cluster with some hardware we had
laying around using SATA HBAs but I was told that the only right way to
build Ceph in 2023 is with direct attach NVMe.
These kinds of statements make me at least ask questions. Dozens of
>
> Now that you say it's just backups/archival, QLC might be excessive for
> you (or a great fit if the backups are churned often).
PLC isn’t out yet, though, and probably won’t have a conventional block
interface.
> USD70/TB is the best public large-NVME pricing I'm aware of presently; for
Hi,
after osd.15 died at the wrong moment there is:
# ceph health detail
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg stale
pg 10.17 is stuck stale for 3d, current state
stale+active+undersized+degraded, last acting [15]
[WRN] PG_DEGRADED: Degraded data redundancy: 172/57063399
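A few read-only commands one might use to inspect the stale PG (10.17 from
the output above); this is only an inspection sketch, not a recovery
procedure:

```bash
# Read-only inspection; "query" may not respond while the PG's only
# acting OSD (osd.15) is down.
ceph pg map 10.17      # show the up/acting OSD sets for this PG
ceph pg 10.17 query    # detailed PG state, if reachable
ceph osd tree          # confirm which OSDs are down/out
```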
>> So we were going to replace a Ceph cluster with some hardware we had
>> laying around using SATA HBAs but I was told that the only right way
>> to build Ceph in 2023 is with direct attach NVMe.
My impressions are somewhat different:
* Nowadays it is rather more difficult to find 2.5in SAS or
On Mon, Jan 15, 2024 at 03:21:11PM, Drew Weaver wrote:
> Oh, well what I was going to do was just use SATA HBAs on PowerEdge R740s
> because we don't really care about performance as this is just used as a copy
> point for backups/archival but the current Ceph cluster we have [Which is
>
> Oh, well what I was going to do was just use SATA HBAs on PowerEdge R740s
> because we don't really care about performance
That is important context.
> as this is just used as a copy point for backups/archival but the current
> Ceph cluster we have [Which is based on HDDs attached to Dell
Updates on both problems:
Problem 1
--
The bookworm/reef cephadm package needs updating to accommodate the latest
change in /usr/share/doc/adduser/NEWS.Debian.gz:
System user home defaults to /nonexistent if --home is not specified.
Packages that call adduser to create system
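A hedged illustration of the adduser change described above (not the actual
cephadm packaging code); the user name and home directory are assumptions:

```bash
# On Debian bookworm, a system user's home now defaults to /nonexistent
# unless --home is given explicitly.
adduser --system --group --home /var/lib/ceph --shell /usr/sbin/nologin ceph
```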
Oh, well what I was going to do was just use SATA HBAs on PowerEdge R740s
because we don't really care about performance as this is just used as a copy
point for backups/archival but the current Ceph cluster we have [Which is based
on HDDs attached to Dell RAID controllers with each disk in
hi folks,
I am currently testing erasure-code-lrc (1) in a multi-room, multi-rack setup.
The idea is to be able to repair disk failures within the rack
itself to lower bandwidth usage
```bash
ceph osd erasure-code-profile set lrc_hdd \
plugin=lrc \
crush-root=default \
crush-locality=rack \
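k=4 m=2 l=3 \
crush-failure-domain=host
# NOTE: the k/m/l values and the failure domain above are illustrative
# assumptions (mirroring the upstream LRC plugin example), not the poster's
# actual profile. With l=3 and crush-locality=rack, a single failed OSD can
# be rebuilt from locality chunks within its own rack instead of reading
# across rooms.
```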
I would like to add a detail here that is often overlooked: maintainability
under degraded conditions.
For production systems I would recommend using EC profiles with at least m=3.
The reason is that if you have a longer-lasting problem with a node that is down and
m=2, it is not possible to do any
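A minimal sketch of such a profile; k=8 and the host failure domain are
assumptions for illustration, not a sizing recommendation:

```bash
# Illustrative m=3 profile; choose k and the failure domain to match your
# hardware and room/rack layout.
ceph osd erasure-code-profile set ec_k8m3 k=8 m=3 crush-failure-domain=host
```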