Hello,
if you chose Seagate MACH.2 2X14 drives, you would get much better
throughput as well as density. Your RAM could be a bit on the low end,
and for the MACH.2 it definitely would be too low.
You need dedicated metadata drives for S3 or MDS as well. Choose blazing
fast NVMe with
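For example, the metadata pools can be pinned to the NVMe OSDs with a
device-class CRUSH rule (a minimal sketch, assuming the "nvme" device class
and default pool names):

  # replicated rule that only targets OSDs with the nvme device class
  ceph osd crush rule create-replicated nvme-only default host nvme
  # pin the CephFS metadata pool to it
  ceph osd pool set cephfs_metadata crush_rule nvme-only
  # same idea for the RGW bucket index pool when serving S3
  ceph osd pool set default.rgw.buckets.index crush_rule nvme-only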
Hi,
RBD is a block device and is suitable for running any application, but its
performance depends on many factors and it has significant overheads. What
you need to determine is whether RBD on your particular hardware with your
particular settings provides satisfactory random I/O performance and
I wonder whether Ceph's RBD is suitable for running a database system,
such as Oracle?
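One way to determine that empirically is to benchmark the image with fio's
rbd engine at database-like block sizes (a sketch; the pool/image names and
sizes are assumptions):

  rbd create rbd/dbtest --size 100G
  fio --name=dbtest --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=dbtest --rw=randwrite --bs=8k --iodepth=32 \
      --runtime=300 --time_based

Compare the resulting IOPS and latency against what the database actually
needs.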
Can anyone help me?
On 10/21/2021 16:39, Tommy Sway wrote:
Thanks.
- What is the expected file/object size distribution and count?
- Is it write-once or modify-often data?
- What's your overall required storage capacity?
- 18 OSDs per WAL/DB drive seems like a lot; the usual recommendation is ~6-8
- With 12TB OSDs the recommended WAL/DB size is 120-480GB (1-4%) per OSD
(see the sizing sketch below)
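For reference, the arithmetic behind that sizing, assuming the 36-HDD /
2-NVMe layout being discussed (18 OSDs per NVMe):

  1% of 12TB = 120GB per OSD; 4% of 12TB = 480GB per OSD
  18 OSDs x 120GB = 2.16TB of DB/WAL capacity needed per NVMe at the low end

so a 1.6TB NVMe drive would already be undersized even at the 1% figure.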
Hi Ernesto,
Thanks a lot for your answer! But the problem is that even if I don't have
HAProxy (or any other load balancer) and one of the MGR hosts is currently
down, an HTTPS request to that host won't be answered or redirected. Or, if
using HAProxy without monitoring the MGR
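If HAProxy is in the path, one common approach (a sketch; hostnames and the
dashboard port are assumptions) is to health-check the dashboard itself, so
standby MGRs, which answer with a redirect rather than 200, are taken out of
rotation:

  backend ceph_dashboard
      option httpchk GET /
      http-check expect status 200
      server mgr1 mgr1.example.com:8443 check check-ssl verify none
      server mgr2 mgr2.example.com:8443 check check-ssl verify none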
On Thu, Oct 21, 2021 at 3:56 AM Denis Polom wrote:
>
> Hi
>
> I did; the problematic mon was syncing. But the issue is that while it's
> syncing, Ceph becomes unreachable. It looks like it tries to become the
> leader with an unsynced db, which may leave Ceph inaccessible?
Can you describe more carefully what the
Hi
I've been trying to Google BlueStore compression, but most articles I
find are quite old and from Ceph versions where the zstd compression level
was hardcoded to 5.
I've been thinking about enabling zstd compression with a
`compressor_zstd_level` of "-1" on most of my pools. Any
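For context, the knobs involved would look roughly like this (a sketch; the
pool name is an assumption):

  # per-pool settings
  ceph osd pool set mypool compression_algorithm zstd
  ceph osd pool set mypool compression_mode aggressive
  # cluster-wide defaults, including the level discussed above
  ceph config set osd bluestore_compression_algorithm zstd
  ceph config set osd compressor_zstd_level -1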
Hi
A while ago we came close to losing our monitors (disk free space got
thin after a 3-hour network outage), so I am trying to get a grip on
restoring the mon DBs from OSDs with ceph-objectstore-tool according to
this page
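In short, the documented recovery boils down to something like this (a
condensed sketch; paths are assumptions, the OSDs must be stopped, and a
keyring with the mon. and client.admin keys is required):

  ms=/tmp/mon-store
  mkdir -p $ms
  # collect cluster map updates from every OSD on this host
  for osd in /var/lib/ceph/osd/ceph-*; do
      ceph-objectstore-tool --data-path $osd --op update-mon-db \
          --mon-store-path $ms
  done
  # rebuild store.db from the collected maps
  ceph-monstore-tool $ms rebuild -- --keyring /path/to/admin.keyring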
Dear Cephers,
I am thinking of designing a CephFS or S3 cluster, with a target of a
minimum of 50GB/s (write) bandwidth. For each node, I prefer a 4U 36x 3.5"
Supermicro server with 36x 12TB 7200 RPM HDDs, 2x Intel P4610 1.6TB NVMe SSDs
as DB/WAL, a single CPU socket with an AMD EPYC 7302, and
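A back-of-envelope check on that target (assuming ~150MB/s sustained per
HDD and 3x replication):

  36 HDDs x 150MB/s = ~5.4GB/s raw per node
  client writes with 3x replication = ~5.4 / 3 = 1.8GB/s per node
  50GB/s / 1.8GB/s = ~28 nodes (EC 4+2 has 1.5x overhead, so ~14 nodes)

In practice the two NVMe DB/WAL devices and the single CPU socket may
bottleneck each node well before the HDDs do.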
Hi,
sorry for the delay. So no, the min_size is not the issue here. Is the
86% utilization an average or does it spike to 100% during the
interruptions? Does Ceph report slow requests? Have you asked the
OSD daemon which operations took so long, with
ceph daemon osd.1
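Presumably the admin socket queries meant here, run on the node hosting the
OSD, are:

  ceph daemon osd.1 dump_ops_in_flight   # operations currently executing
  ceph daemon osd.1 dump_historic_ops    # recent slowest ops with timings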
Glad to hear it, and happy to help more if needed :) Pretty sure I made exactly
the same reading error you did...
-Alex
On 10/21/21, 11:23 AM, "Marcel Kuiper" wrote:
Hi Alex
Thanks for your answer. That was very helpful. I apparently completely
misread the script.
Marcel
What data do you have to compress? Did you benchmark compression efficiency?
k
Sent from my iPhone
> On 21 Oct 2021, at 17:43, Elias Abacioglu
> wrote:
>
> Hi
>
> I've been trying to Google BlueStore compression, but most articles I
> find are quite old and from Ceph versions where the zstd
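One way to benchmark it is to load a representative sample and compare the
compressed vs. raw usage per pool (a sketch; pool and object names are
assumptions):

  rados -p mypool put sample1 /path/to/representative-file
  ceph df detail   # USED COMPR / UNDER COMPR show bytes saved per pool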
Thanks for the reminder. I turned Tim's initial email into a tracker
issue at https://tracker.ceph.com/issues/53003. You can add any more
details there and follow the progress.
On Thu, Oct 21, 2021 at 1:42 AM wrote:
>
> Hi!
>
> I'm just copying this request from my colleague to this mailing
Hi
I did; the problematic mon was syncing. But the issue is that while it's
syncing, Ceph becomes unreachable. It looks like it tries to become the
leader with an unsynced db, which may leave Ceph inaccessible?
On 10/20/21 15:30, Michael Moyles wrote:
Have you checked sync status and progress?
A mon status
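For the record, sync progress can be watched on the syncing mon via the
admin socket (the mon id is an assumption):

  ceph daemon mon.mon1 mon_status
  # "state" reads "synchronizing" while the db is being pulled, and the
  # output includes sync progress fields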
Hi!
I'm just copying this request from my colleague to this mailing list:
(Source
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/NUPCDV7BC3NEBUPIDYFBSNAEY4KSDOGS/)
We've noticed a massive latency increase on object copy since the Pacific
release. Prior to Pacific, the copy operation
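For anyone wanting to reproduce, a crude timing of a server-side copy
against RGW (a sketch; the endpoint, bucket, and keys are assumptions,
using the AWS CLI):

  time aws --endpoint-url http://rgw.example.com s3api copy-object \
      --bucket testbucket --key dst-object --copy-source testbucket/src-object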
Hi,
I've just installed Ceph and run into issues with the Dashboard URL. It's
always getting rewritten to an IP address, and this causes issues with
HTTPS as I only have a wildcard certificate. I have bootstrapped the
cluster with the '--allow-fqdn-hostname' option, tried to set
host
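One knob that may help here (assuming it exists in your release; it is
present in recent dashboards) is to stop standby MGRs from redirecting at
all, so the URL is never rewritten:

  # standby MGRs answer with an HTTP error instead of a redirect
  ceph config set mgr mgr/dashboard/standby_behaviour error
  ceph config set mgr mgr/dashboard/standby_error_status_code 503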
Thanks.