Hi Erich,
I have raised a tracker for this: https://tracker.ceph.com/issues/65607.
Currently I haven't figured out what was holding the 'dn->lock' in the
'lookup' request or elsewhere, since there are no debug logs.
Hopefully we can get the debug logs, with which we can push this further.
Thanks
Hi Alexey,
This looks like a new issue to me. Please create a tracker for it and
provide the detailed call trace there.
Thanks
- Xiubo
On 4/19/24 05:42, alexey.gerasi...@opencascade.com wrote:
Dear colleagues, we hope somebody can help us.
The initial point: Ceph cluster v15.2 (installed and
>Do you have any data on the reliability of QLC NVMe drives?
They were my job for a year, so yes, I do. The published specs are accurate.
A QLC drive built from the same NAND as a TLC drive will have more capacity,
but less endurance. Depending on the model, you may wish to enable
Hello Robin,
thank you.
The object-stat did not show anything suspicious.
And the logs do show
s3:get_obj decode_policy Read AccessControlPolicy
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">XY
Hello Anthony,
Do you have any data on the reliability of QLC NVMe drives? How old is
your deep archive cluster, how many NVMes does it have, and how many did
you have to replace?
On Sun, Apr 21, 2024 at 11:06 PM Anthony D'Atri wrote:
>
> A deep archive cluster benefits from NVMe too. You can use
Thanks, Anthony.
We'll try 15 -> 16 -> 18.
Best,
Malte
On 21.04.24 17:02, Anthony D'Atri wrote:
It should.
On Apr 21, 2024, at 5:48 AM, Malte Stroem wrote:
Thank you, Anthony.
But does it work to upgrade from the latest 15 to the latest 16, too?
We'd like to be careful.
And then from
> Op 21 apr 2024 om 17:14 heeft Anthony D'Atri het
> volgende geschreven:
>
> Vendor lock-in only benefits vendors.
Strictly speaking, that isn’t necessarily true. Proprietary standards and the
like *can* enhance user experience in some cases. Making it intentionally
difficult to migrate
Vendor lock-in only benefits vendors. You’ll pay outrageously for support/
maintenance, then your gear goes EOL and you’re trolling eBay for parts.
With Ceph you use commodity servers; you can swap 100% of the hardware, with
servers and drives of your choice, without taking downtime. And you get
A deep archive cluster benefits from NVMe too. You can use QLC up to 60TB in
size, 32 of those in one RU makes for a cluster that doesn’t take up the whole
DC.
> On Apr 21, 2024, at 5:42 AM, Darren Soothill wrote:
>
> Hi Niklaus,
>
> Lots of questions here but let me try and get through
It should.
> On Apr 21, 2024, at 5:48 AM, Malte Stroem wrote:
>
> Thank you, Anthony.
>
> But does it work to upgrade from the latest 15 to the latest 16, too?
>
> We'd like to be careful.
>
> And then from the latest 16 to the latest 18?
>
> Best,
> Malte
>
>> On 21.04.24 04:14,
What’s the output of:
ceph tell mds.0 damage ls
Quoting alexey.gerasi...@opencascade.com:
Dear colleagues, we hope somebody can help us.
The initial point: Ceph cluster v15.2 (installed and controlled by
the Proxmox) with 3 nodes based on physical servers rented from a
cloud
> I know the high-level texts about
> - scalability,
> - flexibility,
> - distributed,
> - cost-effectiveness
If you are careful not to overestimate the performance, then you are OK.
>
> Why not something from robin.io or purestorage, netapp, dell/EMC. From
> opensource longhorn or openEBS.
>
Suggestion: Start with your requirements, vs the “ilities” of the storage
system.
By “ilities” I mean scalability, flexibility, distributability, durability,
manageability, and so on - any storage system can and will lay (at least some)
claim to those.
What are the needs of your
Thank you, Anthony.
But does it work to upgrade from the latest 15 to the latest 16, too?
We'd like to be careful.
And then from the latest 16 to the latest 18?
Best,
Malte
On 21.04.24 04:14, Anthony D'Atri wrote:
The party line is to jump no more than 2 major releases at once.
So that
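The two-release rule above can be sketched as a small helper (purely
illustrative, not a Ceph tool) that plans the hops; note that both 15 -> 17 -> 18
and the more cautious 15 -> 16 -> 18 satisfy the rule:

```python
# Hypothetical helper illustrating the "jump no more than 2 major
# releases at once" rule (e.g. Octopus=15, Quincy=17, Reef=18).
def upgrade_path(current, target, max_jump=2):
    """Return the intermediate major releases, stepping at most max_jump."""
    path = []
    while current < target:
        current = min(current + max_jump, target)
        path.append(current)
    return path

print(upgrade_path(15, 18))  # -> [17, 18]
print(upgrade_path(15, 16))  # -> [16]
```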
Hi Tobias,
April 18, 2024 at 10:43 PM, "Tobias Langner" wrote:
> While trying to dig up a bit more information, I noticed that the mgr web UI
> was down, which is why we failed over the active mgr to have one of the
> standbys take over, without thinking much...
>
> Lo and behold, this
Hi Niklaus,
Lots of questions here but let me try and get through some of them.
Personally, unless a cluster is for deep archive, I would never suggest
configuring or deploying a cluster without RocksDB and WAL on NVMe.
There are a number of benefits to this in terms of performance and
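As a sketch of what this can look like on a cephadm-managed cluster (the
service id and device filters below are assumptions, not from the thread), a
drive-group service spec that places RocksDB/WAL on non-rotational NVMe
devices might be:

```yaml
# Sketch of a cephadm OSD service spec; device filters are illustrative.
service_type: osd
service_id: osd_hdd_with_nvme_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1      # HDDs hold the main data
  db_devices:
    rotational: 0      # NVMe devices hold RocksDB (the WAL colocates with the DB)
```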
Dear colleagues, we hope somebody can help us.
The initial point: a Ceph cluster v15.2 (installed and controlled by
Proxmox) with 3 nodes based on physical servers rented from a cloud provider.
CephFS is installed also.
Yesterday we discovered that some of the applications stopped working.
Some additional information. Even though the pgs report unknown, directly
querying them suggests they are correctly up.
What could be causing this disconnect in reported pg states?
```
$ ceph pg dump_stuck inactive | head
ok
PG_STAT  STATE  UP  UP_PRIMARY  ACTING  ACTING_PRIMARY
16.fc
Hi Tobias,
April 18, 2024 at 8:08 PM, "Tobias Langner" wrote:
>
> We operate a tiny ceph cluster (v16.2.7) across three machines, each
>
> running two OSDs and one of each mds, mgr, and mon. The cluster serves
>
> one main erasure-coded (2+1) storage pool and a few other
I'd assume (w/o
Hi,
We have recently upgraded one of our clusters from Quincy 17.2.6 to Reef
18.2.1, since then we have had 3 instances of our RGWs stop processing
requests. We have 3 hosts, each running a single RGW instance, and all 3
just seem to stop processing requests at the same time, causing our
Hi Sinan,
On 17.04.24 14:45, si...@turka.nl wrote:
Hello,
I am using Ceph RGW for S3. Is it possible to create (sub)users that
cannot create/delete buckets and are limited to specific buckets?
In the end, I want to create 3 separate users, and for each user I want
to create a bucket. The
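One way this is commonly done (a sketch; the user and bucket names are
hypothetical) is to create the bucket with an admin account and attach an S3
bucket policy that grants the restricted user object access to that bucket
only, without any bucket-creation rights:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowUser1SingleBucketAccess",
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/user1"]},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket1/*"]
  }]
}
```

A policy like this can typically be attached with `s3cmd setpolicy` or
`aws s3api put-bucket-policy` against the RGW endpoint.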
Thanks for the introduction, Josh!
Hi Ceph Community, I look forward to working with you and learning more
about your community. If anyone has content you'd like shared on social
media, or if you have PR questions, feel free to reach out!
Best,
Noah
On Tue, Apr 16, 2024 at 9:00 AM Josh Durgin
Hi,
I have trouble answering this question:
Why is Ceph better than other storage solutions?
I know the high-level texts about
- scalability,
- flexibility,
- distributed,
- cost-effectiveness
What convinces me, though it could also be turned against it, is that Ceph as
a product has everything that
We operate a tiny ceph cluster across three machines, each running two
OSDs and one of each mds, mgr, and mon. The cluster serves one main
erasure-coded (2+1) storage pool and a few other management-related
pools. The cluster has been running smoothly for several months.
A few weeks ago we
Hi ... running against the wall, I need your help again.
Our test stretched cluster is running fine.
Now I have 2 questions.
What's the right way to add another pool?
Create the pool with 4/2 and use the rule for stretch mode, finished?
The existing pools were automatically set to 4/2 after
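For the first question, the usual sequence is roughly the following (a sketch;
the pool and rule names are hypothetical, and it assumes the stretch CRUSH
rule already exists):

```
# Create the pool against the stretch CRUSH rule (names are examples).
ceph osd pool create newpool 64 64 replicated stretch_rule
# In stretch mode, replicated pools are expected to run at 4/2.
ceph osd pool set newpool size 4
ceph osd pool set newpool min_size 2
```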
Hello all! We hope somebody can help us.
The initial point: a Ceph cluster v15.2 (installed and controlled by
Proxmox) with 3 nodes based on physical servers rented from a cloud provider.
The volumes are provided by Ceph using CephFS and also RBD. We run 2 MDS
daemons but use max_mds=1, so one