I'm coming back to trying mixed SSD+spinning disks after maybe a year.
My vague recollection was that if you told Ceph to auto-configure all the
disks, it would automatically carve up the SSDs into the appropriate
number of LVM segments and use them as WAL devices for each HDD.
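That is roughly what ceph-volume's batch mode does; a sketch (device names are placeholders, and behaviour should be verified against your Ceph release):

```shell
# Sketch: ceph-volume's batch mode inspects the rotational flag of each
# device; SSDs passed alongside HDDs are carved into LVs and used as
# db/wal devices for the HDD-backed OSDs.
# /dev/sd{b..e} (HDDs) and /dev/nvme0n1 (SSD) are placeholder names.
ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/nvme0n1
# Drop --report to actually create the OSDs once the plan looks right.
```

The `--report` flag makes it a dry run, so you can check the proposed carve-up before committing.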
I still prefer the simplest solution. There are 4U servers with 110 x
20TB disks on the market.
After RAID you get 1.5 PiB per server, which is 30 months of data.
Two such servers will hold 5 years of data with minimal problems.
If you need backups, buy two more sets and just send zfs snapshot
diffs.
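The snapshot-diff shipping could be sketched like this (dataset and host names are placeholders):

```shell
# Sketch, assuming a dataset tank/data and a reachable backup host.
zfs snapshot tank/data@2021-02-16
zfs snapshot tank/data@2021-02-17
# Send only the delta between the two snapshots to the backup machine.
zfs send -i tank/data@2021-02-16 tank/data@2021-02-17 | \
    ssh backup1 zfs receive tank/data
```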
On Wed, Feb 17, 2021 at 05:36:53PM +0100, Loïc Dachary wrote:
> Bonjour,
>
> TL;DR: Is it more advisable to work on Ceph internals to make it
> friendly to this particular workload, or to write something similar to
> EOS[0] (i.e. RocksDB + XRootD + RBD)?
CERN's EOSPPC instance, which is one of the
Thanks for that link Dan.
I had searched for a bug but did not find that one.
Rolling back to 14.2.10 (our previous version) has resolved our issue for now.
I’ll keep an eye out for an update with this bug fix.
Cheers,
--
Mike Cave
From: Dan van der Ster
Date: Wednesday, February 17, 2021
Hey Mike,
Maybe it's this? https://tracker.ceph.com/issues/48632
Cheers, Dan
On Wed, Feb 17, 2021, 6:53 PM Mike Cave wrote:
> I am bumping this email to hopefully get some more eyes on it.
>
> We are continuing to have this problem. Unfortunately the cluster is very
> lightly used currently
Hi Konstantin, thank you for your response.
I have not created a bug report yet, as I was not sure if it was a bug or if I
had a configuration issue.
A new piece of information is that if I create a bucket, then try to delete it,
it fails with a 404.
Now if I create a bucket, restart the rgw
On 17/02/2021 18:27, Serkan Çoban wrote:
> Why not put all the data in a zfs pool with a 3-4 level deep directory
> structure, each directory named with a one-byte hex value in the range 00-FF?
> Four levels deep you get 256^4 ≈ 4.3B folders with 3-4 objects per folder,
> or three levels deep you get 256^3 ≈ 16.8M folders
Mike, did you create a ticket for this issue, ideally with logs and a reproducer?
k
Sent from my iPhone
> On 17 Feb 2021, at 20:54, Mike Cave wrote:
>
> I am bumping this email to hopefully get some more eyes on it.
>
> We are continuing to have this problem. Unfortunately the cluster is
Hi Paul,
we might have found the reason for MONs going silly on our cluster. There is a
message size parameter that seems way too large. We reduced it today from the
default of 10M to 1M and haven't observed silly MONs since:
ceph config set global osd_map_message_max_bytes $((1*1024*1024))
I
I am bumping this email to hopefully get some more eyes on it.
We are continuing to have this problem. Unfortunately the cluster is very
lightly used currently until we go full production so we do not have the level
of traffic that would generate a lot of statistics.
We did update to 14.2.16
Why not put all the data in a zfs pool with a 3-4 level deep directory
structure, each directory named with a one-byte hex value in the range 00-FF?
Four levels deep you get 256^4 ≈ 4.3B folders with 3-4 objects per folder,
or three levels deep you get 256^3 ≈ 16.8M folders with ~1000 objects
each.
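As an illustration of that layout, the three-level path for an object falls straight out of its hex name (the hash value here is just an example):

```shell
# Derive a 3-level-deep directory path from the leading bytes of an
# object's hex name; each path component is one byte (00-ff).
h=f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2
echo "${h:0:2}/${h:2:2}/${h:4:2}/${h}"
# prints: f2/ca/1b/f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2
```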
On Wed, Feb 17, 2021 at
Hi Nathan,
Good thinking :-) The names of the objects are indeed the SHA256 of their
content, which provides deduplication.
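A minimal illustration of why that gives deduplication (using sha256sum from coreutils; the payload string is made up):

```shell
# Identical content hashes to the identical name, so duplicate objects
# collapse onto a single stored copy: dedup falls out of the naming scheme.
name_a=$(printf 'immutable payload' | sha256sum | awk '{print $1}')
name_b=$(printf 'immutable payload' | sha256sum | awk '{print $1}')
[ "$name_a" = "$name_b" ] && echo "same object, stored once"
```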
Cheers
On 17/02/2021 18:04, Nathan Fish wrote:
> I'm not much of a programmer, but as soon as I hear "immutable
> objects" I think "content-addressed". I don't know if
I'm not much of a programmer, but as soon as I hear "immutable
objects" I think "content-addressed". I don't know if you have many
duplicate objects in this set, but content-addressing gives you
object-level dedup for free. Do you have to preserve some meaningful
object names from the original
Bonjour,
TL;DR: Is it more advisable to work on Ceph internals to make it friendly to
this particular workload, or to write something similar to EOS[0] (i.e. RocksDB +
XRootD + RBD)?
This is a followup of two previous mails[1] sent while researching this topic.
In a nutshell, the Software Heritage
> -----Original Message-----
> From: Marc
> Sent: 17 February 2021 15:51
> To: 'ceph-users@ceph.io'
> Subject: [ceph-users] Re: rbd move between pools
>
> >
> > What is the best way to move an rbd image to a different pool. I want
> > to
> > move some 'old' images (some have snapshots) to
>
> What is the best way to move an rbd image to a different pool. I want
> to
> move some 'old' images (some have snapshots) to backup pool. For some
> there is also a difference in device class.
>
> This is what I found on the mailing list, but it is from 2018. So maybe
> this is outdated?
What is the best way to move an rbd image to a different pool. I want to
move some 'old' images (some have snapshots) to backup pool. For some
there is also a difference in device class.
This is what I found on the mailing list, but it is from 2018. So maybe this is
outdated?
rbd export
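That old recipe presumably continued along these lines; a sketch with placeholder pool/image names (note that a plain export does not carry snapshots, and newer releases have more direct tools):

```shell
# Mimic and later: deep-copy moves the image including its snapshots.
rbd deep-copy rbd/old-image backup/old-image
# Older route: pipe an export into an import (snapshots are NOT included).
rbd export rbd/old-image - | rbd import - backup/old-image
```

On Nautilus there is also `rbd migration prepare/execute/commit` for live moves; worth checking the docs for your release rather than relying on a 2018 thread.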
On 2/17/2021 1:07 PM, Boris Behrens wrote:
Hi Igor,
this is good news for me. Do you have an idea in which version the fix
will be released and can you tell me how I can track if the fix is in
the release?
v14.2.17 will include the fix as the patch is already merged into the
Nautilus branch.
Hi Igor,
this is good news for me. Do you have an idea in which version the fix will
be released and can you tell me how I can track if the fix is in the
release?
I will read a bit about the allocators but I doubt we will do the switch
and just wait it out (if it does not take a year) :)
Thank
Hi Boris,
highly likely you've faced https://tracker.ceph.com/issues/47751
It's fixed in the upcoming Nautilus release, but v14.2.16 still lacks the fix.
As a workaround you might want to switch back to the bitmap or avl allocator.
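The workaround would presumably look like this (a sketch; verify the option name against your release docs, and note the OSDs need a restart for it to take effect):

```shell
# Switch the BlueStore allocator back to bitmap as a workaround.
ceph config set osd bluestore_allocator bitmap
# Restart the OSDs so the new allocator is actually used.
systemctl restart ceph-osd.target
```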
Thanks,
Igor
On 2/17/2021 12:36 PM, Boris Behrens wrote:
Hi,
Hi,
we are currently experiencing OSD daemon crashes and I can't pin down the
issue. I hope someone can help me with it.
* We operate multiple clusters (440 SSD - 1PB, 36 SSD - 126TB, 40 SSD - 100TB,
84 HDD - 680TB)
* All clusters were updated around the same time (2021-02-03)
* We restarted ALL ceph daemons