Hi everyone,
The recording is now available:
https://www.youtube.com/watch?v=vQF17UBU4RE
On Thu, Oct 1, 2020 at 7:38 PM Szabo, Istvan (Agoda)
wrote:
> Hi,
>
> Is it available for download or youtube?
>
> Thank you.
>
> From: Peter Sarossy
> Sent: Friday
Doing it in the container seems like the right way, and you also seem to have
gotten it running. I haven't had the time to dig into cephadm yet, so my
knowledge is too limited at this point. But I think you could skip the
creation and wiping and just run the create command within the container.
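A rough sketch of what I mean (untested; the device path is a placeholder):

# run ceph-volume inside the cephadm-managed container environment
cephadm shell -- ceph-volume lvm create --data /dev/sdX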
Quoting
> thx for taking care. I read "works as designed, be sure to have disk
> space for the mon available".
Well, yeah ;)
> It sounds a little odd that the growth
> from 50MB to ~15GB + compaction space happens within a couple of
> seconds, when two OSDs rejoin the cluster.
I’m suspicious — even on
Hmm, in that case the osdmaps do not explain your high mon disk usage.
You'll have to investigate further...
-- dna
On Fri, Oct 2, 2020 at 5:26 PM Andreas John wrote:
>
> Hello *,
>
> thx for taking care. I read "works as designed, be sure to have disk
> space for the mon available". It sounds
One user in the Russian Ceph chat had this problem when he had the "insights"
mgr module enabled. So try disabling various mgr modules and see if it
helps...
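For example (insights is just the first one I'd try):

# list the currently enabled mgr modules
ceph mgr module ls
# disable a suspect module, e.g. insights
ceph mgr module disable insights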
> Hello,
>
> we observed massive and sudden growth of the mon db size on disk, from
> 50MB to 20GB+ (GB!) and thus reaching 100% disk usage on
Hello *,
thx for taking care. I read "works as designed, be sure to have disk
space for the mon available". It sounds a little odd that the growth
from 50MB to ~15GB + compaction space happens within a couple of
seconds, when two OSDs rejoin the cluster. Does it matter if I have
cephfs in use? Usua
Thanks. You mean directly running 'ceph-volume lvm create' on the target host
(not inside any container, like what 'ceph orch' does), right?
And I finally found a hacky way to run my OSD in a container.
1. ceph orch daemon add osd host:/dev/sdX
2. On target host, stop the just-created OSD service
The important metric is the difference between these two values:
# ceph report | grep osdmap | grep committed
report 3324953770
"osdmap_first_committed": 3441952,
"osdmap_last_committed": 3442452,
The mon stores osdmaps on disk, and trims the older versions whenever
the PGs are clean. Tri
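If you want that difference as a single number, something like this works
(assuming jq is available; ceph report emits JSON):

# gap between the newest and oldest osdmap the mon still stores;
# a large gap means the mon cannot trim and the DB keeps growing
ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'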
Does this also count if your cluster is not healthy because of errors
like '2 pool(s) have no replicas configured'?
I sometimes use these pools for testing; they are empty.
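For what it's worth, this is how I check which pools trigger that warning (a
sketch; pools listed with "size 1" are the ones without replicas):

# show the full health message
ceph health detail
# list pools; "replicated size 1" means no replicas configured
ceph osd pool ls detail | grep ' size 1 '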
-----Original Message-----
Cc: ceph-users
Subject: [ceph-users] Re: Massive Mon DB Size with noout on 14.2.11
As long as the cluster is not healthy, the mon DB will require much more
space, depending on the cluster size and other factors. Yes, this is somewhat
normal.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniu
Hello,
we observed massive and sudden growth of the mon db size on disk, from
50MB to 20GB+ (GB!) and thus reaching 100% disk usage on the mountpoint.
As far as we can see, it happens when we set "noout" for a node reboot:
after the node and the OSDs come back, it looks like the mon db size
increases
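For completeness, the flow in question (standard commands; the mon store path
below assumes a default deployment):

# keep CRUSH from marking the node's OSDs out during the reboot
ceph osd set noout
# ... reboot the node and wait for its OSDs to rejoin ...
ceph osd unset noout
# watch the mon DB size on the mon host
du -sh /var/lib/ceph/mon/ceph-*/store.db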
Hi,
at the moment there's only the manual way to deploy single OSDs, not
with cephadm. There have been a couple of threads on this list, I
don't have a link though.
You'll have to run something like
ceph-volume lvm create --data /dev/sdX --block.db {VG/LV}
Note that for block.db you'll ne
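For the block.db part, a minimal sketch of the manual LVM prep (VG/LV names
and the size are placeholders):

# create a VG on the SSD and carve out an LV for the OSD's block.db
vgcreate ceph-db /dev/sdY
lvcreate -L 60G -n db-osd0 ceph-db
# then create the OSD with data on the HDD and the DB on the new LV
ceph-volume lvm create --data /dev/sdX --block.db ceph-db/db-osd0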
On Fri, 2 Oct 2020 at 14:07, Manuel Lausch wrote:
> Hi,
> we are evaluating the Rados Gateway to provide S3 storage.
> There is one question related to backups/snapshots.
> Is there any way to create snapshots of buckets and/or back up a bucket?
> And how can we access the data of a snapshot?
>
> I found only som
Hi,
On 02.10.20 14:38, Alessandro Piazza wrote:
> However, from the Ceph docs, I can't understand if this might be a correct
> use-case for Ceph since the default authentication method CephX doesn't have
> a standard username/password authentication protocol.
CephX is used to authenticate the client
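As an illustration, per-user CephFS access is granted via CephX capabilities
rather than passwords (a sketch; the filesystem and user names are
placeholders):

# create a key for client.alice restricted to her home subtree
ceph fs authorize cephfs client.alice /home/alice rw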
Dear all,
I am experimenting with Ceph as a replacement for the Andrew File System
(https://en.wikipedia.org/wiki/Andrew_File_System). In my current setup, I am
using AFS as a distributed filesystem for approximately 1000 users to store
personal data and let them access their home directories and
Hi All,
I have a cluster that I tried upgrading from 15.2.4 to 15.2.5 using the
command 'ceph orch upgrade start --ceph-version 15.2.5'. After upgrading two of
the three mgrs, the third mgr failed and the upgrade stopped. I was able to
get the third mgr upgraded by changing the systemd fi
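In case it helps others, checking and resuming the upgrade looks like this
(standard cephadm commands):

# see where the upgrade currently stands
ceph orch upgrade status
# resume the upgrade to the target version
ceph orch upgrade start --ceph-version 15.2.5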
Hi,
we are evaluating the Rados Gateway to provide S3 storage.
There is one question related to backups/snapshots.
Is there any way to create snapshots of buckets and/or back up a bucket?
And how can we access the data of a snapshot?
I found only some very old information which indicates that this is not
possible
If such a 'simple' tool as ceph-volume is not working properly, how can I
trust cephadm to be good? Maybe Ceph development should rethink trying
to pump out new releases quickly and take a bit more time for testing.
I am already sticking to the oldest supported version just because of
this.
Hi all,
I'm new to Ceph. I recently deployed a Ceph cluster with cephadm. Now I want
to add a single new OSD daemon with a db device on an SSD. But I can't find
any documentation about this.
I have tried:
1. Using the web dashboard. This requires at least one filter to proceed
(type, vendor, mode
What about the network cards? The motherboard I'm looking at has 2 x 10GbE;
with that and the CPU frequency, I think the bottleneck will be the HDD. Is
that overkill? Thanks!
Ignacio Ocampo
> On 2 Oct 2020, at 0:38, Martin Verges wrote:
>
>
> For private projects, you can search small 1U s
Hi Eric,
So yes, we're hit by this. We have around 1.6M entries in shard 0 with
an empty key, e.g.:
{
    "type": "olh",
    "idx": "<80>1001_02/5f/025f8e0fc8234530d6ae7302adf682509f0f7fb68666391122e16d00bd7107e3/2018_11_14/2625203/3034777/metadata.gz",
    "entry": {
For private projects, you can look for small 1U servers with up to four 3.5"
disk slots and an E3-1230 v3/v4/v5 CPU. They can be bought used for 250-350€,
and then you just plug in a disk.
They are also good for SATA SSDs and work quite well. You can mix both
drive types in the same system as well.
--
Mart