Dear ceph folks,
I bumped into a very interesting challenge: how to securely erase an RBD
image's data without any encryption?
The motivation is to ensure that there is no information leak on the OSDs
after deleting a user-specified RBD image, without the extra burden of using
RBD encryption.
I've opened a bug report https://tracker.ceph.com/issues/61589, which
unfortunately received no attention.
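For what it's worth, a commonly suggested (but imperfect) workaround is to zero-fill the image before removing it. This is only a sketch under assumptions: the pool/image names are placeholders, and on BlueStore the old extents may still survive on the physical disk after an overwrite, so this is NOT a true secure erase. The demonstration below uses a temp file standing in for the mapped device.

```shell
# Imperfect workaround sketch: zero-fill before 'rbd rm'. Demonstrated
# on a temp file standing in for /dev/rbdX (names are hypothetical).
DEV=$(mktemp)                                             # stand-in device
dd if=/dev/urandom of="$DEV" bs=1M count=1 status=none    # "old" image data
dd if=/dev/zero    of="$DEV" bs=1M count=1 conv=notrunc status=none

# Real-device equivalent (untested sketch):
#   rbd map mypool/myimage                 # -> /dev/rbdX
#   dd if=/dev/zero of=/dev/rbdX bs=4M oflag=direct
#   rbd unmap /dev/rbdX && rbd rm mypool/myimage
```

Note that this only clears the logical contents; because BlueStore allocates new blocks on overwrite, the freed extents can still hold the old bytes until they are reused, which is exactly why the tracker issue asks for a proper solution.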
I fixed the issue by manually setting directory ownership
for /var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86/crash
and /var/lib/ceph/3f50555a-ae2a-11eb-a2fc-ffde44714d86/crash/posted to
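The manual fix can be sketched as follows. The fsid path is the one from the message above; the uid/gid 167 is an assumption (the ceph user inside typical cephadm containers; confirm with `ls -ln` on a working host). The runnable part below demonstrates the same recursive chown on a temp directory, using the current user.

```shell
# Real fix (needs root; 167:167 is the assumed container ceph uid/gid):
#   chown -R 167:167 /var/lib/ceph/<fsid>/crash

# Demonstration on a temp dir, chowning to the current user instead:
D=$(mktemp -d)
mkdir -p "$D/crash/posted"
chown -R "$(id -u):$(id -g)" "$D/crash"
stat -c '%u' "$D/crash/posted"
```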
Hi Gilles,
I'm not 100% sure, but I believe this relates to the logs kept for
doing incremental sync. When these are set to false, changes are not
tracked and sync doesn't happen.
My reference is this Red Hat documentation on configuring zones
without replication.
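To make the check concrete: the flags in question show up in the zone JSON. The fragment below is illustrative (zone name and values are assumed); on a live cluster you would inspect the output of `radosgw-admin zone get --rgw-zone=<zone>` instead.

```shell
# Illustrative zone JSON with sync logging disabled (values assumed):
cat > zone.json <<'EOF'
{
  "name": "secondary",
  "log_meta": "false",
  "log_data": "false"
}
EOF
# The same grep against live output:
#   radosgw-admin zone get --rgw-zone=<zone> | grep -E '"log_(meta|data)"'
grep -E '"log_(meta|data)"' zone.json
```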
Hi,
I have a very old Ceph cluster running the old Dumpling version 0.67.1. One
of the three monitors suffered a hardware failure, and I am setting up a new
server running Ubuntu 22.04 LTS to replace it (all the other monitors are
using the old Ubuntu 12.04 LTS).
I used ceph-deploy
Are you using an EC pool?
On Wed, May 31, 2023 at 11:04 AM Ben wrote:
>
> Thank you Patrick for help.
> The random write tests are performing well enough, though. Wonder why the
> read test is so poor with the same configuration (read bandwidth is about
> 15 MB/s vs 400 MB/s for writes).
Hi guys,
In perf dump of RGW instance I have two similar sections.
First one:
"objecter": {
    "op_active": 0,
    "op_laggy": 0,
    "op_send": 38816,
    "op_send_bytes": 199927218,
    "op_resend": 0,
    "op_reply": 38816,
    "oplen_avg": {
It looks like an old answer from the list just solved my problem!
I found https://www.mail-archive.com/ceph-users@ceph.io/msg14418.html .
So I tried
ceph config rm mds.mds01.ceph03.xqwdjy container_image
ceph config rm mgr.ceph06.xbduuf container_image
And BOOM. It worked.
Thanks for all the
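For anyone hitting the same thing, a quick way to spot such stale per-daemon image pins is to grep the config dump before removing them. The block below simulates the dump output (daemon names are the ones from this thread; yours will differ); on a live cluster you would run `ceph config dump | grep container_image` directly.

```shell
# Simulated 'ceph config dump' output (daemon names from this thread):
cat > dump.txt <<'EOF'
mds.mds01.ceph03.xqwdjy  basic  container_image  quay.io/ceph/ceph:v16
mgr.ceph06.xbduuf        basic  container_image  quay.io/ceph/ceph:v16
EOF
grep -c container_image dump.txt
# Then, for each match on a live cluster:
#   ceph config rm <daemon> container_image
```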
On 6/7/23 14:22, Frank Schilder wrote:
Hi Stefan,
yes, ceph-volume OSDs.
Requirements:
Kernel version: 5.9 and higher
cryptsetup: 2.3.4 and higher, preferably 2.4.x (automatic alignment of
sector size based on physical disk properties).
RAW device:
cryptsetup luksFormat
When you try to change the user using "ceph cephadm set-user" (or any of
the other commands that change ssh settings) it will attempt a connection
to a random host with the new settings, and run the "cephadm check-host"
command on that host. If that fails, it will change the setting back and
I found something else, that might help with identifying the problem.
When I look into which containers are used I see the following:
global:
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45,
mon:
Hi Stefan,
yes, ceph-volume OSDs.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Stefan Kooman
Sent: Wednesday, June 7, 2023 1:38 PM
To: Frank Schilder; Anthony D'Atri; ceph-users@ceph.io
Subject: Re:
On 6/7/23 12:57, Frank Schilder wrote:
Hi Stefan,
sorry, forgot. Block device is almost certainly LVM with dmcrypt - unless you
have another way of using encryption with ceph OSDs.
I can compare LVM with LVM+dmcrypt(default/new) and possibly also raw /dev/sd?
performance. If LVM+dmcrypt
Hi,
I can't find any documentation for this upgrade process. Is there anybody
who has already done it?
Does the normal apt-get update method work?
Thank you
Hi Stefan,
sorry, forgot. Block device is almost certainly LVM with dmcrypt - unless you
have another way of using encryption with ceph OSDs.
I can compare LVM with LVM+dmcrypt(default/new) and possibly also raw /dev/sd?
performance. If LVM+dmcrypt shows good results, I will also try it with
Hi guys
I deployed the ceph cluster with cephadm and root user, but I need to
change the user to a non-root user
And I did these steps:
1- Created a non-root user on all hosts with passwordless access and
sudo:
`$USER_NAME ALL = (root) NOPASSWD:ALL`
2- Generated an SSH key pair and use
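The remaining steps typically look like the sketch below (user and key names are placeholders, not from the message). The runnable part only demonstrates the sudoers line from step 1; the cephadm commands are shown as comments since they need a live cluster.

```shell
# Step 1's sudoers line, written to a drop-in file (user is a placeholder):
USER_NAME=cephadmin
echo "$USER_NAME ALL = (root) NOPASSWD:ALL" > sudoers_cephadmin
grep NOPASSWD sudoers_cephadmin

# Steps 2+ on a real cluster (untested sketch):
#   ssh-keygen -t ed25519 -N '' -f ceph_key
#   ssh-copy-id -i ceph_key.pub cephadmin@<each-host>
#   ceph cephadm set-priv-key -i ceph_key
#   ceph cephadm set-pub-key -i ceph_key.pub
#   ceph cephadm set-user cephadmin
```

As noted elsewhere in the thread, `ceph cephadm set-user` tests the new settings with a `cephadm check-host` on a random host and rolls back on failure, so the key must already be distributed before switching the user.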
Hi Stefan,
bare metal. I just need to know which kernel version and how to configure the
new queue parameters (I guess it's kernel boot parameters). I will do a fio test
to the raw block device first, I think this is what you posted? I can probably
try these settings on our test cluster, which
fs approved.
On Fri, Jun 2, 2023 at 2:54 AM Yuri Weinstein wrote:
> Still awaiting for approvals:
>
> rados - Radek
> fs - Kotresh and Patrick
>
> upgrade/pacific-x - good as is, Laura?
> upgrade/quincy-x - good as is, Laura?
> upgrade/reef-p2p - N/A
> powercycle - Brad
>
> On Tue, May 30, 2023
On 6/6/23 15:33, Frank Schilder wrote:
Yes, would be interesting. I understood that it mainly helps with buffered
writes, but ceph is using direct IO for writes and that's where bypassing the
queues helps.
Yeah, that makes sense.
Are there detailed instructions somewhere how to set up a
On Tue, Jun 6, 2023 at 4:30 PM Dario Graña wrote:
> Hi,
>
> I'm installing a new instance (my first) of Ceph. Our cluster runs
> AlmaLinux9 + Quincy. Now I'm dealing with CephFS and quotas. I read
> documentation about setting up quotas with virtual attributes (xattr) and
> creating volumes and
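Since the question is about quotas via virtual xattrs, a short sketch may help. The mount path is hypothetical; the quota value is a byte count, so it is worth computing it explicitly. Only the arithmetic runs below; the setfattr/getfattr calls need a mounted CephFS.

```shell
# CephFS quotas take a byte value; compute 100 GiB explicitly:
QUOTA_BYTES=$((100 * 1024 * 1024 * 1024))
echo "$QUOTA_BYTES"

# On a mounted CephFS (path is a placeholder):
#   setfattr -n ceph.quota.max_bytes -v "$QUOTA_BYTES" /mnt/cephfs/volumes/mydir
#   getfattr -n ceph.quota.max_bytes /mnt/cephfs/volumes/mydir
```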