ame ceph-27923302-87a5-11ec-ac5b-976d21a49941-osd-1-activate -e
CONTAINER_IMAGE=quay.io/ceph/ceph:v18.2.1 -e NODE_NAME=zephir -e CEPH_USE_RANDOM_NONCE=1 -e
CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v
/var/run/ceph/27923302-87a5-11ec-ac5b-976d21a49941:/var/run/ceph:z -v
/var/log/ce
Regards,
Reto
On Sat, Jan 6, 2024 at 5:22 PM Reto Gysi wrote:
> Hi ceph community
>
> I noticed the following problem after upgrading my ceph instance on Debian
> 12.4 from 17.2.7 to 18.2.1:
>
> I had placed bluestore block.db for hdd osd's on raid1
Hi ceph community
I noticed the following problem after upgrading my ceph instance on Debian
12.4 from 17.2.7 to 18.2.1:
I had placed bluestore block.db for the hdd osds on raid1/mirrored logical
volumes on 2 nvme devices, so that if a single block.db nvme device fails,
not all hdd osds fail.
Hi,
No, I don't think so. At best it's not very useful, and at worst it's bad.
In the IT organizations I've worked for so far, any systems that actually store
data were in the highest security zone, where no incoming or outgoing connection
to the internet was allowed. Our systems couldn't even resolve
Hi
I haven't updated to reef yet. I've tried this on quincy.
# create a testfile on cephfs.rgysi.data pool
root@zephir:/home/rgysi/misc# echo cephtest123 > cephtest.txt
# list inode of new file
root@zephir:/home/rgysi/misc# ls -i cephtest.txt
1099518867574 cephtest.txt
# convert inode value to hexadecimal
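A sketch of that step and the object lookup that typically follows, using the inode and data pool from above:
# 1099518867574 in hex is 100006e7876
printf '%x\n' 1099518867574
# cephfs object names are <hex inode>.<block number>, so list the file's objects
rados -p cephfs.rgysi.data ls | grep '^100006e7876'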
s.
>
> On Wed, May 10, 2023 at 6:33 PM Reto Gysi wrote:
>
>> Hi
>>
>> For me with ceph version 17.2.6 rbd doesn't allow me to (delete (I've
>> configured that delete only moves image to trash)/) purge an image that
>> still has snapshots. I need to first delet
Hi
For me, with ceph version 17.2.6, rbd doesn't allow me to delete (I've
configured delete so that it only moves the image to trash) or purge an image
that still has snapshots. I need to delete all the snapshots first.
from man page:
rbd rm image-spec
Delete an rbd image (including all data
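In practice the sequence that works here is roughly the following (pool/image names are placeholders):
# remove all snapshots of the image first
rbd snap purge rbd/myimage
# then the image itself can be removed (or moved to trash, per the configuration above)
rbd rm rbd/myimage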
Hi Eugen,
I've created a certificate with subject alternative names, so the
certificate is valid on each node of the cluster.
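For reference, a certificate like that can be generated roughly as follows (hostnames and file names are examples; -addext needs OpenSSL 1.1.1 or newer):
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout grafana.key -out grafana.crt \
  -subj "/CN=zephir" \
  -addext "subjectAltName=DNS:zephir,DNS:node2,DNS:node3"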
Cheers
Reto
On Thu, Apr 20, 2023 at 11:42 AM Eugen Block wrote:
> Hi *,
>
> I've set up grafana, prometheus and node-exporter on an adopted
>
Ok, thanks Venky!
On Thu, Apr 20, 2023 at 6:12 AM Venky Shankar <vshan...@redhat.com> wrote:
> Hi Reto,
>
> On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov wrote:
> >
> > On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote:
> > >
> > >
> > &
Apr 19 18:41:38 2023
root@zephir:~#
Thank you very much.
So I will wait and see whether Venky or Shankar give feedback on whether the 2
cephfs file systems should use different ec pools.
Thanks & Cheers
Reto
On Wed, Apr 19, 2023 at 6:04 PM Ilya Dryomov wrote:
> On Wed, Apr 19, 2023 at 5:57
Hi,
On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
> >
> > yes, I used the same ecpool_hdd also for cephfs file systems. The new
> pool ecpool_test I've created for a test, I've also created it with
> applic
is that I should migrate the cephfs
data from ecpool_hdd to a separate erasure code pool for cephfs and then
remove the 'cephfs' application tag from the ecpool_hdd pool, correct?
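A rough sketch of such a migration (the new pool name, file system name and mount path are only examples):
# new EC data pool for cephfs, attached to the file system
ceph osd pool create cephfs.ec.data erasure
ceph osd pool set cephfs.ec.data allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs.ec.data
# point a directory at the new pool; new files go there, existing files keep
# their old layout and have to be copied/rewritten to move
setfattr -n ceph.dir.layout.pool -v cephfs.ec.data /mnt/cephfs
# once no cephfs data is left on ecpool_hdd, drop the application tag
ceph osd pool application rm ecpool_hdd cephfs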
On Wed, Apr 19, 2023 at 9:37 AM Ilya Dryomov wrote:
> On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi wrote:
> &g
hrieb Ilya Dryomov :
> On Tue, Apr 18, 2023 at 5:45 PM Reto Gysi wrote:
> >
> > Hi Ilya
> >
> > Sure.
> >
> > root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1
> --debug-rbd 20 >/home/rgysi/log.txt 2>&1
>
> You probably h
at you expect (if those values are even
> set)? I've never used custom values for those configs but if you don't
> specify a pool name the default name "rbd" is expected by ceph. At
> least that's how I know it.
>
> Quoting Reto Gysi:
>
> > Hi Ilya
> &g
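As a side note on the default pool name mentioned in the quote above, the fallback pool can be checked like this (no specific value assumed):
# show which pool rbd falls back to when no pool is given in the image spec
ceph config get client rbd_default_pool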
Hi Ilya
Sure.
root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1
--debug-rbd 20 >/home/rgysi/log.txt 2>&1
root@zephir:~#
On Tue, Apr 18, 2023 at 4:19 PM Ilya Dryomov wrote:
> On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi wrote:
> >
> > Hi,
>
new and
> > existing images with existing data pool 'ecpool_hdd'
>
> just one thought, could this be a caps mismatch? Is it the same user
> in those two pools who creates snaps (or tries to)? If those are
> different users could you share the auth caps?
>
> Zita
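A quick way to compare the caps in question (the user name is just an example):
# show the capabilities of the client used to create the snapshots
ceph auth get client.admin
# or list all users and their caps
ceph auth ls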
st.txt
root@zephir:~#
I'm currently running
ceph osd pool repair ecpool_hdd
and will check later if that fixes the problem
Cheers
Reto
On Mon, Apr 17, 2023 at 9:18 PM Ilya Dryomov wrote:
> On Mon, Apr 17, 2023 at 6:37 PM Reto Gysi wrote:
> >
> > Hi Ilya,
> >
> >
10 config
rbd_request_timed_out_seconds     30       config
rbd_skip_partial_discard          true     config
rbd_sparse_read_threshold_bytes   65536    config
Cheers
Reto Gysi
On Mon, Apr 17, 2023 at 5:31 PM Ilya Dryomov wrote:
> On Mon, Apr 1
",
"mds.jellyfin.zephir.iqywsn",
"osd.12",
"osd.7",
"osd.2",
"crash.zephir",
"rgw.default.zephir.jqmick",
"mds.backups.zephir.ygigch",
"osd.0",
"osd