I found out that it's already possible to specify a storage path in the OSD
service specification YAML. It works for data_devices, but unfortunately not
for db_devices and wal_devices, at least not in my case.
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
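(For reference, a minimal sketch of the kind of spec in question. The device
paths below are placeholders, not the poster's actual devices, and the layout
assumes the Octopus-era drive group format with the device sections at the top
level, applied through cephadm.)

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/nvme0n1
wal_devices:
  paths:
    - /dev/nvme1n1
EOF
ceph orch apply osd -i osd_spec.yml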
Do know that this is the only mailing list I am subscribed to that sends me
so much spam. Maybe the list admin should finally have a word with the other
list admins about how they are managing their lists.
On 8/09/2020 5:30 pm, Marc Roos wrote:
> Do know that this is the only mailing list I am subscribed to that sends me
> so much spam. Maybe the list admin should finally have a word with the other
> list admins about how they are managing their lists.
Hello,
Is it possible to somehow sync a Ceph cluster from one site to a Ceph cluster
at another site?
I'm just using the CephFS feature and no block devices.
Being able to sync CephFS pools between two sites would be great for a hot
backup, in case one site fails.
Thanks in advance,
Simon
On 2020-09-08 11:22, Simon Sutter wrote:
> Hello,
>
> Is it possible to somehow sync a Ceph cluster from one site to a Ceph
> cluster at another site?
> I'm just using the CephFS feature and no block devices.
>
> Being able to sync CephFS pools between two sites would be great for a hot
> backup, in case
Thanks Stefan,
First of all, for a bit more context: we use this Ceph cluster just for hot
backups, so 99% write, 1% read, and no need for low latency.
OK, so the snapshot function would give us something like a colder backup,
just like a snapshot of a VM, without any incremental functionality, which
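(For reference, a rough sketch of the snapshot-plus-copy idea discussed here;
it assumes the filesystem is mounted at /mnt/cephfs, that snapshots are
enabled on it, and "backupsite" and the paths are placeholders.)

SNAP=backup-$(date +%F)
# CephFS turns a mkdir inside the hidden .snap directory into a snapshot
mkdir /mnt/cephfs/data/.snap/"$SNAP"
# Copy the frozen view to the other site; rsync transfers only changed files
rsync -a /mnt/cephfs/data/.snap/"$SNAP"/ backupsite:/backup/data/
# Drop the snapshot once the copy is done
rmdir /mnt/cephfs/data/.snap/"$SNAP"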
Just my 5 cents: the admin should disable postings via the web interface ...
all the spam is injected via Hyperkitty!!
Since there is no parameter to accomplish this, the admin should hack into
"post_to_list" and raise an exception on posting attempts to mitigate this!
regards
Gerhard W. Recher
update:
The admin should consider using version 1.3.4:
https://hyperkitty.readthedocs.io/en/latest/news.html
* Implemented a new HYPERKITTY_ALLOW_WEB_POSTING setting that allows
  disabling the web posting feature. (Closes #264)
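(A minimal sketch of applying that setting; the settings file path and the
service name below are assumptions that differ between installations.)

# Assumption: the Django settings used by Mailman/Hyperkitty live here
echo 'HYPERKITTY_ALLOW_WEB_POSTING = False' >> /etc/mailman3/settings.py
# Restart whatever serves the Hyperkitty web UI (service name varies)
systemctl restart mailman3-web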
Gerhard W. Recher
net4sec UG (haftungsbeschränkt)
Leitenweg 6
8692
https://tracker.ceph.com/issues/46558
Hey all,
I'm creating a new post for this issue as we've narrowed the problem down to
a part-size limitation on multipart uploads. We have discovered in our
production Nautilus (14.2.11) cluster and our lab Nautilus (14.2.10) cluster
that multipart uploads with a configured part size of greater
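(For anyone trying to reproduce this from the client side, awscli lets you
pick the multipart part size explicitly; the endpoint and bucket below are
placeholders.)

# Use 1 GiB parts for multipart uploads done by the aws CLI
aws configure set default.s3.multipart_chunksize 1GB
# Upload a large object against the RGW endpoint
aws --endpoint-url http://rgw.example.com:7480 s3 cp ./bigfile s3://testbucket/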
journal_devices is for filestore and filestore isn't supported with cephadm
I had been looking into this issue all day and during testing found
that a specific configuration option we had been setting for years was
the culprit. Not setting this value and letting it fall back to the
default seems to have fixed our issue with multipart uploads.
If you are curious, the confi
Hi,
I've got a ceph cluster, 7 nodes, 168 OSDs, with 96G of ram on each server.
Ceph has been instructed to set a memory target of 3G until we increase RAM
to 128G per node. Available memory tends to hover around 14G. I do see a
tiny bit (KB) of swap utilization per ceph-osd process, but there's n
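(Presumably the 3G target refers to osd_memory_target; for reference, setting
and checking it via the config database, with the value in bytes.)

# 3 GiB = 3221225472 bytes
ceph config set osd osd_memory_target 3221225472
# Verify what a given OSD actually picked up
ceph config get osd.0 osd_memory_target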
thanks, Shubjero
Would you consider creating a ceph tracker issue for this?
regards,
Matt
On Tue, Sep 8, 2020 at 4:13 PM shubjero wrote:
>
> I had been looking into this issue all day and during testing found
> that a specific configuration option we had been setting for years was
> the culpri
Hi,
I have changed most of the pools from 3-replica to EC 4+2 in my cluster.
When I use the ceph df command to show the used capacity of the cluster:
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED    %RAW USED
    hdd       1.8 PiB     788 TiB     1.0 PiB    1.0 PiB
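(For reference, the usual raw-space multipliers, ignoring per-object
overhead:
  replica 3:  raw used ≈ stored × 3          e.g. 100 TiB of data -> ~300 TiB raw
  EC 4+2:     raw used ≈ stored × (4+2)/4    e.g. 100 TiB of data -> ~150 TiB raw)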
Dear Ceph Users,
I am testing my 3-node Proxmox + Ceph cluster.
I have performed an OSD benchmark with the command below.
# ceph tell osd.0 bench
Do I need to perform any cleanup to delete benchmark data from the OSD?
I have googled for this, but nowhere are the post-steps after the osd
benchmark command mentioned.
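(For reference, the bench command also accepts explicit sizes in bytes; this
run would write 100 MiB in 4 MiB blocks.)

ceph tell osd.0 bench 104857600 4194304   # total bytes, block size per write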
I googled "got 0x6706be76, expected" and found some hits regarding Ceph, so
whatever it is, you are not the first, and that number has some internal
meaning.
A Red Hat solution for a similar issue says that this checksum corresponds to
reading all zeroes, and hints at a bad write cache on the controller or something