[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Stefan Kooman
On 6/18/21 8:42 PM, Sage Weil wrote: We've been beat up for years about how complicated and hard Ceph is. Rook and cephadm represent two of the most successful efforts to address usability (and not just because they enable deployment management via the dashboard!), and taking advantage of conta

[ceph-users] radosgw user "check_on_raw" setting

2021-06-22 Thread Jared Jacob
Hello, I am setting up user quotas and I would like to enable the check on raw setting for my user's quota. I can't find any documentation on how to change this setting in any of the ceph documents. Do any of you know how to change this setting? Possibly using radosgw-admin? Thanks in advance! Ja
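(A hedged sketch, not a verified answer: "jsmith" is a placeholder uid. The check_on_raw flag shows up inside the user_quota/bucket_quota blocks of "radosgw-admin user info", and if your release has no dedicated CLI flag for it, one approach that may work is round-tripping the user metadata; try it on a test user first.)

  # inspect current quota settings; the quota blocks include "check_on_raw"
  radosgw-admin user info --uid=jsmith
  # dump the user record, edit "check_on_raw" in the JSON, then write it back
  radosgw-admin metadata get user:jsmith > jsmith.json
  radosgw-admin metadata put user:jsmith < jsmith.json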

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Stefan Kooman
On 6/21/21 6:19 PM, Nico Schottelius wrote: And while we are at claiming "on a lot more platforms", you are at the same time EXCLUDING a lot of platforms by saying "Linux based container" (remember Ceph on FreeBSD? [0]). Indeed, and that is a more fundamental question: how easy it is to make

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Stefan Kooman
On 6/22/21 6:56 PM, Martin Verges wrote: > There is no "should be", there is no one answer to that, other than 42. Containers existed before Docker, but Docker made them popular, for exactly the same reason that Ceph wants to use them: ship a known good version (CI tests) of the soft

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Konstantin Shalygin
140 LVs actually, in the hybrid OSD case. Cheers, k Sent from my iPhone > On 22 Jun 2021, at 12:56, Thomas Roth wrote: > > I was going to try cephfs on ~10 servers with 70 HDD each. That would make > each system having to deal with 70 OSDs, on 70 LVs?

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Stefan Kooman
On 6/21/21 7:37 PM, Marc wrote: I have seen no arguments why to use containers other than to try and make it "easier" for new ceph people. I advise reading the whole thread again, especially Sage's comments, as there are other benefits. It would free up resources that can be dedicated t

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Marc
> > > > > I have seen no arguments why to use containers other than to try and > make it "easier" for new ceph people. > > I advise reading the whole thread again, especially Sage's comments, > as there are other benefits. It would free up resources that can be > dedicated to (arguably) more pr

[ceph-users] RBD migration between 2 EC pools : very slow

2021-06-22 Thread Gilles Mocellin
Hello Cephers, On a capacity-oriented Ceph cluster (13 nodes, 130 OSDs, 8 TB HDDs), I'm migrating a 40 TB image from a 3+2 EC pool to an 8+2 one. The use case is Veeam backup on XFS filesystems, mounted via KRBD. Backups are running, and I can see 200 MB/s of throughput. But my migration (rbd migrate prep
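(For readers following along, a minimal sketch of the rbd live-migration workflow under assumed pool names; "rbd_meta" is the replicated metadata pool and "ec_8_2" the new EC data pool, both placeholders.)

  # create the target image and link the source to it (EC data goes via --data-pool)
  rbd migration prepare --data-pool ec_8_2 rbd_meta/backup-image
  # copy the blocks; this is the long-running step whose speed is being discussed here
  rbd migration execute rbd_meta/backup-image
  # once the copy has finished, drop the hidden source image
  rbd migration commit rbd_meta/backup-image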

[ceph-users] Re: Can not mount rbd device anymore

2021-06-22 Thread Ml Ml
ceph -s is healthy. I started to do a xfs_repair on that block device now which seems to do something...: - agno = 1038 - agno = 1039 - agno = 1040 - agno = 1041 - agno = 1042 - agno = 1043 - agno = 1044 - agno = 1045 - agno =

[ceph-users] Re: Create and listing topics with AWS4 fails

2021-06-22 Thread Yuval Lifshitz
Hi Daniel, You are correct, currently, only v2 auth is supported for topic management. (tracked here: https://tracker.ceph.com/issues/50039) It should be fixed soon but may take some time before it is backported to Pacific (will keep the list posted). Best Regards, Yuval On Tue, Jun 22, 2021 at

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-22 Thread Martin Verges
> There is no "should be", there is no one answer to that, other than 42. Containers existed before Docker, but Docker made them popular, for exactly the same reason that Ceph wants to use them: ship a known good version (CI tests) of the software with all dependencies, that can be run "a

[ceph-users] Can not mount rbd device anymore

2021-06-22 Thread Ml Ml
Hello List, all of a sudden I cannot mount a specific rbd device anymore: root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k /etc/ceph/ceph.client.admin.keyring /dev/rbd0 root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/ (it just hangs and never times out) Any idea how to debug that mount? Tc
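(A few hedged first steps for a hanging krbd mount, assuming the mapping itself succeeded as shown above; none of this is specific to Proxmox.)

  # is the mount process stuck in uninterruptible sleep (D state)?
  ps -eo pid,stat,wchan:32,cmd | grep '[m]ount'
  # kernel client messages often say what it is waiting for
  dmesg | tail -n 50
  # in-flight requests of the kernel client (needs debugfs mounted)
  cat /sys/kernel/debug/ceph/*/osdc
  # does anything else still hold a watch on the image?
  rbd status backup-proxmox/cluster5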

[ceph-users] Create and listing topics with AWS4 fails

2021-06-22 Thread Daniel Iwan
fbff700 20 HTTP_ACCEPT_ENCODING=gzip, deflate, br debug 2021-06-22T15:36:15.572+ 7ff04fbff700 20 HTTP_AUTHORIZATION=AWS4-HMAC-SHA256 Credential=utuAMlfhgTAOzMkTNPb/20210622/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token,

[ceph-users] Re: ceph fs mv does copy, not move

2021-06-22 Thread Frank Schilder
I don't think so. It is exactly the same location in all tests and it is reproducible. Why would a move be a copy on some MDSs/OSDs but not others? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Marc Sent: 22 J

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Clyso GmbH - Ceph Foundation Member
Hi Thomas, just a quick note: if you have a few large OSDs, ceph will have problems distributing the data based on the number of placement groups and the number of objects per placement group, ... I recommend reading up on the concept of placement groups. ___ Clyso Gmb

[ceph-users] Re: Having issues to start more than 24 OSDs per host

2021-06-22 Thread David Orman
https://tracker.ceph.com/issues/50526 https://github.com/alfredodeza/remoto/issues/62 If you're brave (YMMV, test in non-prod first), we pushed an image with the issue we encountered fixed as per above here: https://hub.docker.com/repository/docker/ormandj/ceph/tags?page=1 that you can use to install
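(If you follow that route, a hedged sketch of pointing a cephadm cluster at the custom build; the tag is a placeholder to be picked from the linked repository, and the usual test-before-prod caveat applies.)

  # redeploy running daemons from the custom image
  ceph orch upgrade start --image docker.io/ormandj/ceph:<tag>
  # or only change the default image used for newly deployed daemons
  ceph config set global container_image docker.io/ormandj/ceph:<tag>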

[ceph-users] Re: ceph fs mv does copy, not move

2021-06-22 Thread Frank Schilder
I get really strange timings depending on kernel version; see below. Did the patch of the kernel client get lost? The only difference between gnosis and smb01 is that gnosis is physical and smb01 is a KVM. Both have direct access to the client network and use the respective kernel clients. Timi

[ceph-users] Re: OSD bootstrap time

2021-06-22 Thread Jan-Philipp Litza
Hi again, turns out the long bootstrap time was my own fault. I had some down&out OSDs for quite a long time, which prohibited the monitor from pruning the OSD maps. Makes sense, when I think about it, but I didn't before. Rich's hint to get the cluster to health OK first pointed me in the right d
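(For anyone hitting the same thing, a hedged sketch; OSD id 17 is a placeholder. Long-term down+out OSDs pin old osdmaps, and removing them, or otherwise getting back to HEALTH_OK, lets the monitors trim.)

  # list OSDs currently down
  ceph osd tree down
  # permanently remove an OSD that is never coming back
  ceph osd purge 17 --yes-i-really-mean-it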

[ceph-users] Re: Can not mount rbd device anymore

2021-06-22 Thread Alex Gorbachev
On Tue, Jun 22, 2021 at 10:12 AM Ml Ml wrote: > ceph -s is healthy. I started to do a xfs_repair on that block device > now which seems to do something...: > > - agno = 1038 > - agno = 1039 > - agno = 1040 > - agno = 1041 > - agno = 1042 > - agno =

[ceph-users] Re: Octopus support

2021-06-22 Thread Janne Johansson
On Tue, 22 Jun 2021 at 15:44, Shafiq Momin wrote: > I see Octopus has limited support on CentOS 7. I have a prod cluster with > 1.2 PB of data on Nautilus 14.2. > Can we upgrade from Nautilus to Octopus on CentOS 7, or should we foresee issues? Upgrading to octopus should be fine, we run a C7 cluster with t
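(A rough sketch of the usual package-based rolling upgrade, per node and daemon type; the official Octopus release notes have the authoritative order and details.)

  ceph osd set noout
  # switch the repo to octopus, then per node: yum update ceph and restart daemons,
  # mons first, then mgrs, then OSDs, waiting for HEALTH_OK in between
  systemctl restart ceph-mon.target
  systemctl restart ceph-mgr.target
  systemctl restart ceph-osd.target
  ceph versions                          # everything should report 15.2.x
  ceph osd require-osd-release octopus
  ceph osd unset noout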

[ceph-users] Octopus support

2021-06-22 Thread Shafiq Momin
Hi all, I see Octopus has limited support on CentOS 7. I have a prod cluster with 1.2 PB of data on Nautilus 14.2. Can we upgrade from Nautilus to Octopus on CentOS 7, or should we foresee issues? We have an erasure-coded pool. Please guide on the recommended approach and documentation, if any. Will yum upgrade

[ceph-users] Re: Can not mount rbd device anymore

2021-06-22 Thread Alex Gorbachev
On Tue, Jun 22, 2021 at 8:36 AM Ml Ml wrote: > Hello List, > > all of a sudden I cannot mount a specific rbd device anymore: > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k > /etc/ceph/ceph.client.admin.keyring > /dev/rbd0 > > root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/

[ceph-users] Re: ceph fs mv does copy, not move

2021-06-22 Thread Frank Schilder
The move seems to work as expected on recent kernels. I get O(1) with this version: # uname -r 5.9.9-1.el7.elrepo.x86_64 I cannot upgrade on the machine I need to do the move on. Is it worth trying a newer fuse client, say from the nautilus or octopus repo? Best regards, = Fran

[ceph-users] ceph fs mv does copy, not move

2021-06-22 Thread Frank Schilder
Dear all, some time ago I reported that the kernel client resorts to a copy instead of move when moving a file across quota domains. I was told that the fuse client does not have this problem. If enough space is available, a move should be a move, not a copy. Today, I tried to move a large fil
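(A simple way to demonstrate the difference, with placeholder paths; projA and projB sit under different quota domains on the same CephFS mount. A true rename returns almost instantly regardless of file size, while the copy fallback scales with the amount of data.)

  dd if=/dev/zero of=/mnt/cephfs/projA/bigfile bs=1M count=10240
  time mv /mnt/cephfs/projA/bigfile /mnt/cephfs/projB/bigfile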

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-22 Thread Marc
Maybe it would be nice to send this as a calendar invite, so it shows up at the correct local time for everyone? > -Original Message- > From: Mike Perez > Sent: Tuesday, 22 June 2021 14:50 > To: ceph-users > Subject: [ceph-users] Re: Ceph Month June Schedule Now Available > > Hi everyone

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-22 Thread Mike Perez
Hi everyone, Join us in ten minutes for week 4 of Ceph Month! 9:00 ET / 15:00 CEST cephadm [Sebastian Wagner] 9:30 ET / 15:30 CEST CephFS + fscrypt: filename and content encryption 10:00 ET / 16:00 CEST Crimson Update [Samuel Just] Meeting link: https://bluejeans.com/908675367 Full schedule: http

[ceph-users] How can I check my rgw quota ?

2021-06-22 Thread Massimo Sgaravatto
Sorry for the very naive question: I know how to set/check the rgw quota for a user (using radosgw-admin). But how can a radosgw user check the quota assigned to his/her account, using the S3 and/or the Swift interface? I don't get this information using "swift stat", and I can't fin

[ceph-users] Re: ceph fs mv does copy, not move

2021-06-22 Thread Marc
Could this not be related to the MDS and different OSDs being used? > -Original Message- > From: Frank Schilder > Sent: Tuesday, 22 June 2021 13:25 > To: ceph-users@ceph.io > Subject: [ceph-users] Re: ceph fs mv does copy, not move > > I get really strange timings depending on kernel

[ceph-users] Re: Spurious Read Errors: 0x6706be76

2021-06-22 Thread Igor Fedotov
Hi Jay, this alert was introduced in Pacific indeed. That's probably why you haven't seen it before. And it definitely implies read retries; the following output mentions that explicitly: HEALTH_WARN 1 OSD(s) have spurious read errors [WRN] BLUESTORE_SPURIOUS_READ_ERRORS: 1 OSD(s) have sp
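(To see which OSD is affected and how often retries happen, a hedged example; osd.12 is a placeholder id and the exact perf counter name may vary between releases.)

  ceph health detail
  # on the node hosting the flagged OSD, query its admin socket
  ceph daemon osd.12 perf dump | grep -i retries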

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Janne Johansson
Den tis 22 juni 2021 kl 11:55 skrev Thomas Roth : > Hi all, > newbie question: > The documentation seems to suggest that with ceph-volume, one OSD is created > for each HDD (cf. 4-HDD-example in > https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/) > > This seems odd: what i
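(To make the one-OSD-per-device point concrete, a short sketch with example device names; ceph-volume handles the LVM pieces itself.)

  # one OSD per data device
  ceph-volume lvm create --data /dev/sdb
  # or let the batch subcommand lay out a whole set of devices in one go
  ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde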

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Thomas Roth
Thank you all for the clarification! I just did not grasp the concept before, probably because I am used to those systems that form a layer on top of the local file system. If ceph does it all, down to the magnetic platter, all the better. Cheers Thomas On 6/22/21 12:15 PM, Marc wrote: That

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Burkhard Linke
Hi, just an addition: current Ceph releases also include disk monitoring (e.g. SMART and other health-related features). These do not work with RAID devices. You will need external monitoring for your OSD disks. Regards, Burkhard ___ ceph-use

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Marc
That is the idea; what is wrong with this concept? If you aggregate disks, you still aggregate 70 disks, and you still have 70 disks. Everything you do that ceph can't be aware of creates a potential misinterpretation of reality and makes ceph act in a way it should not. > -Origi

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Burkhard Linke
Hi, On 22.06.21 11:55, Thomas Roth wrote: Hi all, newbie question: The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. 4-HDD-example in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/) This seems odd: what if a server has

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Robert Sander
On 22.06.21 11:55, Thomas Roth wrote: > That would make each system > having to deal with 70 OSDs, on 70 LVs? Yes. And 70 is a rather unusual number of HDDs in a Ceph node. Normally you have something like 20 to 24 block devices in a single node. Each OSD needs CPU and RAM. You could theoretical

[ceph-users] HDD <-> OSDs

2021-06-22 Thread Thomas Roth
Hi all, newbie question: The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. 4-HDD-example in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/) This seems odd: what if a server has a finite number of disks? I was going to try