[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-21 Thread Konstantin Shalygin
Hi Lucian, > On 29 Jun 2021, at 17:02, Lucian Petrut > wrote: > > It’s a compatibility issue, we’ll have to update the Windows Pacific build. > > Sorry for the delayed reply, hundreds of Ceph ML mails ended up in my spam > box. Ironically, I’ll have to thank Office 365 for that :). Can you

[ceph-users] Re: ceph-users Digest, Vol 102, Issue 52

2021-07-21 Thread renjianxinlover
Sorry, a point was left out yesterday. Currently, the .index pool on those three OSDs (osd.18, osd.19, osd.29) is not in use and holds almost no data. | renjianxinlover | renjianxinlo...@163.com | (signature customized by NetEase Mail Master) On 7/21/2021 14:46, wrote: Send ceph-users mailing list submissions to ceph-users@ceph.io To

[ceph-users] Re: octopus garbage collector makes slow ops

2021-07-21 Thread Igor Fedotov
Hi Mahnoosh, you might want to set bluefs_buffered_io to true for every OSD. It looks like it's false by default in v15.2.12. Thanks, Igor On 7/18/2021 11:19 PM, mahnoosh shahidi wrote: We have a ceph cluster with 408 osds, 3 mons and 3 rgws. We updated our cluster from nautilus 14.2.14 to
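For reference, a minimal sketch of flipping that option cluster-wide via the central config database (this assumes the config store is in use; depending on the release an OSD restart may be needed for the change to take effect):

    # set the option for all OSDs in the central config store
    ceph config set osd bluefs_buffered_io true
    # check the value stored for OSDs
    ceph config get osd bluefs_buffered_io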

[ceph-users] new ceph cluster + iscsi + vmware: choked ios?

2021-07-21 Thread Philip Brown
I just brought up a new Octopus cluster (because I want to run it on CentOS 7 for now). Everything looks fairly nice on the Ceph side. Running FIO on a gateway pulls up some respectable IO/s on an rbd-mapped image. I can use targetcli to share it out over iSCSI to a VMware node (can't use gwcli on
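For readers following along, a rough sketch of the kind of manual targetcli export being described (pool, image and IQN names are made up, and ACL/portal setup is omitted; this is not a recommendation over ceph-iscsi/gwcli):

    # map the RBD image on the gateway host (typically shows up as /dev/rbd0)
    rbd map rbd/vmware-disk1
    # expose the mapped device as an iSCSI block backstore and attach it to a target
    targetcli /backstores/block create name=vmware-disk1 dev=/dev/rbd0
    targetcli /iscsi create iqn.2021-07.com.example:gw1
    targetcli /iscsi/iqn.2021-07.com.example:gw1/tpg1/luns create /backstores/block/vmware-disk1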

[ceph-users] Re: How to make CephFS a tiered file system?

2021-07-21 Thread huxia...@horebdata.cn
Dear Patrick, Thanks a lot for pointing out the HSM ticket. We will see whether we have the resources to do something with the ticket. I am thinking of a temporary solution for HSM using CephFS client commands. The following command 'setfattr -n ceph.dir.layout.pool -v NewPool
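For anyone curious, a sketch of the kind of layout change being discussed (the mount point and directory here are hypothetical; note that changing ceph.dir.layout.pool only affects files created afterwards, existing files stay in their original pool unless rewritten):

    # point new files in this directory at a different data pool
    setfattr -n ceph.dir.layout.pool -v NewPool /mnt/cephfs/archive
    # check the resulting layout
    getfattr -n ceph.dir.layout /mnt/cephfs/archive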

[ceph-users] Huge headaches with NFS and ingress HA failover

2021-07-21 Thread Andreas Weisker
Hi, we recently set up a new Pacific cluster with cephadm. Deployed NFS on two hosts and ingress on two other hosts (ceph orch apply for nfs and ingress like on the docs page). So far so good. ESXi with NFS 4.1 connects, but the way ingress works confuses me. It distributes clients statically to

[ceph-users] Re: How to make CephFS a tiered file system?

2021-07-21 Thread Patrick Donnelly
Hello samuel, On Mon, Jul 19, 2021 at 2:28 PM huxia...@horebdata.cn wrote: > > Dear Cepher, > > I have a requirement to use CephFS as a tiered file system, i.e. the data > will be first stored onto an all-flash pool (using SSD OSDs), and then > automatically moved to an EC coded pool (using

[ceph-users] Re: Procedure for changing IP and domain name of all nodes of a cluster

2021-07-21 Thread Konstantin Shalygin
Hi, > On 21 Jul 2021, at 10:53, Burkhard Linke > wrote: > > One client with special needs is openstack cinder. The database entries > contain the mon list for volumes Another question: do you know where this list is saved? I mean, how can I see the current records via a cinder command?

[ceph-users] Call for Information IO500 Future Directions

2021-07-21 Thread IO500 Committee
The IO500 Foundation requests your help with determining the future direction for the IO500 lists and data repositories. We ask that you complete a short survey that will take less than 5 minutes. The survey is here: https://forms.gle/cFMV4sA3iDUBuQ73A Deadline for responses is 27 August 2021 to

[ceph-users] Re: Using CephFS in High Performance (and Throughput) Compute Use Cases

2021-07-21 Thread Mark Nelson
Hi Manuel, I was the one that did Red Hat's IO500 CephFS submission.  Feel free to ask any questions you like.  Generally speaking I could achieve 3GB/s pretty easily per kernel client and up to about 8GB/s per client with libcephfs directly (up to the aggregate cluster limits assuming

[ceph-users] Re: Using CephFS in High Performance (and Throughput) Compute Use Cases

2021-07-21 Thread Christoph Brüning
Hello, no experience yet, but we are planning to do the same (although partly NVME, partly spinning disks) for our upcoming cluster. It's going to be rather focused on AI and ML applications that use mainly GPUs, so the actual number of nodes is not going to be overwhelming, probably around

[ceph-users] Re: RHCS 4.1 with grafana and prometheus with Node exporter.

2021-07-21 Thread Ramanathan S
Hi ceph users, Can someone share some comments on the query below? Regards Ram. On Mon, 12 Jul, 2021, 3:16 pm Ramanathan S, wrote: > Hi Abdelillah, > > We use the below link to install Ceph in containerized deployment using > ansible. > > >

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-21 Thread Ilya Dryomov
On Tue, Jul 20, 2021 at 11:49 PM Robert W. Eckert wrote: > > The link in the ceph documentation > (https://docs.ceph.com/en/latest/install/windows-install/) is > https://cloudbase.it/ceph-for-windows/ is https://cloudba.se/ceph-win-latest > the same? Yes. https://cloudba.se/ceph-win-latest

[ceph-users] nobody in control of ceph csi development?

2021-07-21 Thread Marc
Crappy code continues to live on? This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

[ceph-users] Limiting subuser to his bucket

2021-07-21 Thread Rok Jaklič
Hi, is it possible to limit a subuser's access so that he sees (read, write) only "his" bucket? And also be able to create a bucket inside that bucket? Kind regards, Rok

[ceph-users] Using CephFS in High Performance (and Throughput) Compute Use Cases

2021-07-21 Thread Manuel Holtgrewe
Dear all, we are looking towards setting up an all-NVME CephFS instance in our high-performance compute system. Does anyone have any experience to share in a HPC setup or an NVME setup mounted by dozens of nodes or more? I've followed the impressive work done at CERN on Youtube but otherwise

[ceph-users] Re: Radosgw bucket listing limited to 10001 object ?

2021-07-21 Thread Daniel Gryniewicz
On 7/20/21 5:23 PM, [AR] Guillaume CephML wrote: Hello, On 20 Jul 2021, at 17:48, Daniel Gryniewicz wrote: That's probably this one: https://tracker.ceph.com/issues/49892 Looks like we forgot to mark it for backport. I've done that now, so it should be in the next Pacific. I’m not

[ceph-users] Re: [ Ceph Failover ] Using the Ceph OSD disks from the failed node.

2021-07-21 Thread Thore
Good evening, On 7/21/21 10:44 AM, Lokendra Rathour wrote: Hello Everyone, https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/operations_guide/index#handling-a-node-failure * refer to the section * "Replacing the node, reinstalling the operating system, and using

[ceph-users] Re: Procedure for changing IP and domain name of all nodes of a cluster

2021-07-21 Thread mabi
‐‐‐ Original Message ‐‐‐ On Wednesday, July 21st, 2021 at 9:53 AM, Burkhard Linke wrote: > You need to ensure that TCP traffic is routable between the networks > > for the migration. OSD-only hosts are trivial, an OSD updates its IP > > information in the OSD map on boot. This should
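As a quick way to confirm that OSDs have re-registered with their new addresses after the move, something along these lines should work (osd.0 is just an example):

    # addresses currently recorded in the OSD map
    ceph osd dump | grep 'osd\.0'
    # per-daemon metadata, including the reported front/back addresses
    ceph osd metadata 0 | grep addr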

[ceph-users] Procedure for changing IP and domain name of all nodes of a cluster

2021-07-21 Thread mabi
Hello, I need to relocate an Octopus (15.2.13) ceph cluster of 8 nodes to another internal network. This means that the IP address of each node as well as the domain name will change. The hostname itself will stay the same. What would be the best steps in order to achieve that from the ceph

[ceph-users] ceph octopus lost RGW daemon, unable to add back due to HEALTH WARN

2021-07-21 Thread Ernesto O. Jacobs
I'm running an 11-node Ceph cluster running Octopus (15.2.8). I mainly run this as an RGW cluster, so I had 8 RGW daemons on 8 nodes. Currently I have 1 degraded PG and some misplaced objects as I added a temporary node. Today I tried to expand the RGW cluster from 8 to 10; this didn't work as one

[ceph-users] [ Ceph Failover ] Using the Ceph OSD disks from the failed node.

2021-07-21 Thread Lokendra Rathour
Hello Everyone, We have a Ceph-based three-node setup. In this setup, we want to test complete node failover and reuse the old OSD disks from the failed node. We are referring to the Red Hat based document:

[ceph-users] Re: Object Storage (RGW)

2021-07-21 Thread Etienne Menguy
Hi, It’s only compatible with S3 and Swift. You could also use object storage with rados, bypassing RGW. But it’s not user friendly and doesn’t provide the same features. Which kind of access/API were you expecting? Étienne > On 21 Jul 2021, at 09:43, Michel Niyoyita wrote: > > Dear Ceph
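To illustrate the "raw rados" option mentioned above, a minimal example with the rados CLI (pool and object names are made up; this gives plain object put/get with none of RGW's users, buckets or ACLs):

    # store and retrieve an object directly in a pool
    rados -p mypool put hello ./hello.txt
    rados -p mypool get hello ./hello-copy.txt
    # list objects in the pool
    rados -p mypool ls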

[ceph-users] Re: Files listed in radosgw BI but is not available in ceph

2021-07-21 Thread Boris Behrens
Good morning everybody, we've dug further into it but still don't know how this could happen. What we ruled out for now: * Orphan objects cleanup process. ** There is only one bucket with missing data (I checked all other buckets yesterday) ** The "keep these files" list is generated by

[ceph-users] Re: Procedure for changing IP and domain name of all nodes of a cluster

2021-07-21 Thread Burkhard Linke
Hi, On 7/21/21 9:40 AM, mabi wrote: Hello, I need to relocate an Octopus (15.2.13) ceph cluster of 8 nodes to another internal network. This means that the IP address of each nodes as well as the domain name will change. The hostname itself will stay the same. What would be the best steps

[ceph-users] Object Storage (RGW)

2021-07-21 Thread Michel Niyoyita
Dear Ceph users, I would like to ask if Ceph object storage (RGW) is compatible only with Amazon S3 and OpenStack Swift. Is there any other way it can be used apart from those two services? Kindly help me to understand, because in the training the offer is for S3 and Swift only. Best Regards

[ceph-users] Re: inbalancing data distribution for osds with custom device class

2021-07-21 Thread Eugen Block
Hi, three OSDs are just not enough; if possible you should add more SSDs to the index pool. Have you checked the disk saturation (e.g. with iostat)? I would expect a high usage. Quoting renjianxinlover: Ceph: ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous
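For the saturation check suggested above, something like the following on the index-OSD hosts usually tells the story (device names depend on the host; %util close to 100 means the disk itself is the bottleneck):

    # extended per-device stats, refreshed every 5 seconds
    iostat -x 5
    # and how full the index OSDs are from Ceph's point of view
    ceph osd df tree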