Re: [ceph-users] Manually deleting an RGW bucket

2018-09-28 Thread Konstantin Shalygin
How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands don't work? The bucket has S3 delete markers that S3 API commands are not able to remove, and I'd like to reuse the bucket name. It was set up for versioning and lifecycles under ceph 12.2.5 which broke the bucket
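
The reply is cut off by the archive. For reference, when the S3 API can no longer clear a versioned bucket, the usual admin-side route is radosgw-admin; this is a general sketch (the bucket name is a placeholder) and not necessarily the exact advice given in the thread:

    # inspect what the bucket index still holds (versions, delete markers)
    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin bucket list --bucket=mybucket

    # force-remove the bucket and all of its contents
    radosgw-admin bucket rm --bucket=mybucket --purge-objects

Newer releases should also accept --bypass-gc on bucket rm to skip garbage collection, if that flag is available in your version.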

Re: [ceph-users] QEMU/Libvirt + librbd issue using Luminous 12.2.7

2018-09-28 Thread Andre Goree
On 2018/09/28 2:26 pm, Andre Goree wrote: On 2018/08/21 1:24 pm, Jason Dillaman wrote: Can you collect any librados / librbd debug logs and provide them via pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf" file's "[client]" section and re-run to gather the logs. [client] l

Re: [ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank de Bot (lists)
John Spray wrote: > On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote: >> >> Hi, >> >> On my cluster I tried to clear all objects from a pool. I used the >> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench >> cleanup doesn't clean everything, because there was a lot of othe

Re: [ceph-users] QEMU/Libvirt + librbd issue using Luminous 12.2.7

2018-09-28 Thread Andre Goree
On 2018/08/21 1:24 pm, Jason Dillaman wrote: Can you collect any librados / librbd debug logs and provide them via pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf" file's "[client]" section and re-run to gather the logs. [client] log file = /path/to/a/log/file debug ms = 1
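
The quoted config is truncated by the archive; a typical librbd debug section along the lines Jason describes (the exact debug levels here are my assumption) looks like:

    [client]
        # any path the qemu/librbd process can write to
        log file = /var/log/ceph/client.$pid.log
        debug ms = 1
        debug rbd = 20
        debug rados = 20

After reproducing the issue the log can be pasted for review and the debug lines removed again, since level 20 logging is very chatty.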

Re: [ceph-users] Problems after increasing number of PGs in a pool

2018-09-28 Thread Paul Emmerich
Judging from the name, I guess the pool is mapped to SSDs only, and you only have 20 SSDs. So you should have roughly 2000 effective PGs, taking replication into account. Your pool has ~10k effective PGs with k+m=5, and you seem to have 5 more pools. Check "ceph osd df tree" to see how many PGs per OSD you
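
The arithmetic behind that estimate, assuming for illustration a pg_num of 2048 on the EC pool (the real number is not shown in the snippet):

    20 OSDs x ~100 PG replicas per OSD  = ~2000 PG replicas the SSDs can hold comfortably
    2048 pg_num x 5 (k+m)               = 10240 PG replicas from this one pool alone

    ceph osd df tree    # the PGS column shows the actual placement group count per OSD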

Re: [ceph-users] Problems after increasing number of PGs in a pool

2018-09-28 Thread Burkhard Linke
Hi, On 28.09.2018 18:04, Vladimir Brik wrote: Hello I've attempted to increase the number of placement groups of the pools in our test cluster and now ceph status (below) is reporting problems. I am not sure what is going on or how to fix this. Troubleshooting scenarios in the docs don't seem

[ceph-users] swift staticsite api

2018-09-28 Thread junk required
Hi there, I'm trying to enable the swift static site capability in my rgw. It appears to be supported (http://docs.ceph.com/docs/master/radosgw/swift/) but I can't find any documentation on it. All I can find is for s3: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gate
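
For what it's worth, I also could not find Swift-specific documentation; the pieces that seem relevant are the documented static-website option on the rgw side and the standard Swift staticweb container metadata, though whether radosgw honours the latter is exactly the open question here (instance and container names are placeholders):

    # ceph.conf, rgw instance section (documented for the S3 website feature):
    [client.rgw.gateway1]
        rgw enable static website = true

    # standard Swift staticweb convention: mark the index object on the container
    swift post -m 'web-index:index.html' mycontainer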

[ceph-users] Manually deleting an RGW bucket

2018-09-28 Thread Sean Purdy
Hi, How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands don't work? The bucket has S3 delete markers that S3 API commands are not able to remove, and I'd like to reuse the bucket name. It was set up for versioning and lifecycles under ceph 12.2.5 which broke the

Re: [ceph-users] OSDs crashing

2018-09-28 Thread Josh Haft
Created: https://tracker.ceph.com/issues/36250 On Tue, Sep 25, 2018 at 9:08 PM Brad Hubbard wrote: > > On Tue, Sep 25, 2018 at 11:31 PM Josh Haft wrote: > > > > Hi cephers, > > > > I have a cluster of 7 storage nodes with 12 drives each and the OSD > > processes are regularly crashing. All 84 ha

[ceph-users] Problems after increasing number of PGs in a pool

2018-09-28 Thread Vladimir Brik
Hello I've attempted to increase the number of placement groups of the pools in our test cluster and now ceph status (below) is reporting problems. I am not sure what is going on or how to fix this. Troubleshooting scenarios in the docs don't seem to quite match what I am seeing. I have no idea h
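
The exact commands used are not shown; for reference, the usual Luminous procedure for growing a pool's PG count is to raise pg_num and then pgp_num, preferably in modest steps (pool name and target value are placeholders):

    ceph osd pool set mypool pg_num 1024
    ceph osd pool set mypool pgp_num 1024

    # let the cluster settle before the next step
    ceph status
    ceph osd df tree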

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
Is this useful? I think this is the section of the client log when [@test2 m]$ cat out6 cat: out6: Input/output error 2018-09-28 16:03:39.082200 7f1ad01f1700 10 client.3246756 fill_statx on 0x100010943bc snap/devhead mode 040557 mtime 2018-09-28 14:49:35.349370 ctime 2018-09-28 14:49:35.349

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
If I copy the file out6 to out7 in the same location, I can read the out7 file on the NFS client. -Original Message- To: ceph-users Subject: [ceph-users] cephfs issue with moving files between data pools gives Input/output error It looks like if I move files between different dat

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread John Spray
On Fri, Sep 28, 2018 at 2:28 PM Marc Roos wrote: > > > It looks like if I move files between different data pools of the > cephfs, something still refers to the 'old location' and gives an > Input/output error. I assume this because I am using different client > ids for authentication. > >
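
The reply is truncated, but the likely point is that a rename into a directory with a different layout does not rewrite the file's objects, so the data stays in the pool it was created in and the client still needs access to that pool. A hedged way to check (client id, pool names and path are placeholders):

    # which data pool actually holds this file's objects?
    getfattr -n ceph.file.layout.pool /path/to/out6

    # does the client key used by ganesha have rw caps on that pool?
    ceph auth get client.ganesha
    # note: 'auth caps' replaces the whole cap set, so list every pool the client needs
    ceph auth caps client.ganesha mds 'allow rw' mon 'allow r' \
        osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_data_ssd'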

Re: [ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread John Spray
On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote: > > Hi, > > On my cluster I tried to clear all objects from a pool. I used the > command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench > cleanup doesn't clean everything, because there was a lot of other > testing going on here)

[ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank (lists)
Hi, On my cluster I tried to clear all objects from a pool. I used the command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench cleanup doesn't clean everything, because there was a lot of other testing going on here). Now 'rados -p bench ls' returns a list of objects, which do
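
Two generic things worth checking when removed objects still show up in a listing; not necessarily the explanation given later in this thread (pool name taken from the quoted command):

    # list objects in every namespace, not only the default one
    rados -p bench ls --all

    # or remove everything in the pool in one destructive sweep
    rados purge bench --yes-i-really-really-mean-it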

[ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
It looks like if I move files between different data pools of the cephfs, something still refers to the 'old location' and gives an Input/output error. I assume this because I am using different client ids for authentication. With the same user as configured in ganesha, mounting (ker

Re: [ceph-users] Bluestore DB showing as ssd

2018-09-28 Thread Igor Fedotov
Hi Brett, most probably your device is reported as hdd by the kernel, please check by running the following:  cat /sys/block/sdf/queue/rotational It should be 0 for SSD. But as far as I know BlueFS (i.e. DB+WAL stuff) doesn't have any specific behavior which depends on this flag so most pr
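
Igor's check, plus the manual override in case the kernel reports the device wrongly (the osd id is a placeholder):

    cat /sys/block/sdf/queue/rotational    # 1 = rotational/hdd, 0 = ssd

    # reassign the crush device class by hand if needed
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class ssd osd.12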

Re: [ceph-users] Mimic cluster is offline and not healing

2018-09-28 Thread Stefan Kooman
Quoting by morphin (morphinwith...@gmail.com): > Good news... :) > > After I tried everything, I decided to re-create my MONs from the OSDs and > I used the script: > https://paste.ubuntu.com/p/rNMPdMPhT5/ > > And it worked!!! Congrats! > I think when 2 servers crashed and came back at the same time some h
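
The linked paste is not reproduced in the archive; it is presumably a wrapper around the documented procedure for rebuilding the monitor store from the OSDs, which looks roughly like this (paths are placeholders, and each OSD must be stopped while it is scraped):

    # collect the cluster maps from every OSD into a temporary store
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op update-mon-db --mon-store-path /tmp/mon-store
    # ... repeat for each OSD, then rebuild the mon store from the result
    ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring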

Re: [ceph-users] CRUSH puzzle: step weighted-take

2018-09-28 Thread Dan van der Ster
On Fri, Sep 28, 2018 at 12:51 AM Goncalo Borges wrote: > > Hi Dan > > Hope to find you ok. > > Here goes a suggestion from someone who has been sitting on the sidelines for > the last 2 years but following things as much as possible > > Will weight set per pool help? > > This is only possible in l
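
The thread is truncated, but for reference, per-pool (choose-args) weight sets are a Luminous-only feature and are driven with commands along these lines (pool name, osd id and weight are placeholders):

    ceph osd crush weight-set create mypool flat
    ceph osd crush weight-set reweight mypool osd.0 0.5
    ceph osd crush weight-set dump

As far as I know they also require "ceph osd set-require-min-compat-client luminous", i.e. no pre-luminous clients on the cluster.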

Re: [ceph-users] CRUSH puzzle: step weighted-take

2018-09-28 Thread Dan van der Ster
On Thu, Sep 27, 2018 at 9:57 PM Maged Mokhtar wrote: > > > > On 27/09/18 17:18, Dan van der Ster wrote: > > Dear Ceph friends, > > > > I have a CRUSH data migration puzzle and wondered if someone could > > think of a clever solution. > > > > Consider an osd tree like this: > > > >-2 4428

Re: [ceph-users] CRUSH puzzle: step weighted-take

2018-09-28 Thread Dan van der Ster
On Thu, Sep 27, 2018 at 6:34 PM Luis Periquito wrote: > > I think your objective is to move the data without anyone else > noticing. What I usually do is reduce the priority of the recovery > process as much as possible. Do note this will make the recovery take > a looong time, and will also make
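
In concrete terms, throttling recovery as Luis suggests usually comes down to something like this (the values are a matter of taste):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-sleep 0.5'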