[ceph-users] Re: ceph df: pool stored vs bytes_used -- raw or not?

2021-06-08 Thread Konstantin Shalygin
Stored==used was resolved for this cluster. Actually the problem is what you discovered last year: zeros. Filestore lacks the META counter - it is always zero. When I purged the last drained OSD from the cluster, the statistics became normal immediately. Thanks, k > On 20 May 2021, at 21:22, Dan van
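If anyone wants to check for the same situation, here is a minimal sketch (assuming jq is installed) for confirming whether any Filestore OSDs are still reporting into the cluster:

  # Count OSDs per object store backend (filestore vs bluestore)
  ceph osd metadata | jq -r '.[].osd_objectstore' | sort | uniq -c
  # Then compare STORED vs USED per pool
  ceph df detail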

[ceph-users] Re: OSD bootstrap time

2021-06-08 Thread Richard Bade
Hi Jan-Philipp, I've noticed this a couple of times on Nautilus after doing some large backfill operations. It seems the osd maps don't get cleared properly after the cluster returns to HEALTH_OK and they build up on the mons. I do a "du" on the mon folder, e.g. du -shx /var/lib/ceph/mon/ and this shows
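A minimal sketch of that check, with the mon ID left as a placeholder:

  # Size of the mon's on-disk store
  du -shx /var/lib/ceph/mon/*/store.db
  # Ask a mon to compact its store once the cluster is back to HEALTH_OK
  ceph tell mon.<id> compact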

[ceph-users] Ceph Ansible fails on check if monitor initial keyring already exists

2021-06-08 Thread Jared Jacob
I am running the ceph-ansible playbook to install Ceph version Stable-6.0 (Pacific). When running the sample yml file supplied by the GitHub repo, it runs fine up until the "ceph-mon : check if monitor initial keyring already exists" step. There it hangs for 30-40 minutes before
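A rough sketch for narrowing this down, assuming a standard ceph-ansible checkout (inventory and playbook names are placeholders, paths may differ on your hosts):

  # Re-run with verbose output to see the exact command the hanging task executes
  ansible-playbook -vvv -i hosts site.yml
  # On the mon host, check whether a keyring is actually present
  ls -l /var/lib/ceph/mon/*/keyring /etc/ceph/*.keyring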

[ceph-users] OSD bootstrap time

2021-06-08 Thread Jan-Philipp Litza
Hi everyone, recently I've been noticing that starting OSDs for the first time takes ages (like, more than an hour) before they are even picked up by the monitors as "up" and start backfilling. I'm not entirely sure if this is a new phenomenon or if it has always been that way. Either way, I'd like to
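One hedged way to see whether a fresh OSD is still catching up on old osdmaps before it is marked up (the OSD id is a placeholder):

  # On the OSD host: the epochs the OSD currently holds (oldest_map / newest_map)
  ceph daemon osd.<id> status
  # Compare against the cluster's current osdmap epoch
  ceph osd dump | head -1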

[ceph-users] Re: Mon crash when client mounts CephFS

2021-06-08 Thread Robert W. Eckert
When I had issues with the monitors, it was access to the monitor folder under /var/lib/ceph//mon./store.db; make sure it is owned by the ceph user. My issues originated from a hardware issue - the memory needed 1.3 V, but the motherboard was only reading 1.2 (the memory had the issue, the
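For a package-based (non-containerized) install, the ownership fix is roughly this sketch:

  # Make sure the mon data dir is owned by the ceph user, then restart the mon
  chown -R ceph:ceph /var/lib/ceph/mon/
  systemctl restart ceph-mon.target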

[ceph-users] Re: Mon crash when client mounts CephFS

2021-06-08 Thread Ilya Dryomov
On Tue, Jun 8, 2021 at 9:20 PM Phil Merricks wrote: > > Hey folks, > > I have deployed a 3 node dev cluster using cephadm. Deployment went > smoothly and all seems well. > > If I try to mount a CephFS from a client node, 2/3 mons crash however. > I've begun picking through the logs to see what I
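If the surviving mon has registered the crashes, a quick sketch for pulling the backtraces (the crash ID is a placeholder):

  ceph crash ls
  ceph crash info <crash-id>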

[ceph-users] DocuBetter Meeting -- 09 June 2021 1730 UTC

2021-06-08 Thread John Zachary Dover
A DocuBetter Meeting will be held on 09 June 2021 at 1730 UTC. This is the monthly DocuBetter Meeting that is more convenient for European and North American Ceph contributors than the other meeting, which is convenient for people in Australia and Asia (and which is very rarely attended).

[ceph-users] Mon crash when client mounts CephFS

2021-06-08 Thread Phil Merricks
Hey folks, I have deployed a 3 node dev cluster using cephadm. Deployment went smoothly and all seems well. If I try to mount a CephFS from a client node, 2/3 mons crash however. I've begun picking through the logs to see what I can see, but so far other than seeing the crash in the log itself,
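On a cephadm deployment, a hedged sketch of where to look for those logs (host name is a placeholder):

  # List the daemons cephadm is running on this host, then dump a mon's journal
  cephadm ls | grep mon
  cephadm logs --name mon.<host>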

[ceph-users] Announcing go-ceph v0.10.0

2021-06-08 Thread John Mulligan
I'm happy to announce another release of the go-ceph API bindings. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.10.0 Changes in the release are detailed in the link above. The bindings aim to play a similar role to
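For anyone updating an existing project, a minimal sketch of pulling in the new tag:

  go get github.com/ceph/go-ceph@v0.10.0
  go mod tidy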

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-06-08 Thread Szabo, Istvan (Agoda)
Yes, but with this only the bucket contents will not be synced. The bucket itself will be available everywhere, it will just be empty. Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com
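A quick hedged check of per-bucket sync state on a secondary zone (bucket name is a placeholder):

  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=<bucket>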

[ceph-users] Re: Index pool hasn't been cleaned up and caused large omap, safe to delete the index file?

2021-06-08 Thread Szabo, Istvan (Agoda)
Some more information: HKG is the master, ASH and SGP are the secondaries; let me show one shard in all DCs (FYI the bucket which this bucket index relates to has been deleted). HKG and ASH give back empty command output for this: rados -p hkg.rgw.buckets.index (or ash.rgw.buckets.index) listomapvals
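For completeness, a sketch of the kind of per-shard check described above, with placeholder zone and bucket marker:

  # Count the omap keys left in one index shard object (empty output = no keys)
  rados -p <zone>.rgw.buckets.index listomapkeys .dir.<bucket_marker>.<shard>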

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Eneko Lacunza
Hi Michael, On 8/6/21 at 11:38, Ml Ml wrote: Hello List, I used to build 3-node clusters with spinning rust and later with (Enterprise) SSDs. All I did was buy a 19" server with 10/12 slots, plug in the disks and I was done. The requirements were just 10/15TB disk usage (30-45TB raw).

[ceph-users] OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Ml Ml
Hello List, I used to build 3-node clusters with spinning rust and later with (Enterprise) SSDs. All I did was buy a 19" server with 10/12 slots, plug in the disks and I was done. The requirements were just 10/15TB disk usage (30-45TB raw). Now I was asked if I could also build a cheap

[ceph-users] Index pool hasn't been cleaned up and caused large omap, safe to delete the index file?

2021-06-08 Thread Szabo, Istvan (Agoda)
Hi, In my multisite setup one big bucket has been deleted and it seems it hasn't been cleaned up on one of the secondary sites. Is it safe to delete the 11 shard objects from the index pool which hold the omaps of that bucket's files? Also a quick question: is it a problem if we use it like this?
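A hedged way to locate the leftover shard objects before deleting anything (zone and bucket marker are placeholders):

  # List the index objects that belong to the deleted bucket's marker
  rados -p <zone>.rgw.buckets.index ls | grep <bucket_marker>
  # The large-omap warning also names the offending objects
  ceph health detail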

[ceph-users] Re: ceph buckets

2021-06-08 Thread Janne Johansson
On Tue, 8 June 2021 at 14:31, Rok Jaklič wrote: > Which mode is that and where can I set it? > This one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/ ? Yes, the description says it all there, doesn't it? >> >> Apart from that, there is a mode for RGW with tenant/bucketname
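A minimal sketch of that tenant mode, with placeholder tenant, user and bucket names:

  # Create the second user under its own tenant; its buckets are then namespaced per tenant
  radosgw-admin user create --tenant=tenant2 --uid=user2 --display-name="User Two"
  # radosgw-admin refers to such a bucket as tenant/bucket
  radosgw-admin bucket stats --bucket="tenant2/bucket1"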

[ceph-users] Re: ceph buckets

2021-06-08 Thread Rok Jaklič
Which mode is that and where can I set it? This one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/ ? On Tue, Jun 8, 2021 at 2:24 PM Janne Johansson wrote: > On Tue, 8 June 2021 at 12:38, Rok Jaklič wrote: > > Hi, > > I try to create buckets through rgw in the following order: >

[ceph-users] Re: ceph buckets

2021-06-08 Thread Janne Johansson
On Tue, 8 June 2021 at 12:38, Rok Jaklič wrote: > Hi, > I try to create buckets through rgw in the following order: > - *bucket1* with *user1* with *access_key1* and *secret_key1* > - *bucket1* with *user2* with *access_key2* and *secret_key2* > > when I try to create a second bucket1 with user2 I get

[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-06-08 Thread Soumya Koduri
On 6/8/21 4:59 PM, Szabo, Istvan (Agoda) wrote: Yes, but with this only the bucket contents will not be synced. The bucket itself will be available everywhere, it will just be empty. There is an option to enable sync on the bucket(s), which will then be synced across all the configured zones (as per the
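Presumably the commands meant here are along these lines (bucket name is a placeholder):

  # Turn per-bucket sync on (or off again) after the zones are configured
  radosgw-admin bucket sync enable --bucket=<bucket>
  radosgw-admin bucket sync disable --bucket=<bucket>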

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Sebastian Knust
Hi Michael, On 08.06.21 11:38, Ml Ml wrote: Now i was asked if i could also build a cheap 200-500TB Cluster Storage, which should also scale. Just for Data Storage such as NextCloud/OwnCloud. With similar requirements (server primarily for Samba and NextCloud, some RBD use, very limited

[ceph-users] ceph buckets

2021-06-08 Thread Rok Jaklič
Hi, I try to create buckets through rgw in the following order: - *bucket1* with *user1* with *access_key1* and *secret_key1* - *bucket1* with *user2* with *access_key2* and *secret_key2* when I try to create a second bucket1 with user2 I get *Error response code BucketAlreadyExists.* Why? Should

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Christian Wuerdig
Since you mention NextCloud it will probably be an RGW deployment. Also, it's not clear why 3 nodes? Is rack space at a premium? Just to compare with your suggestion: 3x24 (I guess 4U?) x 8TB with replication = 576 TB raw storage / 192 TB usable. Let's go 6x12 (2U) x 4TB with EC 3+2 = 288 TB raw storage +
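As a rough check of those numbers (assuming 3-way replication and EC 3+2, i.e. 60% usable): 3 x 24 x 8 TB = 576 TB raw, 576 / 3 = 192 TB usable; 6 x 12 x 4 TB = 288 TB raw, 288 x 3/5 = 172.8 TB usable.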

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Janne Johansson
On Tue, 8 June 2021 at 11:39, Ml Ml wrote: > Maybe combine 3x 10TB HDDs into a 30TB RAID0/striping disk => which > would speed up the performance, but have a bigger impact on a dying > disk. ^^ This sounds like a very bad idea. When this 30T monster fails, you will have to wait for 30TB to

[ceph-users] Re: How to enable lazyio under kcephfs?

2021-06-08 Thread Dan van der Ster
Hi, client_force_lazyio only works for ceph-fuse and libcephfs: https://github.com/ceph/ceph/pull/26976/files You can use the ioctl to enable per file with the kernel mount, but you might run into the same problem we did: https://tracker.ceph.com/issues/44166 Please share if it works for you.
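For the ceph-fuse/libcephfs side, a minimal sketch (the option name comes from the PR above; verify it exists in your release):

  # Enable lazy IO for all ceph-fuse / libcephfs clients via the config database
  ceph config set client client_force_lazyio true
  # or pass it as an override when mounting with ceph-fuse
  ceph-fuse --client_force_lazyio=true /mnt/cephfs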

[ceph-users] How to enable lazyio under kcephfs?

2021-06-08 Thread opengers
ceph: 14.2.x, kernel: 4.15. In CephFS, due to the need for cache consistency, when one client is doing buffered IO, another client reading and writing the same file will hang. It seems that lazyio can solve this problem; lazyio allows multiple clients to do buffered IO at the same