[ceph-users] ceph tell mds.0 dirfrag split - syntax of the "frag" argument

2024-05-15 Thread Alexander E. Patrakov
Hello, In the context of https://tracker.ceph.com/issues/64298, I decided to do something manually. In the help output of "ceph tell" for an MDS, I found these possibly useful commands: dirfrag ls : List fragments in directory dirfrag merge : De-fragment directory by path dirfrag split :
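For reference, the signatures are (assuming the usual MDS help output) roughly dirfrag ls <path>, dirfrag merge <path> <frag>, and dirfrag split <path> <frag> <bits>. A sketch of a manual split with placeholder values - the path is made up, and the frag string "0/0" is only an illustration, since the exact accepted syntax of that argument is what this thread is asking about; take the real value from the "dirfrag ls" output:

  # ceph tell mds.0 dirfrag ls /volumes/group/dir
  # ceph tell mds.0 dirfrag split /volumes/group/dir 0/0 1    # split that fragment by one more bit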

[ceph-users] Re: Forcing Posix Permissions On New CephFS Files

2024-05-09 Thread Alexander E. Patrakov
Hello Matthew, You can inherit the group, but not the user, of the containing folder. This can be achieved by making the folder setgid and then making sure that the client systems have a proper umask. See the attached PDF for a presentation that I conducted on this topic to my students in the
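A minimal sketch of the setgid-plus-umask approach described above (directory and group names are made up for illustration):

  # chgrp research /mnt/cephfs/shared
  # chmod 2775 /mnt/cephfs/shared    # the leading 2 is the setgid bit: new files and subdirectories inherit the group

On the client systems, a umask of 0002 (set via /etc/login.defs, PAM, or the shell profile) keeps newly created files group-writable; the owning user, however, cannot be forced this way.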

[ceph-users] Re: RGW multisite slowness issue due to the "304 Not Modified" responses on primary zone

2024-05-01 Thread Alexander E. Patrakov
e=50.8 ms > > > Any guidance would be greatly appreciated. > > Regards, > Mohammad Saif

[ceph-users] Re: Best practice and expected benefits of using separate WAL and DB devices with Bluestore

2024-04-21 Thread Alexander E. Patrakov

[ceph-users] Re: MDS Behind on Trimming...

2024-04-07 Thread Alexander E. Patrakov
.suse.com/support/kb/doc/?id=19740 > >> > >> But it doesn't seem to help, maybe I should decrease it further? I am > >> guessing this must be a common issue...? I am running Reef on the MDS > >> servers, but most clients are on Quincy. > >> > >> Thanks for any advice! > >> > >> cheers, > >> erich

[ceph-users] Re: MDS Behind on Trimming...

2024-03-28 Thread Alexander E. Patrakov
directory in question responds > > again and all is well. Then a few hours later it started happening > > again (not always the same directory). > > > > I hope I'm not experiencing a bug, but I can't see what would be causing > > this... > > > > On 3/28/24 2:37 P

[ceph-users] Re: MDS Behind on Trimming...

2024-03-28 Thread Alexander E. Patrakov
r, 109 op/s rd, 1.40k op/s wr > >>> > >>> And the specifics are: > >>> > >>> # ceph health detail > >>> HEALTH_WARN 1 MDSs report slow requests; 1 MDSs behind on trimming > >>> [WRN] MDS_SLOW_REQUEST: 1 MDSs report slow requests > >>> mds.slugfs.pr-md-01.xdtppo(mds.0): 99 slow requests are blocked > > >>> 30 secs > >>> [WRN] MDS_TRIM: 1 MDSs behind on trimming > >>> mds.slugfs.pr-md-01.xdtppo(mds.0): Behind on trimming (13884/250) > >>> max_segments: 250, num_segments: 13884 > >>> > >>> That "num_segments" number slowly keeps increasing. I suspect I just > >>> need to tell the MDS servers to trim faster but after hours of > >>> googling around I just can't figure out the best way to do it. The > >>> best I could come up with was to decrease "mds_cache_trim_decay_rate" > >>> from 1.0 to .8 (to start), based on this page: > >>> > >>> https://www.suse.com/support/kb/doc/?id=19740 > >>> > >>> But it doesn't seem to help, maybe I should decrease it further? I am > >>> guessing this must be a common issue...? I am running Reef on the > >>> MDS servers, but most clients are on Quincy. > >>> > >>> Thanks for any advice! > >>> > >>> cheers, > >>> erich
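For anyone wanting to try the same knob cluster-wide, a sketch of applying and checking it with the config database (whether lowering the decay rate is the right fix is exactly what the rest of the thread discusses):

  # ceph config set mds mds_cache_trim_decay_rate 0.8
  # ceph config get mds mds_cache_trim_decay_rate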

[ceph-users] Re: MDS Behind on Trimming...

2024-03-28 Thread Alexander E. Patrakov
I suspect I just > >> need to tell the MDS servers to trim faster but after hours of > >> googling around I just can't figure out the best way to do it. The > >> best I could come up with was to decrease "mds_cache_trim_decay_rate" > >> from 1.0 to .8 (to start), based on this page: > >> > >> https://www.suse.com/support/kb/doc/?id=19740 > >> > >> But it doesn't seem to help, maybe I should decrease it further? I am > >> guessing this must be a common issue...? I am running Reef on the MDS > >> servers, but most clients are on Quincy. > >> > >> Thanks for any advice! > >> > >> cheers, > >> erich

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-28 Thread Alexander E. Patrakov
On Thu, Mar 28, 2024 at 9:17 AM Angelo Hongens wrote: > According to 45drives, saving the CTDB lock file in CephFS is a bad idea Could you please share a link to their page that says this?

[ceph-users] Re: Erasure Code with Autoscaler and Backfill_toofull

2024-03-27 Thread Alexander E. Patrakov
filling > >> [40,43,33,32,30,38,22,35,9]p40 [27,10,20,7,30,21,1,28,31]p27 > >> 36.79 222315 575955797107713 active+remapped+backfilling > >> [1,36,31,33,25,23,14,3,13]p1[27,6,31,23,25,5,14,29,13]p27 > >> 36.8d 29 1284156 95523

[ceph-users] Re: cephfs client not released caps when running rsync

2024-03-26 Thread Alexander E. Patrakov
inux kernels (client side): 5.10 and 6.1 > > Did I understand everything correctly? is this the expected behavior > when running rsync? > > > And one more problem (I don’t know if it’s related or not), when rsync > finishes copying, all caps are freed except the last two (pinned i_caps >

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-25 Thread Alexander E. Patrakov
On Mon, Mar 25, 2024 at 7:37 PM Torkil Svensgaard wrote: > > > > On 24/03/2024 01:14, Torkil Svensgaard wrote: > > On 24-03-2024 00:31, Alexander E. Patrakov wrote: > >> Hi Torkil, > > > > Hi Alexander > > > >> Thanks for the update.

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-25 Thread Alexander E. Patrakov
On Mon, Mar 25, 2024 at 11:01 PM John Mulligan wrote: > > On Friday, March 22, 2024 2:56:22 PM EDT Alexander E. Patrakov wrote: > > Hi John, > > > > > A few major features we have planned include: > > > * Standalone servers (internally defined us

[ceph-users] Re: ceph cluster extremely unbalanced

2024-03-25 Thread Alexander E. Patrakov
> > Is it the only way to approach this, that each OSD has to be recreated? > > Thank you for reply > > dp > > On 3/24/24 12:44 PM, Alexander E. Patrakov wrote: > > Hi Denis, > > > > My approach would be: > > > > 1. Run "ceph osd metadata"

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-24 Thread Alexander E. Patrakov
modify_timestamp: Sun Mar 24 17:44:33 2024 > ~~~ > > On 24/03/2024 21:10, Curt wrote: > > Hey Mathew, > > > > One more thing out of curiosity can you send the output of blockdev > > --getbsz on the rbd dev and rbd info? > > > > I'm using 16TB rbd i
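The two commands requested in the quoted message, for reference (device and image names are placeholders):

  # blockdev --getbsz /dev/rbd0
  # rbd info mypool/myimage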

[ceph-users] Re: ceph cluster extremely unbalanced

2024-03-24 Thread Alexander E. Patrakov
its work and after that > change the OSDs crush weights to be even? > > * or should it otherwise - first to make crush weights even and then > enable the balancer? > > * or is there another safe(r) way? > > What are the ideal balancer settings for that? > > I'm expecting a

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-24 Thread Alexander E. Patrakov
ugh, honestly, that seems to be counter-intuitive to me > considering CERN uses Ceph for their data storage needs. > > Any ideas / thoughts? > > Cheers > > Dulux-Oz > > On 23/03/2024 18:52, Alexander E. Patrakov wrote: > > Hello Dulux-Oz, > > > > Please tre

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
> longer mentioned but it unfortunately made no difference for the number > of backfills which went 59->62->62. > > Mvh. > > Torkil > > On 23-03-2024 22:26, Alexander E. Patrakov wrote: > > Hi Torkil, > > > > I have looked at the files that you at

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
that PG again after the OSD restart. On Sun, Mar 24, 2024 at 4:56 AM Torkil Svensgaard wrote: > > > > On 23-03-2024 21:19, Alexander E. Patrakov wrote: > > Hi Torkil, > > Hi Alexander > > > I have looked at the CRUSH rules, and the equivalent rules work on my

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
u have a few OSDs that have 300+ PGs, the observed maximum is 347. Please set it to 400. On Sun, Mar 24, 2024 at 3:16 AM Torkil Svensgaard wrote: > > > > On 23-03-2024 19:05, Alexander E. Patrakov wrote: > > Sorry for replying to myself, but "ceph osd pool ls detail&q

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
e that appears after the words "erasure profile" in the "ceph osd pool ls detail" output. On Sun, Mar 24, 2024 at 1:56 AM Alexander E. Patrakov wrote: > > Hi Torkil, > > I take my previous response back. > > You have an erasure-coded pool with nine shards but only th
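A sketch of looking up that profile name and its parameters (k, m, crush-failure-domain, and so on):

  # ceph osd pool ls detail | grep erasure
  # ceph osd erasure-code-profile get <profile-name-from-the-previous-output>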

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
incompatible, as there is no way to change the EC parameters. It would help if you provided the output of "ceph osd pool ls detail". On Sun, Mar 24, 2024 at 1:43 AM Alexander E. Patrakov wrote: > > Hi Torkil, > > Unfortunately, your files contain nothing obviously bad or suspic

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov
> > You can attach files to the mail here on the list. > > Doh, for some reason I was sure attachments would be stripped. Thanks, > attached. > > Mvh. > > Torkil

[ceph-users] Re: Large number of misplaced PGs but little backfill going on

2024-03-23 Thread Alexander E. Patrakov

[ceph-users] Re: Mounting A RBD Via Kernal Modules

2024-03-23 Thread Alexander E. Patrakov

[ceph-users] Re: Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread Alexander E. Patrakov
On Sat, Mar 23, 2024 at 3:08 PM duluxoz wrote: > > > On 23/03/2024 18:00, Alexander E. Patrakov wrote: > > Hi Dulux-Oz, > > > > CephFS is not designed to deal with mobile clients such as laptops > > that can lose connectivity at any time. And I am not tal

[ceph-users] Re: Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread Alexander E. Patrakov
Is this the solution to my issue, or is there a better way to construct > the fstab entries, or is there another solution I haven't found yet in > the doco or via google-foo? > > All help and advice greatly appreciated - thanks in advance > > Cheers > > Dulux-Oz

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-22 Thread Alexander E. Patrakov
d with the "ldap" idmap backend. I am sure other weird but valid setups exist - please extend the list if you can. Which of the above scenarios would be supportable without resorting to the old way of installing SAMBA manually alongside the cluster?

[ceph-users] Re: log_latency slow operation observed for submit_transact, latency = 22.644258499s

2024-03-22 Thread Alexander E. Patrakov
16 active+clean+scrubbing+deep > >> 1 active+remapped+backfill_wait+backfill_toofull > >> > >> io: > >> client: 117 MiB/s rd, 68 MiB/s wr, 274 op/s rd, 183 op/s wr > >> recovery: 438 MiB/s, 192 ob

[ceph-users] Re: OSD does not die when disk has failures

2024-03-21 Thread Alexander E. Patrakov

[ceph-users] Re: CephFS space usage

2024-03-20 Thread Alexander E. Patrakov
t I don't presently see > how creating a new pool will help us to identify the source of the 10TB > discrepancy in this original cephfs pool. > > Please help me to understand what you are hoping to find...? > On 20/03/2024 6:35 pm, Alexander E. Patrakov wrote: > > Thorne,

[ceph-users] Re: CephFS space usage

2024-03-20 Thread Alexander E. Patrakov
ilesystem > are virtual machine disks. They are under constant, heavy write load. There > is no way to turn this off. > On 19/03/2024 9:36 pm, Alexander E. Patrakov wrote: > > Hello Thorne, > > Here is one more suggestion on how to debug this. Right now, there is > uncertainty on whe

[ceph-users] Re: CephFS space usage

2024-03-19 Thread Alexander E. Patrakov

[ceph-users] Re: Robust cephfs design/best practice

2024-03-15 Thread Alexander E. Patrakov
know your current and > future workloads to configure it accordingly. This is also true for any > other shared filesystem. > > > Best regards, > > Burkhard Linke

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-11 Thread Alexander E. Patrakov
nge it is to destroy / redeploy the OSD. > > > > There was a succession of PRs in the Octopus / Pacific timeframe around > > default min_alloc_size for HDD and SSD device classes, including IIRC one > > temporary reversion. > > > > However, the osd label after up

[ceph-users] Re: How can I clone data from a faulty bluestore disk?

2024-02-03 Thread Alexander E. Patrakov

[ceph-users] Re: Changing A Ceph Cluster's Front- And/Or Back-End Networks IP Address(es)

2024-01-31 Thread Alexander E. Patrakov
a config file (I assume it's /etc/ceph/ceph.conf) on each Node > > c) Rebooting the Nodes > > d) Taking each Node out of Maintenance Mode > > Thanks in advance > > Cheers > > Dulux-Oz

[ceph-users] Re: Network Flapping Causing Slow Ops and Freezing VMs

2024-01-06 Thread Alexander E. Patrakov

[ceph-users] Re: CEPH Cluster performance review

2023-11-13 Thread Alexander E. Patrakov

[ceph-users] Re: MDS_CACHE_OVERSIZED, what is this a symptom of?

2023-09-19 Thread Alexander E. Patrakov
size needing to be > bigger? Is it a problem with the clients holding onto some kind of > reference (documentation says this can be a cause, but not how to check for > it). > > Thanks in advance, > Pedro Lopes

[ceph-users] Best practices regarding MDS node restart

2023-09-09 Thread Alexander E. Patrakov
tandby-replay, as expected. Is there a better way? Or, should I have rebooted mds02 without much thinking?

[ceph-users] Re: Unhappy Cluster

2023-09-08 Thread Alexander E. Patrakov
s an older cluster running Nautilus 14.2.9. > > Any thoughts? > Thanks > -Dave

[ceph-users] Re: Rocksdb compaction and OSD timeout

2023-09-07 Thread Alexander E. Patrakov
xplain that. > > > > You run the online compacting for this OSD's (`ceph osd compact > > ${osd_id}` command), right? > > > > > > > > k

[ceph-users] Re: librbd 4k read/write?

2023-08-11 Thread Alexander E. Patrakov
I tested it on another smaller cluster, with 36 SAS disks and got the > same result. > > I don't know exactly what to look for or configure to have any improvement.

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Alexander E. Patrakov

[ceph-users] Re: 1 pg inconsistent and does not recover

2023-06-27 Thread Alexander E. Patrakov
> OSDs left (33 and 20) whose checksums disagree. > > I am just guessing this, though. > Also, if this is correct, the next question would be: What is with OSD 20? > Since there is no error reported at all for OSD 20, I assume that its > checksum agrees with its data. > Now, can

[ceph-users] Re: [rgw multisite] Perpetual behind

2023-06-17 Thread Alexander E. Patrakov
between e.g. Germany and Singapore to catch up fast. It will be limited by the amount of data that can be synced in one request and the hard-coded maximum number of requests in flight. In Reef, there are new tunables that help on high-latency links: rgw_data_sync_spawn_window, rgw_bucket_sync

[ceph-users] Re: How to release the invalid tcp connection under radosgw?

2023-06-13 Thread Alexander E. Patrakov
0.x.x.12:50024ESTABLISHED > 76749/radosgw > > > but client ip 10.x.x.12 is unreachable(because the node was shutdown), the > status of the tcp connections is always "ESTABLISHED", how to fix it? Please use this guide: https://www.cyberciti.biz/tips/cutting-the-tc
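The referenced guide is about killing stale TCP connections from the server side; with a reasonably recent iproute2 and a kernel built with CONFIG_INET_DIAG_DESTROY, roughly this would drop the connections towards the unreachable client shown above:

  # ss -K dst 10.x.x.12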

[ceph-users] Re: Encryption per user Howto

2023-06-02 Thread Alexander E. Patrakov
clusion into the Zen kernel, available for Arch Linux users, and the result is that the resulting system stopped booting for some users. So a proper backport is required, even though the Cloudflare patch applies as-is. https://github.com/zen-kernel/zen-kernel/issues/306 https://github.com/zen-kernel/zen

[ceph-users] Re: Ceph iscsi gateway semi deprecation warning?

2023-05-26 Thread Alexander E. Patrakov
ction-grade setup. At the very least, wait until this subproject makes it into Ceph documentation and becomes available as RPMs and DEBs. For now, you can still use ceph-iscsi - assuming that you need it, i.e. that raw RBD is not an option.

[ceph-users] Re: Encryption per user Howto

2023-05-26 Thread Alexander E. Patrakov
On Sat, May 27, 2023 at 5:09 AM Alexander E. Patrakov wrote: > > Hello Frank, > > On Fri, May 26, 2023 at 6:27 PM Frank Schilder wrote: > > > > Hi all, > > > > jumping on this thread as we have requests for which per-client fs mount > > encryption

[ceph-users] Re: Encryption per user Howto

2023-05-26 Thread Alexander E. Patrakov

[ceph-users] Re: Encryption per user Howto

2023-05-21 Thread Alexander E. Patrakov
a different key derived from its name and a per-bucket master key which never leaves Vault. Note that users will be able to create additional buckets by themselves, and they won't be encrypted, so tell them either not to do that or to encrypt the new buckets similarly. -

[ceph-users] Re: Disks are filling up even if there is not a single placement group on them

2023-04-10 Thread Alexander E. Patrakov
to". Better don't use it, and let your ceph cluster recover. If you can't wait, try to use upmaps to say that all PGs are fine where they are now, i.e that they are not misplaced. There is a script somewhere on GitHub that does this, but unfortunately I can't find it right now.
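The mechanism such a script relies on is pg-upmap-items; done by hand for a single PG it would look something like this (PG and OSD ids are placeholders), telling CRUSH to keep the PG on the OSD that currently holds the data instead of moving it:

  # ceph osd pg-upmap-items 2.7 121 31    # remap this PG's shard from osd.121 to osd.31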

[ceph-users] Re: compiling Nautilus for el9

2023-04-02 Thread Alexander E. Patrakov
e can do it), then upgrade the hosts to EL9 while still keeping Nautilus, then, still containerized, upgrade to a more recent Ceph release (but note that you can't upgrade from nautilus to Quincy directly, you need Octopus or Pacific as a middle step), and then optionally undo the containeriza

[ceph-users] Re: OSD crashes during upgrade mimic->octopus

2022-10-12 Thread Alexander E. Patrakov
will slowly creep and accumulate and eat disk space - and the problematic part is that this creepage is replicated to OSDs.

[ceph-users] Re: encrypt OSDs after creation

2022-10-11 Thread Alexander E. Patrakov
norebalance flag during the operation.
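For completeness, setting and clearing the flag mentioned above:

  # ceph osd set norebalance
  ... recreate the OSD with encryption ...
  # ceph osd unset norebalance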

[ceph-users] Re: 3 OSDs can not be started after a server reboot - rocksdb Corruption

2022-02-22 Thread Alexander E. Patrakov
4711 *** Immediate > >>> shutdown (osd_fast_shutdown=true) *** > >>> 2022-02-21T13:53:40.455+0100 7fc9645f4f00 0 set uid:gid to 64045:64045 > >>> (ceph:ceph) > >>> 2022-02-21T13:53:40.455+0100 7fc9645f

[ceph-users] Re: Advice on enabling autoscaler

2022-02-07 Thread Alexander E. Patrakov
e default limit. Even Nautilus can do 400 PGs per OSD, given "mon max pg per osd = 400" in ceph.conf. Of course it doesn't mean that you should allow this.
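As a ceph.conf fragment, or the equivalent command on releases with the centralized config database (a sketch of the setting named above, not a recommendation to raise it):

  [global]
  mon max pg per osd = 400

  # ceph config set global mon_max_pg_per_osd 400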

[ceph-users] Re: Correct Usage of the ceph-objectstore-tool??

2022-01-06 Thread Alexander E. Patrakov
ail" output over time, and with/without the OSDs with injected PGs running. At the very least, it provides a useful metric of what is remaining to do. Also an interesting read-only command (but maybe for later) would be: "ceph osd safe-to-destroy 123" where 123 is the dead OSD id.

[ceph-users] Re: Help - Multiple OSD's Down

2022-01-06 Thread Alexander E. Patrakov
Fri, 7 Jan 2022 at 00:50, Alexander E. Patrakov : > Thu, 6 Jan 2022 at 12:21, Lee : > >> I've tried to add a swap and that fails also. >> > > How exactly did it fail? Did you put it on some disk, or in zram? > > In the past I had to help a customer who hit memory

[ceph-users] Re: Help - Multiple OSD's Down

2022-01-06 Thread Alexander E. Patrakov
64 GB of zram-based swap on each server (with 128 GB of physical RAM in this type of server).
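A minimal sketch of creating zram-backed swap of that size by hand (assuming the zram module is available; distribution helpers such as zram-generator achieve the same):

  # modprobe zram
  # echo 64G > /sys/block/zram0/disksize
  # mkswap /dev/zram0
  # swapon -p 100 /dev/zram0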

[ceph-users] Re: Can we deprecate FileStore in Quincy?

2021-06-26 Thread Alexander E. Patrakov
grades, but I wouldn't do it at home. Simply because Ceph never made sense for small clusters, no matter what the hardware is - for such use cases, you could always do a software RAID over ISCSI or over AoE, with less overhead.

[ceph-users] Re: RBD migration between 2 EC pools : very slow

2021-06-23 Thread Alexander E. Patrakov
d, unsuccessfully, to tune their setup, but our final recommendation (successfully benchmarked but rejected due to costs) was to create a separate replica 3 pool for new backups.

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Alexander E. Patrakov

[ceph-users] Re: Best practices for OSD on bcache

2021-03-04 Thread Alexander E. Patrakov
he "real" hot data.

[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Alexander E. Patrakov
wrong. Ceph 15 runs on CentOS 7 just fine, but without the dashboard.

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-04 Thread Alexander E. Patrakov

[ceph-users] Re: PG inconsistent with empty inconsistent objects

2021-01-16 Thread Alexander E. Patrakov
inconsistents` key was empty! What is this? Is it a bug in Ceph or..? > > Thanks.
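For context, the empty key refers to the output of rados list-inconsistent-obj; re-running a deep scrub of the PG usually repopulates it (the PG id below is a placeholder):

  # ceph pg deep-scrub 2.1a
  # rados list-inconsistent-obj 2.1a --format=json-pretty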

[ceph-users] Re: PGs down

2020-12-20 Thread Alexander E. Patrakov
on_seconds=0 > > and attempted to start the OSDs in question. Same error as before. Am I > setting compaction options correctly? You may also want this: bluefs_log_compact_min_size=999G
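The suggested option can sit next to the other compaction settings in ceph.conf, or be passed as an override when starting the OSD by hand; a sketch:

  [osd]
  bluefs_log_compact_min_size = 999G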

[ceph-users] Re: Possibly unused client

2020-12-16 Thread Alexander E. Patrakov
7535 v1::0/3495403341' > entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mgr", > ""]}]: dispatch > > Does that help? > > Regards, > Eugen > > > Quote from "Alexander E. Patrakov" : > > > H

[ceph-users] Possibly unused client

2020-12-16 Thread Alexander E. Patrakov
s not used for, say, the past week? Or, what logs should I turn on so that if it is used during the next week, it is mentioned there?

[ceph-users] Re: Ceph flash deployment

2020-11-03 Thread Alexander E. Patrakov
at are still > valid for bluestore or not? I mean the read_ahead_kb and disk scheduler. > > Thanks. > > On Tue, Nov 3, 2020 at 10:55 PM Alexander E. Patrakov > wrote: >> >> On Tue, Nov 3, 2020 at 6:30 AM Seena Fallah wrote: >> > >> > Hi all, >&g

[ceph-users] Re: Ceph flash deployment

2020-11-03 Thread Alexander E. Patrakov
ight also look at (i.e. benchmark for your workload specifically) disabling the deepest idle states.
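Two common ways of disabling deep C-states, for reference only (not necessarily what was recommended in the full message): at runtime with cpupower, or persistently on the kernel command line:

  # cpupower idle-set -D 10    # disable idle states with exit latency above 10 microseconds
  # ... or boot with: processor.max_cstate=1 intel_idle.max_cstate=1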

[ceph-users] Re: The feasibility of mixed SSD and HDD replicated pool

2020-10-25 Thread Alexander E. Patrakov
l using the > above crush rule. > > Am I correct about the above statements? How would this work from your > experience? Thanks. This works (i.e. guards against host failures) only if you have strictly separate sets of hosts that have SSDs and that have HDDs. I.e., there should be n
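The kind of rule under discussion (primary copy on SSD hosts, remaining replicas on HDD hosts) looks roughly like this in a decompiled CRUSH map; a sketch with a made-up name and id, not the poster's actual rule:

  rule ssd_primary_hdd_rest {
      id 5
      type replicated
      step take default class ssd
      step chooseleaf firstn 1 type host
      step emit
      step take default class hdd
      step chooseleaf firstn -1 type host
      step emit
  }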

[ceph-users] Is cephfs multi-volume support stable?

2020-10-10 Thread Alexander E. Patrakov
that multiple filesystems in the same cluster are an experimental feature, and the "latest" version of the same doc makes the same claim. What should I believe - the presentation or the official docs? -- Alexander E. Patrakov CV: http://pc.cd/PLz7 ___

[ceph-users] Re: NVMe's

2020-09-23 Thread Alexander E. Patrakov
may be a bit too tight.

[ceph-users] Re: Low level bluestore usage

2020-09-22 Thread Alexander E. Patrakov
s why the options: --bluestore-block-db-size=31G: ceph-bluestore-tool refuses to do anything if this option is not set to any value --bluefs-log-compact-min-size=31G: make absolutely sure that log compaction doesn't happen, because it would hit "bluefs enospc" again.

[ceph-users] Re: Vitastor, a fast Ceph-like block storage for VMs

2020-09-22 Thread Alexander E. Patrakov
rk MTU. 4. The utilization figures for SSDs and network interfaces during each test. Also, given that the scope of the project only includes block storage, I think it would be fair to ask for a comparison with DRBD 9 and possibly Linstor, not only with Ceph.

[ceph-users] Re: rbd-nbd stuck request

2020-07-24 Thread Alexander E. Patrakov
On Fri, Jul 24, 2020 at 7:43 PM Herbert Alexander Faleiros wrote: > > On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote: > > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros > > wrote: > > > > > > Hi, > > > >

[ceph-users] Re: rbd-nbd stuck request

2020-07-24 Thread Alexander E. Patrakov
this. Is the rbd-nbd process running? I.e.: # cat /proc/partitions # ps axww | grep nbd

[ceph-users] Re: rbd map image with journaling

2020-07-24 Thread Alexander E. Patrakov
> (stable) > > # uname -r > 5.4.52-050452-generic You could use rbd-nbd # rbd-nbd map image@snap

[ceph-users] Re: Octopus upgrade breaks Ubuntu 18.04 libvirt

2020-07-08 Thread Alexander E. Patrakov
> virEventPollCalculateTimeout:369 : > >> >> > Timeout at 1594059521930 due in 4997 ms > >> >> > 2020-07-06 18:18:36.933+: 3273: info : virEventPollRunOnce:640 : > >> >> > EVENT_POLL_RUN: nhandles=21 timeou

[ceph-users] Re: Placement of block/db and WAL on SSD?

2020-07-05 Thread Alexander E. Patrakov
-4% of the data device. 3) --data on something (then the db goes there as well) and --block.wal on a small (i.e. not large enough to use as a db device) but very fast nvdimm.
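Option 3 would be expressed with ceph-volume roughly as follows (device paths are placeholders; a sketch, not a recommendation):

  # ceph-volume lvm create --data /dev/sdb --block.wal /dev/pmem0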

[ceph-users] Cannot remove cache tier

2020-07-03 Thread Alexander E. Patrakov
0 KiB/s rd, 251 MiB/s wr, 366 op/s rd, 278 op/s wr cache:123 MiB/s flush, 72 MiB/s evict, 31 op/s promote, 3 PGs flushing, 1 PGs evicting Is there any workaround, short of somehow telling the client to stop creating new rbds?