[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Neha Ojha
On Tue, Jun 30, 2020 at 6:04 PM Dan Mick wrote:
> True. That said, the blog post points to http://download.ceph.com/tarballs/ where all the tarballs, including 15.2.4, live.
> On 6/30/2020 5:57 PM, Sasha Litvak wrote:
> > David,
> > Download link points to 14.2.10 tarball.

[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Dan Mick
True. That said, the blog post points to http://download.ceph.com/tarballs/ where all the tarballs, including 15.2.4, live. On 6/30/2020 5:57 PM, Sasha Litvak wrote: David, Download link points to 14.2.10 tarball. On Tue, Jun 30, 2020, 3:38 PM David Galloway wrote: We're happy to

[ceph-users] Re: v15.2.4 Octopus released

2020-06-30 Thread Sasha Litvak
David, Download link points to 14.2.10 tarball. On Tue, Jun 30, 2020, 3:38 PM David Galloway wrote:
> We're happy to announce the fourth bugfix release in the Octopus series. In addition to a security fix in RGW, this release brings a range of fixes across all components. We recommend that

[ceph-users] Re: removing the private cluster network

2020-06-30 Thread Frank Schilder
There are plenty of threads with info on this; see, for example, https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/Y23SQN357RYMFTBKJ2VKIQRR43KURWZJ/#4EYQVVJ7IOSEBPZGMPPRZAZ5XBUHHF5F . Best regards, Frank Schilder AIT Risø Campus Bygning 109, rum S14

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Mark Kirkwood
Increasing the memory target appears to have solved the issue. On 26/06/20 11:47 am, Mark Kirkwood wrote: Progress update:
- tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
- will increase osd_memory_target from 4 to 16G, and observe
On 24/06/20 1:30 pm, Mark Kirkwood
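For anyone wanting to try the same tuning, a minimal sketch mirroring the 4G-to-16G change described above (adjust the value to your node's RAM budget and make sure hosts are not pushed into swap; osd.0 is just an example daemon):

    # raise the per-OSD memory target from the 4 GiB default to 16 GiB
    ceph config set osd osd_memory_target 17179869184
    # confirm what a given OSD actually picked up
    ceph config show osd.0 osd_memory_target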

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Nigel Williams
On Wed, 1 Jul 2020 at 01:47, Anthony D'Atri wrote:
> > However when I've looked at the IO metrics for the nvme it seems to be only lightly loaded, so does not appear to be the issue (at first sight anyway).
> How are you determining “lightly loaded”? Not iostat %util I hope.
For reference,
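For context on that caveat: %util only measures the fraction of time the device had at least one request outstanding, which says little about saturation on a parallel NVMe device. A rough sketch of what to watch instead (device name is a placeholder; column names such as aqu-sz vary between sysstat versions):

    # extended stats every 5 seconds; for NVMe, watch await (latency) and the
    # queue-depth column rather than %util, and compare against the HDD OSD devices
    iostat -x 5 /dev/nvme0n1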

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
I downgraded a single RGW node in the cluster back to Nautilus (14.2.10), ran "radosgw-admin user stats --uid= --sync-stats", and the usage was recalculated to the correct values. It seems like there is a potential bug in how Octopus calculates its user stats. ~ Liam

[ceph-users] v15.2.4 Octopus released

2020-06-30 Thread David Galloway
We're happy to announce the fourth bugfix release in the Octopus series. In addition to a security fix in RGW, this release brings a range of fixes across all components. We recommend that all Octopus users upgrade to this release. For detailed release notes with links & a changelog, please refer

[ceph-users] removing the private cluster network

2020-06-30 Thread Magnus HAGDORN
Hi there, we currently have a ceph cluster with 6 nodes and a public and cluster network. Each node has two bonded 2x1GE network interfaces, one for the public and one for the cluster network. We are planning to upgrade the networking to 10GE. Given the modest size of our cluster we would like to
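For later readers, a rough sketch of the commonly suggested direction (the option names are real, but check the exact procedure against the thread linked in the reply above; OSDs only pick up the change on restart):

    # stop declaring a separate cluster network, then restart OSDs one
    # failure domain at a time while the cluster is HEALTH_OK
    ceph config rm global cluster_network
    # if the value also lives in /etc/ceph/ceph.conf on the OSD hosts, remove the
    # cluster_network (and any cluster addr) lines there before restarting each OSD:
    systemctl restart ceph-osd@<id>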

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-30 Thread Anthony D'Atri
> That is an interesting point. We are using 12 on 1 nvme journal for our Filestore nodes (which seems to work ok). The workload for wal + db is different so that could be a factor. However when I've looked at the IO metrics for the nvme it seems to be only lightly loaded, so does not

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
Reposting here since it seemed to start a new thread. Thanks, both. That’s a useful observation. I wonder what I can try to get accurate user stats. All of our users are quota-ed, so wrong user stats actually stop them from writing data. Since stats are only updated on write: I have some

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Liam Monahan
Thanks, both. That’s a useful observation. I wonder what I can try to get accurate user stats. All of our users are quota-ed, so wrong user stats actually stop them from writing data. Since stats are only updated on write: I have some users who are inactive and their stats are correct. I
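One hedged way to force a recalculation for every user while the Octopus accounting issue is investigated (assumes jq is available and that the extra load of a full per-user sync is acceptable on your cluster):

    # re-sync stats for all RGW users; harmless, but potentially slow on large clusters
    for u in $(radosgw-admin user list | jq -r '.[]'); do
        radosgw-admin user stats --uid="$u" --sync-stats
    done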

[ceph-users] Best practice for object store design

2020-06-30 Thread Szabo, Istvan (Agoda)
Hi, What is, let's say, the best practice for placing the haproxy, rgw, and mon services in a new cluster? We would like to have a new setup, but are unsure how to create the best setup in front of the OSD nodes. Let's say we have 3 mons as Ceph suggests; where should I put haproxy and radosgw? Should they be VMs or
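To make the question concrete, a minimal haproxy sketch in front of three RGW daemons is below (hostnames, ports and health checks are placeholders; whether haproxy runs on the RGW hosts themselves, on the mon hosts, or on small VMs is exactly the placement question being asked):

    frontend rgw_http
        bind *:80
        mode http
        default_backend rgw_nodes

    backend rgw_nodes
        mode http
        balance roundrobin
        option httpchk GET /
        server rgw1 rgw1.example.com:8080 check
        server rgw2 rgw2.example.com:8080 check
        server rgw3 rgw3.example.com:8080 check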

[ceph-users] Multisite setup with and without replicated region

2020-06-30 Thread Szabo, Istvan (Agoda)
Hi, Is it possible to create a multisite cluster with multiple zones? I'd like to have a zone/region which is replicated across DCs, but I want to have one without replication as well. I would prefer to use an earlier version of Ceph, not Octopus yet. Thank you. This
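As a very rough sketch of the usual shape of such a setup (all realm/zonegroup/zone names and endpoints are placeholders; a zonegroup whose zone has no peers simply has nothing to sync, which is one way to keep some buckets local-only):

    # one realm shared by everything
    radosgw-admin realm create --rgw-realm=myrealm --default
    # zonegroup A: zones in different DCs, replicated
    radosgw-admin zonegroup create --rgw-realm=myrealm --rgw-zonegroup=replicated --master --endpoints=http://rgw-dc1:8080
    radosgw-admin zone create --rgw-zonegroup=replicated --rgw-zone=dc1 --master --endpoints=http://rgw-dc1:8080
    # zonegroup B: a single local-only zone, nothing to replicate to
    radosgw-admin zonegroup create --rgw-realm=myrealm --rgw-zonegroup=local --endpoints=http://rgw-local:8080
    radosgw-admin zone create --rgw-zonegroup=local --rgw-zone=local-only --endpoints=http://rgw-local:8080
    radosgw-admin period update --commit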

[ceph-users] RES: Debian install

2020-06-30 Thread Rafael Quaglio
Thanks for your reply, Anastasios, I was waiting for an answer. The content of my /etc/apt/sources.list.d/ceph.list is: deb https://download.ceph.com/debian-nautilus/ buster main Even if I do “apt-get update”, the packages are still the same. The Ceph client (CephFS mount) is
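A quick way to see which version the configured repository actually offers versus what is installed (the package name is just an example):

    apt-get update
    apt-cache policy ceph-common   # shows installed vs. candidate version and which repo it comes from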

[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-30 Thread Lindsay Mathieson
On 30/06/2020 8:17 pm, Eugen Block wrote: Don't forget to set the correct LV tags for the new db device as mentioned in [1] and [2]. [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6OHVTNXH5SLI4ABC75VVP7J2DT7X4FZA/ [2] https://tracker.ceph.com/issues/42928 Thanks

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread EDH - Manuel Rios
You can ignore the rgw.none details; in our experience they don't make much sense today. We still don't know why the devs don't clean up buckets with those rgw.none stats... Some of our buckets have them, other newer ones don't. -Original Message- From: Janne Johansson Sent: Tuesday, 30 June 2020 8:40

[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-30 Thread Eugen Block
Don't forget to set the correct LV tags for the new db device as mentioned in [1] and [2]. [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6OHVTNXH5SLI4ABC75VVP7J2DT7X4FZA/ [2] https://tracker.ceph.com/issues/42928 Quoting Lindsay Mathieson: On 29/06/2020 11:44
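For reference, a rough sketch of what adjusting those tags looks like (VG/LV names, device paths and UUIDs are all placeholders; the linked thread and tracker issue carry the authoritative list of tags ceph-volume expects, such as ceph.db_device and ceph.db_uuid):

    # inspect the tags currently set on the OSD's block LV
    lvs -o lv_tags ceph-block-vg/osd-block-lv
    # replace the db-related tags so they point at the new db LV
    lvchange --deltag "ceph.db_device=<old value>" ceph-block-vg/osd-block-lv
    lvchange --addtag "ceph.db_device=/dev/ceph-db-vg/osd-db-lv" ceph-block-vg/osd-block-lv
    lvchange --addtag "ceph.db_uuid=<lv uuid of the new db LV>" ceph-block-vg/osd-block-lv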

[ceph-users] Re: Suspicious memory leakage

2020-06-30 Thread XuYun
Seems the attached log file is missing: https://pastebin.com/wAULN20N
> On 30 Jun 2020, at 1:26 PM, XuYun wrote:
> Hi,
> We’ve observed some suspicious memory leak problems with the MGR since upgrading to Nautilus.
> Yesterday I upgraded our cluster to the latest 14.2.10

[ceph-users] Suspicious memory leakage

2020-06-30 Thread XuYun
Hi, We’ve observed some suspicious memory leak problems with the MGR since upgrading to Nautilus. Yesterday I upgraded our cluster to the latest 14.2.10 and this problem still seems reproducible. According to the monitoring chart (memory usage of the active mgr node), the memory consumption started
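A simple way to capture that growth outside of the monitoring system, for anyone trying to reproduce the report (the interval is arbitrary):

    # log the resident memory of the ceph-mgr process every 10 minutes
    while true; do
        echo "$(date -Is) $(ps -C ceph-mgr -o rss=,cmd=)"
        sleep 600
    done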

[ceph-users] Re: Bench on specific OSD

2020-06-30 Thread vitalif
Create a pool with size=min_size=1 and use ceph-gobench https://github.com/rumanzo/ceph-gobench Hi all. Is there any way to completely health check one OSD host or instance? For example, run rados bench just on that OSD, or do some checks for the disk and the front and back network? Thanks.
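A hedged sketch of the size=min_size=1 pool idea (pool name and PG count are arbitrary; without a CRUSH rule pinning the pool to the target host/OSD the PGs will still spread out, which is the part ceph-gobench automates for you):

    ceph osd pool create bench-one 32 32
    # recent releases may additionally require mon_allow_pool_size_one=true
    # and a --yes-i-really-mean-it on the size change
    ceph osd pool set bench-one size 1
    ceph osd pool set bench-one min_size 1
    # restrict the pool to the OSD under test with a dedicated CRUSH rule, then:
    rados bench -p bench-one 60 write
    # clean up afterwards
    ceph osd pool delete bench-one bench-one --yes-i-really-really-mean-it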

[ceph-users] Re: Issue with ceph-ansible installation, No such file or directory

2020-06-30 Thread Mason-Williams, Gabryel (DLSLtd,RAL,LSCI)
Hello, I have added to the https://github.com/ceph/ceph-ansible/issues/4955 issue how I solved this problem; in short, it requires you to set the PATH inside the environment keyword in all relevant files to something like: environment: CEPH_VOLUME_DEBUG: 1
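As a concrete illustration of that workaround (the task, command and PATH value are placeholders; the real change belongs in the ceph-ansible tasks referenced in the issue):

    - name: run ceph-volume with an explicit PATH
      command: ceph-volume lvm list
      environment:
        CEPH_VOLUME_DEBUG: 1
        PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin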

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-30 Thread Janne Johansson
On Mon, 29 Jun 2020 at 17:27, Liam Monahan wrote:
> For example, here is a bucket that all of a sudden reports that it has 18446744073709551615 objects! The actual count should be around 20,000.
> "rgw.none": {
> "size": 0,
> "size_actual": 0,
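For anyone puzzled by that figure: 18446744073709551615 is 2^64 - 1, i.e. what an unsigned 64-bit object counter shows after being decremented past zero, which points at an underflow in the stats accounting rather than a real object count:

    # quick check of the arithmetic
    python3 -c 'print(2**64 - 1)'        # 18446744073709551615
    python3 -c 'print((0 - 1) % 2**64)'  # same value: 0 - 1 wrapped into unsigned 64-bit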