[ceph-users] Re: Ceph Squid released?

2024-04-30 Thread James Page
Hi Robert

On Mon, Apr 29, 2024 at 8:06 AM Robert Sander 
wrote:

> On 4/29/24 08:50, Alwin Antreich wrote:
>
> > well it says it in the article.
> >
> > The upcoming Squid release serves as a testament to how the Ceph
> > project continues to deliver innovative features to users without
> > compromising on quality.
> >
> >
> > I believe it is more a statement of having new members and tiers and to
> > sound the marketing drums a bit. :)
>
> The Ubuntu 24.04 release notes also claim that this release comes with
> Ceph Squid:
>
> https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890


Almost - we've been using snapshots from the squid branch to get early
visibility of the upcoming Squid release, as we knew it would come after the
Ubuntu 24.04 release date.  The release notes were not quite right on this
front (I've now updated them).

The Squid release will be provided via a Stable Release Update once it has
been released by the Ceph project.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: dashboard on Ubuntu 22.04: python3-cheroot incompatibility

2022-07-22 Thread James Page
Hi Matthias


On Fri, Jul 22, 2022 at 4:50 PM Matthias Ferdinand 
wrote:

> Hi,
>
> trying to activate ceph dashboard on a 17.2.0 cluster (Ubuntu 22.04
> using standard ubuntu repos), the dashboard module crashes because it
> cannot understand the python3-cheroot version number '8.5.2+ds1':
>
> root@mceph00:~# ceph crash info
> 2022-07-22T14:44:03.226395Z_a6b006a7-10c3-443d-9ead-161e06a27bf3
> {
> "backtrace": [
> "  File \"/usr/share/ceph/mgr/dashboard/__init__.py\", line 52, in <module>\nfrom .module import Module, StandbyModule  # noqa: F401",
> "  File \"/usr/share/ceph/mgr/dashboard/module.py\", line 49, in <module>\npatch_cherrypy(cherrypy.__version__)",
> "  File \"/usr/share/ceph/mgr/dashboard/cherrypy_backports.py\", line 197, in patch_cherrypy\naccept_socket_error_0(v)",
> "  File \"/usr/share/ceph/mgr/dashboard/cherrypy_backports.py\", line 124, in accept_socket_error_0\nif v < StrictVersion(\"9.0.0\") or cheroot_version < StrictVersion(\"6.5.5\"):",
> "  File \"/lib/python3.10/distutils/version.py\", line 64, in __gt__\nc = self._cmp(other)",
> "  File \"/lib/python3.10/distutils/version.py\", line 168, in _cmp\nother = StrictVersion(other)",
> "  File \"/lib/python3.10/distutils/version.py\", line 40, in __init__\nself.parse(vstring)",
> "  File \"/lib/python3.10/distutils/version.py\", line 137, in parse\nraise ValueError(\"invalid version number '%s'\" % vstring)",
> =>  "ValueError: invalid version number '8.5.2+ds1'"
> ],
> "ceph_version": "17.2.0",
> "crash_id":
> "2022-07-22T14:44:03.226395Z_a6b006a7-10c3-443d-9ead-161e06a27bf3",
> "entity_name": "mgr.mceph05",
> "mgr_module": "dashboard",
> "mgr_module_caller": "PyModule::load_subclass_of",
> "mgr_python_exception": "ValueError",
> "os_id": "22.04",
> "os_name": "Ubuntu 22.04 LTS",
> "os_version": "22.04 LTS (Jammy Jellyfish)",
> "os_version_id": "22.04",
> "process_name": "ceph-mgr",
> "stack_sig":
> "3f893983e716f2a7e368895904cf3485ac7064d3294a45ea14066a1576c818e3",
> "timestamp": "2022-07-22T14:44:03.226395Z",
> "utsname_hostname": "mceph05",
> "utsname_machine": "x86_64",
> "utsname_release": "5.15.0-41-generic",
> "utsname_sysname": "Linux",
> "utsname_version": "#44-Ubuntu SMP Wed Jun 22 14:20:53 UTC 2022"
> }
>
> If I remove the version check (see below), dashboard appears to be working.


https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1967139

I just uploaded a fix for cheroot to resolve this issue - the stable
release update team should pick that up next week.
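
For anyone curious about the mechanics: distutils' StrictVersion only
understands plain X.Y.Z strings, so the Debian-style '+ds1' suffix that
Ubuntu's python3-cheroot reports blows up the comparison in
cherrypy_backports.py. A small illustration under Python 3.10 (not the
actual fix - that lands in the cheroot package itself):

from distutils.version import StrictVersion  # deprecated, but what the dashboard uses

# The same comparison cherrypy_backports.py makes, with the version string
# reported by the Ubuntu python3-cheroot package:
try:
    "8.5.2+ds1" < StrictVersion("6.5.5")
except ValueError as exc:
    print(exc)  # invalid version number '8.5.2+ds1'

# Purely illustrative local workaround: strip the Debian suffix before
# comparing. The proper fix is the updated cheroot package mentioned above.
clean = "8.5.2+ds1".split("+")[0]
print(StrictVersion(clean) < StrictVersion("6.5.5"))  # False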

Cheers

James
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best practices for OSD on bcache

2021-03-03 Thread James Page
Hi Norman

On Wed, Mar 3, 2021 at 2:47 AM Norman.Kern  wrote:

> James,
>
> Can you tell me the hardware config of your bcache? I use a 400G SATA SSD
> as the cache device and a 10T HDD as the storage device. Is it hardware
> related?
>

It might be - all of the deployments I've seen/worked with use NVMe SSD
devices, and some more recent ones have used NVMe-attached Optane devices as
well (though that is not the usual case).

Backing HDDs are SAS-attached ~12TB 7.2K RPM spinning disks.



>
> On 2021/3/2 下午4:49, James Page wrote:
> > Hi Norman
> >
> > On Mon, Mar 1, 2021 at 4:38 AM Norman.Kern  wrote:
> >
> >> Hi, guys
> >>
> >> I am testing Ceph on bcache devices, and I found the performance is not
> >> as good as expected. Does anyone have any best practices for it? Thanks.
> >>
> > I've used bcache quite a bit with Ceph with the following configuration
> > options tweaked
> >
> > a) use writeback mode rather than writethrough (which is the default)
> >
> > This ensures that the cache device is actually used for write caching
> >
> > b) turn off the sequential cutoff
> >
> > sequential_cutoff = 0
> >
> > This means that sequential writes will also always go to the cache device
> > rather than the backing device
> >
> > c) disable the congestion read and write thresholds
> >
> > congested_read_threshold_us = congested_write_threshold_us = 0
> >
> > The following repository:
> >
> > https://git.launchpad.net/charm-bcache-tuning/tree/src/files
> >
> > has a Python script and systemd configuration to do b) and c)
> > automatically on all bcache devices at boot; we let the provisioning
> > system take care of a).
> >
> > HTH
> >
> >
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread James Page
Hi Norman

On Mon, Mar 1, 2021 at 4:38 AM Norman.Kern  wrote:

> Hi, guys
>
> I am testing Ceph on bcache devices, and I found the performance is not as
> good as expected. Does anyone have any best practices for it? Thanks.
>

I've used bcache quite a bit with Ceph with the following configuration
options tweaked

a) use writeback mode rather than writethrough (which is the default)

This ensures that the cache device is actually used for write caching

b) turn off the sequential cutoff

sequential_cutoff = 0

This means that sequential writes will also always go to the cache device
rather than the backing device

c) disable the congestion read and write thresholds

congested_read_threshold_us = congested_write_threshold_us = 0

The following repository:

https://git.launchpad.net/charm-bcache-tuning/tree/src/files

has a Python script and systemd configuration to do b) and c) automatically
on all bcache devices at boot; we let the provisioning system take care
of a).
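
For anyone who wants to apply b) and c) by hand rather than via the charm,
here's a rough sketch of the sort of thing the script does (an illustration,
not the charm code itself). It assumes the standard bcache sysfs layout -
/sys/block/bcacheN/bcache/... for devices and
/sys/fs/bcache/<cache-set-uuid>/... for cache sets - so double check the
paths on your kernel before running it as root:

#!/usr/bin/env python3
# Rough sketch only - not the charm script. Applies b) and c) above to
# every bcache device/cache set visible in sysfs; assumes the standard
# bcache sysfs layout. Run as root.
import glob

def write_sysfs(path, value):
    with open(path, "w") as f:
        f.write(value)

for dev in glob.glob("/sys/block/bcache*"):
    # b) sequential_cutoff = 0: never bypass the cache for sequential I/O
    write_sysfs(dev + "/bcache/sequential_cutoff", "0")
    # a) could also be set here via dev + "/bcache/cache_mode" ("writeback"),
    # but we leave that to the provisioning system as noted above.

for cset in glob.glob("/sys/fs/bcache/*-*"):
    # c) disable the congestion read/write thresholds on each cache set
    write_sysfs(cset + "/congested_read_threshold_us", "0")
    write_sysfs(cset + "/congested_write_threshold_us", "0")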

HTH


> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: dashboard module missing dependencies in 15.2.1 Octopus

2020-05-01 Thread James Page
Hi Duncan

Try python3-yaml - this might just be a missing dependency.
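
If you want to confirm that before forcing the module on, a quick sanity
check (just an illustration, not part of Ceph) - run it with the same
system Python 3 that ceph-mgr uses:

# Check whether PyYAML - the 'yaml' module the dashboard imports at load
# time - is visible to this interpreter.
import importlib.util

if importlib.util.find_spec("yaml") is None:
    print("yaml is missing - install python3-yaml and restart ceph-mgr")
else:
    import yaml
    print("PyYAML %s is available" % yaml.__version__)

If it reports the module missing, installing python3-yaml on each mgr host
and restarting ceph-mgr should clear the failed dependency warning.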

Cheers

James

On Fri, May 1, 2020 at 7:32 AM Duncan Bellamy  wrote:

> Hi,
> I have installed Ceph on Ubuntu Focal Fossa using the Ubuntu repo instead
> of ceph-deploy (as ceph-deploy install does not work for Focal Fossa yet).
> To install I used:
> sudo apt-get install -y ceph ceph-mds radosgw ceph-mgr-dashboard
>
> The rest of the setup was the same as the quickstart on ceph.io with
> ceph-deploy.
>
> It installed ceph version 15.2.1 (octopus).
>
> If I do a 'ceph -s' I get the warning:
> health: HEALTH_WARN
> 2 mgr modules have failed dependencies
>
> If I run 'ceph mgr module ls', for enabled and active modules I get:
>
>  "always_on_modules": [
> "balancer",
> "crash",
> "devicehealth",
> "orchestrator",
> "osd_support",
> "pg_autoscaler",
> "progress",
> "rbd_support",
> "status",
> "telemetry",
> "volumes"
> ],
> "enabled_modules": [
> "iostat",
> "restful"
>
> Then when I run 'ceph mgr module enable dashboard' I get the error:
>
> Error ENOENT: module 'dashboard' reports that it cannot run on the active
> manager daemon: No module named 'yaml' (pass --force to force enablement)
>
> I have tried searching, and searching with apt, but cannot find any 'yaml'
> package that might be used by ceph.
>
> Duncan
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Upgrade procedure on Ubuntu Bionic with stock packages

2019-08-28 Thread James Page
You could also place the mon, mds and mgr daemons in containers - these
don't have to be Docker containers; you can also use LXD on Ubuntu, which
gives you a full system container.

This is the approach that the OpenStack Charms deployment tooling takes -
it allows each personality within the deployment to be upgraded
individually.

On Wed, Aug 28, 2019 at 10:41 AM Mark Schouten  wrote:

> Cool, thanks!
>
> --
>
> Mark Schouten 
>
> Tuxis, Ede, https://www.tuxis.nl
>
> T: +31 318 200208
>
>
> - Original message -
> --
> From: James Page (james.p...@canonical.com)
> Date: 28-08-2019 11:02
> To: Mark Schouten (m...@tuxis.nl)
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Upgrade procedure on Ubuntu Bionic with stock
> packages
>
> Hi Mark
>
> On Wed, Aug 28, 2019 at 9:51 AM Mark Schouten  wrote:
>
>> Hi,
>>
>> I have a cluster running on Ubuntu Bionic, with stock Ubuntu Ceph
>> packages. When upgrading, I always try to follow the procedure as
>> documented here:
>> https://docs.ceph.com/docs/master/install/upgrading-ceph/
>>
>> However, the Ubuntu packages restart all daemons upon upgrade, per node.
>> So if I upgrade the first node, it will restart mon, osds, rgw, and mds'es
>> on that node, even though the rest of the cluster is running the old
>> version.
>>
>> I tried upgrading a single package, to see how that goes, but due to
>> dependencies in dpkg, all other packages are upgraded as well.
>>
>
> This is a known issue in the Ceph packages in Ubuntu:
>
> https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1840347
>
> the behaviour of debhelper (which generates snippets for maintainer
> scripts) changed and it was missed - fix being worked on at the moment.
>
> Cheers
>
> James
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Upgrade procedure on Ubuntu Bionic with stock packages

2019-08-28 Thread James Page
Hi Mark

On Wed, Aug 28, 2019 at 9:51 AM Mark Schouten  wrote:

> Hi,
>
> I have a cluster running on Ubuntu Bionic, with stock Ubuntu Ceph
> packages. When upgrading, I always try to follow the procedure as
> documented here: https://docs.ceph.com/docs/master/install/upgrading-ceph/
>
> However, the Ubuntu packages restart all daemons upon upgrade, per node.
> So if I upgrade the first node, it will restart mon, osds, rgw, and mds'es
> on that node, even though the rest of the cluster is running the old
> version.
>
> I tried upgrading a single package, to see how that goes, but due to
> dependencies in dpkg, all other packages are upgraded as well.
>

This is a known issue in the Ceph packages in Ubuntu:

  https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1840347

the behaviour of debhelper (which generates snippets for maintainer
scripts) changed and it was missed - fix being worked on at the moment.

Cheers

James
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io