> arnaud.mar...@i2bc.paris-saclay.fr> wrote:
>
> > Peter,
> >
> > I had the same error and my workaround was to manually create
> > /usr/lib/sysctl.d directory on all nodes, then resume the upgrade
> >
> > Arnaud Martel
> > - Original Message -
>
/usr/lib/sysctl.d/90-ceph-43fd7d2e-f693-11eb-990a-a4bf01112a34-osd.conf'
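For anyone else landing here, a rough sketch of the workaround Arnaud describes above, assuming the upgrade was driven by "ceph orch upgrade" and with node1..node3 standing in for your actual hosts:

# Create the missing directory on every node, then let cephadm carry on.
for host in node1 node2 node3; do
    ssh "$host" 'sudo mkdir -p /usr/lib/sysctl.d'
done
ceph orch upgrade resume    # continue the paused/failed upgrade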
The good news is that this is still a pre-production proof-of-concept cluster,
so I'm attempting to iron out issues before we try to make it a production
service.
Any ideas would be helpful.
I guess deploy might be an option, but that does not feel very future-proof.
Thanks
Peter Childs
I have a number of disk trays with 25 SSDs in them; these are attached to
my servers via a pair of SAS cables, so multipath is used to join them
together again and maximize speed, etc.
Using cephadm, how can I create the OSDs?
It looks like it should be possible to use ceph-volume but I've n
stability before I go there.
I really just want to know where to look for the problems rather than get
exact answers; I've yet to see any clues that might help.
Thanks in advance
Peter Childs
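A hedged starting point for the multipath question above: check what cephadm's inventory makes of the /dev/mapper devices first, and if one shows as available, try adding a single OSD directly. Host name and device path below are placeholders, and whether ceph-volume accepts the multipath nodes is exactly the open question here:

ceph orch device ls --wide                            # inventory as cephadm sees it, with reject reasons
ceph orch daemon add osd myhost:/dev/mapper/mpatha    # one OSD on one mpath device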
mgr modules.
>
> Regards
> Eugen
>
> [1] https://docs.ceph.com/en/latest/cephadm/troubleshooting/
>
>
> Quoting Peter Childs :
>
> > Let's try to stop this message turning into a mass moaning session about
> > Ceph and try and get this newbie able to use it.
>
Let's try to stop this message turning into a mass moaning session about
Ceph and try and get this newbie able to use it.
I've got a Ceph Octopus cluster; it's relatively new and deployed using
cephadm.
It was working fine, but now the managers start up, run for about 30 seconds
and then die, until
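When a cephadm-managed mgr keeps dying like this, the usual first stop is the daemon's own log on the node that hosts it (daemon names below are placeholders; the real ones come out of "cephadm ls"):

cephadm ls                               # daemons on this host, with names and the cluster fsid
cephadm logs --name mgr.myhost.abcdef    # journalctl output for that mgr daemon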
>> > FWIW we had no luck yet with one-by-one OSD daemon additions through
>> > ceph
>> > orch either. We also reproduced the issue easily in a virtual lab using
>> > small virtual disks on a single ceph VM with 1 mon.
>> >
>> > We are now looking into whether
lab using
> small virtual disks on a single ceph VM with 1 mon.
>
> We are now looking into whether we can get past this with a manual
> buildout.
>
> If you, or anyone, has hit the same stumbling block and gotten past it, I
> would really appreciate some guidance.
>
> Thank
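For the "manual buildout" idea, one hedged sketch is to pre-create the LVM pieces with plain LVM tools and only then hand the logical volume to Ceph. Device, VG and LV names below are made up, and I'm assuming "ceph orch daemon add osd" will take a vg/lv path the way ceph-volume's --data does, which is worth verifying first:

pvcreate /dev/sdb                         # plain LVM, nothing Ceph-specific yet
vgcreate ceph-vg-b /dev/sdb
lvcreate -l 100%FREE -n osd-b ceph-vg-b
ceph orch daemon add osd myhost:ceph-vg-b/osd-b   # hand the finished LV to cephadm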
;add osd" to work, I suspect I just need to fine
> > tune my osd creation rules, so it does not try and create too many osds
> on
> > the same node at the same time.
>
> I agree, no need to do it manually if there is an automated way,
> especially if you're try
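If the worry above is a spec grabbing too many devices per host in one go, one knob that may help is the drivegroup "limit" filter, which caps how many matching devices each host will consume. The spec below is illustrative only; the service id, placement and filters are made up:

cat > osd-throttled.yaml <<'EOF'
service_type: osd
service_id: throttled-hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 6    # cap how many matching devices each host will consume
EOF
ceph orch apply -i osd-throttled.yaml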
adm and you should see at least some output like this:
>
> Mai 26 08:21:48 pacific1 conmon[31446]: 2021-05-26T06:21:48.466+
> 7effc15ff700 0 log_channel(cephadm) log [INF] : Applying service
> osd.ssd-hdd-mix on host pacific2...
> Mai 26 08:21:49 pacific1 conmon[31009]: cephadm
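As a side note on following that output, the same cephadm log channel can also be watched live or dumped after the fact, which may be easier than grepping the journal:

ceph -W cephadm          # stream cephadm log messages as they happen
ceph log last cephadm    # dump recent cephadm log entries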
lvm volumes on them, just the OSD daemons are not created or started.
So maybe I'm invoking ceph-volume incorrectly.
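On the "invoking ceph-volume incorrectly" point: in a cephadm deployment ceph-volume normally has to run inside the container, e.g. via the wrapper below, which is often the quickest way to see what it makes of the disks (illustrative, nothing cluster-specific):

cephadm ceph-volume -- inventory     # what ceph-volume thinks of each device and why
cephadm ceph-volume -- lvm list      # OSDs / LVs that ceph-volume already knows about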
On Tue, 25 May 2021 at 06:57, Peter Childs wrote:
>
>
> On Mon, 24 May 2021, 21:08 Marc, wrote:
>
>> >
>> > I'm attempting to use cephadm
On Mon, 24 May 2021, 21:08 Marc, wrote:
> >
> > I'm attempting to use cephadm and Pacific, currently on debian buster,
> > mostly because centos7 ain't supported any more and centos8 ain't
> > supported
> > by some of my hardware.
>
> Who says centos7 is not supported any more? Afaik centos7/el7 i
I'm attempting to get ceph up and running, and currently feel like I'm
going around in circles.
I'm attempting to use cephadm and Pacific, currently on debian buster,
mostly because centos7 ain't supported any more and centos8 ain't supported
by some of my hardware.
Anyway I have a few nodes w
kit so it's not bad kit, it's
just not new either.
Peter.
On Fri, 30 Apr 2021, 23:57 Mark Lehrer, wrote:
> I've had good luck with the Ubuntu LTS releases - no need to add extra
> repos. 20.04 uses Octopus.
>
> On Fri, Apr 30, 2021 at 1:14 PM Peter Childs wrote:
>
I'm trying to set up a new ceph cluster, and I've drawn a bit of a blank.
I started off with centos7 and cephadm. It worked fine to a point, except I
had to upgrade podman, but it mostly worked with Octopus.
Since this is a fresh cluster and hence no data at risk, I decided to jump
straight into Pacifi
:
> Can you share the output of 'ceph log last cephadm'? I'm wondering if
> you are hitting https://tracker.ceph.com/issues/50114
>
> Thanks!
> s
>
> On Mon, Apr 5, 2021 at 4:00 AM Peter Childs wrote:
> >
> > I am attempting to upgrade a Ceph Upgrade c
I am attempting to upgrade a Ceph cluster that was deployed with
Octopus 15.2.8 and upgraded to 15.2.10 successfully. I'm now attempting to
upgrade to 16.2.0 Pacific, and it is not going very well.
I am using cephadm. It looks to have upgraded the managers and stopped,
and not moved on to
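A hedged note on chasing a stalled cephadm upgrade like this one; these are generic status commands, not specific to this cluster:

ceph orch upgrade status    # target image/version and whether the upgrade is still in progress
ceph -s                     # overall status, including the upgrade progress line
ceph orch ps                # per-daemon versions, to see which daemons have and have not moved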
What behavior do you see? What do the logs show? Hopefully that will help
> pinpoint the root cause of your problems.
>
> On Sun, Feb 28, 2021 at 4:21 AM Peter Childs wrote:
> >
> > Currently I'm using the default podman that comes with CentOS7 (1.6.4),
> > which I fear i
ems
> can be identified so it can be addressed directly.
>
> On Sat, Feb 27, 2021 at 11:34 AM Peter Childs wrote:
> >
> > I'm new to ceph, and I've been trying to set up a new cluster with 16
> > computers with 30 disks each and 6 SSDs (plus boot disks), 256G
is going very, very slowly. I'm currently using
podman, if that helps; I'm not sure if docker would be better. (I've mainly
used singularity when I've handled containers before.)
Thanks in advance
Peter Childs