On Mon, 29 Apr 2024 at 09:06, Robert Sander
wrote:
> On 4/29/24 08:50, Alwin Antreich wrote:
>
> > Well, it says it in the article.
> >
> > The upcoming Squid release serves as a testament to how the Ceph
> > project continues to deliver innova
--
Alwin Antreich
Head of Training and Infrastructure
Want to meet: https://calendar.app.google/MuA2isCGnh8xBb657
croit GmbH, Freseniuss
Hi Tobias,
April 18, 2024 at 10:43 PM, "Tobias Langner" wrote:
> While trying to dig up a bit more information, I noticed that the mgr web UI
> was down, which is why we failed the active mgr to have one of the standbys
> take over, without thinking much...
>
> Lo and behold, this
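For reference, failing over the active mgr is a single command; a minimal
sketch (on Pacific, "ceph mgr fail" without an argument fails the active
daemon):

    # fail the active mgr so a standby takes over
    ceph mgr fail
    # confirm which mgr is active now
    ceph -s | grep -i mgr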
Hi Tobias,
April 18, 2024 at 8:08 PM, "Tobias Langner" wrote:
>
> We operate a tiny ceph cluster (v16.2.7) across three machines, each
> running two OSDs and one of each mds, mgr, and mon. The cluster serves
> one main erasure-coded (2+1) storage pool and a few other
I'd assume (w/o
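For context, an erasure-coded 2+1 pool like the one described is usually
created along these lines; a sketch only, the profile and pool names are made
up:

    # define a k=2/m=1 EC profile and create the pool with it
    ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
    ceph osd pool create main-storage erasure ec-2-1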
On March 26, 2024 5:02:16 PM GMT+01:00, "Szabo, Istvan (Agoda)"
wrote:
>Hi,
>
>I wonder what we are missing from the netplan configuration on Ubuntu that
>Ceph needs to tolerate properly.
>We are using this bond configuration on Ubuntu 20.04 with Octopus Ceph:
>
>bond1:
> macaddress:
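For comparison, a complete netplan bond stanza typically looks something like
this (a sketch only; interface names, MAC address, and IP are placeholders):

    bond1:
      interfaces: [enp65s0f0, enp65s0f1]
      macaddress: aa:bb:cc:dd:ee:ff
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
      addresses: [192.168.0.10/24]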
Hi,
March 24, 2024 at 8:19 AM, "duluxoz" wrote:
>
> Hi,
>
> Yeah, I've been testing various configurations since I sent my last
> email - all to no avail.
>
> So I'm back to the start with a brand new 4T image which is rbdmapped to
> /dev/rbd0.
>
> It's not formatted (yet) and so not
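For the record, getting from a mapped image to a usable filesystem usually
goes like this (a sketch; pool and image names are placeholders):

    # map the image, format it, mount it
    rbd map rbd/myimage        # appears as /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt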
Hi,
July 24, 2023 3:02 PM, "wodel youchi" wrote:
> Hi,
>
> Can I define new device classes in Ceph? I know that there are hdd, ssd and
> nvme, but can I define other classes?
Certainly. We often use dedicated device classes (e.g. nvme-meta) to separate
workloads; see the sketch below.
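A minimal sketch (the OSD id and rule name are made up):

    # move an OSD into a custom device class (clear the old class first)
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class nvme-meta osd.12
    # create a CRUSH rule that only targets that class
    ceph osd crush rule create-replicated meta-rule default host nvme-meta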
Cheers,
Alwin
PS: this time
Hi Istvan,
June 7, 2021 11:54 AM, "Szabo, Istvan (Agoda)" wrote:
> So the client is on 14.2.20 the cluster is on 14.2.21. Seems like the Debian
> buster repo is missing
> the 21 update?
Best ask the Proxmox devs about a 14.2.21 build. Or you could build it
yourself; there is everything in
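If you do want to roll your own packages, a rough sketch (assuming a Debian
buster build host with plenty of RAM, disk, and time; the source tree ships
its own debian/ packaging):

    git clone --branch v14.2.21 https://github.com/ceph/ceph.git
    cd ceph
    ./install-deps.sh            # installs the build dependencies
    dpkg-buildpackage -us -uc    # builds the .deb packages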
On Wed, Oct 14, 2020 at 02:09:22PM +0200, Andreas John wrote:
> Hello Alwin,
>
> do you know if it makes difference to disable "all green computing" in
> the BIOS vs. settings the governor to "performance" in the OS?
Well, for one, the governor will not be able to influence all BIOS
settings (e.g.
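Setting the governor from the OS side, for reference (assumes the cpupower
tool is installed, or that sysfs is used directly):

    # via cpupower
    cpupower frequency-set -g performance
    # or via sysfs, for all cores at once
    echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor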
On Tue, Oct 13, 2020 at 09:09:27PM +0200, Maged Mokhtar wrote:
>
> Very nice and useful document. One thing is not clear to me: the fio
> parameters in appendix 5:
> --numjobs=<1|4> --iodepth=<1|32>
> it is not clear if/when the iodepth was set to 32 - was it used in all
> tests with numjobs=4
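For context, the kind of fio invocation those parameters suggest; a guess at
the shape of it, not the paper's exact command line (the target device is a
placeholder):

    fio --name=randwrite-test --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --numjobs=4 --iodepth=32 \
        --runtime=60 --time_based --group_reporting \
        --filename=/dev/nvme0n1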
On Tue, Oct 13, 2020 at 11:19:33AM -0500, Mark Nelson wrote:
> Thanks for the link Alwin!
>
>
> On intel platforms disabling C/P state transitions can have a really big
> impact on IOPS (on RHEL for instance using the network-latency or
> latency-performance tuned profile). It would be very interesting
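Applying such a profile is straightforward where tuned is available; a
minimal sketch:

    tuned-adm list                      # show available profiles
    tuned-adm profile network-latency   # switch to a low-latency profile
    tuned-adm active                    # verify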
Hello fellow Ceph users,
we have released our new Ceph benchmark paper [0]. The platform and
hardware used are Proxmox VE 6.2 with Ceph Octopus on a new AMD Epyc Zen2 CPU
with U.2 SSDs (details in the paper).
The paper should illustrate the performance that is possible with a 3-node
cluster
Hi Mario,
On Mon, Feb 10, 2020 at 07:50:15PM +0100, Ml Ml wrote:
> Hello List,
>
> first of all: Yes - I made mistakes. Now I am trying to recover :-/
>
> I had a healthy 3-node cluster which I wanted to convert to a single one.
> My goal was to reinstall a fresh 3-node cluster and start with 2
On Fri, Aug 30, 2019 at 04:39:39PM +0200, Marco Gaiarin wrote:
>
> > But is the code that identifies (and changes permissions on) the journal
> > device PVE-specific, or generic Ceph? I suppose the latter...
>
> OK, trying to identify how OSDs get initialized. If I understood correctly:
>
> 0) systemd
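For reference, the ceph-disk-era activation chain can be inspected like this
(a sketch; paths can differ per distribution and release):

    # systemd units behind the OSDs
    systemctl list-dependencies ceph-osd.target
    # udev rules that trigger activation of tagged partitions
    cat /lib/udev/rules.d/95-ceph-osd.rules
    # what ceph-disk knows about the local disks
    ceph-disk list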
On Thu, Aug 29, 2019 at 05:02:11PM +0200, Marco Gaiarin wrote:
> Picking up from what was written in your message of 29/08/2019...
>
> > Another possibilty is to convert the MBR to GPT (sgdisk --mbrtogpt) and
> > give the partition its UID (also sgdisk). Then it could be linked by
> > its uuid.
> and,
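The conversion described above would look roughly like this (a sketch; the
disk and partition number are placeholders - verify against a backup first):

    # convert the MBR partition table to GPT in place
    sgdisk --mbrtogpt /dev/sdb
    # give the journal partition its own unique GUID
    sgdisk --partition-guid=2:$(uuidgen) /dev/sdb
    # afterwards it can be referenced stably via /dev/disk/by-partuuid/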
On Thu, Aug 29, 2019 at 03:01:22PM +0200, Alwin Antreich wrote:
> On Thu, Aug 29, 2019 at 02:42:42PM +0200, Marco Gaiarin wrote:
> > Hello! Alwin Antreich
> > wrote on that day...
> >
> > > > There's something i can do? Thanks.
> > > Did you go t
On Thu, Aug 29, 2019 at 02:42:42PM +0200, Marco Gaiarin wrote:
> Hello! Alwin Antreich
> wrote on that day...
>
> > > There's something i can do? Thanks.
> > Did you go through our upgrade guide(s)?
>
> Sure!
>
>
> > See the link
Hello Marco,
On Thu, Aug 29, 2019 at 12:55:56PM +0200, Marco Gaiarin wrote:
>
> I've just finished a double upgrade on my ceph (PVE-based) from hammer
> to jewel and from jewel to luminous.
>
> All went well, apart from the fact that the OSDs do not restart
> automatically because of permission troubles on the
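The classic post-jewel fix, for reference (a sketch; the device name is a
placeholder, and the journal chown does not persist across reboots without a
udev rule):

    # hand ownership of the OSD data and journal to the ceph user
    chown -R ceph:ceph /var/lib/ceph
    chown ceph:ceph /dev/sdb2    # journal partition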