Hi,
This August, Debian testing became the new Debian stable with LTS support.
But I see that only a sid repo exists; there is no testing repo and no repo for the new stable, bullseye.
Maybe someone knows when there are plans to provide bullseye builds?
Best regards,
Arūnas
Hi Patrick,
Any ETA for the same?
On Tue, 24 Aug 2021 at 8:58 AM, Prayank Saxena wrote:
> Thanks Patrick!
>
> Much appreciated
>
> On Tue, 24 Aug 2021 at 5:37 AM, Patrick Donnelly
> wrote:
>
>> Hi Prayank,
>>
>> Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893
>>
>> --
>
Hi Daniel,
Thanks for the response!!
If we talk about dashboard alerts, these alerts are processed via
Alertmanager (using Prometheus). Please correct me if I'm wrong.
Now we have a setup where we may not install Alertmanager, so in this
case, is there a way to expose alert metrics without Alertmanager?
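In case it helps, a minimal sketch of reading the raw metrics straight from the
ceph-mgr Prometheus exporter, with no Alertmanager involved (the host name is a
placeholder, and 9283 is just the module's default port):

  # Enable the built-in Prometheus exporter on the mgr
  ceph mgr module enable prometheus

  # Scrape the metrics endpoint directly; health state is exported
  # as the ceph_health_status gauge ("mgr-host" is hypothetical)
  curl -s http://mgr-host:9283/metrics | grep '^ceph_health'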
Isn't Rocky Linux 8 supposed to be binary-compatible with RHEL 8?
Cheers, Massimo
On Tue, Aug 24, 2021 at 12:08 AM Kyriazis, George
wrote:
> Hello,
>
> Are there client packages available for Rocky Linux (specifically 8.4) for
> Pacific? If not, when can we expect them?
>
> I also looked at download.ceph.com and I couldn’t find anything relevant. I
> only saw rh7 and rh8 packages.
Hi Lokendra,
There are a lot of ways to see the status of your cluster. The main one is to
watch the dashboard alerts, which surface the most pressing matters to handle.
You can also follow the log of notifications that the manager keeps. I usually
use "ceph health detail" to get the info.
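For reference, the usual CLI entry points (all standard ceph commands):

  # One-shot overview of overall cluster state
  ceph status

  # Per-check breakdown of every active health warning or error
  ceph health detail

  # Follow the cluster log live, similar to the manager's notifications
  ceph -w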
Hello Everyone,
We have deployed Ceph with ceph-ansible (Pacific release).
Query:
Is it possible (and if yes, what is the way) to view/verify the alerts
(both health and system) directly, without AlertManager?
Or
Is the Ceph Dashboard the only way to see the alerts in the Ceph
cluster (health/system)?
P
Thanks Patrick!
Much appreciated
On Tue, 24 Aug 2021 at 5:37 AM, Patrick Donnelly
wrote:
> Hi Prayank,
>
> Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi Prayank,
Jan has a fix in progress here: https://github.com/ceph/ceph/pull/42893
--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
hello, ceph-users!
We have an old cephfs that is ten different kinds of broken, which we are
attempting to (slowly) pull files from. The most recent issue we've hit is that
the mds will start up, log hundreds of messages like below, then crash. This is
happening in a loop; we can never actuall
On Sun, Aug 22, 2021 at 2:25 AM David Prude wrote:
>
> Patrick,
>
> Thank you so much for writing back.
>
> > Did you set the new "subvolume" flag on your root directory? The
> > probable location for EPERM is here:
> >
> > https://github.com/ceph/ceph/blob/d4352939e387af20531f6bfbab2176dd91916067
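(An aside for anyone following along: if the fix being discussed involves marking
a directory as a subvolume, a minimal sketch would be setting its vxattr as below;
/mnt/cephfs is a hypothetical client mount point, and whether this is appropriate
for your root directory is exactly what the thread is working out.)

  # Mark a cephfs directory as a subvolume via its virtual xattr
  # (/mnt/cephfs is a placeholder for your cephfs mount)
  setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs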
Hello,
Are there client packages available for Rocky Linux (specifically 8.4) for
Pacific? If not, when can we expect them?
I also looked at download.ceph.com and I couldn’t find anything relevant. I
only saw rh7 and rh8 packages.
Thank you!
George
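Given the binary-compatibility point raised elsewhere in this thread, a minimal
sketch of reusing the existing el8 packages on Rocky 8.4 (the repo file below
follows the usual download.ceph.com layout; treat it as an assumption, not an
officially supported path):

  # Contents of /etc/yum.repos.d/ceph.repo -- point Rocky 8 at the el8 builds
  [ceph]
  name=Ceph packages for el8
  baseurl=https://download.ceph.com/rpm-pacific/el8/x86_64/
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc

  # Then install the client packages
  dnf install -y ceph-common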
On Thu, Aug 19, 2021 at 12:40 AM Marc wrote:
>
> >
> > https://docs.google.com/spreadsheets/d/1AXj9h0yDc2ztFWuptqcTrNU2Ui3wMyAn6QUft3CPdcc/edit?usp=sharing
> >
> >
> > The gist of it is that on the read path, crimson+cyanstore is
> > significantly more efficient than crimson+alienstore and an
Hi,
I have 11 large omap object warnings in my Octopus cluster, related to the datalog, like this:
/var/log/ceph/ceph.log-20210822.gz:2021-08-21T09:06:20.605200+0700 osd.11
(osd.11) 1876 : cluster [WRN] Large omap object found. Object:
22:b040fc05:::data_log.31:head PG: 22.a03f020d (22.d) Key count: 436895 Size
(bytes):
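For anyone hitting the same warning, a minimal sketch for double-checking the key
count on the reported object (the pool name is a placeholder; map pool id 22 from
the PG 22.d back to its name first):

  # Resolve pool id 22 to a pool name
  ceph osd pool ls detail | grep 'pool 22 '

  # Count omap keys on the offending datalog object
  # (replace default.rgw.log with the pool resolved above)
  rados -p default.rgw.log listomapkeys data_log.31 | wc -l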
Basically yes, but I would not say supercritical.
If it cannot deliver enough IOPS for Ceph, it will stall even slow
consumer HDDs; if it is fast enough, the HDD/CPU/network will be the
bottleneck, so there is not much to gain beyond that point.
This is more a warning to check before buying a
Hi Venky,
Thanks a lot for these explanations.
I had some trouble when upgrading to v16.2.5. I'm using Debian 10 with cephadm,
and the 16.2.5 containers generated a lot of dropped network packets (I
don't know why) on all my OSD hosts. I also encountered some hangs while
reading files in cephfs
Hi Dave,
so maybe another bug in the Hybrid Allocator...
Could you please dump the free extents for your "broken" OSD(s) by issuing
"ceph-bluestore-tool --path <osd-path> --command free-dump"? The OSD needs
to be offline.
Preferably collect these reports after you reproduce the issue with the
hybrid allocator once again.
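A minimal sketch of that invocation, assuming the conventional OSD data path
(/var/lib/ceph/osd/ceph-<id>; containerized deployments will differ):

  # Stop the OSD first -- the tool needs exclusive, offline access
  systemctl stop ceph-osd@11

  # Dump the bluestore allocator's free extents for offline OSD 11
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-11 --command free-dump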
Hi Eugen, thanks for the reply.
I've already tried what you wrote in your answer, but still no luck.
The NVMe disk still doesn't have the OSD. Please note I am using containers, not
standalone OSDs.
Any ideas?
Regards,
Eric
On Mon, Aug 23, 2021 at 5:36 PM Arnaud MARTEL
wrote:
>
> Hi all,
>
> I'm not sure I really understand how cephfs snapshot mirroring is supposed
> to work.
>
> I have 2 ceph clusters (pacific 16.2.4) and snapshot mirroring is set up for
> only one directory, /ec42/test, in our cephfs filesystem
Hi all,
I'm not sure I really understand how cephfs snapshot mirroring is supposed
to work.
I have 2 ceph clusters (pacific 16.2.4) and snapshot mirroring is set up for
only one directory, /ec42/test, in our cephfs filesystem (it's for test purposes
but we plan to use it with about 50-60
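For anyone comparing notes, a minimal sketch of the per-directory setup being
described, using the standard Pacific cephfs-mirroring commands (the filesystem
name "cephfs" and the peer spec are assumptions for illustration):

  # On the primary cluster: enable mirroring and add the directory
  ceph fs snapshot mirror enable cephfs
  ceph fs snapshot mirror add cephfs /ec42/test

  # Register the remote cluster as a peer (identity/cluster names
  # are placeholders; see the cephfs-mirror docs for peer bootstrap)
  ceph fs snapshot mirror peer_add cephfs client.mirror_remote@backup cephfs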
Hello everyone,
Still waiting for response.
Any kind of help is much appreciated.
Thanks
Prayank
On Wed, 18 Aug 2021 at 9:44 AM, Prayank Saxena wrote:
> Hello everyone,
>
> We have a ceph cluster with version Pacific v16.2.4
>
> We are trying to implement the ceph module snap-schedule from thi
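(In case the blocker is the basic invocation, here is a minimal sketch of the
snap_schedule module as documented for Pacific; the root path and the 1h
interval are arbitrary examples.)

  # Enable the snapshot-scheduling mgr module
  ceph mgr module enable snap_schedule

  # Snapshot the filesystem root every hour
  ceph fs snap-schedule add / 1h

  # Verify the schedule
  ceph fs snap-schedule list /
  ceph fs snap-schedule status /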
On 23.08.21 at 00:53, Kai Börnert wrote:
As far as I understand, the more important factor (for the SSDs) is whether they
have power loss protection (so they can use their on-device write cache) and how
many IOPS they have when doing direct writes with queue depth 1.
I just did a test for a hdd with bl
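A minimal sketch of the kind of queue-depth-1 direct-write test being discussed,
using fio (/dev/sdX and the runtime are placeholders; note this WRITES TO and
destroys data on the target device):

  # Sync 4k writes at queue depth 1 -- the pattern that dominates
  # ceph journal/WAL latency
  fio --name=qd1-sync-write --filename=/dev/sdX \
      --ioengine=libaio --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based --group_reporting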
On Mon, 23 Aug 2021 at 00:59, Kai Börnert wrote:
>
> As far as I understand, the more important factor (for the SSDs) is whether
> they have power loss protection (so they can use their on-device write cache)
> and how many IOPS they have when doing direct writes with queue depth 1
So what you're saying i
On Sat, 21 Aug 2021 at 22:34, Teoman Onay wrote:
>
> You seem to focus only on the controller bandwidth, while you should also
> consider disk RPMs. Most SATA drives run at 7200 rpm while SAS ones go from
> 10k to 15k rpm, which increases the number of IOPS.
>
> SATA 7.2k: ~80 IOPS
> SAS 10k: ~120 IOPS
> S