Hi Team
We have outgrown our current solution, and we plan to migrate to
multiple data centers.
Our setup is a mix of radosgw data and filesystem data. But we have many
legacy systems that require a filesystem at the moment, so we will probably
run it for some of our data for at least 3-5
> increase the pg_num doesn’t mean
> you can’t increase it manually. Have you tried that?
>
> Zitat von Daniel Persson :
>
> > Hi Team.
> >
> > We are currently in the process of changing the size of our cache pool.
> > Currently it's set to 32 PGs and distributed
Hi Team.
We are currently in the process of changing the size of our cache pool.
Currently it's set to 32 PGs and distributed weirdly on our OSDs. The
system has automatically tried to scale it up to 256 PGs without success,
and I read that cache pools are not automatically scaled, so we are in
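For anyone following the thread, the manual route the reply above hints at would look roughly like this (the pool name `hot-cache` is a placeholder; pick the target PG count for your own cluster):

```shell
# Raise the PG count on the cache pool by hand, since the autoscaler
# will not touch cache pools.
ceph osd pool set hot-cache pg_num 256
# Then raise the placement count to match so data actually rebalances.
ceph osd pool set hot-cache pgp_num 256
```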
Hi.
I'm currently trying out cephadm, and I got into a state that was a bit
unexpected for me.
I created three host machines in VirtualBox to try out cephadm. All drives
I made for OSD are 20GB in size for simplicity.
Bootstrapped one host with one drive and then added the other two. Then
they
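The bootstrap-then-add flow described above can be sketched with cephadm's documented commands (the IPs and hostnames here are placeholders for the VirtualBox machines):

```shell
# Bootstrap the first host (placeholder monitor IP)
cephadm bootstrap --mon-ip 192.168.56.10
# Add the other two hosts (placeholder hostnames/IPs)
ceph orch host add ceph-node2 192.168.56.11
ceph orch host add ceph-node3 192.168.56.12
# Let the orchestrator turn every unused drive into an OSD
ceph orch apply osd --all-available-devices
```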
Hi everyone.
I added a lot more storage to our cluster, and we now have a lot of slower
hard drives that could contain archival data. So I thought setting up a
cache tier for the fast drives should be a good idea.
We want to retain data for about a week in the cache pool as the data could
be
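A roughly one-week retention in the cache pool can be expressed with the cache-tier age options (pool name `hot-cache` is a placeholder; values are in seconds):

```shell
# Do not flush objects to the backing pool before they are ~7 days old
ceph osd pool set hot-cache cache_min_flush_age 604800
# Do not evict objects from the cache before they are ~7 days old
ceph osd pool set hot-cache cache_min_evict_age 604800
```

Note these are minimum ages, so objects may still live longer if the cache has free space.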
Hi Brian
I'm not sure if it applies to your application, and I'm not an expert.
However, we have been running our solution for about a year now, and we
have one of our MDS's in standby-replay.
Sadly, we have found a bug causing excessive memory usage, and when we needed
to replay, it took up to a
.
Best regards
Daniel
On Tue, Sep 28, 2021 at 12:00 AM Duncan Bellamy
wrote:
> Hi Daniel,
> Is it a Mac firewall or security access issue for the machine that was
> able to build?
>
> Regards,
>
> Duncan
>
> On 27 Sep 2021, at 22:43, Daniel Persson wrote:
>
> Duncan
>
>
> On 27 Sep 2021, at 22:24, Daniel Persson wrote:
>
> Hi Duncan.
>
> Great suggestion. Thank you for the link. I've run it on both the M1 BigSur
> Mac and it did not compile because i
>
> Regards,
> Duncan
>
> On 27 Sep 2021, at 17:46, Daniel Persson wrote:
>
> Hi
>
> I'm running some tests on a couple of Mac Mini machines. One of them is an
> M1 with BigSur, and the other one is a regular Intel Mac with Catalina.
>
> I've tried to build Cep
Hi
I'm running some tests on a couple of Mac Mini machines. One of them is an
M1 with BigSur, and the other one is a regular Intel Mac with Catalina.
I've tried to build Ceph Nautilus, Octopus, and Pacific multiple times with
different parameters and added many dependencies to the systems but
Hi Everyone.
I'm "new" to Ceph and have only been administering a cluster for about a
year now, so there is a lot more for me to learn about the subject.
The latest concept I've been looking into is Cache Tiering. I added it to
my home cluster without a problem and didn't see a degradation in
Hi David
It's hard to say what could be wrong with so little information, and I have
not seen any response yet, so I thought I could give you something that
might help you.
I've done a video about setting up the Ceph, Grafana, and Prometheus
triangle from scratch, the components responsible for
Hi Lokendra
There are a lot of ways to see the status of your cluster. The main way to
see it is to watch the dashboard alerts to see the most pressing matters to
handle. You can also follow the log that the manager will keep as
notifications. I usually use the "ceph health detail" to get the
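To make the suggestions above concrete, these are the standard commands for each of those views:

```shell
ceph status          # one-shot overview of the whole cluster
ceph health detail   # expands every active health warning
ceph -W cephadm      # follow the manager's cephadm log live
```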
Hi Everyone.
I thought I'd put in my 5 cents, as I believe this is an exciting topic. I'm
also a newbie, only running a cluster for about a year. I did some research
before that and also have created a couple of videos on the topic. One of
them was upgrading a cluster using cephadm.
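For reference, a cephadm-managed upgrade like the one mentioned is driven by the orchestrator (the version shown is just an example target):

```shell
# Start a rolling upgrade to a specific release
ceph orch upgrade start --ceph-version 16.2.6
# Watch progress; the cluster stays online throughout
ceph orch upgrade status
```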
-
> time Windows-specific patches applied. I'll try to escalate the issue
> and get the linked MSI bundle updated.)
>
> Thanks,
>
> Ilya
>
> >
> > -Original Message-
> > From: Richard Bade
> > Sent: Sunday, August 8, 2021 8:27 PM
> >
> > keys your
> > clients are using and that cephx is enabled correctly on your cluster.
> > Check your admin key in /etc/ceph as well, as that's what's being used
> > for ceph status.
> >
> > Regards,
> > Rich
> >
> > On Sun, 8 Aug 2021 at 05:0
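The key and cephx checks suggested in the quoted advice map to commands like these:

```shell
# List the keys the cluster knows about, to compare with the clients'
ceph auth ls
# Inspect the admin key that `ceph status` uses by default
ceph auth get client.admin
# Confirm cephx authentication is actually required
ceph config get mon auth_cluster_required
```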
Hi everyone.
It was suggested that I ask for help here instead of in the bug tracker, so
I will try it.
https://tracker.ceph.com/issues/51821?next_issue_id=51820_issue_id=51824
I have a problem that I can't figure out how to resolve.
AUTH_INSECURE_GLOBAL_ID_RECLAIM: client is
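For context on this warning, the Ceph health-check documentation describes the usual handling: identify the clients still using the insecure global_id reclaim, upgrade them, and only then tighten the monitor setting. A sketch:

```shell
# Lists the clients still doing insecure global_id reclaim
ceph health detail
# After ALL clients are upgraded, disallow the insecure reclaim
ceph config set mon auth_allow_insecure_global_id_reclaim false
# To only mute the warning meanwhile (not a fix):
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
```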