On Tue, May 21, 2024 at 08:54:26PM +, Eugen Block wrote:
> It’s usually no problem to shut down a cluster. Set at least the noout flag,
> the other flags like norebalance, nobackfill etc won’t hurt either. Then
> shut down the servers. I do that all the time with test clusters (they do
> have
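The flag handling described above can be sketched as follows. This is only a sketch: the functions print the ceph commands rather than run them (pipe to sh to execute for real), and the flag list is taken from the message.

```shell
# Sketch of the pre-shutdown flag handling described above. The functions
# only print the commands; pipe the output to sh to execute for real.
preshutdown_cmds() {
    for flag in noout norebalance nobackfill; do
        echo "ceph osd set $flag"     # keep the cluster from reacting to down OSDs
    done
}
postboot_cmds() {
    for flag in noout norebalance nobackfill; do
        echo "ceph osd unset $flag"   # clear the flags once all OSDs are back up
    done
}
preshutdown_cmds    # ...then shut the servers down; boot them again later...
postboot_cmds
```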
-1" on the CLI with "radosgw quota set". But interesting
to see this done in a single step when creating the user.
Matthias
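For reference, the max-buckets=-1 lockdown mentioned above might be applied like this (a sketch: the uid and display name are hypothetical placeholders, and the functions only print the commands):

```shell
# Sketch: forbid a user from creating any new buckets via the per-user
# bucket quota (-1, as mentioned above). uid/display-name are placeholders.
rgw_lockdown_cmds() {
    # existing user: forbid creating any new buckets
    echo "radosgw-admin user modify --uid=testuser --max-buckets=-1"
    # or done in a single step when creating the user
    echo "radosgw-admin user create --uid=testuser --display-name=Test --max-buckets=-1"
}
rgw_lockdown_cmds   # prints the commands; pipe to sh on an rgw admin node
```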
>
> Regards,
>
> Ondrej
>
> > On 6. 10. 2023, at 8:44, Matthias Ferdinand wrote:
> >
> > On Thu, Oct 05, 2023 at 09:22:29AM +0200,
]},
> > "Action": [ "s3:DeleteBucket", "s3:DeleteBucketPolicy",
> > "s3:PutBucketPolicy" ],
> > "Resource": [
> >"arn:aws:s3:::*"
> > ]
> >}]
> > }
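A complete statement along the lines of the fragment above might look like this. This is a sketch: only the Action and Resource lists are quoted in the message; the Version, Sid, Effect and Principal fields here are assumptions filled in to make the policy well-formed.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyBucketAndPolicyChanges",
    "Effect": "Deny",
    "Principal": { "AWS": "*" },
    "Action": [ "s3:DeleteBucket", "s3:DeleteBucketPolicy", "s3:PutBucketPolicy" ],
    "Resource": [ "arn:aws:s3:::*" ]
  }]
}
```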
> >
>
On Tue, Oct 03, 2023 at 06:10:17PM +0200, Matthias Ferdinand wrote:
> On Sun, Oct 01, 2023 at 12:00:58PM +0200, Peter Goron wrote:
> > Hi Matthias,
> >
> > One possible way to achieve your need is to set a quota on number of
> > buckets at user level (see
>
ucket owner, and also the
bucket policy can't be modified/deleted anymore.
This closes the loopholes I could come up with so far; there might still
be some left I am currently not aware of :-)
On Wed, Oct 04, 2023 at 06:20:09PM +0200, Matthias Ferdinand wrote:
> On Tue, Oct 03, 2023 at 06:10:17P
rol.
thanks a lot, that is rather an elegant solution.
Matthias
>
> Rgds,
> Peter
>
>
> Le dim. 1 oct. 2023, 10:51, Matthias Ferdinand a
> écrit :
>
> > Hi,
> >
> > I am still evaluating ceph rgw for specific use cases.
> >
> > My question is
Hi,
I am still evaluating ceph rgw for specific use cases.
My question is about keeping the realm of bucket names under control of
rgw admins.
Normal S3 users have the ability to create new buckets as they see fit.
This opens opportunities for creating excessive numbers of buckets, or
for
window of
incoherent behaviour among rgw daemons (one rgw applying the old policy
to requests, another rgw already applying the new policy), or will it
just be a very short window?
thanks
Matthias
>
> On Fri, Sep 22, 2023 at 5:53 PM Matthias Ferdinand
> wrote:
> >
> > On Tue, Sep 12, 20
On Tue, Sep 12, 2023 at 07:13:13PM +0200, Matthias Ferdinand wrote:
> On Mon, Sep 11, 2023 at 02:37:59PM -0400, Matt Benjamin wrote:
> > Yes, it's also strongly consistent. It's also last writer wins, though, so
> > two clients somehow permitted to contend for updating policy coul
On Thu, Sep 21, 2023 at 03:49:25PM -0500, Laura Flores wrote:
> Hi Ceph users and developers,
>
> Big thanks to Cory Snyder and Jonas Sterr for sharing your insights with an
> audience of 50+ users and developers!
>
> Cory shared some valuable troubleshooting tools and tricks that would be
>
onfirming this!
Matthias
>
> On Mon, Sep 11, 2023 at 2:21 PM Matthias Ferdinand
> wrote:
>
> > Hi,
> >
> > while I don't currently use rgw, I still am curious about consistency
> > guarantees.
> >
> > Usually, S3 has strong read-after-write con
Hi,
while I don't currently use rgw, I still am curious about consistency
guarantees.
Usually, S3 has strong read-after-write consistency guarantees (for
requests that do not overlap). According to
https://docs.ceph.com/en/latest/dev/radosgw/bucket_index/
in Ceph this is also true for
Hi,
> > Matthias suggests enabling the write cache, you suggest disabling it... or am
> > I cache-confused?! ;-)
there were some discussions about write cache settings last year, e.g.
https://www.spinics.net/lists/ceph-users/msg73263.html
https://www.spinics.net/lists/ceph-users/msg69489.html
Spinners are slow anyway, but on top of that SAS disks often default to
writecache=off. When used as a single disk, with no risk of RAID
write-holes, you can turn the write cache on. On SAS, I would assume the
firmware does not lie about writes reaching stable storage (i.e. it
honors flushes).
# turn on temporarily:
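The command itself is cut off above; one common way to toggle the SAS write cache is sdparm's WCE (Write Cache Enable) mode page bit. A sketch, only printing the commands; the device path is a placeholder:

```shell
# Sketch: SAS write cache via sdparm (WCE = Write Cache Enable).
# /dev/sdX is a placeholder; the function only prints the commands.
wce_cmds() {
    echo "sdparm --set WCE /dev/sdX"          # temporary: not saved in the drive
    echo "sdparm --set WCE --save /dev/sdX"   # persistent across power cycles
    echo "sdparm --get WCE /dev/sdX"          # check the current setting
}
wce_cmds    # run the printed commands individually on the target host
```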
On Thu, Mar 30, 2023 at 08:56:06PM +0400, Konstantin Shalygin wrote:
> Hi,
>
> > On 25 Mar 2023, at 23:15, Matthias Ferdinand wrote:
> >
> > from "ceph daemon osd.X perf dump"?
>
>
> No, from ceph-mgr prometheus exporter
> You can enable
lient->OSD command timing?
- are bluestore/filestore values about OSD->storage op timing?
Please bear with me :-) I am just trying to get a rough understanding of
what the numbers being collected and graphed actually mean and how they
relate to each other.
Regards
Matthias
> > O
Hi,
I would like to understand how the per-OSD data from "ceph osd perf"
(i.e. apply_latency, commit_latency) is generated. So far I couldn't
find documentation on this. "ceph osd perf" output is nice for a quick
glimpse, but is not very well suited for graphing. Output values are
from the most
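For graphing, the same numbers can be pulled in machine-readable form with the generic -f json formatter. A sketch, printing the commands only; the jq filter's field names are from memory and may differ by release:

```shell
# Sketch: machine-readable "ceph osd perf" output for graphing.
# The JSON field names in the jq filter are assumptions, not verified.
osd_perf_cmds() {
    echo "ceph osd perf -f json"
    echo "ceph osd perf -f json | jq '.osd_perf_infos[] | {id: .id, commit_ms: .perf_stats.commit_latency_ms, apply_ms: .perf_stats.apply_latency_ms}'"
}
osd_perf_cmds    # prints the commands; run them on a node with an admin keyring
```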
"# setting rotational=1 on ${r}"
echo "1" >${r}
fi
fi
fi
fi
#---
On Thu, Feb 02, 2023 at 12:18:55AM +0100, Matthias Ferdinand wrote:
> ceph version: 17.2.0 on Ubuntu 22.04
>
ceph version: 17.2.0 on Ubuntu 22.04
non-containerized ceph from Ubuntu repos
cluster started on luminous
I have been using bcache on filestore on rotating disks for many years
without problems. Now converting OSDs to bluestore, there are some
strange effects.
If I
nt threads are found, everything else says "no email threads
could be found for this month".
Could somebody please look into this?
Regards
Matthias Ferdinand
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
On Wed, Dec 07, 2022 at 11:13:49AM +0100, Boris Behrens wrote:
> Hi Sven,
> thanks for the input.
>
> So I did some testing and "maybe" optimization.
> The same disk type in two different hosts (one Ubuntu and one CentOS 7) shows
> VERY different iostat %util values:
I guess Centos7 has a rather
the cloudflare settings accordingly? Several
years ago, they would group Tor nodes under a "Tor" pseudo-country. If
that is still the case, it might suffice to tick that pseudo-country
off from a "dangerous countries" list or something like that.
Best regards
On Fri, Jul 22, 2022 at 04:54:23PM +0100, James Page wrote:
> > If I remove the version check (see below), dashboard appears to be working.
>
>
> https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1967139
>
> I just uploaded a fix for cheroot to resolve this issue - the stable
> release update
Hi,
trying to activate ceph dashboard on a 17.2.0 cluster (Ubuntu 22.04
using standard ubuntu repos), the dashboard module crashes because it
cannot understand the python3-cheroot version number '8.5.2+ds1':
root@mceph00:~# ceph crash info
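To see the version string that trips the parser, the package and module versions can be compared directly. A sketch (printing the commands only; the +ds1 suffix is Debian/Ubuntu repack metadata):

```shell
# Sketch: inspect the cheroot version string ('8.5.2+ds1') that the
# dashboard's version check cannot parse. Only prints the commands.
cheroot_cmds() {
    echo "dpkg-query -W -f='\${Version}\n' python3-cheroot"
    echo "python3 -c 'import cheroot; print(cheroot.__version__)'"
}
cheroot_cmds    # run the printed commands on the affected host
```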
Hi,
just a heads up for others using Ubuntu with both ethernet bonding and
image cloning when provisioning ceph servers: MAC address selection for
bond interfaces was changed to depend only on /etc/machine-id. Having
several machines share the same /etc/machine-id then wreaks havoc.
I
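The usual fix is to give each clone a fresh machine-id. A sketch, printing the commands only (the dbus symlink path is the Debian/Ubuntu convention):

```shell
# Sketch: regenerate /etc/machine-id on a cloned machine so bond MAC
# selection (and other systemd-derived IDs) differ between clones.
machineid_cmds() {
    echo "rm -f /etc/machine-id /var/lib/dbus/machine-id"
    echo "systemd-machine-id-setup"                        # writes a new /etc/machine-id
    echo "ln -sf /etc/machine-id /var/lib/dbus/machine-id"
}
machineid_cmds   # prints the commands; pipe to sh as root on the clone, then reboot
```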
> 2022-07-13T11:43:41.308+0200 7f71c0c86700 1 mgr handle_mgr_map
> respawning because set of enabled modules changed!
>
> Cheers, Dan
>
>
> On Sat, Jul 16, 2022 at 4:34 PM Matthias Ferdinand
> wrote:
> >
> > Hi,
> >
> > while updating a test c
"ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1)
quincy (stable)": 3
}
}
Not sure how problematic this is, but AFAIK it was claimed that ceph
package installs would not restart ceph services by themselves.
Regards
Matthias Ferdinand
On Tue, Jun 29, 2021 at 08:37:36AM +, Marc wrote:
>
> Can someone point me to some good doc's describing the dangers of using a
> large amount of disks in a raid5/raid6? (Understandable for less techy people)
Hi,
there are some slides at
On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> Hello List,
>
> all of a sudden I can not mount a specific rbd device anymore:
>
> root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> /etc/ceph/ceph.client.admin.keyring
> /dev/rbd0
>
> root@proxmox-backup:~# mount /dev/rbd0
On Thu, May 27, 2021 at 02:54:00PM -0500, Reed Dier wrote:
> Hoping someone may be able to help point out where my bottleneck(s) may be.
>
> I have an 80TB kRBD image on an EC8:2 pool, with an XFS filesystem on top of
> that.
> This was not an ideal scenario, rather it was a rescue mission to
On Tue, Apr 20, 2021 at 08:27:50AM +0200, huxia...@horebdata.cn wrote:
> Dear Matthias,
>
> Very glad to know that your setting with Bcache works well in production.
>
> How long have you been putting XFS on bcache on HDD in production? Which
> bcache version (I mean the kernel) do you use? or
On Sun, Apr 18, 2021 at 10:31:30PM +0200, huxia...@horebdata.cn wrote:
> Dear Cephers,
>
> Just curious about any one who has some experience on using Bcache on top of
> HDD OSD to accelerate IOPS performance?
>
> If any, how about the stability and the performance improvement, and for how
>
On Tue, Mar 02, 2021 at 05:47:29PM +0800, Norman.Kern wrote:
> Matthias,
>
> I agree with you about tuning. I ask this question because my OSDs have
> problems when
>
> cache_available_percent is less than 30: the SSDs are almost useless and all
> I/Os bypass to the HDDs with large latency.
On Mon, Mar 01, 2021 at 12:37:38PM +0800, Norman.Kern wrote:
> Hi, guys
>
> I am testing ceph on bcache devices, I found the performance is not
> good as expected. Does anyone have any best practices for it? Thanks.
Hi,
sorry to say, but since use cases and workloads differ so much, there is
com does not work anymore. Created a new account, logged
in and to me the subscription settings look ok.
Can you help me here? Maybe it is just the digests that do not work?
Please reply to me directly, as I am currently not receiving any list
messages.
Thank you
Matthias Ferdin