I might be wrong, but the object size in Ceph is 4 MB, so this might be the
case here as well. That said, I'm not familiar with the lower levels of how
the data is stored.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
Hello,
I've had a look at the instructions for a clean shutdown given at
https://ceph.io/planet/how-to-do-a-ceph-cluster-maintenance-shutdown/, but
I'm not clear about some of the steps for shutting down the
various Ceph components.
For my current 3-node cluster I have MONs, MDSs, MGRs, an
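For context, the preparation in articles like the one linked above boils
down to setting a few cluster-wide OSD flags before powering anything off.
A sketch (these are the standard flag names; run from any admin node, and
stop client I/O first):

```shell
# Prevent the cluster from marking OSDs out, recovering, or rebalancing
# while nodes are powered off:
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
# ...then shut down OSD nodes, then MDS/MGR nodes, then MON nodes last.
# On power-up, bring MONs up first and unset the flags in reverse, e.g.:
#   ceph osd unset pause
#   ceph osd unset nodown
#   ...and so on for the remaining flags.
```

The exact ordering of the daemon shutdowns is what the article covers; the flags are what keep the cluster from starting recovery mid-maintenance.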
Hello,
do you expect that to be better (faster) than having the OSD's journal
on a different disk (SSD, NVMe)?
rgds,
derjohn
On 01.03.21 05:37, Norman.Kern wrote:
> Hi, guys
>
> I am testing Ceph on bcache devices, and I found the performance is not as
> good as expected. Does anyone have any be
On Mon, Mar 1, 2021 at 3:07 PM Pawel S wrote:
>
> Hello Jason!
>
> On Mon, Mar 1, 2021, 19:48 Jason Dillaman wrote:
>
> > On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
> > >
> > > hello!
> > >
> > > I'm trying to understand how Bluestore cooperates with RBD image clones,
> > so
> > > my test is
Hello Jason!
On Mon, Mar 1, 2021, 19:48 Jason Dillaman wrote:
> On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
> >
> > hello!
> >
> > I'm trying to understand how Bluestore cooperates with RBD image clones,
> so
> > my test is simple
> >
> > 1. create an image (2G) and fill with data
> > 2. crea
I noticed you're trying to connect to an IPv4 address, but the service is
listening on an IPv6 address. Is that right? You should have it listening on
IPv4, right?
Also, did you check SELinux or firewalld? Maybe you need to allow port 5000.
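To check which address family the port is actually bound to, and to open it
in firewalld, something like the following should work (port 5000 is taken
from this thread; the firewalld zone and SELinux state depend on your setup):

```shell
# Show what is listening on port 5000 and whether it is an
# IPv4 (0.0.0.0) or IPv6 (::) socket:
ss -tlnp | grep ':5000'

# Open the port in firewalld, both runtime and permanent:
firewall-cmd --add-port=5000/tcp --permanent
firewall-cmd --reload

# If SELinux is enforcing, look for recent AVC denials:
ausearch -m avc -ts recent
```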
--
Salsa
Sent with ProtonMail Secure Email.
--- Original Message ---
On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
>
> hello!
>
> I'm trying to understand how Bluestore cooperates with RBD image clones, so
> my test is simple
>
> 1. create an image (2G) and fill with data
> 2. create a snapshot
> 3. protect it
> 4. create a clone of the image
> 5. write a small por
hello!
I'm trying to understand how Bluestore cooperates with RBD image clones, so
my test is simple:
1. create an image (2G) and fill it with data
2. create a snapshot
3. protect it
4. create a clone of the image
5. write a small portion of data (4K) to the clone
6. check how it changed and if just 4K a
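The steps above can be sketched with the rbd CLI like this (the pool and
image names are made up, and `rbd du` is one way to see how much of the
clone actually got allocated after the small write):

```shell
# 1. create a 2G image and fill it with data
rbd create rbd/parent --size 2048
rbd bench --io-type write --io-size 4M --io-total 2G rbd/parent

# 2.-3. snapshot the image and protect the snapshot
rbd snap create rbd/parent@snap1
rbd snap protect rbd/parent@snap1

# 4. clone the image from the protected snapshot
rbd clone rbd/parent@snap1 rbd/child

# 5. write a small portion of data (4K) to the clone
rbd bench --io-type write --io-size 4K --io-total 4K rbd/child

# 6. check how much space the clone now uses
rbd du rbd/child
```

Since a clone copies up whole objects from the parent on first write, the usage reported in step 6 is expected to reflect the object size rather than just the 4K written.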
Verify you have correct values for "trusted_ip_list" [1].
[1] https://github.com/ceph/ceph-iscsi/blob/master/iscsi-gateway.cfg_sample#L29
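For reference, that setting lives in /etc/ceph/iscsi-gateway.cfg on each
gateway node. A minimal fragment (the addresses here are placeholders for
your own gateway IPs):

```ini
# /etc/ceph/iscsi-gateway.cfg (fragment)
[config]
# Comma-separated IPs of every iSCSI gateway node; the rbd-target-api
# instances only accept management requests from these addresses.
trusted_ip_list = 192.168.0.10,192.168.0.11
```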
On Mon, Mar 1, 2021 at 9:45 AM Várkonyi János
wrote:
>
> Hi All,
>
> I'd like to install Ceph Nautilus on Ubuntu 18.04 LTS and give the storage
> to 2 win
Hi All,
I'd like to install Ceph Nautilus on Ubuntu 18.04 LTS and export the storage
to two Windows servers via iSCSI. I chose Nautilus because of the ceph-deploy
function; I don't want yet another VM just for cephadm. I can install Ceph and
it works properly, but I can't set up the iSCSI gateway.
The release notes do include it; however, it's under different PR and issue
numbers, since it was backported into Octopus:
mgr/ActivePyModules.cc: always release GIL before attempting to acquire
a lock (pr#38801, Cory Snyder) [https://github.com/ceph/ceph/pull/38801,
https://tracker.ceph.com/issues/48714]
Hi,
Thanks for confirming that this issue no longer appears in 15.2.9, David.
I had a look at https://ceph.io/releases/v15-2-9-Octopus-released/, but I
couldn't find any reference to either https://tracker.ceph.com/issues/39264
or https://github.com/ceph/ceph/pull/38677 confirming it.
On the other ha
On Mon, Mar 01, 2021 at 12:37:38PM +0800, Norman.Kern wrote:
> Hi, guys
>
> I am testing Ceph on bcache devices, and I found the performance is not
> as good as expected. Does anyone have any best practices for it? Thanks.
Hi,
sorry to say, but since use cases and workloads differ so much, there is
n