Hi,
I was using a standalone rbd image.
Cheers
Nick
On Monday, May 08, 2017 08:55:55 AM Jason Dillaman wrote:
> Thanks. One more question: was the image a clone or a stand-alone image?
>
> On Fri, May 5, 2017 at 2:42 AM, nick wrote:
> > Hi,
> > I used one of the fio example files and changed it
We write many millions of keys into RGW which will never be changed (until
they are deleted) -- it would be interesting if we could somehow indicate
this to RGW and enable reading those from the replicas as well.
-Ben
On Mon, May 8, 2017 at 10:18 AM, Jason Dillaman wrote:
> librbd can optionally
The latest stable versions are Jewel (LTS) and Kraken:
http://docs.ceph.com/docs/master/releases/
If you want to install a stable version, use the --stable=jewel flag with the
ceph-deploy install command and it will get the packages from
download.ceph.com. It is well tested on the latest CentOS and Ubuntu.
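For example, with hypothetical node names (node1/node2/node3 stand in for your
monitor and OSD hosts), the install would look something like:

  ceph-deploy install --stable=jewel node1 node2 node3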
On Mon,
WOW!!! Those are some awfully high backfilling settings you have there.
They are 100% the reason that your customers think your system is down.
You're telling each OSD to be able to have 20 backfill operations running
at the exact same time. I bet if you were watching iostat -x 1 on one of
your n
Our Ceph system performs very poorly, or not at all, while the
remapping procedure is underway. We are using replica 2 and the
following Ceph tweaks while it is in process:
ceph tell osd.* injectargs '--osd-recovery-max-active 20'
ceph tell osd.* injectargs '--osd-recovery-thr
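For contrast, the kind of far more conservative throttles the reply above is
arguing for would be injected the same way; a sketch with illustrative values,
not figures taken from this thread:

  ceph tell osd.* injectargs '--osd-max-backfills 1'
  ceph tell osd.* injectargs '--osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-op-priority 1'

Lower values leave more of each disk's throughput for client I/O while
backfilling, at the cost of a longer recovery.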
librbd can optionally read from replicas for snapshots and parent
images (i.e. known read-only data). This is controlled via the
following configuration options:
rbd_balance_snap_reads
rbd_localize_snap_reads
rbd_balance_parent_reads
rbd_localize_parent_reads
Direct users of the librados API can
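As a sketch of how those would typically be turned on (assuming the usual
boolean, client-side options), they go in the librbd client's ceph.conf, e.g.:

  [client]
  rbd balance snap reads = true
  rbd balance parent reads = true

As I understand it, the balance_* variants spread such reads randomly across
replicas, while the localize_* variants prefer the replica nearest to the
client's configured CRUSH location, so you would normally enable one style or
the other.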
Reads will always happen on the Primary OSD for the PG. Writes are
initially written to the primary OSD, but the write is not ack'd until the
write is completed on ALL secondaries. I make that distinction because if
you have size 3 and min_size 2, the write will not come back unless all 3
OSDs ha
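If you want to check which OSD is the primary that will serve reads for a
given object, something like this shows the acting set with the primary
flagged (pool and object names are placeholders):

  ceph osd map rbd some-object

The first OSD in the reported acting set is the primary.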
Hi,
I thought that clients also read from Ceph replicas. Sometimes I read on
the web that reads only happen from the primary PG, like how Ceph handles
writes... so what is true?
Greetz
Mehmet
Hello,
I don't use CentOS, but I've seen the same thing with Ubuntu, so I'm going to
assume it's the same problem. The repo URL should be download.ceph.com
instead of just ceph.com, which is what it uses when it adds it to the repo. My
usual solution is to correct the repo URL to point to download.ceph.com.
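On Ubuntu that usually just means editing the APT source that was written and
refreshing; a sketch (codename and release are only examples, adjust to yours):

  # /etc/apt/sources.list.d/ceph.list
  deb https://download.ceph.com/debian-jewel/ xenial main

  apt-get update

The CentOS equivalent would be pointing the baseurl lines in
/etc/yum.repos.d/ceph.repo at download.ceph.com instead of ceph.com.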
We also noticed a tremendous gain in latency by setting C-states via
processor.max_cstate=1 intel_idle.max_cstate=0. We went from being over 1ms
latency for 4KB writes to well under that (~0.7ms? going from memory). I will
note that we did not have as much of a problem on Intel v3 procs, but on v
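For anyone wanting to try the same thing: those are kernel command line
parameters, so on most distros you would add them roughly like this (file
locations and tooling vary by distro, this is just a sketch):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX="... processor.max_cstate=1 intel_idle.max_cstate=0"

  update-grub   # or: grub2-mkconfig -o /boot/grub2/grub.cfg

and then reboot for it to take effect.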
Hi Peter,
Am 08.05.2017 um 15:23 schrieb Peter Maloney:
> On 05/08/17 14:50, Stefan Priebe - Profihost AG wrote:
>> Hi,
>> Am 08.05.2017 um 14:40 schrieb Jason Dillaman:
>>> You are saying that you had v2 RBD images created against Hammer OSDs
>>> and client libraries where exclusive lock, objec
On 05/08/17 14:50, Stefan Priebe - Profihost AG wrote:
> Hi,
> Am 08.05.2017 um 14:40 schrieb Jason Dillaman:
>> You are saying that you had v2 RBD images created against Hammer OSDs
>> and client libraries where exclusive lock, object map, etc were never
>> enabled. You then upgraded the OSDs and
Thanks. One more question: was the image a clone or a stand-alone image?
On Fri, May 5, 2017 at 2:42 AM, nick wrote:
> Hi,
> I used one of the fio example files and changed it a bit:
>
> """
> # This job file tries to mimic the Intel IOMeter File Server Access Pattern
> [global]
> description=Emu
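For anyone who wants to run a comparable test directly against an RBD image, a
minimal fio job using the rbd ioengine looks roughly like this (pool, image and
client names are placeholders, and the block size mix is only an approximation
of the IOMeter-style pattern):

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  rw=randrw
  rwmixread=80
  bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10

  [job1]
  iodepth=32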
Hi,
Am 08.05.2017 um 14:40 schrieb Jason Dillaman:
> You are saying that you had v2 RBD images created against Hammer OSDs
> and client libraries where exclusive lock, object map, etc were never
> enabled. You then upgraded the OSDs and clients to Jewel and at some
> point enabled exclusive lock (a
You are saying that you had v2 RBD images created against Hammer OSDs
and client libraries where exclusive lock, object map, etc were never
enabled. You then upgraded the OSDs and clients to Jewel and at some
point enabled exclusive lock (and I'd assume object map) on these
images -- or were the ex
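(For reference, enabling those features after the fact on an existing image is
normally done with something along these lines, where the image spec is a
placeholder:

  rbd feature enable mypool/myimage exclusive-lock
  rbd feature enable mypool/myimage object-map fast-diff

object-map requires exclusive-lock to be enabled first, and fast-diff in turn
depends on object-map.)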
Hi,
sorry for the delay, but in the meantime we were able to find a
workaround. Inspired by this:
> Side note: Configuring the loopback IP on the physical interfaces is
> workable if you set it on **all** parallel links. Example with server1:
>
> “iface enp3s0f0 inet static
>
> address
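To spell the idea out with placeholder values (192.0.2.10 standing in for the
shared loopback/service IP, interface names are only examples), the same host
address is configured on every parallel link, roughly:

  iface enp3s0f0 inet static
      address 192.0.2.10
      netmask 255.255.255.255

  iface enp3s0f1 inet static
      address 192.0.2.10
      netmask 255.255.255.255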