Hi, everyone,
When I use rest-bench to test RGW with the command:
rest-bench --access-key=ak --secret=sk --bucket=bucket --seconds=360 -t 200 -b 524288 --no-cleanup write
I found that RGW's call to the method bucket_prepare_op is very slow, so I
looked at 'dump_historic_ops' and saw:
{
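For reference, a minimal sketch of how historic ops are usually pulled from an OSD's admin socket; osd.0 and the default socket path below are only examples:

  ceph daemon osd.0 dump_historic_ops
  # or, pointing at the admin socket explicitly:
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops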
If the RBD filesystem 'belongs' to you, you can do something like this:
http://www.wogri.com/linux/ceph-vm-backup/
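A typical snapshot-based way to do it, roughly in the spirit of that link; this is not the article's exact script, and the pool, image and snapshot names are only examples:

  # point-in-time snapshot of the image
  rbd snap create rbd/vm-disk@backup-2014-07-03
  # full export of that snapshot to a file
  rbd export rbd/vm-disk@backup-2014-07-03 /backup/vm-disk-2014-07-03.img
  # or an incremental: only the changes since the previous snapshot
  rbd export-diff --from-snap backup-2014-07-02 rbd/vm-disk@backup-2014-07-03 /backup/vm-disk-2014-07-03.diff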
On Jul 3, 2014, at 7:21 AM, Irek Fasikhov malm...@gmail.com wrote:
Hi, all.
Dear community, how do you make backups of Ceph RBD?
Thanks
--
Fasihov Irek (aka Kataklysm).
On 03.07.2014 07:21, Irek Fasikhov wrote:
Dear community, how do you make backups of Ceph RBD?
We @ gocept are currently in the process of developing backy, a new-style
backup tool that works directly with block level snapshots / diffs.
The tool is not quite finished, but it is making rapid
Hi guys,
Was wondering if 0.80.2 is coming out any time soon? I am planning an upgrade from
Emperor and was wondering if I should wait for 0.80.2 to come out if the
release date is pretty soon. Otherwise, I will go for 0.80.1.
Cheers
Andrei
Hi Wido, thanks for the answers - I have mons and OSDs on each host... server1:
mon + 2 OSDs, same for server2 and server3.
Any proposed upgrade path, or should I just start with one server and move along to
the others?
Thanks again.
Andrija
On 2 July 2014 16:34, Wido den Hollander w...@42on.com wrote:
On
On 07/03/2014 10:27 AM, Andrei Mikhailovsky wrote:
Hi guys,
Was wondering if 0.80.2 is coming out any time soon? I am planning an
upgrade from Emperor and was wondering if I should wait for 0.80.2 to
come out if the release date is pretty soon. Otherwise, I will go for
0.80.1.
Why bother?
On 07/03/2014 10:59 AM, Andrija Panic wrote:
Hi Wido, thanks for the answers - I have mons and OSDs on each host...
server1: mon + 2 OSDs, same for server2 and server3.
Any proposed upgrade path, or should I just start with one server and move along to
the others?
Upgrade the packages, but don't restart the
Hi folks,
I am following the test installation step by step, and checking some
configuration before trying to deploy a production cluster.
Now I have a healthy cluster with 3 mons + 4 OSDs.
I have created one pool spanning all OSDs, and two more: one for two of the
servers and the other for the remaining servers.
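A minimal sketch of that kind of setup; the pool names, PG counts and ruleset ids are only illustrative, and it assumes the per-server CRUSH rules are already defined in the CRUSH map:

  # pool spanning all OSDs (default ruleset)
  ceph osd pool create all-osds 128
  # pools pinned to specific servers via pre-defined CRUSH rulesets
  ceph osd pool create servers-one-two 64
  ceph osd pool set servers-one-two crush_ruleset 1
  ceph osd pool create server-three 64
  ceph osd pool set server-three crush_ruleset 2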
On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote:
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
Thanks a lot Wido, will do...
Andrija
On 3 July 2014 13:12, Wido den Hollander w...@42on.com wrote:
On 07/03/2014 10:59 AM, Andrija Panic wrote:
Hi Wido, thanks for the answers - I have mons and OSDs on each host...
server1: mon + 2 OSDs, same for server2 and server3.
Any proposed upgrade
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80?
Perhaps not a smart question, but I need to make sure I don't screw something up...
Thanks for your time,
Andrija
On 3 July 2014 14:27, Andrija Panic
Hi,
I have a ceph cluster setup (45 SATA disks, journals on the same disks) and get only
450 MB/s sequential writes (the maximum, playing around with threads in rados bench) with
a replica count of 2.
Which is about ~20 MB/s of writes per disk (what I see in atop as well);
theoretically, with replica 2 and journals on the same disks
On 07/03/2014 03:07 PM, Andrija Panic wrote:
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80?
Perhaps not a smart question, but I need to make sure I don't screw something up...
No, no need to. The
Thanks again a lot.
On 3 July 2014 15:20, Wido den Hollander w...@42on.com wrote:
On 07/03/2014 03:07 PM, Andrija Panic wrote:
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80?
Perhaps not
On 07/03/2014 03:11 PM, VELARTIS Philipp Dürhammer wrote:
Hi,
I have a ceph cluster setup (45 SATA disks, journals on the same disks) and get
only 450 MB/s sequential writes (the maximum, playing around with threads in rados
bench) with a replica count of 2.
How many threads?
Which is about ~20 MB/s of writes per disk
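As a back-of-the-envelope check (assuming the journals share the data disks, which doubles the raw writes):

  450 MB/s client writes x 2 replicas   = 900 MB/s of data hitting the OSDs
  900 MB/s / 45 disks                   = ~20 MB/s of data per disk
  x 2 again for the co-located journal  = ~40 MB/s of raw writes per disk

so each SATA disk is already doing roughly 40 MB/s of writes, plus the seeks between the journal and data areas.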
Hi, all,
What is the difference between a snapshot and a clone, in theory?
Thanks
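In short, assuming this is about RBD: a snapshot is a read-only, point-in-time view of an image, while a clone is a new, writable image layered copy-on-write on top of a protected snapshot. A minimal sketch (pool and image names are only examples):

  # read-only, point-in-time view of the image
  rbd snap create rbd/base-image@snap1
  # a clone can only be made from a protected snapshot
  rbd snap protect rbd/base-image@snap1
  # writable copy-on-write child of that snapshot
  rbd clone rbd/base-image@snap1 rbd/child-image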
Hi,
Ceph.conf:
osd journal size = 15360
rbd cache = true
rbd cache size = 2147483648
rbd cache max dirty = 1073741824
rbd cache max dirty age = 100
osd recovery max active = 1
osd max backfills = 1
osd mkfs options xfs = -f -i
When I look at the function OSD::OpWQ::_process, I find that the PG lock is held across the whole
function. So when I use multiple threads to write to the same object, must they
be serialized from the OSD handling thread through to the journal write thread?
baijia...@126.com
Hi,
I'm trying to make multipart upload work. I'm using ceph
0.80-702-g9bac31b (from Ceph's GitHub).
I've tried the code provided by Mark Kirkwood here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html
But unfortunately, it gives me the error:
I hit this issue this morning. It seems radosgw requires you to have a
pool named '' (an empty name) to work with multipart. I just created a pool with that name:
rados mkpool ''
Either that, or allow the pool to be created by radosgw...
On 3 July 2014 16:27, Patrycja Szabłowska szablowska.patry...@gmail.com
On 03/07/2014 13:49, Joao Eduardo Luis wrote:
On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote:
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i
/tmp/crush$i.d; done;
It looks like you're just putting in data faster than your cluster can
handle (in terms of IOPS).
The first big hole (queue_op_wq -> reached_pg) is it sitting in a queue
and waiting for processing. The second parallel blocks are
1) write_thread_in_journal_buffer -> journaled_completion_queued, and
that
The PG in question isn't being properly mapped to any OSDs. There's a
good chance that those trees (with 3 OSDs in 2 hosts) aren't going to
map well anyway, but the immediate problem should resolve itself if
you change the choose to chooseleaf in your rules.
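For illustration, the kind of rule change meant here, plus one way to push it back in; the bucket and file names are only examples:

  rule replicated_ruleset {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          # was: step choose firstn 0 type host   (stops at the host buckets)
          step chooseleaf firstn 0 type host      # descends to an OSD under each host
          step emit
  }

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit the rule, then recompile and inject it:
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new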
-Greg
Software Engineer #42 @
On Wed, Jul 2, 2014 at 3:06 PM, Marc m...@shoowin.de wrote:
Hi,
I was wondering, having a cache pool in front of an RBD pool is all fine
and dandy, but imagine you want to pull backups of all your VMs (or one
of them, or multiple...). Going to the cache for all those reads isn't
only
On Thu, Jul 3, 2014 at 8:24 AM, baijia...@126.com baijia...@126.com wrote:
When I look at the function OSD::OpWQ::_process, I find that the PG lock is held across the
whole function. So when I use multiple threads to write to the same object, must they
be serialized from the OSD handling thread through to the journal write thread?
I find that the function OSD::OpWQ::_process takes the PG lock around the whole
function, so this means OSD threads can't handle ops that write to the same
object in parallel.
By adding logging to ReplicatedPG::op_commit, I find the PG lock sometimes takes a long
time, but I don't know where the PG gets locked.
I put the .rgw.buckets.index pool on SSD OSDs, so bucket index objects must be written to the SSDs,
and disk utilization is less than 50%, so I don't think the disks are the bottleneck.
baijia...@126.com
From: baijia...@126.com
Date: 2014-07-04 01:29
To: Gregory Farnum
CC: ceph-users
Subject: Re: Re: [ceph-users] RGW performance
Hi Gregory,
Thanks a lot, I am beginning to understand how Ceph works.
I added a couple of OSD servers and balanced the disks between them.
[ceph@cephadm ceph-cloud]$ sudo ceph osd tree
# id    weight  type name               up/down reweight
-7      16.2    root 4x1GbFCnlSAS
-9      5.4             host node02
7
On Thu, Jul 3, 2014 at 11:17 AM, Iban Cabrillo cabri...@ifca.unican.es wrote:
Hi Gregory,
Thanks a lot, I am beginning to understand how Ceph works.
I added a couple of OSD servers and balanced the disks between them.
[ceph@cephadm ceph-cloud]$ sudo ceph osd tree
# id    weight  type name
Do those logs have a higher debugging level than the default? If not,
never mind, as they will not have enough information. If they do, however,
we'd be interested in the portion around the moment you set the tunables.
Say, before the upgrade and a bit after you set the tunable. If you want to
be
Hi list —
I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single
monitor (this, it seems, was my mistake). The monitor node went down hard and
it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’
manually with ‘debug_mon 20’ and ‘debug_ms 20’ gave
On 07/04/2014 12:29 AM, Jason Harley wrote:
Hi list —
I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single
monitor (this, it seems, was my mistake). The monitor node went down hard and
it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’
manually
Hi Joao,
On Jul 3, 2014, at 7:57 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
We don't have a way to repair leveldb. Having multiple monitors usually helps
with such tricky situations.
I know this, but for this small dev cluster I wasn’t thinking about corruption
of my mon’s backing