Re: [ceph-users] Ceph RBD and Backup.

2014-07-03 Thread Wolfgang Hennerbichler
if the RBD filesystem ‘belongs’ to you, you can do something like this: http://www.wogri.com/linux/ceph-vm-backup/ On Jul 3, 2014, at 7:21 AM, Irek Fasikhov wrote: > > Hi, all. > > Dear community. How do you make backups of Ceph RBD? > > Thanks > > -- > Fasihov Irek (aka Kataklysm). > Best regards, Fa
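The approach behind that link boils down to quiescing the guest, snapshotting the RBD image, and exporting the snapshot. A minimal sketch under assumed names (the pool "rbd", the image "vm01-disk" and the guest host "vm01" are all placeholders):

  ssh vm01 'fsfreeze -f /'                                  # quiesce the guest filesystem for a consistent snapshot
  rbd snap create rbd/vm01-disk@backup-2014-07-03           # point-in-time snapshot of the image
  ssh vm01 'fsfreeze -u /'                                  # unfreeze as soon as the snapshot exists
  rbd export rbd/vm01-disk@backup-2014-07-03 /backup/vm01-disk-2014-07-03.img   # copy the snapshot off-cluster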

Re: [ceph-users] Ceph RBD and Backup.

2014-07-03 Thread Christian Kauhaus
On 03.07.2014 07:21, Irek Fasikhov wrote: > Dear community. How do you make backups of Ceph RBD? We @ gocept are currently in the process of developing "backy", a new-style backup tool that works directly with block-level snapshots / diffs. The tool is not quite finished, but it is making rapid pr
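For a diff-based scheme like the one described, RBD itself can already export only the blocks that changed between two snapshots; a rough sketch of that mechanism (image and snapshot names are made up, and this is not backy itself):

  rbd export-diff rbd/vm01-disk@day1 day1.diff                    # full contents up to the first snapshot
  rbd snap create rbd/vm01-disk@day2
  rbd export-diff --from-snap day1 rbd/vm01-disk@day2 day2.diff   # only the changes between day1 and day2
  rbd import-diff day2.diff rbd/vm01-restore                      # diffs can later be replayed into another image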

[ceph-users] release date for 0.80.2

2014-07-03 Thread Andrei Mikhailovsky
Hi guys, Was wondering if 0.80.2 is coming any time soon? I am planning an upgrade from Emperor and was wondering if I should wait for 0.80.2 to come out if the release date is pretty soon. Otherwise, I will go for 0.80.1. Cheers Andrei

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Andrija Panic
Hi Wido, thanks for the answers - I have mons and OSDs on each host... server1: mon + 2 OSDs, same for server2 and server3. Any proposed upgrade path, or should I just start with one server and move along to the others? Thanks again. Andrija On 2 July 2014 16:34, Wido den Hollander wrote: > On 07/02/2014 04:08

Re: [ceph-users] release date for 0.80.2

2014-07-03 Thread Wido den Hollander
On 07/03/2014 10:27 AM, Andrei Mikhailovsky wrote: Hi guys, Was wondering if 0.80.2 is coming any time soon? I am planning an upgrade from Emperor and was wondering if I should wait for 0.80.2 to come out if the release date is pretty soon. Otherwise, I will go for 0.80.1. Why bother? Upg

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Wido den Hollander
On 07/03/2014 10:59 AM, Andrija Panic wrote: Hi Wido, thanks for the answers - I have mons and OSDs on each host... server1: mon + 2 OSDs, same for server2 and server3. Any proposed upgrade path, or should I just start with one server and move along to the others? Upgrade the packages, but don't restart the dae
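For reference, a rough outline of the rolling upgrade discussed here (monitors first, then OSDs, one node at a time), assuming Ubuntu packages with upstart - adapt the commands to your distribution:

  ceph osd set noout                       # avoid rebalancing while daemons restart
  apt-get update && apt-get install ceph   # upgrade the packages; running daemons are not restarted by this
  restart ceph-mon id=server1              # restart the mon and wait for quorum (check with: ceph -s)
  restart ceph-osd id=0                    # then restart this node's OSDs one by one
  restart ceph-osd id=1
  ceph osd unset noout                     # once every node has been upgraded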

[ceph-users] Pools do not respond

2014-07-03 Thread Iban Cabrillo
Hi folks, I am following the test installation step by step, and checking some configuration before trying to deploy a production cluster. Now I have a healthy cluster with 3 mons + 4 OSDs. I have created a pool containing all osd.x and two more: one for two of the servers and the other for the other tw
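Tying a pool to a particular group of OSDs is done by pointing the pool at a CRUSH ruleset; an illustrative sketch with made-up pool name and ruleset id:

  ceph osd pool create fcsas-pool 128 128        # pool with 128 placement groups
  ceph osd pool set fcsas-pool crush_ruleset 1   # use the ruleset that only covers the wanted hosts/OSDs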

Re: [ceph-users] Some OSD and MDS crash

2014-07-03 Thread Joao Eduardo Luis
On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote: On 03/07/2014 00:55, Samuel Just wrote: Ah, ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush /tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i > /tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d ../ceph/src/osdm

Re: [ceph-users] Bypass Cache-Tiering for special reads (Backups)

2014-07-03 Thread Marc
On 03/07/2014 07:32, Kyle Bader wrote: >> I was wondering, having a cache pool in front of an RBD pool is all fine >> and dandy, but imagine you want to pull backups of all your VMs (or one >> of them, or multiple...). Going to the cache for all those reads isn't >> only pointless, it'll also poten

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Andrija Panic
Thanks a lot Wido, will do... Andrija On 3 July 2014 13:12, Wido den Hollander wrote: > On 07/03/2014 10:59 AM, Andrija Panic wrote: > >> Hi Wido, thanks for the answers - I have mons and OSDs on each host... >> server1: mon + 2 OSDs, same for server2 and server3. >> >> Any proposed upgrade path, o

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Andrija Panic
Wido, one final question: since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to recompile libvirt again now with ceph-devel 0.80? Perhaps not a smart question, but I need to make sure I don't screw something up... Thanks for your time, Andrija On 3 July 2014 14:27, Andrija Panic wrote:

[ceph-users] write performance per disk

2014-07-03 Thread VELARTIS Philipp Dürhammer
Hi, I have a ceph cluster setup (with 45 SATA disks, journals on the disks) and get only 450 MB/sec sequential writes (maximum, playing around with threads in rados bench) with a replica count of 2. Which is about ~20 MB of writes per disk (what I see in atop as well); theoretically, with replica 2 and having journals on disk, sho

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Wido den Hollander
On 07/03/2014 03:07 PM, Andrija Panic wrote: Wido, one final question: since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to recompile libvirt again now with ceph-devel 0.80? Perhaps not a smart question, but I need to make sure I don't screw something up... No, no need to. The librado
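One way to convince yourself of that is to check that qemu (which actually opens the RBD images) links dynamically against librbd/librados, so the upgraded libraries are picked up without rebuilding libvirt; the paths below are typical Ubuntu ones, not guaranteed:

  ldd /usr/bin/qemu-system-x86_64 | grep -E 'librbd|librados'   # dynamic linkage, no rebuild needed
  dpkg -l | grep -E 'librbd|librados'                           # confirm which library versions are installed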

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-03 Thread Andrija Panic
Thanks again a lot. On 3 July 2014 15:20, Wido den Hollander wrote: > On 07/03/2014 03:07 PM, Andrija Panic wrote: > >> Wido, >> one final question: >> since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to >> recompile libvirt again now with ceph-devel 0.80 ? >> >> Perhaps not a sma

Re: [ceph-users] write performance per disk

2014-07-03 Thread Wido den Hollander
On 07/03/2014 03:11 PM, VELARTIS Philipp Dürhammer wrote: Hi, I have a ceph cluster setup (with 45 SATA disks, journals on the disks) and get only 450 MB/sec sequential writes (maximum, playing around with threads in rados bench) with a replica count of 2. How many threads? Which is about ~20 MB of writes per disk (wha
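For comparison, sequential write throughput with rados bench usually scales with the number of client threads until the disks, journals or network saturate; a typical run (the pool name is just an example):

  rados bench -p testpool 60 write -t 16 --no-cleanup   # 60-second write test, 16 concurrent ops; try 16/32/64
  rados bench -p testpool 60 seq -t 16                  # sequential read of the objects written above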

[ceph-users] what is the difference between snapshot and clone in theory?

2014-07-03 Thread yalogr
Hi all, what is the difference between a snapshot and a clone in theory? Thanks
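In RBD terms, a snapshot is a read-only, point-in-time view of an image, while a clone is a new writable image created copy-on-write on top of a protected snapshot. The command sequence makes the relationship visible (image names are illustrative):

  rbd snap create rbd/base@snap1       # read-only point-in-time snapshot
  rbd snap protect rbd/base@snap1      # a snapshot must be protected before it can be cloned
  rbd clone rbd/base@snap1 rbd/child   # writable copy-on-write child that still depends on snap1
  rbd flatten rbd/child                # optional: copy all data so the child no longer depends on snap1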

Re: [ceph-users] write performance per disk

2014-07-03 Thread VELARTIS Philipp Dürhammer
Hi, Ceph.conf:
  osd journal size = 15360
  rbd cache = true
  rbd cache size = 2147483648
  rbd cache max dirty = 1073741824
  rbd cache max dirty age = 100
  osd recovery max active = 1
  osd max backfills = 1
  osd mkfs options xfs = "-f -i size=2

[ceph-users] why lock the whole osd handle thread

2014-07-03 Thread baijia...@126.com
When I look at the function "OSD::OpWQ::_process", I see that the PG lock is held for the whole function. So when I write to the same object from multiple threads, must the writes be serialized from the OSD handle thread to the journal write thread? baijia...@126.com

[ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-03 Thread Patrycja Szabłowska
Hi, I'm trying to make multipart upload work. I'm using ceph 0.80-702-g9bac31b (from ceph's github). I've tried the code provided by Mark Kirkwood here: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034940.html But unfortunately, it gives me the error: (multitest)pszab

Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-03 Thread Luis Periquito
I was hitting this issue this morning. It seems radosgw requires you to have a pool named '' to work with multipart. I just created a pool with that name: rados mkpool '' Either that, or allow the pool to be created by the radosgw... On 3 July 2014 16:27, Patrycja Szabłowska wrote: > Hi, > > I'm trying

Re: [ceph-users] Some OSD and MDS crash

2014-07-03 Thread Pierre BLONDEAU
On 03/07/2014 13:49, Joao Eduardo Luis wrote: On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote: On 03/07/2014 00:55, Samuel Just wrote: Ah, ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush /tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i > /tmp/crush$i.d; done; d

Re: [ceph-users] Performance is really bad when I run from vstart.sh

2014-07-03 Thread Zhe Zhang
That makes sense. Thank you! Zhe From: David Zafman [mailto:david.zaf...@inktank.com] Sent: Wednesday, July 02, 2014 9:46 PM To: Zhe Zhang Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Performance is really bad when I run from vstart.sh By default the vstart.sh setup would put all dat

Re: [ceph-users] RGW performance test , put 30 thousands objects to one bucket, average latency 3 seconds

2014-07-03 Thread Gregory Farnum
It looks like you're just putting in data faster than your cluster can handle (in terms of IOPS). The first big hole (queue_op_wq->reached_pg) is it sitting in a queue and waiting for processing. The second parallel blocks are 1) write_thread_in_journal_buffer->journaled_completion_queued, and that
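Those stages (queue_op_wq, reached_pg, journaled_completion_queued, ...) are op-tracker events, so the slowest recent ops can be inspected directly on the OSD's admin socket, for example (socket path may differ on your setup):

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops    # slowest recent ops with per-stage timestamps
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight   # ops currently being processed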

Re: [ceph-users] Pools do not respond

2014-07-03 Thread Gregory Farnum
The PG in question isn't being properly mapped to any OSDs. There's a good chance that those trees (with 3 OSDs in 2 hosts) aren't going to map well anyway, but the immediate problem should resolve itself if you change the "choose" to "chooseleaf" in your rules. -Greg Software Engineer #42 @ http:/
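For reference, the usual way to make that change is to pull the CRUSH map, edit the rule, and inject it back; the step shown is from a stock replicated rule, yours may have different names and ids:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # in crushmap.txt, change the rule step from
  #   step choose firstn 0 type host
  # to
  #   step chooseleaf firstn 0 type host    # picks a host and then descends to an OSD inside it
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new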

Re: [ceph-users] Bypass Cache-Tiering for special reads (Backups)

2014-07-03 Thread Gregory Farnum
On Wed, Jul 2, 2014 at 3:06 PM, Marc wrote: > Hi, > > I was wondering, having a cache pool in front of an RBD pool is all fine > and dandy, but imagine you want to pull backups of all your VMs (or one > of them, or multiple...). Going to the cache for all those reads isn't > only pointless, it'll

Re: [ceph-users] why lock the whole osd handle thread

2014-07-03 Thread Gregory Farnum
On Thu, Jul 3, 2014 at 8:24 AM, baijia...@126.com wrote: > When I look at the function "OSD::OpWQ::_process", I see that the PG lock is held for the > whole function. So when I write to the same object from multiple threads, must the writes be > serialized from the OSD handle thread to the journal write thread? It's serialized

Re: [ceph-users] RGW performance test , put 30 thousands objects to one bucket, average latency 3 seconds

2014-07-03 Thread baijia...@126.com
I find that the function "OSD::OpWQ::_process" holds the PG lock for the whole function, so this means the OSD threads can't handle ops that write to the same object in parallel. After adding logging to ReplicatedPG::op_commit, I find that taking the PG lock sometimes costs a long time, but I don't know where the PG is locked. Where l

Re: [ceph-users] RGW performance test , put 30 thousands objects to one bucket, average latency 3 seconds

2014-07-03 Thread baijia...@126.com
I put the .rgw.buckets.index pool on SSD OSDs, so the bucket index objects must be written to the SSDs, and disk utilization is less than 50%, so I don't think the disks are the bottleneck. baijia...@126.com From: baijia...@126.com Date: 2014-07-04 01:29 To: Gregory Farnum CC: ceph-users Subject: Re: Re: [ceph-users] RGW performance

Re: [ceph-users] Pools do not respond

2014-07-03 Thread Iban Cabrillo
Hi Gregory, Thanks a lot, I am beginning to understand how Ceph works. I added a couple of OSD servers and balanced the disks between them.
[ceph@cephadm ceph-cloud]$ sudo ceph osd tree
# id    weight  type name       up/down reweight
-7      16.2    root 4x1GbFCnlSAS
-9      5.4     host node02
7       2.

Re: [ceph-users] Pools do not respond

2014-07-03 Thread Gregory Farnum
On Thu, Jul 3, 2014 at 11:17 AM, Iban Cabrillo wrote: > Hi Gregory, > Thanks a lot, I am beginning to understand how Ceph works. > I added a couple of OSD servers and balanced the disks between them. > > > [ceph@cephadm ceph-cloud]$ sudo ceph osd tree > # id    weight  type name       up/down reweight

Re: [ceph-users] Some OSD and MDS crash

2014-07-03 Thread Joao Luis
Do those logs have a higher debugging level than the default? If not, never mind, as they will not have enough information. If they do, however, we'd be interested in the portion around the moment you set the tunables. Say, before the upgrade and a bit after you set the tunable. If you want to be finer

[ceph-users] mon: leveldb checksum mismatch

2014-07-03 Thread Jason Harley
Hi list — I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single monitor (this, it seems, was my mistake). The monitor node went down hard and it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’ manually with ‘debug_mon 20’ and ‘debug_ms 20’ gave the

Re: [ceph-users] mon: leveldb checksum mismatch

2014-07-03 Thread Joao Eduardo Luis
On 07/04/2014 12:29 AM, Jason Harley wrote: Hi list — I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single monitor (this, it seems, was my mistake). The monitor node went down hard and it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’ manually
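For anyone reading along: once the cluster is healthy again, growing the monitor cluster to three (so a single corrupted store no longer takes the cluster down) can be done with ceph-deploy; the hostnames below are hypothetical:

  ceph-deploy mon add mon2
  ceph-deploy mon add mon3
  ceph quorum_status --format json-pretty   # confirm all three monitors have formed a quorum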

Re: [ceph-users] mon: leveldb checksum mismatch

2014-07-03 Thread Jason Harley
Hi Joao, On Jul 3, 2014, at 7:57 PM, Joao Eduardo Luis wrote: > We don't have a way to repair leveldb. Having multiple monitors usually help > with such tricky situations. I know this, but for this small dev cluster I wasn’t thinking about corruption of my mon’s backing store. Silly me :)