Re: Internal Qemu snapshots with RBD and libvirt

2013-07-19 Thread Josh Durgin
On 07/19/2013 03:47 PM, Marcus Sorensen wrote: Does RBD not honor barriers and do proper sync flushes? Or does this have to do with RBD caching? Just wondering why online snapshots aren't safe. They're safe at the filesystem level, but I think Wido's after more application level consistency. If

Re: Internal Qemu snapshots with RBD and libvirt

2013-07-19 Thread Marcus Sorensen
Does RBD not honor barriers and do proper sync flushes? Or does this have to do with RBD caching? Just wondering why online snapshots aren't safe. Qcow2 can keep snapshots internally, but qemu is also capable of doing external dumps for other backing stores. I was thinking about this, and it seems

Re: Internal Qemu snapshots with RBD and libvirt

2013-07-19 Thread Sage Weil
On Fri, 19 Jul 2013, Josh Durgin wrote: > On 07/18/2013 08:21 AM, Wido den Hollander wrote: > > Hi, > > > > I'm working on the RBD integration for CloudStack 4.2 and now I got to > > the point snapshotting. > > > > The "problem" is that CloudStack uses libvirt for snapshotting > > Instances, but

Re: Internal Qemu snapshots with RBD and libvirt

2013-07-19 Thread Josh Durgin
On 07/18/2013 08:21 AM, Wido den Hollander wrote: Hi, I'm working on the RBD integration for CloudStack 4.2 and now I got to the point snapshotting. The "problem" is that CloudStack uses libvirt for snapshotting Instances, but Qemu/libvirt also tries to store the memory contents of the domain t
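The application-level consistency discussed in this thread can be approximated by quiescing the guest before taking the RBD snapshot. Below is a dry-run sketch, not commands from the thread: the pool, image, and domain names are placeholders, and `run` only echoes each command so the sequence can be inspected without a live cluster (the real freeze/thaw calls require qemu-guest-agent inside the guest):

```shell
#!/bin/sh
# Dry-run sketch: `run` echoes commands instead of executing them.
# Remove the echo to run against a real cluster.
run() { echo "$@"; }

POOL=rbd            # placeholder pool name
IMAGE=vm-disk       # placeholder RBD image backing the guest
DOMAIN=myvm         # placeholder libvirt domain name
SNAP=snap-$(date +%Y%m%d)

# 1) freeze guest filesystems so the snapshot is application-consistent
run virsh qemu-agent-command "$DOMAIN" '{"execute":"guest-fsfreeze-freeze"}'
# 2) take the snapshot on the Ceph side
run rbd snap create "$POOL/$IMAGE@$SNAP"
# 3) thaw the guest immediately afterwards
run virsh qemu-agent-command "$DOMAIN" '{"execute":"guest-fsfreeze-thaw"}'
```

Keeping the freeze window short matters: guest I/O stalls between the freeze and the thaw.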

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe
Hi, sorry, as all my mons were down with the same error - i was in a hurry and sadly made no copy of the mons, and worked around it with a hack ;-( but i posted a log to pastebin with debug mon 20. (see last email) Stefan On 19.07.2013 17:14, Sage Weil wrote: On Fri, 19 Jul 2013, Stefan Priebe - Profihost

Re: [PATCH] mon: use first_commited instead of latest_full map if latest_bl.length() == 0

2013-07-19 Thread Stefan Priebe
Hi, sorry, as all my mons were down with the same error - i was in a hurry and sadly made no copy of the mons, and worked around it with a hack ;-( but i posted a log to pastebin with debug mon 20. (see last email) Stefan Kind regards Stefan Priebe Bachelor of Science in Computer Science (BSCS)

New Defects reported by Coverity Scan for ceph (fwd)

2013-07-19 Thread Sage Weil
Several new rgw issues from the recent merge...--- Begin Message --- Hi, Please find the latest report on new defect(s) introduced to ceph found with Coverity Scan Defect(s) Reported-by: Coverity Scan Showing 7 of 61 defects ** CID 1049252: Wrapper object use after free (WRAPPER_ESCAP

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Sage Weil
On Fri, 19 Jul 2013, Stefan Priebe - Profihost AG wrote: > crash is this one: Can you post a full log (debug mon = 20, debug paxos = 20, debug ms = 1), and/or hit us up on irc? > > 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version > 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad9
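The debug levels Sage asks for, written out as a ceph.conf fragment (values copied from the message; the `[mon]` section placement is an assumption about where one would set them):

```
[mon]
debug mon = 20
debug paxos = 20
debug ms = 1
```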

optimal values for osd threads

2013-07-19 Thread Dominik Mostowiec
Hi, My config: osd op threads = 8 osd disk threads = 4 osd recovery threads = 1 osd recovery max active = 1 osd recovery op priority = 10 osd client op priority = 100 osd max backfills = 1 I set it to maximize client operation priority and sl
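Laid out as a ceph.conf fragment, the settings quoted above would look like this (values copied from the message; whether they are optimal depends entirely on the hardware and workload, which is the question being asked):

```
[osd]
osd op threads = 8
osd disk threads = 4
osd recovery threads = 1
osd recovery max active = 1
osd recovery op priority = 10
osd client op priority = 100
osd max backfills = 1
```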

Re: [PATCH] mon: use first_commited instead of latest_full map if latest_bl.length() == 0

2013-07-19 Thread Joao Eduardo Luis
On 07/19/2013 09:31 AM, Stefan Priebe wrote: this fixes a failure like: 0> 2013-07-19 09:29:16.803918 7f7fb5f31780 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread 7f7fb5f31780 time 2013-07-19 09:29:16.803439 mon/OSDMonitor.cc: 132: FAILED asser

[PATCH] mon: use first_commited instead of latest_full map if latest_bl.length() == 0

2013-07-19 Thread Stefan Priebe
this fixes a failure like: 0> 2013-07-19 09:29:16.803918 7f7fb5f31780 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread 7f7fb5f31780 time 2013-07-19 09:29:16.803439 mon/OSDMonitor.cc: 132: FAILED assert(latest_bl.length() != 0) ceph version 0.61.5
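The idea of the patch, as stated in the subject, can be sketched in isolation. This is a hedged stand-in, not the real OSDMonitor code: `FakeBufferlist` and `pick_start_epoch` are invented names used only to illustrate the fallback, i.e. when the cached latest_full blob is empty, start from first_committed instead of tripping the assert:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Stand-in for ceph::bufferlist: only length() matters for this sketch.
struct FakeBufferlist {
  std::string data;
  std::size_t length() const { return data.size(); }
};

// Hypothetical helper mirroring the patch's logic: if the blob holding the
// latest full map is empty, fall back to the first committed epoch rather
// than asserting latest_bl.length() != 0.
unsigned long pick_start_epoch(const FakeBufferlist &latest_bl,
                               unsigned long latest_full,
                               unsigned long first_committed) {
  if (latest_bl.length() == 0)
    return first_committed;  // missing blob: rebuild from the oldest map
  return latest_full;        // normal path: resume from the cached full map
}
```

The point of the fallback is availability: a monitor with a missing cached full map can still come up by replaying maps from first_committed, instead of crashing as in the logs above.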

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
On 19.07.2013 09:56, Dan van der Ster wrote: > Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4 > went without incident. It was from a git version in between 0.61.4 / 0.61.5 to 0.61.5. Stefan > > -- > Dan van der Ster > CERN IT-DSS > > On Friday, July 19, 2013 at 9:00 A

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
Complete output / log with debug mon 20 here: http://pastebin.com/raw.php?i=HzegqkFz Stefan On 19.07.2013 09:00, Stefan Priebe - Profihost AG wrote: > crash is this one: > > 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version > 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b)

Re: [ceph-users] v0.61.5 Cuttlefish update released

2013-07-19 Thread Stefan Priebe - Profihost AG
crash is this one: 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b), process ceph-mon, pid 22172 2013-07-19 08:59:32.173975 7f484a872780 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread