On 07/19/2013 03:47 PM, Marcus Sorensen wrote:
Does RBD not honor barriers and do proper sync flushes? Or does this
have to do with RBD caching? Just wondering why online snapshots
aren't safe.
They're safe at the filesystem level, but I think Wido's after
more application level consistency. If
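For context on the caching angle raised here: librbd's client-side cache is configured from the client's ceph.conf, and a minimal sketch of the relevant knobs might look like the following (section placement and availability of these options in this particular Ceph version are assumptions, not something confirmed in the thread):

[client]
rbd cache = true
rbd cache writethrough until flush = true

With the second option set, the cache stays in writethrough mode until the guest issues its first flush, so a guest that never sends flushes is not silently exposed to writeback semantics.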
Qcow2 can keep snapshots internally, but qemu is also capable of doing
external dumps for other backing stores. I was thinking about this,
and it seems
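To illustrate the RBD-level alternative being discussed, here is a minimal sketch using the python-rbd bindings; the pool name 'rbd', image name 'vm-disk' and snapshot name 'snap1' are made-up examples, and a snapshot taken this way is only crash-consistent unless the guest is quiesced separately:

# Sketch: take a snapshot at the RBD layer rather than as a qcow2-internal
# or qemu external snapshot. All names below are illustrative.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')        # open the pool holding the image
    try:
        image = rbd.Image(ioctx, 'vm-disk')
        try:
            image.create_snap('snap1')       # point-in-time snapshot
            for snap in image.list_snaps():  # verify it shows up
                print(snap['name'])
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()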
On Fri, 19 Jul 2013, Josh Durgin wrote:
> On 07/18/2013 08:21 AM, Wido den Hollander wrote:
> > Hi,
> >
> > I'm working on the RBD integration for CloudStack 4.2 and now I got to
> > the point snapshotting.
> >
> > The "problem" is that CloudStack uses libvirt for snapshotting
> > Instances, but
On 07/18/2013 08:21 AM, Wido den Hollander wrote:
Hi,
I'm working on the RBD integration for CloudStack 4.2 and now I got to
the point snapshotting.
The "problem" is that CloudStack uses libvirt for snapshotting
Instances, but Qemu/libvirt also tries to store the memory contents of
the domain t
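For reference, a sketch of what a disk-only snapshot request looks like through the libvirt Python bindings, which sidesteps the memory dump; the domain name 'i-2-34-VM' is a made-up example, the QUIESCE flag needs a guest agent inside the VM, and whether qemu actually accepts this for RBD-backed disks is exactly what the thread goes on to discuss:

# Sketch: ask libvirt for a disk-only snapshot so the domain's memory
# contents are not saved. 'i-2-34-VM' is an illustrative domain name.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('i-2-34-VM')

snap_xml = """
<domainsnapshot>
  <name>cloudstack-snap</name>
</domainsnapshot>
"""

flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
         libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)  # QUIESCE is optional
dom.snapshotCreateXML(snap_xml, flags)
conn.close()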
Hi,
Sorry, as all my mons were down with the same error I was in a hurry and sadly made no copy of the mons, and worked around it with a hack ;-( but I posted a log to pastebin with debug mon = 20 (see last email).
Stefan
On 19.07.2013 17:14, Sage Weil wrote:
On Fri, 19 Jul 2013, Stefan Priebe - Profihost
Several new rgw issues from the recent merge...
--- Begin Message ---
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan
Defect(s) Reported-by: Coverity Scan
Showing 7 of 61 defects
** CID 1049252: Wrapper object use after free (WRAPPER_ESCAP
On Fri, 19 Jul 2013, Stefan Priebe - Profihost AG wrote:
> crash is this one:
Can you post a full log (debug mon = 20, debug paxos = 20, debug ms = 1),
and/or hit us up on irc?
>
> 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
> 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad9
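For reference, the debug levels being asked for would go roughly like this into ceph.conf on the monitor host (section placement is an assumption; they can also be injected at runtime without restarting the mon):

[mon]
debug mon = 20
debug paxos = 20
debug ms = 1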
Hi,
My config:
osd op threads = 8
osd disk threads = 4
osd recovery threads = 1
osd recovery max active = 1
osd recovery op priority = 10
osd client op priority = 100
osd max backfills = 1
I set it to maximize client operation priority and sl
On 07/19/2013 09:31 AM, Stefan Priebe wrote:
this fixes a failure like:
0> 2013-07-19 09:29:16.803918 7f7fb5f31780 -1 mon/OSDMonitor.cc: In
function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread 7f7fb5f31780
time 2013-07-19 09:29:16.803439
mon/OSDMonitor.cc: 132: FAILED assert(latest_bl.length() != 0)
this fixes a failure like:
0> 2013-07-19 09:29:16.803918 7f7fb5f31780 -1 mon/OSDMonitor.cc: In
function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread
7f7fb5f31780 time 2013-07-19 09:29:16.803439
mon/OSDMonitor.cc: 132: FAILED assert(latest_bl.length() != 0)
ceph version 0.61.5
On 19.07.2013 09:56, Dan van der Ster wrote:
> Was that 0.61.4 -> 0.61.5? Our upgrade of all mons and osds on SL6.4
> went without incident.
It was an upgrade from a git version in between 0.61.4 and 0.61.5, to 0.61.5.
Stefan
>
> --
> Dan van der Ster
> CERN IT-DSS
>
> On Friday, July 19, 2013 at 9:00 A
Complete Output / log with debug mon 20 here:
http://pastebin.com/raw.php?i=HzegqkFz
Stefan
On 19.07.2013 09:00, Stefan Priebe - Profihost AG wrote:
> crash is this one:
>
> 2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
> 0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b)
crash is this one:
2013-07-19 08:59:32.137646 7f484a872780 0 ceph version
0.61.5-17-g83f8b88 (83f8b88e5be41371cb77b39c0966e79cad92087b), process
ceph-mon, pid 22172
2013-07-19 08:59:32.173975 7f484a872780 -1 mon/OSDMonitor.cc: In
function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread