I tried it with radosgw, and it reports very nice output from valgrind,
but still nothing from the mon.
desc: (none)
cmd: /usr/bin/ceph-mon -i 0 --pid-file /var/run/ceph/mon.0.pid -c
/etc/ceph/ceph.conf -f
time_unit: i
#---
snapshot=0
#---
time=0
mem_heap_B=0
mem_heap_extra_B=0
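(For context: the fragment above is the header of a massif output file with an
all-zero first snapshot. It would come from an invocation along these lines;
the exact valgrind flags are an assumption, not taken from the thread:

  valgrind --tool=massif --massif-out-file=massif.out.mon \
      /usr/bin/ceph-mon -i 0 -c /etc/ceph/ceph.conf -f

Massif normally writes its full snapshot data as the process runs and exits,
so a mon that is killed rather than shut down cleanly may leave a file with
little beyond this header.)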
Hi Dieter,
Simply run a ceph -w and wait for the output.
Cheers.
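(I.e., in one terminal:

  ceph -w

and in another, kick off the benchmark:

  ceph osd tell 0 bench

The bench result for each osd then shows up in the -w stream, as in the
osd.0 [INF] line quoted below.)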
On Sat, Sep 8, 2012 at 8:16 PM, Dieter Kasper d.kas...@kabelmail.de wrote:
Hi Sébastien,
when running 'ceph osd tell $i bench'
who/where will I see the results:
osd.0 [INF] bench: wrote 1024 MB in blocks of 4096 KB in
*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS. The test setup is wholly unrealistic, and these
numbers SHOULD NOT be used as an indication of the performance of OSDs,
messaging, RADOS, or ceph in general.
Executive summary: rados bench has some internal
On 09/10/2012 03:15 PM, Mike Ryan wrote:
*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS. The test setup is wholly unrealistic, and these
numbers SHOULD NOT be used as an indication of the performance of OSDs,
messaging, RADOS, or ceph in general.
Executive
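(For readers unfamiliar with the tool: rados bench is the built-in
object-level benchmark, invoked generically as something like

  rados -p <pool> bench <seconds> write

a generic form only, not the exact setup used in the investigation above.)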
I sheepishly must ask -- is there any way to recover a deleted RBD?
The RBD was created using the Linux 3.2 kernel client, running 0.48.1.
Removed with rbd rm -p rbd name.
I expect the answer to be no, but wanted to check for sure!
Thanks,
Travis
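(For reference, the general form of that command is rbd rm -p <pool> <image>;
in the line above the pool is rbd and the image is name.)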
On 09/10/2012 01:55 PM, Travis Rhoden wrote:
I sheepishly must ask -- is there any way to recover a deleted RBD?
The RBD was created using the Linux 3.2 kernel client, running 0.48.1.
Removed with rbd rm -p rbd name.
I expect the answer to be no, but wanted to check for sure!
Thanks,
Greetings,
Has anyone seen this or got ideas on how to fix it?
mdsmap e18399: 3/3/3 up {0=b=up:resolve,1=a=up:resolve(laggy or
crashed),2=a=up:resolve(laggy or crashed)}
Notice that the 2nd and 3rd mds have the same letter (a). I'm not sure
how that happened; I'm guessing a typo in my
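(A hypothetical sketch of the kind of ceph.conf typo that could produce two
daemons with the same name; the section names and hosts here are made up, not
taken from the poster's config:

  [mds.a]
      host = node1
  [mds.b]
      host = node2
  [mds.a]          # copy-paste slip: presumably meant [mds.c]
      host = node3

Two sections naming mds.a would start both daemons as a, matching the
mdsmap above.)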
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 09/07/2012 07:50 AM, Alex Elder wrote:
Move the calls to get the header semaphore out of
rbd_header_set_snap() and into its caller.
Signed-off-by: Alex Elder el...@inktank.com
---
drivers/block/rbd.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
Looking closer, I don't think we need to protect this section at all.
The device isn't initialized yet, so nothing else can access the
rbd_dev->header. It'd be good to have a comment to that effect.
Josh
On 09/07/2012 07:50 AM, Alex Elder wrote:
Expand the region of code in rbd_add() protected
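(A comment along the lines Josh suggests might read, hypothetically:

  /* No need to take rbd_dev->header_rwsem here: the device
   * hasn't been registered yet, so nothing else can reach
   * rbd_dev->header.
   */

header_rwsem is the semaphore this patch series deals with; the wording is a
sketch, not taken from the eventual patch.)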
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 09/07/2012 07:50 AM, Alex Elder wrote:
When rbd_bus_add_dev() is called (one spot--in rbd_add()), the rbd
image header has not even been read yet. This means that the list
of snapshots will be empty at the time of the call. As a result,
On 09/07/2012 07:50 AM, Alex Elder wrote:
An rbd_dev structure maintains a list of current snapshots that have
already been fully initialized. The entries on the list have type
struct rbd_snap, and each entry contains a copy of information
that's found in the rbd_dev's snapshot context and
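(For orientation, the list entries under discussion look roughly like this; a
paraphrase from memory of that era's drivers/block/rbd.c, not an exact copy:

  struct rbd_snap {
      struct device dev;
      const char *name;       /* snapshot name */
      u64 size;               /* image size at this snapshot */
      struct list_head node;  /* entry in the rbd_dev's snap list */
      u64 id;                 /* snapshot id */
  };

)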
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 09/07/2012 08:44 AM, Alex Elder wrote:
Move the assignment of the header name for an rbd image a bit later,
outside rbd_add_parse_args() and into its caller.
Signed-off-by: Alex Elder el...@inktank.com
---
drivers/block/rbd.c | 22
From: Yan, Zheng zheng.z@intel.com
We need to set truncate_seq when redirecting the newop to CEPH_OSD_OP_WRITE;
otherwise the code that handles CEPH_OSD_OP_WRITE may quietly drop the data.
Signed-off-by: Yan, Zheng zheng.z@intel.com
---
src/osd/ReplicatedPG.cc | 1 +
1 file changed, 1 insertion(+)
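(The shape of the fix, as a hedged paraphrase rather than the literal hunk:
in ReplicatedPG::do_osd_ops(), where the incoming op is rewritten into the
newop as a CEPH_OSD_OP_WRITE, carry the object's truncate_seq across, e.g.

  newop.op.extent.truncate_seq = oi.truncate_seq;

so the write path's truncate handling doesn't treat the data as superseded
and silently discard it.)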
Hi,
I have a simple question.
For distributing workload among the OSDs, does Ceph do any online modeling
of the OSDs, i.e. collect online IO latency and try to direct
more of the workload to lower-latency OSDs? Or is placement based only on
capacity/utilization balance?
Thanks,
Sheng
--
Sheng Qiu
Texas A&M
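(Background that may help frame the question: data placement in Ceph is
computed by CRUSH from the static weights in the crush map, i.e.
capacity-style weights, not from observed latency. A toy C++ illustration of
weight-proportional selection follows; it is deliberately simplified and is
not CRUSH's actual algorithm:

  // Toy weight-proportional pick: deterministic for a given key,
  // with OSDs chosen in proportion to their weights.
  #include <cstdint>
  #include <vector>

  int pick_osd(uint64_t key, const std::vector<double>& weights) {
      double total = 0.0;
      for (double w : weights)
          total += w;
      // Hash the key into [0, 1) and scale by the total weight.
      uint64_t h = key * 2654435761ULL;  // Knuth-style multiplicative hash
      double x = (double)(h % 1000003) / 1000003.0 * total;
      for (size_t i = 0; i < weights.size(); ++i) {
          if (x < weights[i])
              return (int)i;
          x -= weights[i];
      }
      return (int)weights.size() - 1;
  }

Reweighting an OSD (e.g. with ceph osd crush reweight) changes its share of
the placements; nothing in the placement math looks at IO latency.)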
On Mon, 10 Sep 2012, sheng qiu wrote:
Hi,
I have a simple question.
For distributing workload among the OSDs, does Ceph do any online modeling
of the OSDs, i.e. collect online IO latency and try to direct
more of the workload to lower-latency OSDs? Or is placement based only on
capacity/utilization balance?