Hi,
I'm looking for a way to retrieve the free space from an rbd cluster with the rbd
command.
Any hint ?
(something like ceph -w status, but without needing to parse the result)
Regards,
Alexandre
current/ is a btrfs subvolume.. 'btrfs sub delete current' will remove it.
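A minimal sketch of that cleanup, assuming the osd data directory from ceph.conf
(the path here is only an example, not taken from this thread):

  cd /var/lib/ceph/osd/ceph-0        # example osd data dir; substitute your own
  btrfs subvolume delete current     # long form of 'btrfs sub delete current'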
Ah, that worked, thanks. Unfortunately mkcephfs still fails with the same
error.
The warning in the previous email suggests you're running a fairly old
kernel.. there is probably something handled incorrectly during
There is a call in rbd_dev_image_id() to rbd_req_sync_exec()
to get the image id for an image. Despite the get_id class
method only returning 0 on success, I am getting back a positive
value (I think the number of bytes returned with the call).
That may or may not be how rbd_req_sync_exec() is
The rbd_device structure has an embedded rbd_options structure.
Such a structure is needed to work with the generic ceph argument
parsing code, but there's no need to keep it around once argument
parsing is done.
Use a local variable to hold the rbd options used in parsing in
rbd_get_client(),
On Mon, 15 Oct 2012, Alexandre DERUMIER wrote:
Hi,
I'm looking for a way to retrieve the free space from an rbd cluster with the rbd
command.
Any hint ?
(something like ceph -w status, but without needing to parse the result)
rados df
is the closest.
sage
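For example, a quick sketch (the summary labels can differ between releases,
so treat the grep pattern as an assumption):

  rados df                        # per-pool usage plus cluster-wide totals
  rados df | grep '^ *total'      # just the total used/avail/space summary lines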
Regards,
Alexandre
Hi,
Inspired by the performance test Mark did, I tried to put together my own.
I have four OSD processes on one node; each process has an Intel 710 SSD
for its journal and 4 SAS disks behind an LSI 9266-8i in RAID 0.
If I test the SSDs with fio they are quite fast and the w_wait time is
quite low.
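For comparison, a sketch of that kind of fio run (device, block size and queue
depth are assumptions, not the exact command used in this test, and it writes to
the raw device, so only point it at a disk you can wipe):

  fio --name=journal-test --filename=/dev/sdb --direct=1 --rw=write \
      --bs=4k --iodepth=16 --ioengine=libaio --runtime=60 --time_based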
Well, both of my MDSs seem to be down right now, and they continually segfault
(every time I try to start them) with the following:
ceph-mdsmon-a:~ # ceph-mds -n mds.b -c /etc/ceph/ceph.conf -f
starting mds.b at :/0
*** Caught signal (Segmentation fault) **
in thread 7fbe0d61d700
ceph version
Something in the MDS log is bad or is poking at a bug in the code. Can
you turn on MDS debugging and restart a daemon and put that log
somewhere accessible?
debug mds = 20
debug journaler = 20
debug ms = 1
-Greg
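One way to apply those settings (file locations and the restart command are
assumptions; adjust for your setup): add the three debug lines to the [mds]
section of /etc/ceph/ceph.conf, then restart the daemon and reproduce the
crash, e.g.:

  /etc/init.d/ceph restart mds.b                 # or: service ceph restart mds.b
  ceph-mds -n mds.b -c /etc/ceph/ceph.conf -f    # or run it in the foreground as above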
On Mon, Oct 15, 2012 at 10:02 AM, Nick Couchman nick.couch...@seakr.com wrote:
Well,
Anywhere in particular I should make it available? It's a little over a
million lines of debug in the file - I can put it on a pastebin, if that works,
or perhaps zip it up and throw it somewhere?
-Nick
On 2012/10/15 at 11:26, Gregory Farnum g...@inktank.com wrote:
Something in the MDS log
Yeah, zip it and post — somebody's going to have to download it and do
fun things. :)
-Greg
On Mon, Oct 15, 2012 at 10:43 AM, Nick Couchman nick.couch...@seakr.com wrote:
Anywhere in particular I should make it available? It's a little over a
million lines of debug in the file - I can put it
Hi. While working on the external journal stuff, for a while I thought
I needed more python code than I ended up needing. To support that
code, I put in the skeleton of import ceph.foo support. While I
ultimately didn't need it, I didn't want to throw away the results. If
you later need to have
Hi Alex,
1) When a replica goes down, the write won't complete until the
replica is detected as down. At that point, the write can complete
without the down replica. Shortly thereafter, if the down replica
does not come back, a new replica will replace it, bringing the
replication count to what
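A rough way to watch this happen on a test cluster, using standard ceph CLI
commands that are not from this message:

  ceph -w                          # stream state changes as the replica drops out and recovery starts
  ceph pg dump | grep degraded     # pgs currently short a replica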
Hi Martin,
I haven't tested the 9266-8i specifically, but it may behave similarly
to the 9265-8i. This is just a theory, but I get the impression that
the controller itself introduces some latency getting data to disk, and
that it may get worse as more data is pushed across the
Do you have a coredump for the crash? Can you reproduce the crash with:
debug filestore = 20
debug osd = 20
and post the logs?
As far as the incomplete pg goes, can you post the output of
ceph pg <pgid> query
where <pgid> is the pgid of the incomplete pg (e.g. 1.34)?
Thanks
-Sam
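For instance (1.34 is just the example pgid from above; substitute the real
incomplete pg, and the output file name is an assumption):

  ceph pg 1.34 query > pg-1.34-query.txt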
On Thu, Oct
Nothing like that exists at the moment; see
http://tracker.newdream.net/issues/3283 for the other side of it.
On 10/15/2012 12:52 AM, Alexandre DERUMIER wrote:
Hi,
I'm looking for a way to retrieve the free space from an rbd cluster with the rbd
command.
Any hint ?
(something like ceph -w
Hi Mark,
I think there are no differences between the 9266-8i and the 9265-8i,
except for the CacheVault and the angle of the SAS connectors.
In the last test, which I posted, the SSDs were connected to the
onboard SATA ports. Further tests showed that if I reduce the object size
(the -b
On Mon, 15 Oct 2012, Travis Rhoden wrote:
Martin,
btw.
Is there a nice way to format the output of ceph --admin-daemon
ceph-osd.0.asok perf_dump?
I use:
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump | python
-mjson.tool
There is also