OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
new patch?
Best regards,
Filippos
On 12/15/2012 09:49 AM, Yehuda Sadeh wrote:
Went through it briefly, looks fine, though I'd like to go over it
some more before picking this up. Note that LIBRADOS_VER_MINOR needs
to be
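For context, the version constants being discussed live in the public librados header. A rough sketch of how a consumer sees them (the macro names and `rados_version()` are the real librados C API; this standalone program is illustrative and needs `-lrados` to build):

```c
/* Sketch: checking the librados version at build time and at run time.
 * The bump discussed in this thread would change LIBRADOS_VER_MINOR
 * in include/rados/librados.h. */
#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    int major, minor, extra;
    /* run-time version of the library actually linked */
    rados_version(&major, &minor, &extra);
    printf("linked librados %d.%d.%d, built against %d.%d.%d\n",
           major, minor, extra,
           LIBRADOS_VER_MAJOR, LIBRADOS_VER_MINOR, LIBRADOS_VER_EXTRA);
    return 0;
}
```

Bumping the minor version lets applications detect at compile time whether the new AIO call is available.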
On 12/19/2012 09:03 AM, Roman Hlynovskiy wrote:
My first problem - I am getting spurious mon's deaths, which usually
looks like this:
--- begin dump of recent events ---
0 2012-12-19 10:35:58.912119 b41eab70 -1 *** Caught signal (Aborted) **
in thread b41eab70
ceph version 0.55.1
On 12/19/2012 10:58 AM, Roman Hlynovskiy wrote:
Hello Joao,
thanks for feedback. is this fix available on the svn? i can provide
heavy testing for it.
Yes, the fix is on github's (not svn ;) master branch.
All testing is most welcome!
Thanks.
-Joao
2012/12/19 Joao Eduardo Luis
This patch renames the --format option to --image-format, for specifying the RBD
image format, and uses --format to specify the output formatting (to be
consistent with the other ceph tools). To avoid breaking backwards compatibility
with existing scripts, rbd will still accept --format [1|2] for
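Assuming the patch lands as described, usage would change roughly like this (a sketch; pool and image names are made up):

```shell
# Before the patch: --format selects the RBD image format.
rbd create mypool/myimage --size 1024 --format 2

# After the patch: --image-format selects the image format...
rbd create mypool/myimage --size 1024 --image-format 2

# ...and --format selects the output formatting, as in other ceph tools.
rbd ls mypool --format json

# For backwards compatibility, --format 1 or --format 2 is still
# interpreted as an image format.
rbd create mypool/compat --size 1024 --format 2
```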
On 12/19/2012 03:03 AM, Roman Hlynovskiy wrote:
Hello,
I have 2 issues with ceph stability and looking for help to resolve them.
My setup is pretty simple - 3 debian 32bit stable systems each running
osd, mon and mds.
the conf is the following:
[global]
auth cluster
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi
I'm seeing a couple of issues with Ceph 0.55.1 on Ubuntu raring
(current development release) whilst testing the keystone integration
with radosgw.
1) Crash in RADOS Gateway when Content-Type not specified in Upload
We had a bunch of disks that failed. That's why ceph was having trouble keeping
OSDs up.
And we found that during recovery the rados gateway failed to initialize: the
init_watch function timed out.
As it is only used when the cache is activated, we disabled the cache (rgw cache enable
false) and the
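For reference, a sketch of the workaround in ceph.conf (the option is spelled `rgw cache enabled` in the configuration; the section name below is illustrative and depends on how your gateway instance is named):

```
[client.radosgw.gateway]
    ; disable the rgw object cache, which is what uses the
    ; watch/notify machinery that failed to initialize here
    rgw cache enabled = false
```

Disabling the cache costs some performance on metadata-heavy workloads, so this is a stopgap while the cluster recovers, not a permanent setting.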
On Wed, 19 Dec 2012, Roman Hlynovskiy wrote:
My second problem - I have 2 systems which mount ceph. Whenever I
mount ceph on any other system it usually mounts but gets stuck on
stat* operations (i.e. a simple ls -al will hang on read() from the
ceph-mounted directory for ages). This kind of
On Wed, Dec 19, 2012 at 7:10 AM, James Page james.p...@ubuntu.com wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi
I'm seeing a couple of issues with Ceph 0.55.1 on Ubuntu raring
(current development release) whilst testing the keystone integration
with radosgw.
1) Crash in
On Wed, 19 Dec 2012, Filippos Giannakos wrote:
OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
new patch?
Yes, please. Also, one other thing: can you add a functional test to
ceph.git/src/test/librados/aio.cc so that all of the regular testing
and test suites
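A functional AIO test in that file typically follows a write-then-read round trip. A rough, hypothetical sketch of the pattern (the real tests use the gtest harness and its own ioctx setup; error handling is simplified here, and the object name is made up):

```c
#include <assert.h>
#include <string.h>
#include <rados/librados.h>

/* Sketch of a librados AIO round trip: write an object asynchronously,
 * wait for the write to become durable, then verify it with an async read. */
static void aio_write_read(rados_ioctx_t io)
{
    const char *oid = "test-aio-obj";
    const char buf[] = "hello";
    char out[sizeof(buf)] = {0};
    rados_completion_t wc, rc;

    assert(rados_aio_create_completion(NULL, NULL, NULL, &wc) == 0);
    assert(rados_aio_write(io, oid, wc, buf, sizeof(buf), 0) == 0);
    rados_aio_wait_for_safe(wc);        /* block until the write is durable */
    assert(rados_aio_get_return_value(wc) == 0);
    rados_aio_release(wc);

    assert(rados_aio_create_completion(NULL, NULL, NULL, &rc) == 0);
    assert(rados_aio_read(io, oid, rc, out, sizeof(out), 0) == 0);
    rados_aio_wait_for_complete(rc);    /* block until the read returns */
    assert(rados_aio_get_return_value(rc) == (int)sizeof(buf));
    rados_aio_release(rc);

    assert(memcmp(buf, out, sizeof(buf)) == 0);
}
```

Hooking such a function into aio.cc gets the new code exercised by the nightly suites automatically.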
On 12/19/2012 07:43 AM, Sage Weil wrote:
On Wed, 19 Dec 2012, Filippos Giannakos wrote:
OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
new patch?
Yes, please. Also, one other thing: can you add a functional test to
ceph.git/src/test/librados/aio.cc so that all of
2012/12/19 Sage Weil s...@inktank.com:
On Wed, 19 Dec 2012, Mark Kirkwood wrote:
On 19/12/12 15:56, Drunkard Zhang wrote:
2012/12/19 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
On 19/12/12 14:44, Drunkard Zhang wrote:
2012/12/16 Drunkard Zhang gongfan...@gmail.com:
I couldn't rm
No more suggestions? :(
--
Regards,
Sébastien Han.
On Tue, Dec 18, 2012 at 6:21 PM, Sébastien Han han.sebast...@gmail.com wrote:
Nothing terrific...
Kernel logs from my clients are full of libceph: osd4
172.20.11.32:6801 socket closed
I saw this somewhere on the tracker.
Is this harmful?
Hello CephTeam,Community
I'm doing just my first steps with ceph.
I have upgraded my 3 test systems to ubuntu/raring and ran mkcephfs; below is the
output, ceph.conf and ceph -s output... any help would be appreciated.
Thanks
Tibet
--
root@host1:/var/lib/ceph# mkcephfs -a -c
On 12/19/2012 04:48 PM, Tibet Himalkaya wrote:
Hello CephTeam,Community
I'm doing just my first steps with ceph.
I have upgraded my 3 test systems to ubuntu/raring and ran mkcephfs; below is the
output, ceph.conf and ceph -s output... any help would be appreciated.
Thanks
Tibet
Have you started your
Hi List,
how can I delete non-existing PGs?
the OSDs where the PGs was stored are crashed and now i see this
pg 2.80 is stuck stale for 38971.810705, current state
stale+active+clean, last acting [2,0]
pg 0.82 is stuck stale for 38971.810712, current state
stale+active+clean, last acting
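With the caveat Sam raises later in the thread (recreating a PG discards whatever was stored in it), the usual sequence is roughly the following (a sketch against the 0.5x-era CLI; the pg ids are the ones from the health output above):

```shell
# List the PGs the cluster considers stuck stale.
ceph pg dump_stuck stale

# If the OSDs that held a PG are gone for good, ask the mons to
# recreate it empty. WARNING: any objects in that PG are lost.
ceph pg force_create_pg 2.80
ceph pg force_create_pg 0.82
```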
can't bring the osds back, thought that ceph replicates data over hosts
not only over osds. so i stopped two OSDs on one host, and deleted the
data/osds, after that i saw the mistake...
On 19.12.2012 22:05, Samuel Just wrote:
Note, however, that it will render the objects previously stored
On 12/18/2012 12:05 PM, Nick Bartos wrote:
I've added the output of ps -ef in addition to triggering a trace
when a hang is detected. Not much is generally running at that point,
but you can have a look:
Ceph can be configured that way using crush. See
http://ceph.com/docs/master/rados/operations/crush-map/
-Sam
On Wed, Dec 19, 2012 at 1:13 PM, norbi no...@rocknob.de wrote:
can't bring the osds back, thought that ceph replicates data over hosts not
only over osds. so i stopped two OSDs on one
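The document Sam links describes crush rules like the following; a sketch of a rule that places each replica on a different host rather than just a different osd (the rule name and ruleset number are illustrative):

```
# workflow: decompile the map, edit it, recompile, inject:
#   ceph osd getcrushmap -o map.bin
#   crushtool -d map.bin -o map.txt
#   (edit map.txt)
#   crushtool -c map.txt -o map.new
#   ceph osd setcrushmap -i map.new

rule replicated_by_host {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    # chooseleaf ... type host: pick distinct hosts,
    # then one osd within each chosen host
    step chooseleaf firstn 0 type host
    step emit
}
```

With only one host holding copies, stopping both of its OSDs takes the data with it, which is the mistake described above.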
Sorry, it's been very busy. The next step would be to try to get a heap
dump. You can start a heap profile on osd N by:
ceph osd tell N heap start_profiler
and you can get it to dump the collected profile using
ceph osd tell N heap dump.
The dumps should show up in the osd log directory.
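Put together, a profiling session looks roughly like this (a sketch; the log path and the pprof invocation assume a default tcmalloc/google-perftools setup, and the dump filename is an example, not an exact name):

```shell
# Start the tcmalloc heap profiler on osd.2.
ceph osd tell 2 heap start_profiler

# ...reproduce the memory growth, then dump the collected profile.
ceph osd tell 2 heap dump

# Stop profiling when done.
ceph osd tell 2 heap stop_profiler

# Dumps land in the osd log directory; inspect one with pprof, e.g.:
pprof --text /usr/bin/ceph-osd /var/log/ceph/osd.2.profile.0001.heap
```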