On Fri, 8 Feb 2013, Danny Al-Gaaf wrote:
> Fix some memory leaks in the error-handling paths for failed
> client->open() calls.
>
> Error from cppcheck was:
> [src/client/SyntheticClient.cc:1980]: (error) Memory leak: buf
> [src/client/SyntheticClient.cc:2040]: (error) Memory leak: buf
> [src/cli
Hey Danny-
These look good, modulo those 2 comments. Do you have a public git tree
with these patches I can cherry-pick or pull from? It's a bit faster than
yanking them off the list for merge (although posting to the list for
review is still good, of course).
Thanks!
sage
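For context, the leak cppcheck flags here has the usual shape of an early
error return that skips a delete[]. A minimal sketch of the pattern and the
fix, assuming a heap-allocated buf and an open() call roughly like the ones
in SyntheticClient.cc (the names and the open() arguments below are
approximations, not Danny's actual patch):

  char *buf = new char[size];
  int fd = client->open(file.c_str(), O_RDONLY);  // signature approximated
  if (fd < 0) {
    delete[] buf;   // without this, the early return leaks buf
    return fd;      // the error path cppcheck points at
  }
  // ... use buf ...
  client->close(fd);
  delete[] buf;     // normal path cleans up as well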
On Thu, 7 Feb 2013, Danny Al-Gaaf wrote:
> Fix "(performance) Function parameter 'e' should be passed by reference."
> from cppchecker.
eversion_t is only 12-16 bytes (depending on alignment), so I'm not sure a
pointer indirection (or whatever the compiler turns the & parameter into)
is going to
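For readers following along: eversion_t is essentially two integers, so the
copy vs. indirection trade-off Sage is weighing looks like this (the struct
below is a simplified stand-in for the real class in osd/osd_types.h, and the
note_* functions are illustrations, not code from the tree):

  #include <stdint.h>

  typedef uint64_t version_t;
  typedef uint32_t epoch_t;

  struct eversion_t {        // simplified: the real class also has methods
    version_t version;       // 8 bytes
    epoch_t   epoch;         // 4 bytes -> 12 bytes of payload, 16 with padding
  };

  void note_by_value(eversion_t e);        // copies 12-16 bytes, often in registers
  void note_by_ref(const eversion_t &e);   // passes a pointer and dereferences it

For a type this small the copy is typically no more expensive than the extra
indirection, which is why the cppcheck suggestion isn't an obvious win.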
On Thu, 7 Feb 2013, Danny Al-Gaaf wrote:
> Fix "variable length array of non-POD element type" errors caused by
> using librados::ObjectWriteOperation VLAs. (-Wvla)
>
> Signed-off-by: Danny Al-Gaaf
> ---
> src/key_value_store/kv_flat_btree_async.cc | 14 +++---
> 1 file changed, 7 insert
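The warning class being fixed, and one common way to rewrite such an array,
can be sketched as follows; this shows the general technique, not necessarily
the exact change made in kv_flat_btree_async.cc:

  // -Wvla rejects this: librados::ObjectWriteOperation is a non-POD type
  // and the bound is only known at run time.
  //   librados::ObjectWriteOperation ops[ops_count];

  // One straightforward replacement is an explicit heap allocation:
  librados::ObjectWriteOperation *ops =
    new librados::ObjectWriteOperation[ops_count];
  // ... set up and submit the operations ...
  delete[] ops;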
>>> $ ceph -s
>>> health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>>> monmap e1: 1 mons at {0=192.168.0.6:6789/0}, election epoch 0,
>>> quorum 0 0
>>> osdmap e3: 1 osds: 1 up, 1 in
>>> pgmap v119: 192 pgs: 192 active+degraded; 0 bytes data, 10204 MB
>>> used, 2740 GB / 2750 GB avail
>>>
From: Wido den Hollander
It's still not clear to end users whether this should go into the
mon or the global section of ceph.conf.
Until this gets resolved, document it here as well for the people
who look up their settings in the source code.
Signed-off-by: Wido den Hollander
---
src/common/config_opts.h
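For readers who don't have the tree handy: src/common/config_opts.h is a list
of OPTION() macros, so "document it here" means adding a comment above the
relevant entry. A sketch of what that looks like (the option name below is a
placeholder; the actual setting the patch touches isn't visible in this
excerpt):

  // This belongs in the [mon] section of ceph.conf (or in [global], which
  // the monitors also read).  Placeholder entry, not the real option:
  OPTION(mon_example_setting, OPT_INT, 0)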
This sounds very much like what we've been experiencing. Actually,
come to think of it, when I enabled more logging (a month ago or so)
and one osd crashed, I vaguely remember thinking "it seems to have to
do with leveldb". I guess it can be circumvented by disabling
compression on btrfs (though
On Saturday, February 9, 2013 at 6:23 AM, John Axel Eriksson wrote:
> Three times now, twice on one osd, once on another we've had the osd
> crash. Restarting it wouldn't help - it would crash with the same
> error. The only way I found to get it up again was to reformat both
> the journal disk and
On Saturday, February 9, 2013 at 3:09 AM, Wido den Hollander wrote:
> On 02/09/2013 12:06 PM, Adam Nielsen wrote:
> > Thanks for your quick reply!
> >
> > > Could you show the output of "ceph -s"
> >
> > $ ceph -s
> > health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
> > monmap e1: 1 mon
Three times now, twice on one osd, once on another we've had the osd
crash. Restarting it wouldn't help - it would crash with the same
error. The only way I found to get it up again was to reformat both
the journal disk and the disk ceph is using for storage... basically
recreating the osd.
This ha
Ah, I see you only have one OSD, while the default replication level is 2.
Also, by default pools won't work when only one replica is left.
You'd better add a second OSD, or just run mkcephfs again with a second OSD in
the configuration.
Ah ok. From my earlier post I think I can add the second OSD
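"A second OSD in the configuration" means another [osd.N] section in
ceph.conf before re-running mkcephfs; alternatively, a one-OSD test cluster
can be told to keep only a single replica. A sketch with placeholder
hostnames (and the pool-size line is my suggestion, not something from this
thread):

  [global]
      ; only for a single-OSD test setup:
      osd pool default size = 1

  [osd.0]
      host = ceph-node1
  [osd.1]
      host = ceph-node2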
On 02/09/2013 12:06 PM, Adam Nielsen wrote:
Thanks for your quick reply!
Could you show the output of "ceph -s"
$ ceph -s
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {0=192.168.0.6:6789/0}, election epoch 0,
quorum 0 0
osdmap e3: 1 osds: 1 up, 1
Thanks for your quick reply!
Could you show the output of "ceph -s"
$ ceph -s
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {0=192.168.0.6:6789/0}, election epoch 0, quorum 0 0
osdmap e3: 1 osds: 1 up, 1 in
pgmap v119: 192 pgs: 192 active+degraded
On 02/09/2013 11:40 AM, Adam Nielsen wrote:
Hi again,
I'm trying to mount my new Ceph volume on a remote PC using cephfs.
I've followed the quick start guide, but when I try to mount the
filesystem I get this:
remote$ mount -t ceph 192.168.0.6:6789:/ /mnt/ceph/
mount: 192.168.0.6:6789:/: can't
Hi again,
I'm trying to mount my new Ceph volume on a remote PC using cephfs. I've
followed the quick start guide, but when I try to mount the filesystem I get this:
remote$ mount -t ceph 192.168.0.6:6789:/ /mnt/ceph/
mount: 192.168.0.6:6789:/: can't read superblock
remote$ dmesg | tail
[951
Hi Noah,
On 02/08/2013 04:42 PM, Noah Watkins wrote:
On Feb 8, 2013, at 1:06 AM, Wido den Hollander wrote:
Hi,
I knew that there were Java bindings for RADOS, but they weren't linked.
Well, some searching on GitHub led me to Noah's bindings [0], but it took
a bit of searching.
The RADOS