Hi Greg,
Yes, thanks for your advice. We did turn down osd_client_message_size_cap to
100 MB per OSD, and both the journal queue and the filestore queue are set to
100 MB as well.
That's 300 MB per OSD in total, but from top we see:
16527 1 14:49.01 0 7.1 20 0 S 1147m0
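For anyone tuning the same knobs, a small sketch of what that configuration and a check against a running OSD might look like; the option names are the standard ones, but the admin-socket path and the osd.0 id are placeholders:

  # ceph.conf, [osd] section (104857600 bytes = 100 MB each):
  #   osd client message size cap = 104857600
  #   journal queue max bytes     = 104857600
  #   filestore queue max bytes   = 104857600
  # verify the values on a running OSD through its admin socket:
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show \
    | egrep 'message_size_cap|journal_queue_max_bytes|filestore_queue_max_bytes'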
On Tue, Jun 4, 2013 at 2:20 PM, Chen, Xiaoxi xiaoxi.c...@intel.com wrote:
Hi Greg,
Yes, thanks for your advice. We did turn down osd_client_message_size_cap to
100 MB per OSD, and both the journal queue and the filestore queue are set to
100 MB as well.
That's 300 MB per OSD in total, but from top
Hi all,
I have 3 monitors in my cluster, but one of them is a backup and I don't
want to use it as the master (too far away, and on a server where I want to save
resources for something else ...).
But as far as I can see, mon priority is based on IP (lower to higher, if I'm
not wrong).
I would like to know if it's
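For what it's worth, the leader is picked by monitor rank, and ranks follow the IP:port ordering of the monitors in the monmap; a quick way to see the ranks and the current leader (standard commands, nothing cluster-specific assumed):

  # list the monitors with their ranks and addresses (lowest rank wins the election)
  ceph mon dump
  # show the current quorum members and the elected leader
  ceph quorum_status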
Just to add, this doesn't happen in just one pool.
When I changed the data pool's replica size from 2 to 3, a few PGs (3) got
stuck too.
pg 0.7c is active+clean+degraded, acting [8,2]
pg 0.48 is active+clean+degraded, acting [4,8]
pg 0.1f is active+clean+degraded, acting [5,7]
I am already on
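A possible way to dig into those PGs; the pool name 'data' and the PG id 0.7c come from the output above, the commands themselves are standard CLI:

  # list PGs that are stuck unclean and why
  ceph pg dump_stuck unclean
  # query one of the degraded PGs for its up/acting sets and recovery state
  ceph pg 0.7c query
  # double-check the pool's replica count after the change
  ceph osd pool get data size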
Yip, no, I have not tried them, but I certainly will! Do I need a patched
libvirtd as well, or is this working out of the box?
Thanks
Andrei
- Original Message -
From: YIP Wai Peng yi...@comp.nus.edu.sg
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users@lists.ceph.com
Yavor,
I would highly recommend taking a look at the quick install guide:
http://ceph.com/docs/next/start/quick-start/
As per the guide, you need to precreate the directories prior to starting ceph.
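For illustration, the directories the guide has you create look roughly like this (the mon id 'a' and osd ids 0/1 are just example values, adjust them to your ceph.conf):

  sudo mkdir -p /var/lib/ceph/mon/ceph-a
  sudo mkdir -p /var/lib/ceph/osd/ceph-0
  sudo mkdir -p /var/lib/ceph/osd/ceph-1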
Andrei
- Original Message -
From: Явор Маринов ymari...@neterra.net
To:
That's the exact documentation I'm using. The directory on ceph2 is
created, and the service starts without any problems on both nodes.
However, the health of the cluster is reporting WARN, and I was able to
mount the cluster
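When the cluster sits in HEALTH_WARN, the usual first checks would be something along these lines (standard commands, run from any node with an admin keyring):

  # what exactly is the warning about?
  ceph health detail
  # overall status: mon quorum, osd up/in counts, pg states
  ceph -s
  # confirm all OSDs are up, in, and placed where you expect
  ceph osd tree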
On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
Yavor,
On Tue, Jun 4, 2013 at 2:31 AM, Guilhem Lettron guil...@lettron.fr wrote:
Hi all,
I have 3 monitors in my cluster, but one of them is a backup and I don't
want to use it as the master (too far away, and on a server where I want to save
resources for something else ...).
But as far as I can see, mon priority is
-- Forwarded message --
From: Gandalf Corvotempesta gandalf.corvotempe...@gmail.com
Date: 2013/5/31
Subject: Multi Rack Reference architecture
To: ceph-users@lists.ceph.com
In the reference architecture PDF, downloadable from your website, there was
some
Any experiences with clustered FS on top of RBD devices?
Which FS do you suggest for roughly 10,000 mailboxes accessed by 10
Dovecot nodes?
On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
Any experiences with clustered FS on top of RBD devices?
Which FS do you suggest for roughly 10,000 mailboxes accessed by 10
Dovecot nodes?
We use OCFS2 on top of RBD... the only bad thing is that OCFS2 will fence all
nodes if RBD is
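For readers wondering what such a setup involves, a rough sketch of putting OCFS2 on an RBD image; the image name, size, slot count and mount point are made-up examples, and the o2cb cluster itself still has to be configured on every node beforehand:

  # create an RBD image once, then map it on every dovecot node
  rbd create mailstore --size 512000       # size in MB
  sudo rbd map mailstore
  # format it once, with one slot per node that will mount it
  sudo mkfs.ocfs2 -N 10 -L mailstore /dev/rbd0
  # mount it on each node
  sudo mount -t ocfs2 /dev/rbd0 /srv/mail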
Hi,
The VM died, but on the root disk I found the following in kern.log:
51 2013-06-04T21:18:02.568823+02:00 vm-1 kernel - - - [ 220.717935] sd 2:0:0:0: Attached scsi generic sg0 type 0
51 2013-06-04T21:18:02.568848+02:00 vm-1 kernel - - - [ 220.718231] sd 2:0:0:0: [sda] 1048576000 512-byte logical blocks: (536 GB/500
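As a quick sanity check on that size line (plain arithmetic, not taken from the log itself): 1048576000 blocks of 512 bytes is exactly 500 GiB, which the kernel prints as 536 GB in decimal units:

  echo $((1048576000 * 512))    # 536870912000 bytes = 500 GiB ≈ 536 GB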
2013/6/4 Smart Weblications GmbH - Florian Wiessner
f.wiess...@smart-weblications.de
We use OCFS2 on top of RBD... the only bad thing is that OCFS2 will fence
all nodes if RBD is not responding within the defined timeout...
If RBD is not responding to all nodes, having all OCFS2 nodes fenced should
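For reference, the fencing-related timeouts live in the o2cb init configuration; the values below are just the common defaults, not a recommendation from this thread:

  # /etc/sysconfig/o2cb (or /etc/default/o2cb on Debian/Ubuntu)
  O2CB_HEARTBEAT_THRESHOLD=31     # disk heartbeat iterations (~60 s) before a node is declared dead
  O2CB_IDLE_TIMEOUT_MS=30000      # network idle timeout
  O2CB_KEEPALIVE_DELAY_MS=2000
  O2CB_RECONNECT_DELAY_MS=2000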
Hi Gandalf,
On 04.06.2013 21:45, Gandalf Corvotempesta wrote:
2013/6/4 Smart Weblications GmbH - Florian Wiessner
f.wiess...@smart-weblications.de
We use OCFS2 on top of RBD... the only bad thing is that OCFS2 will fence
all nodes if RBD is
It's behind a registration form, but IIRC this is likely what you are
looking for:
http://www.inktank.com/resource/dreamcompute-architecture-blueprint/
- Mike
On 5/31/2013 3:26 AM, Gandalf Corvotempesta wrote:
In the reference architecture PDF, downloadable from your website, there was
some
Hi,
I'm trying to test Hadoop with the ceph:/// scheme, but I can't seem to
find a way to make Hadoop ceph-aware :(
Is the only way to get it to work to build Hadoop from
https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src, or is it
possible to compile/obtain some sort
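From what I recall of the CephFS Hadoop bindings of that generation, the wiring looks roughly like this; treat it as a sketch, the monitor address is a placeholder and the property names are the ones from the Ceph Hadoop docs:

  # core-site.xml properties (set through your usual config mechanism):
  #   fs.default.name = ceph://192.168.0.1:6789/
  #   fs.ceph.impl    = org.apache.hadoop.fs.ceph.CephFileSystem
  #   ceph.conf.file  = /etc/ceph/ceph.conf
  # with the CephFS classes and the libcephfs Java bindings on Hadoop's
  # classpath, a quick smoke test against the CephFS root:
  hadoop fs -ls /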
On Jun 4, 2013, at 2:58 PM, Ilja Maslov ilja.mas...@openet.us wrote:
Is the only way to get it to work to build Hadoop from
https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src, or is it
possible to compile/obtain some sort of a plugin and feed it to a stable
hadoop
I have a Ceph setup on cuttlefish for a kernel RBD test. After mapping RBD
images on the clients, I ran 'rbd showmapped'; the output looks as follows:
id pool image   snap device
1  ceph node7_1 -    /dev/rbd1
2  ceph node7_2 -    /dev/rbd2
3  ceph node7_3 -    /dev/rbd3
4  ceph node7_4 -
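For context, the create/map step that leads to a listing like the one above usually looks something like this (the image size is a made-up example):

  # on the client: create an image (if not done already) and map it
  rbd create node7_1 --pool ceph --size 10240
  sudo rbd map node7_1 --pool ceph
  # list the kernel mappings
  rbd showmapped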