Hi!
I'm currently trying to get the XenServer tech preview on CentOS 6.4
working against a test Ceph cluster and am having the same issue.
Some info: the cluster is named "ceph", the pool is named "rbd".
ceph.xml:
rbd
ceph
secret.xml:
client.admin
[root@xen-b
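The fragments above look like a libvirt RBD pool definition (ceph.xml) and a secret definition (secret.xml). A minimal sketch of wiring them up with virsh, assuming the file names from this mail and a placeholder secret UUID:

```shell
# Sketch only: file names come from the mail above; $SECRET_UUID is a
# placeholder for the UUID libvirt assigns when the secret is defined.
virsh secret-define --file secret.xml
virsh secret-set-value --secret "$SECRET_UUID" \
    --base64 "$(ceph auth get-key client.admin)"
virsh pool-define ceph.xml     # the RBD pool definition
virsh pool-start rbd           # pool name from the mail
```

This requires a libvirt built with RBD support on the host; without it, pool-define fails.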
Hi,
On 07/21/13 09:05, Dan van der Ster wrote:
This is with a 10Gb network -- and we can readily get 2-3GBytes/s in
"normal" rados bench tests across many hosts in the cluster. I wasn't
too concerned with the overall MBps throughput in my question, but
rather the objects/s recovery rate --
the
On Sat, Jul 20, 2013 at 7:28 AM, Mikaël Cluseau wrote:
> Hi,
>
>
> On 07/19/13 07:16, Dan van der Ster wrote:
>>
>> and that gives me something like this:
>>
>> 2013-07-18 21:22:56.546094 mon.0 128.142.142.156:6789/0 27984 : [INF]
>> pgmap v112308: 9464 pgs: 8129 active+clean, 398
>> active+remapp
Hi,
On 07/12/13 19:57, Edwin Peer wrote:
Seconds of downtime is quite severe, especially when it is a planned
shutdown or rejoin. I can understand that if an OSD just disappears,
some requests might be directed to the now-gone node, but I see
similar latency hiccups on scheduled shut down
On 07/20/2013 05:16 PM, Sage Weil wrote:
On Sat, 20 Jul 2013, Wido den Hollander wrote:
On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim
wrote:
hey folks, I was hoping to be able to use xfs on top of RBD for a
deployment of mine. And was hopin
Hi,
On 07/11/13 12:23, ker can wrote:
Unfortunately I currently do not have access to SSDs, so for now I used
a separate disk as the journal for each data disk.
you can try RAM as a journal (well... not in production, of course)
if you want an idea of the performance on SSDs. I tried this
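A rough sketch of the tmpfs trick suggested above, for benchmarking only (the mount point, size, and OSD id are arbitrary assumptions):

```shell
# NOT for production: a crash or reboot loses the journal, and with it the OSD.
mkdir -p /mnt/ram-journal
mount -t tmpfs -o size=2G tmpfs /mnt/ram-journal
# then point the test OSD at it in ceph.conf, e.g.:
#   [osd.0]
#   osd journal = /mnt/ram-journal/journal
#   osd journal size = 1024
```

This only approximates SSD journal latency; it says nothing about SSD endurance or sync behavior.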
I have just upgraded to 0.61.5
# ceph -v
ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979)
and tried to run:
# ceph-deploy disk list ceph10
disk list failed: Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 2263, in <module>
    main()
  File "/usr/sbin/ceph-disk", line 2
Hi,
this was the hint pointing in the right direction.
root@ceph03:~# initctl list |grep ceph
ceph-mon (ceph/ceph03) start/running, process 1266
ceph-osd (ceph/4) start/running, process 1593
and
root@ceph03:~# stop ceph-mon id=ceph03
ceph-mon stop/waiting
did the trick. :)
to start the mon again: root
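The full upstart round trip, reconstructed from the commands shown above (the instance ids are the ones from this mail):

```shell
initctl list | grep ceph        # see which per-instance ceph jobs exist
stop ceph-mon id=ceph03         # stop the monitor on this host
start ceph-mon id=ceph03        # ...and bring it back
stop ceph-osd id=4              # the same pattern works for OSDs
start ceph-osd id=4
```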
Please help me!
On 07/20/2013 02:11 AM, Ta Ba Tuan wrote:
Hi everyone,
I have 3 nodes (running MON and MDS)
and 6 data nodes (84 OSDs).
Each data node has the following configuration:
- CPU: 24 cores, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32GB
- Disk: 14 * 4TB
(14 disks * 4TB * 6
On Sat, 20 Jul 2013, Uwe Grohnwaldt wrote:
> Hi,
>
> anybody an idea how to solve this? Any help would be appreciated.
What OS is this? Upstart is used on Ubuntu, but nothing else. For
upstart, initctl list | grep ceph will tell you what is running. For
sysvinit, the script enumerates daemon
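The two cases Sage describes can be sketched as follows (the sysvinit invocations match the ceph init script of that era, but treat the exact flags as assumptions):

```shell
# Ubuntu (upstart): per-instance jobs
initctl list | grep ceph
# Other distros (sysvinit): the init script enumerates daemons from ceph.conf
/etc/init.d/ceph status            # daemons on this host
/etc/init.d/ceph -a status         # -a: also hosts listed in ceph.conf
```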
Hey, Sage. Thanks for replying.
NOTE to the list: sorry about the confusion, guys, but I'm new to the
list! This is actually a double-post, and I hope to leave the activity
and replies to the other thread (since that thread has gotten more
responses anyway). I was confused initially about the list
On Sat, 20 Jul 2013, Wido den Hollander wrote:
> On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
> > On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim
> > wrote:
> > > hey folks, I was hoping to be able to use xfs on top of RBD for a
> > > deployment of mine. And was hoping for the resize of the
On Sat, 20 Jul 2013, Jeffrey 'jf' Lim wrote:
> hey folks, I'm hoping to be able to use xfs on top of RBD for a
> deployment of mine. And was hoping for the resize of the RBD
> (expansion, actually, would be my use case) in the future to be as
> simple as a "resize on the fly", followed by an 'xfs_g
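The expansion workflow being asked about can be sketched like this (the pool/image names and mount point are made up for illustration; xfs_growfs operates on a mounted filesystem):

```shell
rbd resize --size 20480 rbd/myimage   # grow the image to 20 GB
# with the kernel RBD client the block device reflects the new size;
# then grow xfs online:
xfs_growfs /mnt/myimage
```

Note that xfs can only grow, not shrink, so this only covers the expansion case.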
Hi,
does anybody have an idea how to solve this? Any help would be appreciated.
Thanks and best regards,
Uwe
----- Original Message -----
> From: "Uwe Grohnwaldt"
> To: "Jens Kristian Søgaard"
> Cc: "ceph-users" , w...@aixit.com
> Sent: Thursday, 18 July 2013 20:34:53
> Subject: Re: [ceph-users] ceph
On Sat, Jul 20, 2013 at 3:44 PM, Wido den Hollander wrote:
> On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
>>
>> On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim
>> wrote:
>>>
>>> hey folks, I was hoping to be able to use xfs on top of RBD for a
>>> deployment of mine. And was hoping for the
On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim wrote:
hey folks, I was hoping to be able to use xfs on top of RBD for a
deployment of mine. And was hoping for the resize of the RBD
(expansion, actually, would be my use case) in the future to b