Hi everyone,
I'm trying to figure out the best OSD solution for an infrastructure made
up of servers with a lot of disks each. Say, for example, you have 4+
nodes like the Sun Fire X4500 (code-named Thumper). Each node has 48
disks.
What are the pros and cons of building a Ceph cluster with
Hi!
ceph 0.30
Re-read rate of a 10 GB file, 13 OSD servers (i.e. the file is already
in the cache on the OSD servers):
cfuse: 118 MB/s
kernel (3.0.0-rc3): 84.0 MB/s
:(
WBR,
Fyodor.
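[For reference, the kind of cached re-read test described above can be
sketched roughly like this; the file location and size are assumptions
(scaled down from the 10 GB file in the report), and on a real cluster
the file would live on a cfuse or kernel Ceph mount:]

```shell
# Sketch of a cached re-read throughput test (paths/sizes are assumptions).
F=$(mktemp)                                         # stand-in for a file on a Ceph mount
dd if=/dev/zero of="$F" bs=1M count=32 2>/dev/null  # create a 32 MB test file
sync
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n 1      # first read
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n 1      # re-read: this is the rate
                                                    # the cfuse/kernel numbers compare
```

The last dd line prints the transfer rate; running it against a cfuse
mount and a kernel mount of the same file is what produces numbers like
those above.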
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to
On Sat, 2 Jul 2011, Fyodor Ustinov wrote:
> Hi!
>
> ceph 0.30
>
> Re-read rate of a 10 GB file, 13 OSD servers (i.e. the file is already
> in the cache on the OSD servers):
>
> cfuse: 118 MB/s
> kernel (3.0.0-rc3): 84.0 MB/s
>
> :(
Yeah, this is still on the list. We can include it in the next sprint.
On 07/02/2011 11:45 PM, Sage Weil wrote:
> On Sat, 2 Jul 2011, Fyodor Ustinov wrote:
>> Hi!
>>
>> ceph 0.30
>>
>> Re-read rate of a 10 GB file, 13 OSD servers (i.e. the file is already
>> in the cache on the OSD servers):
>>
>> cfuse: 118 MB/s
>> kernel (3.0.0-rc3): 84.0 MB/s
>>
>> :(
>
> Yeah, this is still on the list. We
Hi!
mds - 0.30
I cannot reproduce it, sorry.
mds/Locker.cc: In function 'void Locker::file_excl(ScatterLock*,
bool*)', in thread '0x7fefc6c68700'
mds/Locker.cc: 3982: FAILED assert(in->get_loner() >= 0 &&
in->mds_caps_wanted.empty())
ceph version 0.30
Which commit were you running?
sage
On Sun, 3 Jul 2011, Fyodor Ustinov wrote:
> Hi!
>
> mds - 0.30
>
> I cannot reproduce it, sorry.
>
> mds/Locker.cc: In function 'void Locker::file_excl(ScatterLock*, bool*)', in
> thread '0x7fefc6c68700'
> mds/Locker.cc: 3982: FAILED assert(in->get_loner() >= 0 &&
> in->mds_caps_wanted.empty())
On 07/03/2011 01:03 AM, Sage Weil wrote:
> Which commit were you running?

On the mds server, from here:

deb http://ceph.newdream.net/debian/ natty main
deb-src http://ceph.newdream.net/debian/ natty main
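[To pin down the exact build from such a packaged install, something like
the following would work; this is a sketch, and the package-query commands
assume a Debian/Ubuntu system with the ceph packages installed:]

```shell
# Sketch: record the apt source line, then (on a machine with the
# packages installed) query the exact build.
LIST=$(mktemp)
echo "deb http://ceph.newdream.net/debian/ natty main" > "$LIST"
cat "$LIST"
# With the packages actually installed, one would run (not executed here):
#   dpkg-query -W -f='${Version}\n' ceph   # exact package version
#   ceph -v                                # version string from the binaries
```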