On Mon, Dec 31, 2012 at 3:12 AM, Jens Kristian Søgaard
j...@mermaidconsulting.dk wrote:
Hi Andrey,
Thanks for your reply!
You may try to play with SCHED_RT. I have found it hard to use
myself, but you can achieve your goal by adding small RT slices via
the "cpu" cgroup to the vcpu/emulator threads
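For what it's worth, a hedged sketch of what "small RT slices via the cpu cgroup" could look like. The cgroup path below is an assumption based on a typical libvirt/qemu cgroup v1 layout and will differ per host; the script only prints the writes it would perform, so nothing changes until you run them by hand as root.

```shell
# Sketch only: grant a guest's vcpu thread a small realtime slice via the
# cgroup v1 "cpu" controller (cpu.rt_period_us / cpu.rt_runtime_us).
CG=/sys/fs/cgroup/cpu/libvirt/qemu/myvm/vcpu0   # hypothetical path, adjust for your host
PERIOD=1000000    # cpu.rt_period_us: accounting period of 1 second (in microseconds)
RUNTIME=50000     # cpu.rt_runtime_us: 50 ms of RT time per period, i.e. a "small slice"

# Print the commands instead of executing them, since writing to these
# files requires root and an existing cgroup for the VM.
echo "echo $PERIOD > $CG/cpu.rt_period_us"
echo "echo $RUNTIME > $CG/cpu.rt_runtime_us"
```

The runtime must stay well below the period, or the RT tasks can starve everything else on that CPU.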
Hi,
On 12/30/2012 10:38 PM, Miles Fidelman wrote:
Hi Folks,
I'm wondering how ceph would work in a small cluster that supports a mix
of engineering and modest production (email, lists, web server for
several small communities).
Specifically, we have a rack with 4 medium-horsepower servers,
Hi,
If I run two clusters on the same network, each with its own set of
monitors and config files (assuming that we didn't make any error
in the config files), would there be anything wrong with that?
Ceph seems to be quite chatty, so would the two clusters interfere
with each other's messages?
Just want to make sure,
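For what it's worth, a minimal sketch of what keeps two clusters apart on one network: distinct cluster names, fsids, and monitor addresses/ports. All names, fsids, and IPs below are made up for illustration.

```
# /etc/ceph/ceph.conf          -- first cluster, default name "ceph"
[global]
    fsid = aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb    ; made-up fsid
    mon host = 192.168.0.10:6789

# /etc/ceph/backup.conf        -- second cluster, named "backup"
[global]
    fsid = cccccccc-4444-5555-6666-dddddddddddd    ; made-up fsid
    mon host = 192.168.0.10:6790                   ; note the different monitor port
```

Daemons and clients would then select a cluster explicitly, e.g. `ceph --cluster backup status`, so the two sets of monitors never claim each other's traffic.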
On 31.12.2012 02:10, Samuel Just wrote:
Are you using xfs? If so, what mount options?
Yes,
noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k
Stefan
On Dec 30, 2012 1:28 PM, Stefan Priebe s.pri...@profihost.ag wrote:
On 30.12.2012 19:17, Samuel wrote
On Mon, Dec 31, 2012 at 2:58 PM, Jens Kristian Søgaard
j...@mermaidconsulting.dk wrote:
Hi Andrey,
If I understood correctly, you have an md device holding both the journal and
the filestore? What type of RAID do you have here?
Yes, the same md device holds both journal and filestore. It is a RAID5.
Ahem, of
Matt, Thanks for the comments. A follow-up if I might (inline):
Matthew Roy wrote:
What I'm not doing that you'd need to test is running VMs on the same
servers as storage. I'd be careful about mounting RBD volumes on the
OSDs, you can run into kernel deadlock trying to write out things to
Wido, Thanks for the comment, a follow-up if I might (below)?
Wido den Hollander wrote:
I have built some small Ceph clusters, sometimes with just 3 nodes. It works,
but you have to keep in mind that when one node in a 4-node cluster
fails you will lose 25% of the capacity.
This will lead to a
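The capacity arithmetic above can be sketched quickly (a toy calculation, not a Ceph tool; the node count is taken from the example):

```shell
# Toy calculation: raw capacity left after one node of an N-node cluster fails.
nodes=4
failed=1
remaining=$(( (nodes - failed) * 100 / nodes ))
lost=$(( 100 - remaining ))
echo "lost: ${lost}% of raw capacity, remaining: ${remaining}%"
# prints: lost: 25% of raw capacity, remaining: 75%
```

On top of the raw loss, the surviving nodes must also absorb the re-replicated copies of the failed node's data, so the cluster needs free headroom before the failure.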
On 12/26/2012 03:36 PM, Alex Elder wrote:
On 12/26/2012 11:45 AM, Nick Bartos wrote:
Here's a log with a hang on the updated branch:
https://gist.github.com/raw/4381750/772476e1bae1e6366347a223f34aa6c440b92765/rdb-hang-1356543132.log
OK, new naming scheme. Please try: wip-nick-1
Now that
Hi,
On 12/31/2012 11:17 AM, Xiaopong Tran wrote:
Hi,
If I run two clusters on the same network, each with its own set of
monitors and config files (assuming that we didn't make any error
in the config files), would there be anything wrong with that?
Ceph seems to be quite chatty, so would
The ceph-osd daemon relies on filesystem barriers for correctness. You will want to
remove the nobarrier mount option to prevent future corruption.
-Sam
On Mon, Dec 31, 2012 at 3:59 AM, Stefan Priebe s.pri...@profihost.ag wrote:
On 31.12.2012 02:10, Samuel Just wrote:
Are you using xfs? If so, what mount
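Assuming the mount options from the earlier message, the corrected line would simply drop nobarrier and keep the rest (device and mountpoint below are placeholders):

```
# /etc/fstab -- XFS options for an OSD data disk, barriers left enabled
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,nodiratime,logbufs=8,logbsize=256k  0 0
```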
We're bringing in the new year with a new release, v0.56, which will form
the basis of the next stable series, bobtail. There is little in the way
of new functionality since v0.55, as we've been focusing primarily on
stability, performance, and upgradability from the previous argonaut
stable