Hi,
On 01/14/2013 08:51 AM, Alexis GÜNST HORN wrote:
Hello,
I have a 0.56.1 Ceph cluster up and running. RBD is working fine, but
I have some trouble with CephFS.
Here is my config:
- only 2 OSD nodes, with 10 disks each + SSD for journal.
- OSD hosts are gigabit (public) + gigabit (private)
Hello,
Thanks for your answer.
Both OSDs and client are CentOS 6.3 with 3.7.1 kernel.
And yes, the script creates empty loop devices of different sizes.
The MDS and MON are on one of the 2 OSD hosts.
I already tried putting them on a separate server.
I know that CephFS is not yet considered stable,
Hi,
The other problem to consider is the possibility of deadlock under memory
pressure. This is a problem with any network file system or block device
that is backed by a user-level process on the same host. When the VM
system is under memory pressure, it will ask the fs to write out some
Hi everyone,
we ran into an interesting performance issue on Friday that we were
able to troubleshoot with some help from Greg and Sam (thanks guys),
and in the process realized that there's little guidance around for
how to optimize performance in OSD nodes with lots of spinning disks
(and
From: Yan, Zheng zheng.z@intel.com
commit 1174dd3188 (don't retry readdir request after issuing caps)
introduced a bug that wrongly marks 'end' in the readdir reply.
The code that touches existing dentries re-uses an iterator, and the
same iterator is then used to check whether readdir has reached the end.
On 01/05/2013 01:50 AM, Josh Durgin wrote:
On 12/19/2012 04:17 AM, Stratos Psomadakis wrote:
This patch renames the --format option to --image-format, for
specifying the RBD
image format, and uses --format to specify the output formatting (to be
consistent with the other ceph tools). To avoid
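After the rename, usage would look something like the following (illustrative transcript; the pool and image names are made up):

```
# Select the RBD image format at creation time (previously --format)
rbd create mypool/myimage --size 1024 --image-format 2

# --format now controls output formatting, as in the other ceph tools
rbd ls mypool --format json
```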
Hi Tom,
On Mon, Jan 14, 2013 at 2:28 PM, Tom Lanyon t...@netspot.com.au wrote:
On 14/01/2013, at 10:47 PM, Florian Haas flor...@hastexo.com wrote:
snip
http://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals
It's probably easiest to comment directly on that
On 01/14/2013 06:17 AM, Florian Haas wrote:
Hi everyone,
we ran into an interesting performance issue on Friday that we were
able to troubleshoot with some help from Greg and Sam (thanks guys),
and in the process realized that there's little guidance around for
how to optimize performance in
On 14/01/2013, at 10:47 PM, Florian Haas flor...@hastexo.com wrote:
snip
http://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals
It's probably easiest to comment directly on that page, but if you
prefer instead to just respond in this thread, that's perfectly
Hi Mark,
thanks for the comments.
On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote:
Hi Florian,
Couple of comments:
OSDs use a write-ahead mode for local operations: a write hits the journal
first, and from there is then copied into the backing filestore.
Hi Sage,
Thanks for your mail~
Would you have a timetable for when such an improvement could be
ready? It's critical for non-btrfs filesystems.
I am thinking about introducing flashcache into my configuration to
cache such meta writes, since flashcache works under the filesystem,
Hi,
before I get to my questions, I want to say thanks for the good work
done with ceph. I learned about ceph in an Admin-Magazin article [1]
and was surprised how easy it was to set up ceph by following the
article. Trying new software and not hitting any error/warning or
other problems is a very
On 01/14/2013 05:19 AM, Stratos Psomadakis wrote:
On 01/05/2013 01:50 AM, Josh Durgin wrote:
On 12/19/2012 04:17 AM, Stratos Psomadakis wrote:
This patch renames the --format option to --image-format, for
specifying the RBD
image format, and uses --format to specify the output formatting (to be
On 01/10/2013 03:07 PM, Loic Dachary wrote:
Hi,
I successfully ran teuthology with the proposed 3node_rgw.yaml [1], changing
the flavor from basic to gcov [2]. I hoped to use cov-init.sh (
https://github.com/ceph/teuthology/blob/master/coverage/cov-init.sh ) and then
coverage.sh but I
On Mon, Jan 14, 2013 at 6:09 AM, Florian Haas flor...@hastexo.com wrote:
Hi Mark,
thanks for the comments.
On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote:
Hi Florian,
Couple of comments:
OSDs use a write-ahead mode for local operations: a write hits the
On Mon, 14 Jan 2013, Chen, Xiaoxi wrote:
Hi Sage,
Thanks for your mail~
Would you have a timetable for when such an improvement could be
ready? It's critical for non-btrfs filesystems.
I am thinking about introducing flashcache into my configuration to
cache such meta writes,
This series protects an open of a mapped rbd image from succeeding
once an unmap of that image is underway.
Note: Once committed these should be back-ported.
-Alex
[PATCH 1/2] rbd: define flags field, use it for exists flag
[PATCH 2/2] rbd: prevent open
Define a new rbd device flags field, manipulated using atomic bit
operations. Replace the use of the current exists flag with a
bit in this new flags field.
Signed-off-by: Alex Elder el...@inktank.com
---
drivers/block/rbd.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
An open request for a mapped rbd image can arrive while removal of
that mapping is underway. The control mutex and an open count
protect a mapped device that's in use from being removed. But it
is possible for the removal of the mapping to reach the point of no
return *after* a racing open
On 01/14/2013 06:34 PM, Gregory Farnum wrote:
On Mon, Jan 14, 2013 at 6:09 AM, Florian Haas flor...@hastexo.com wrote:
Hi Mark,
thanks for the comments.
On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote:
Hi Florian,
Couple of comments:
OSDs use a write-ahead
I see that set_bit is atomic, but I don't see that test_bit is. Am I
missing a subtlety?
On 01/14/2013 10:50 AM, Alex Elder wrote:
Define a new rbd device flags field, manipulated using atomic bit
operations. Replace the use of the current exists flag with a
bit in this new flags field.
On Jan 12, 2013, at 1:59 PM, Danny Al-Gaaf wrote:
Am 11.01.2013 06:13, schrieb Gary Lowell:
[...]
Thanks Danny. Installing sharutils solved that minor issue. We now
get through the build just fine on openSUSE 12, but SLES 11 SP2 gives
more warnings (pasted below). Should we be using a
On 01/14/2013 02:32 PM, Dan Mick wrote:
I see that set_bit is atomic, but I don't see that test_bit is. Am I
missing a subtlety?
That's an interesting observation. I'm certain it's safe, but
I needed to research it a bit, and I still haven't verified it
to my satisfaction.
I *think* (but
I think I agree: the claim is that the onus is on the setter, and so
I think the proposed code is safe.
On 01/14/2013 01:23 PM, Alex Elder wrote:
On 01/14/2013 02:32 PM, Dan Mick wrote:
I see that set_bit is atomic, but I don't see that test_bit is. Am I
missing a subtlety?
That's an