OK, thanks. Is there any documentation on the CephFS client architecture I
could use as a reference if I'd like to look into this?
Sent from my iPhone
On 2013-3-14, at 23:23, Sage Weil s...@inktank.com wrote:
On Thu, 14 Mar 2013, Huang, Xiwei wrote:
Hi, all,
I noticed that CephFS fails to support Direct IO for blocks
Yes, that would be a typical scenario. :)
Sent from my iPhone
On 2013-3-14, at 23:51, Mark Nelson mark.nel...@inktank.com wrote:
On 03/14/2013 10:44 AM, Greg Farnum wrote:
On Thursday, March 14, 2013 at 8:20 AM, Sage Weil wrote:
On Thu, 14 Mar 2013, Huang, Xiwei wrote:
Hi, all,
I noticed that CephFS fails
Update: the zombie PGs disappeared after I deleted the pool that previously
held these PGs.
Thoughts and suggestions after this:
The docs say: Stale placement groups are in an unknown state - the OSDs
that host them have not reported to the monitor cluster in a while
(configured by
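As an aside for anyone else chasing the same symptom, a hedged sketch of how
to confirm and clean this up (the pool name oldpool is hypothetical; a PG's
pool can be identified from the pool-id prefix of its PG id):

    # list PGs stuck in the stale state
    ceph pg dump_stuck stale

    # map pool ids to pool names
    ceph osd lspools

    # deleting the defunct pool removes its PGs; the name is given
    # twice plus a confirmation flag as a safety check
    ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it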
On Fri, 15 Mar 2013, Huang, Xiwei wrote:
OK, thanks. Is there any documentation on the CephFS client architecture I
could use as a reference if I'd like to look into this?
Not really.
The code you are interested in is fs/ceph/file.c. The direct I/O path
should be pretty clear; it'll build up a list of pages and
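A quick, hedged way to exercise that path from userspace (the mount point
/mnt/cephfs and file name are assumed for illustration); O_DIRECT needs the
buffer, offset, and length to be suitably aligned, so use an aligned block
size:

    # drive the kernel client's direct write path with page-aligned 4k requests
    dd if=/dev/zero of=/mnt/cephfs/dio-test bs=4k count=256 oflag=direct

    # and the direct read path
    dd if=/mnt/cephfs/dio-test of=/dev/null bs=4k count=256 iflag=direct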
On Friday, March 8, 2013 at 3:29 PM, Kevin Decherf wrote:
On Fri, Mar 01, 2013 at 11:12:17AM -0800, Gregory Farnum wrote:
On Tue, Feb 26, 2013 at 4:49 PM, Kevin Decherf ke...@kdecherf.com
(mailto:ke...@kdecherf.com) wrote:
You will find the archive here: snip
The data is not
On Wed, Mar 6, 2013 at 11:27 AM, Alex Sla 4k3...@gmail.com wrote:
Hello guys,
First of all, thank you for Ceph. Such an amazing technology, and a great
pleasure to work with.
Now to my question. I am reviewing Ceph from a security perspective and need
to understand how a single OSD creates its network ports
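A hedged sketch of how to see which ports a running OSD has actually bound
(by default OSDs bind in the 6800+ range; the exact range is governed by the
ms bind port settings, and the admin socket path below assumes a default
deployment):

    # show the TCP ports the ceph-osd daemon is listening on
    netstat -tlnp | grep ceph-osd

    # inspect the configured bind-port range on osd.0
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep ms_bind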
During my journey of using rados cppool, which is an awesome feature by the
way, I found an interesting behavior related to cephx. I wanted to share it
for anyone else using OpenStack who decides to rename or copy a pool.
My client.glance entry is currently set to this (with the
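A hedged sketch of the copy-and-reauthorize workflow in question (the pool
names and caps below are illustrative, not the actual client.glance entry):
a cephx key whose OSD caps name the old pool explicitly cannot touch the
copy until its caps are updated.

    # copy the old pool to a new name
    rados cppool images images-new

    # update the key's caps to grant access to the new pool
    ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images-new'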
On 03/15/2013 02:55 PM, Dave Spano wrote:
During my journey of using rados cppool, which is an awesome feature by the
way, I found an interesting behavior related to cephx. I wanted to share it
for anyone else using OpenStack who decides to rename or copy a pool.
My
Thank you Josh. Have a great weekend.
Dave Spano
- Original Message -
From: Josh Durgin josh.dur...@inktank.com
To: Dave Spano dsp...@optogenics.com
Cc: Greg Farnum g...@inktank.com, Sébastien Han
han.sebast...@gmail.com, ceph-devel ceph-devel@vger.kernel.org, Sage
Weil
On 15 March 2013, at 21:32, Greg Farnum g...@inktank.com wrote:
On Friday, March 8, 2013 at 3:29 PM, Kevin Decherf wrote:
On Fri, Mar 01, 2013 at 11:12:17AM -0800, Gregory Farnum wrote:
On Tue, Feb 26, 2013 at 4:49 PM, Kevin Decherf ke...@kdecherf.com
(mailto:ke...@kdecherf.com) wrote:
You
On Friday, March 15, 2013 at 3:40 PM, Marc-Antoine Perennou wrote:
Thank you very much for these explanations; looking forward to these fixes!
Do you have any public bug reports regarding this that you could link us to?
Good luck, thank you for your great work, and have a nice weekend
Marc-Antoine Perennou
[Putting list back on cc]
On Friday, March 15, 2013 at 4:11 PM, Jim Schutt wrote:
On 03/15/2013 04:23 PM, Greg Farnum wrote:
As I come back and look at these again, I'm not sure what the context
for these logs is. Which test did they come from, and which behavior
(slow or not slow, etc)
Yeah. In fact I found the bug report via your blog. Thanks for sharing. :)
Sent from my iPhone
On 2013-3-16, at 8:05, Sebastien Han sebastien@enovance.com wrote:
Hi guys,
I imagine we'll see some applications doing large direct I/O writes like this
in the HPC space.
I