Hi, all,
I noticed that CephFS fails to support Direct IO for blocks larger than
8MB, say:
sudo dd if=/dev/zero of=mnt/cephfs/foo bs=16M count=1 oflag=direct
dd: writing `mnt/cephfs/foo': Bad address
1+0 records in
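The `Bad address` (EFAULT) ceiling reported above reads like a per-request limit in the kernel client rather than a cap on the file itself. A common userspace workaround, sketched below purely as an illustration (the 8 MiB figure is taken only from the report, and the fallback path is an assumption, not part of the thread), is to issue page-aligned direct writes of at most 8 MiB each:

```c
/* Sketch: split a large direct-I/O write into <= 8 MiB aligned chunks.
 * Assumption (not confirmed by the thread): the limit is per write(2)
 * request, so smaller page-aligned writes succeed where one 16 MiB
 * O_DIRECT write fails. */
#define _GNU_SOURCE           /* for O_DIRECT on glibc */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#ifndef O_DIRECT
#define O_DIRECT 0            /* degrade gracefully if not exposed */
#endif

#define CHUNK (8 * 1024 * 1024)   /* stay at or below the observed 8 MiB cap */

int write_direct_chunked(const char *path, size_t total)
{
    /* Fall back to buffered I/O on filesystems that reject O_DIRECT. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0)
        fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    void *buf;
    if (posix_memalign(&buf, 4096, CHUNK) != 0) {  /* O_DIRECT needs alignment */
        close(fd);
        return -1;
    }
    memset(buf, 0, CHUNK);

    size_t done = 0;
    while (done < total) {
        size_t n = total - done < CHUNK ? total - done : CHUNK;
        ssize_t w = write(fd, buf, n);
        if (w < 0) { free(buf); close(fd); return -1; }
        done += (size_t)w;
    }
    free(buf);
    close(fd);
    return 0;
}
```

Note that O_DIRECT also requires the user buffer and write length to be suitably aligned, which is why `posix_memalign` is used rather than `malloc`; an unaligned buffer is another way to get EFAULT.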
On Thu, 14 Mar 2013, Huang, Xiwei wrote:
Hi, all,
I noticed that CephFS fails to support Direct IO for blocks larger than
8MB, say:
sudo dd if=/dev/zero of=mnt/cephfs/foo bs=16M count=1 oflag=direct
dd: writing `mnt/cephfs/foo': Bad address
On Thursday, March 14, 2013 at 8:20 AM, Sage Weil wrote:
On Thu, 14 Mar 2013, Huang, Xiwei wrote:
Hi, all,
I noticed that CephFS fails to support Direct IO for blocks larger than
8MB, say:
sudo dd if=/dev/zero of=mnt/cephfs/foo bs=16M count=1 oflag=direct
dd: writing `mnt/cephfs/foo:
On 03/14/2013 10:44 AM, Greg Farnum wrote:
On Thursday, March 14, 2013 at 8:20 AM, Sage Weil wrote:
On Thu, 14 Mar 2013, Huang, Xiwei wrote:
Hi, all,
I noticed that CephFS fails to support Direct IO for blocks larger than 8MB,
say:
sudo dd if=/dev/zero of=mnt/cephfs/foo bs=16M count=1
Hey Ceph fans,
The World Hosting Days event in Rust, Germany [1] is fast-approaching,
and a couple of Ceph disciples will be available to beat about the
head and shoulders if you care to stop by the Dell booth.
So far I know both Nigel Thomas and myself will be in attendance from
Inktank, and
The current CephFS API is used to extract locality information as follows:
First we get a list of OSD IDs:
ceph_get_file_extent_osds(offset) -> [OSD ID]*
Using the OSD IDs we can then query for the CRUSH bucket hierarchy:
ceph_get_osd_crush_location(osd_id) -> path
The path includes
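For readers unfamiliar with the flow described here, the following is a self-contained mock of the two-step lookup. The stub tables and the `mock_*` names are invented for illustration; the real `ceph_get_file_extent_osds` and `ceph_get_osd_crush_location` live in libcephfs, require a mounted cluster, and their exact signatures may differ from this sketch.

```c
/* Mock of the two-step locality lookup: offset -> OSD IDs -> CRUSH path.
 * All data below is invented; only the shape of the flow is real. */
#include <stdio.h>
#include <string.h>

/* Stub: which OSDs hold the extent containing `offset` (hypothetical replica set). */
static int mock_get_file_extent_osds(long offset, int *osds, int max)
{
    (void)offset;                        /* real code maps offset to an object */
    int replicas[] = { 3, 7 };
    int n = max < 2 ? max : 2;
    memcpy(osds, replicas, n * sizeof(int));
    return n;                            /* number of OSDs returned */
}

/* Stub: CRUSH bucket path for an OSD (hypothetical hierarchy). */
static void mock_get_osd_crush_location(int osd, char *path, size_t len)
{
    snprintf(path, len, "root/rack%d/host%d/osd.%d", osd % 2, osd, osd);
}

/* Step 1: OSD IDs for the extent; step 2: CRUSH location for each OSD. */
static void locate(long offset)
{
    int osds[8];
    int n = mock_get_file_extent_osds(offset, osds, 8);
    for (int i = 0; i < n; i++) {
        char path[128];
        mock_get_osd_crush_location(osds[i], path, sizeof(path));
        printf("offset %ld -> osd.%d at %s\n", offset, osds[i], path);
    }
}
```

The point of the two calls is separation of concerns: the first answers "where is the data," the second answers "where is that OSD in the cluster topology," which is what a scheduler needs for rack- or host-level placement decisions.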
On Thursday, March 14, 2013 at 11:14 AM, Noah Watkins wrote:
The current CephFS API is used to extract locality information as follows:
First we get a list of OSD IDs:
ceph_get_file_extent_osds(offset) -> [OSD ID]*
Using the OSD IDs we can then query for the CRUSH bucket hierarchy:
On Mar 14, 2013, at 11:29 AM, Greg Farnum g...@inktank.com wrote:
On Thursday, March 14, 2013 at 11:14 AM, Noah Watkins wrote:
The current CephFS API is used to extract locality information as follows:
First we get a list of OSD IDs:
ceph_get_file_extent_osds(offset) -> [OSD ID]*
Using
On Thu, 14 Mar 2013, Noah Watkins wrote:
The current CephFS API is used to extract locality information as follows:
First we get a list of OSD IDs:
ceph_get_file_extent_osds(offset) -> [OSD ID]*
Using the OSD IDs we can then query for the CRUSH bucket hierarchy:
On Thursday, March 14, 2013 at 11:33 AM, Noah Watkins wrote:
On Mar 14, 2013, at 11:29 AM, Greg Farnum g...@inktank.com wrote:
On Thursday, March 14, 2013 at 11:14 AM, Noah Watkins wrote:
The current CephFS API is used to extract locality information as
On Thu, 14 Mar 2013, Greg Farnum wrote:
On Thursday, March 14, 2013 at 11:33 AM, Noah Watkins wrote:
On Mar 14, 2013, at 11:29 AM, Greg Farnum g...@inktank.com wrote:
On Thursday, March 14, 2013 at 11:14 AM, Noah Watkins wrote:
The current CephFS API
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 03/12/2013 06:02 PM, Alex Elder wrote:
The pages parameter in read_partial_message_pages() is
unused, so get rid of it.
Signed-off-by: Alex Elder el...@inktank.com
---
net/ceph/messenger.c | 8 +---
1 file changed, 5 insertions(+),
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 03/12/2013 06:02 PM, Alex Elder wrote:
There is handling in write_partial_message_data() for the case where
only the length of--and no other information about--the data to be
sent has been specified. It uses the zero page as the source of
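As a userspace analogue of that zero-page trick (a hypothetical sketch, not the messenger code itself), a single static zeroed page can serve as the source for an arbitrarily long run of zero bytes, avoiding an allocation proportional to the length:

```c
/* Userspace analogue of the kernel's zero-page source: when only a byte
 * count is known, stream zeros from one static page instead of
 * allocating `len` bytes. */
#include <stddef.h>
#include <unistd.h>

#define ZPAGE 4096
static const char zero_page[ZPAGE];   /* zero-initialized by the C runtime */

/* Write `len` zero bytes to fd, reusing the shared zero page each round. */
ssize_t send_zeros(int fd, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        size_t n = len - sent < ZPAGE ? len - sent : ZPAGE;
        ssize_t w = write(fd, zero_page, n);
        if (w < 0)
            return -1;
        sent += (size_t)w;
    }
    return (ssize_t)sent;
}
```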
On 03/12/2013 06:04 PM, Alex Elder wrote:
This series changes the incoming data path for the messenger
to use the new data item cursors.
-Alex
[PATCH 1/4] libceph: use cursor for bio reads
[PATCH 2/4] libceph: kill ceph message bio_iter, bio_seg
[PATCH
On 03/12/2013 06:08 PM, Alex Elder wrote:
Previously a ceph_msg_pos structure contained information
for iterating through the data in a message. Now we use
information in a data item's cursor for that purpose.
This series completes the switchover to use of the cursor
and then eliminates that
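A toy illustration of the cursor idea (invented for this digest, and much simpler than the kernel's real `ceph_msg_data_cursor`, which walks page arrays, pagelists, and bios): the consumer repeatedly asks the cursor for the next piece, uses some of it, and advances, so no separate position structure is needed.

```c
/* Toy cursor over a flat buffer, handed out in fixed-size pieces.
 * The iteration state lives entirely in the cursor itself. */
#include <stddef.h>

struct data_cursor {
    const char *data;      /* the data item being iterated */
    size_t total;          /* total bytes in the item */
    size_t offset;         /* bytes consumed so far */
    size_t piece;          /* max bytes handed out per step */
};

/* Return a pointer to the next piece and its length in *len (NULL when done). */
static const char *cursor_next(struct data_cursor *c, size_t *len)
{
    size_t left = c->total - c->offset;
    *len = left < c->piece ? left : c->piece;
    return *len ? c->data + c->offset : NULL;
}

/* Record that `used` bytes of the current piece were consumed. */
static void cursor_advance(struct data_cursor *c, size_t used)
{
    c->offset += used;
}
```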
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 03/12/2013 06:10 PM, Alex Elder wrote:
Begin the transition from a single message data item to a list of
them by replacing the data structure in a message with a pointer
to a ceph_msg_data structure.
A null pointer will indicate the message
Reviewed-by: Josh Durgin josh.dur...@inktank.com
On 03/12/2013 06:06 PM, Alex Elder wrote:
It turns out that only one of the data item types is ever used at
any one time in a single message (currently).
- A page array is used by the osd client (on behalf of the file
system) and by
On Mar 14, 2013, at 12:39 PM, Sage Weil s...@inktank.com wrote:
Unless those old bindings are already broken because of the preferred osd
thing…
Well, for preferred_pg EOPNOTSUPP will be ignored by the old bindings, so I
guess it still works :)
Sebastien,
I just had to restart the OSD about 10 minutes ago, so it looks like all it did
was slow down the process.
Dave Spano
- Original Message -
From: Sébastien Han han.sebast...@gmail.com
To: Dave Spano dsp...@optogenics.com
Cc: Greg Farnum g...@inktank.com, ceph-devel