On 05/04/2011 02:10 PM, Josef Bacik wrote:
On 05/04/2011 03:04 PM, valdis.kletni...@vt.edu wrote:
On Wed, 04 May 2011 13:58:39 EDT, Josef Bacik said:
-SEEK_HOLE: this moves the file pos to the nearest hole in the file from the given position.
Nearest, or next? Solaris defines it as next,
On 05/04/2011 04:54 PM, Dave Kleikamp wrote:
The comments in fs.h say closest. You may want to change them to
next as well.
Sorry. Missed some of the replies before I responded. Already addressed.
Shaggy
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body
, not in every filesystem. The same checks have to be made for every
filesystem, so they should be done before calling out to the
filesystems, regardless of what functionality the filesystem actually
supports.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
should feel
obliged to implement.
It could, but it still needs better justification.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Jan 11, 2011 at 04:13:42PM -0500, Lawrence Greenfield wrote:
On Tue, Nov 9, 2010 at 6:40 PM, Dave Chinner da...@fromorbit.com wrote:
The historical reason for such behaviour existing in XFS was that in
1997 the CPU and IO latency cost of unwritten extent conversion was
significant
Here, fix the return value to be -EINVAL.
Signed-off-by: Dave Young hidave.darks...@gmail.com
---
fs/btrfs/disk-io.c | 4 +++-
fs/btrfs/volumes.c | 8 +++++---
2 files changed, 8 insertions(+), 4 deletions(-)
--- linux-2.6.orig/fs/btrfs/disk-io.c 2010-12-29 21:53:17.47338 +0800
+++ linux-2.6/fs/btrfs/disk-io.c
, which is set properly for all of their children, see below
A property of NFS filehandles is that they must be stable across
server reboots. Is this anon dev_t used as part of the NFS
filehandle and if so how can you guarantee that it is stable?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
to simply do what the user
expects. It also is harder to implement and testing becomes much
more intricate. From that perspective, it does not seem desirable to
me...
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Nov 09, 2010 at 04:41:47PM -0500, Ted Ts'o wrote:
On Tue, Nov 09, 2010 at 03:42:42PM +1100, Dave Chinner wrote:
Implementation is up to the filesystem. However, XFS does (b)
because:
1) it was extremely simple to implement (one of the
advantages of having
which I wrote for testing all the edge cases of XFS_IOC_ZERO_RANGE (*) would be
good.
Cheers,
Dave.
(*) fallocate() version:
http://git.kernel.org/?p=linux/kernel/git/dgc/xfsdev.git;a=commitdiff;h=45f3e1831e3abc8bd12ec1e6c548f73a8dd9e36d
--
Dave Chinner
da...@fromorbit.com
to change it.
This needs to be defined and documented - can you include a man
page update in this series that defines the expected behaviour
of FALLOC_FL_PUNCH_HOLE?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Nov 08, 2010 at 09:05:09PM -0500, Josef Bacik wrote:
On Tue, Nov 09, 2010 at 12:22:54PM +1100, Dave Chinner wrote:
On Mon, Nov 08, 2010 at 03:32:03PM -0500, Josef Bacik wrote:
This patch simply allows XFS to handle the hole punching flag in fallocate
properly. I've tested
On Mon, Nov 08, 2010 at 10:30:38PM -0500, Ted Ts'o wrote:
On Tue, Nov 09, 2010 at 12:12:22PM +1100, Dave Chinner wrote:
Hole punching was not included originally in fallocate() for a
variety of reasons. IIRC, they were along the lines of:
1) de-allocating of blocks in an allocation
the btrfs transfer was still trying to copy
it. It was able to saturate the 100mbit port.
rsync --progress -v -e 'ssh' a2s55:/home/dmportal/public_html/data3m7_data_wiki.tgz .
data3m7_data_wiki.tgz    8.10MB/s    0:22:19
Thanks for any input!
--
Dave Cundiff
System Administrator
A2Hosting, Inc
http
to use.
I'm running kernel 2.6.36-rc4.
Filesystem was built with -m single -d single and using the compress
mount option.
Block device is an Areca RAID5 with 4 Western Digital 2TB drives.
--
Dave Cundiff
System Administrator
A2Hosting, Inc
http://www.a2hosting.com
144.84 168.01 1.15 100.00
Time: 12:25:01 AM
Device:  rrqm/s  wrqm/s     r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdb       17.50    0.60  204.35  49.70  1776.00   524.00      9.05     18.64  74.23   3.74  95.03
--
Dave Cundiff
System Administrator
A2Hosting
requirement are ahead of time or at least be able to
implement a memory freeing function when kmalloc() returns NULL.
Oh, we can determine an upper bound. You might just not like it.
Actually ext3/ext4 shouldn't be as bad as XFS, which Dave estimated to
be around 400k for a transaction. My guess is that the worst case for
ext3/ext4 is probably around 256k or so; like XFS, most
and have it complete, freeing the memory from the pool
that it holds.
That is, the guarantee that we will always make progress simply does
not exist in filesystems, so a mempool-like concept seems to me to
be doomed from the start.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Wed, Aug 25, 2010 at 03:35:42PM +0200, Peter Zijlstra wrote:
On Wed, 2010-08-25 at 23:24 +1000, Dave Chinner wrote:
That is, the guarantee that we will always make progress simply does
not exist in filesystems, so a mempool-like concept seems to me to
be doomed from the start.
that XFS has been doing these "allocation can't fail" loops in
kmem_alloc() and kmem_zone_alloc(), well, forever. I can't ever
remember seeing it report a potential deadlock, though.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ls: cannot access .kde4/share/apps/akregator/data/feeds.opml: Structure needs cleaning
total 4
?? ? ???? feeds.opml
What is the error reported in dmesg when the XFS filesystem shuts down?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
, struct page *);
+ void (*flush_page)(int, ino_t, pgoff_t);
+ void (*flush_inode)(int, ino_t);
+ void (*flush_fs)(int);
+};
+
How would someone go about testing this code? Is there an example
cleancache implementation?
-- Dave
On Tue, May 04, 2010 at 11:27:50AM -0400, Josef Bacik wrote:
On Tue, May 04, 2010 at 10:14:18AM +1000, Dave Chinner wrote:
On Mon, May 03, 2010 at 01:27:02PM -0400, Josef Bacik wrote:
This is similar to what already happens in the write case. If we have a
short
read while doing
that spans EOF (i.e. get a
short read) now attempt a buffered IO (that will fail) before returning?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
is spinning.
dmesg? lspci -vv?
We have an AGP related issue with WC/UC paging that might be related,
but I'd need more info.
Dave.
if you try this patch it'll fix it.
Dave.
these benchmarks run on each filesystem for
each kernel release so ext/xfs/btrfs all get some regular basic
performance regression test coverage?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Wed, Feb 04, 2009 at 07:29:51PM +0100, Pavel Machek wrote:
On Sun 2009-02-01 12:40:50, Dave Chinner wrote:
On Mon, Jan 26, 2009 at 05:27:11PM +0100, Pavel Machek wrote:
On Wed 2009-01-21 15:00:42, Dave Chinner wrote:
+ Turning this option on will result in the kernel panicking any time
On Mon, Jan 26, 2009 at 05:27:11PM +0100, Pavel Machek wrote:
On Wed 2009-01-21 15:00:42, Dave Chinner wrote:
On Tue, Jan 20, 2009 at 11:20:19PM +0100, Pavel Machek wrote:
On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
I think that was the issue with the debug builds. If you do
, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
Any reason you are not testing XFS in that set?
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Tue, Jan 20, 2009 at 11:20:19PM +0100, Pavel Machek wrote:
On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
On Tue, Jan 20, 2009 at 11:59:44PM +1100, Dave Chinner wrote:
So far the responses from xfs folks have been disappointing; if you are
interested in bug reports I can send
On Wed, 2009-01-07 at 13:58 -0800, Linus Torvalds wrote:
On Wed, 7 Jan 2009, Peter Zijlstra wrote:
Do we really have to re-do all that code every loop?
No, you're right, we can just look up the cpu once. Which makes Andrew's
argument that probe_kernel_address() isn't in any hot path
On Wed, 2008-12-17 at 15:04 -0700, Andreas Dilger wrote:
On Dec 17, 2008 08:23 -0500, Christoph Hellwig wrote:
An alternative way, supported optionally by ext3 and reiserfs and
exclusively by jfs, is to open the journal device by the device
number (dev_t) of the block special