On Wed, Apr 25, 2007 at 08:54:34PM +1000, David Chinner wrote:
On Tue, Apr 24, 2007 at 04:53:11PM -0500, Amit Gud wrote:
 ---------    ---------
| cnode 0 |--| cnode 0 |-- to another cnode or NULL
 ---------    ---------
| cnode 1 |-
Search for contiguous free blocks with Alex's multi-block allocation
and allocate them for the temporary inode.
This patch applies on top of Alex's patches.
[RFC] delayed allocation, mballoc, etc
http://marc.theaimsgroup.com/?l=linux-ext4&m=116493228301966&w=2
Signed-off-by: Takashi Sato [EMAIL PROTECTED]
Move lg_list to s_locality_dirty and mark the lg as dirty
if nr_to_write (the total count of pages not yet written to disk)
is 0 or less and lg_io is not empty in ext4_lg_sync_single_group().
This makes sure that the inode is written to disk.
Signed-off-by: Takashi Sato [EMAIL PROTECTED]
---
diff -Nrup
Hi all,
I have made the following changes to the previous online defrag patchset
to improve it. Note that there is no functional change.
1. Change the handling of temporary inode.
Now ext4_ext_defrag() calls the ext4_new_inode()/iput() pair instead of
new_inode()/delete_ext_defrag_inode(). Because
Move the blocks on the temporary inode to the original inode
one page at a time:
1. Read the file data from the old blocks into the page
2. Move the block on the temporary inode to the original inode
3. Write the file data on the page into the new blocks
Signed-off-by: Takashi Sato [EMAIL PROTECTED]
---
The defrag command. Usage is as follows:
o Put multiple files closer together.
# e4defrag -r directory-name
o Defrag for a single file.
# e4defrag file-name
o Defrag for all files on ext4.
# e4defrag device-name
Signed-off-by: Takashi Sato [EMAIL PROTECTED]
---
/*
* e4defrag, ext4
Quoting Miklos Szeredi ([EMAIL PROTECTED]):
Right, I figure if the normal action is to always do
mnt->user = current->fsuid, then for the special case we
pass a uid in someplace. Of course... do we not have a
place to do that? Would it be a no-no to use 'data' for
a non-fs-specific arg?
On Wed, Apr 25, 2007 at 03:47:10PM -0700, Valerie Henson wrote:
Actually, there is an upper limit on the number of continuation
inodes. Each file can have a maximum of one continuation inode per
chunk. (This is why we need to support sparse files.)
How about this case:
Growing file
On Thu, Apr 26, 2007 at 10:53:16AM -0500, Amit Gud wrote:
Jeff Dike wrote:
How about this case:
Growing file starts in chunk A.
Overflows into chunk B.
Delete file in chunk A.
Growing file overflows chunk B and spots new free space in
chunk A (and nothing anywhere
Preventive measures are taken to limit each file to one continuation
inode per chunk. This can be done easily in the chunk allocation
algorithm for disk space. Although I'm not quite sure what you mean by
How are you handling the allocation in this situation, are you assuming
that a chunk
So then as far as you're concerned, the patches which were in -mm will
remain unchanged?
Basically yes. I've merged the update patch, which was not yet added
to -mm, did some cosmetic code changes, and updated the patch headers.
There's one open point, that I think we haven't really explored,
Hi Ben!
Thanks a lot for your comments, and sorry for the late reply; I did more tests
in the meantime...
On 19.04.07 at 01:00, Benjamin LaHaise wrote:
On Wed, Apr 18, 2007 at 07:58:40PM +0200, Albrecht Dreß wrote:
- Are there known issues with VFAT in 2.6.11 which might lead to the
Based on the discussion, this new patchset uses the following as the
interface for the fallocate() system call:
asmlinkage long sys_fallocate(int fd, int mode, loff_t offset, loff_t len)
It seems that only the s390 architecture has a problem with such a layout
of arguments in fallocate(). Thus for s390, we
This patch implements the fallocate() system call and adds support for
i386, x86_64 and powerpc.
NOTE: It is based on 2.6.21 kernel version.
Signed-off-by: Amit Arora [EMAIL PROTECTED]
---
arch/i386/kernel/syscall_table.S |1
arch/powerpc/kernel/sys_ppc32.c |7 ++
This patch implements support for the fallocate system call on the
s390(x) platform. A wrapper is added to address the issue the s390 ABI
has with the preferred ordering of arguments in this system call (i.e.
int, int, loff_t, loff_t).
I would request the s390 experts to review this code and verify if
This is a fix for an extent-overlap bug. The fallocate() implementation
on ext4 depends on this bugfix. Though this fix was posted earlier, it
is still not part of the mainline code, so I have attached it here too.
Signed-off-by: Amit Arora [EMAIL PROTECTED]
---
fs/ext4/extents.c
This patch has the ext4 implementation of the fallocate system call.
Signed-off-by: Amit Arora [EMAIL PROTECTED]
---
fs/ext4/extents.c | 201 +++-
fs/ext4/file.c |1
include/linux/ext4_fs.h |7 +
This patch adds write support for blocks/extents preallocated using the
fallocate system call. The preallocated extents in ext4 are marked
uninitialized, hence they need special handling, especially when
writing to them. This patch takes care of that.
Signed-off-by: Amit Arora [EMAIL PROTECTED]
---
Hello,
I've lately been playing with remapping ext2/ext3 blocks (especially with
how much it can gain us in terms of speed for things like KDE startup). For
that I've written two simple tools (you can get them from
ftp.suse.com/pub/people/jack/ext3remapper.tar.gz):
e2block2file to transform
On Apr 25 2007 11:21, Eric W. Biederman wrote:
Why did we want to use fsuid, exactly?
- Because ruid is completely the wrong thing; we want mounts owned
by whomever's permissions we are using to perform the mount.
Think nfs. I access some nfs file as an unprivileged user. knfsd, by
nature,
On Thu, 26 April 2007 10:47:40 +1000, David Chinner wrote:
This assumes that you know a chunk has been corrupted, though.
How do you find that out?
Option 1: you notice something odd while serving userspace.
Option 2: a checking/scrubbing daemon of some sorts.
The first will obviously miss
On Thu, Apr 26, 2007 at 12:05:04PM -0400, Jeff Dike wrote:
No, I'm referring to a different file. The scenario is that you have
a growing file in a nearly full disk with files being deleted (and
thus space being freed) such that allocations for the growing file
bounce back and forth between
On Thu, Apr 26, 2007 at 10:47:38AM +0200, Jan Kara wrote:
Do I get it right that you just have in each cnode a pointer to the
previous/next cnode? But then if two consecutive cnodes get corrupted,
you have no way to connect the chain, do you? If each cnode contained
some unique identifier