Hi Paul,
Thank you for the wonderful and interesting comment;
it is really appreciated.
I was an HPC guy with large NUMA boxes in the past,
so I promise I won't ignore HPC users.
But unfortunately I didn't have experience using CPUSET,
because at that point it was still under development.
I hope to discuss that with you.
Hi,
Do you remember the topic "solid state drive access and context
switching" [1]?
I wanted to measure whether performance is really better on an SSD.
To write to the SSD synchronously, I hacked
'generic_make_request()' [2] and got the following results.
# echo 3 > /proc/sys/vm/drop_caches
# tiotest -f
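For reference, here is a minimal userspace sketch of the kind of timing
this is trying to capture. It is not the generic_make_request() hack [2],
just a hypothetical cross-check that times O_SYNC writes from user space;
the default scratch file name "testfile" is my own assumption.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* hypothetical scratch file on the SSD under test */
	const char *path = argc > 1 ? argv[1] : "testfile";
	char buf[4096];
	struct timespec t0, t1;
	int i, fd;

	memset(buf, 0xaa, sizeof(buf));
	fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 1024; i++)	/* 4 MB of synchronous 4 KB writes */
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			return 1;
		}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f ms per 4 KB O_SYNC write\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e3 +
		(t1.tv_nsec - t0.tv_nsec) / 1e6) / 1024);
	close(fd);
	return 0;
}

Build with "gcc -O2 -o synctest synctest.c" (older glibc may also need -lrt).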
The patch looks fine - but since it does not set obj_type any more, I
want to think about it a little longer, as it may be useful coming
back from the open path (although the mode is probably good enough).
jra added support to Samba for a new POSIX open/create/mkdir request
(which we only use for
Consolidate all inode manipulation code in libfs in a single
source file.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/libfs.c
===
--- linux-2.6.orig/fs/libfs.c
+++ linux-2.6/fs/libfs.c
@@ -12,78 +12,6 @@
#i
Consolidate all address space manipulation code in libfs in a single
source file.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/libfs.c
===
--- linux-2.6.orig/fs/libfs.c
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- *
Consolidate all file manipulation code in libfs in a single
source file.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/libfs.c
===
--- linux-2.6.orig/fs/libfs.c
+++ linux-2.6/fs/libfs.c
@@ -421,165 +421,6 @@ ssi
With most of debugfs now copied to generic code in libfs,
we can remove the original copy and replace it with thin
wrappers around libfs.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/Kconfig
===
--- linux-2.6.o
The file operations in debugfs are rather generic and can
be used by other file systems, so it is worth including them
in libfs, under more generic names, and exporting them
to modules.
This patch adds a new copy of these operations to libfs,
so that the debugfs version can later be cut down.
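To make that concrete, here is a small sketch of the pattern a caller
would follow, built on simple_read_from_buffer(), which libfs already
exports today; everything prefixed myfs_ is a hypothetical name, not
something taken from this series.

#include <linux/fs.h>
#include <linux/uaccess.h>

static const char myfs_banner[] = "hello from a pseudo fs\n";

/* Read op implemented entirely with the generic libfs helper. */
static ssize_t myfs_read(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	return simple_read_from_buffer(buf, count, ppos, myfs_banner,
				       sizeof(myfs_banner) - 1);
}

static const struct file_operations myfs_fops = {
	.read = myfs_read,
};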
I noticed that there is a lot of duplication in pseudo
file systems, so I started looking into how to consolidate
them. I ended up with a largish rework of the structure
of libfs, moving almost all of debugfs in there as well.
As an example, I also have patches that reduce debugfs,
securityfs a
Consolidate all dentry manipulation code in libfs in a single
source file.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/libfs.c
===
--- linux-2.6.orig/fs/libfs.c
+++ linux-2.6/fs/libfs.c
@@ -12,188 +12,6 @@
Half of the usbfs code is the same as debugfs, so we can
replace it now with calls to the generic libfs versions.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/drivers/usb/core/inode.c
===
--- linux-2.6.orig/driver
Consolidate all super block manipulation code in libfs in a single
source file.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/libfs.c
===
--- linux-2.6.orig/fs/libfs.c
+++ linux-2.6/fs/libfs.c
@@ -12,63 +12,6 @@
With the new simple_fs_type in place, securityfs practically
becomes a nop and we just need to leave code around to manage
its mount point.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/security/inode.c
===
--- lin
With libfs turning into a larger subsystem, it makes
sense to have a separate header that is not included
by the low-level vfs code.
Signed-off-by: Arnd Bergmann <[EMAIL PROTECTED]>
Index: linux-2.6/fs/debugfs/inode.c
===
--- linux-2.
There are a number of pseudo file systems in the kernel
that are basically copies of debugfs, all implementing the
same boilerplate code, just with different bugs.
This adds yet another copy to the kernel in the libfs directory,
with generalized helpers that can be used by any of them.
The most in
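For anyone who has not stared at those copies, the boilerplate in
question looks roughly like this; the sketch below leans on helpers that
already exist in 2.6 (simple_fill_super(), get_sb_single(),
kill_litter_super()), while the myfs_ names and the magic number are
hypothetical.

#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>

static int myfs_fill_super(struct super_block *sb, void *data, int silent)
{
	static struct tree_descr empty_files[] = { { "" } };

	/* 0x6d796673 is a made-up magic number for this sketch */
	return simple_fill_super(sb, 0x6d796673, empty_files);
}

static int myfs_get_sb(struct file_system_type *fs_type, int flags,
		       const char *dev_name, void *data, struct vfsmount *mnt)
{
	return get_sb_single(fs_type, flags, data, myfs_fill_super, mnt);
}

static struct file_system_type myfs_type = {
	.owner   = THIS_MODULE,
	.name    = "myfs",
	.get_sb  = myfs_get_sb,
	.kill_sb = kill_litter_super,
};

static int __init myfs_init(void)
{
	return register_filesystem(&myfs_type);
}
module_init(myfs_init);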
On Mon, Feb 18, 2008 at 05:16:55PM +0100, Tomasz Chmielewski wrote:
> Theodore Tso wrote:
>
>> I'd really need to know exactly what kind of operations you were
>> trying to do that were causing problems before I could say for sure.
>> Yes, you said you were removing unneeded files, but how were y
Theodore Tso wrote:
Are there better choices than ext3 for a filesystem with lots of hardlinks?
ext4, once it's ready? xfs?
All filesystems are going to have problems keeping inodes close to
directories when you have huge numbers of hard links.
I'd really need to know exactly what kind of o
On Mon, Feb 18, 2008 at 04:57:25PM +0100, Andi Kleen wrote:
> > Use cp
> > or a tar pipeline to move the files.
>
> Are you sure cp handles hardlinks correctly? I know tar does,
> but I have my doubts about cp.
I *think* GNU cp does the right thing with --preserve=links. I'm not
100% sure, though.
On Mon, Feb 18, 2008 at 10:16:32AM -0500, Theodore Tso wrote:
> On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
> > I tried to copy that filesystem once (when it was much smaller) with "rsync
> > -a -H", but after 3 days, rsync was still building an index and didn't copy
> > a
On Mon, Feb 18, 2008 at 04:02:36PM +0100, Tomasz Chmielewski wrote:
> I tried to copy that filesystem once (when it was much smaller) with "rsync
> -a -H", but after 3 days, rsync was still building an index and didn't copy
> any file.
If you're going to copy the whole filesystem, don't use rsync
On Mon, Feb 18, 2008 at 04:18:23PM +0100, Andi Kleen wrote:
> On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
> > ext3 tries to keep inodes in the same block group as their containing
> > directory. If you have lots of hard links, obviously it can't really
> > do that, especially sin
Theodore Tso wrote:
(...)
What helped a bit was recreating the file system with -O^dir_index;
dir_index seems to cause more seeks.
Part of it may have simply been recreating the filesystem, not
necessarily removing the dir_index feature.
You mean, copy data somewhere else, mkfs a new
On Mon, Feb 18, 2008 at 09:16:41AM -0500, Theodore Tso wrote:
> ext3 tries to keep inodes in the same block group as their containing
> directory. If you have lots of hard links, obviously it can't really
> do that, especially since we don't have a good way at mkdir time to
> tell the filesystem,
On Mon, Feb 18, 2008 at 03:03:44PM +0100, Andi Kleen wrote:
> Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
> >
> > Is it normal to expect the write speed to go down to only a few dozen
> > kilobytes/s? Is it because of that many seeks? Can it be somehow
> > optimized?
>
> I have similar problems
Tomasz Chmielewski <[EMAIL PROTECTED]> writes:
>
> Is it normal to expect the write speed to go down to only a few dozen
> kilobytes/s? Is it because of that many seeks? Can it be somehow
> optimized?
I have similar problems on my linux source partition which also
has a lot of hard linked files (a
I have a 1.2 TB (of which 750 GB is used) filesystem which holds
almost 200 million files.
1.2 TB doesn't make this filesystem that big, but 200 million files
is a decent number.
Most of the files are hardlinked multiple times, some of them are
hardlinked thousands of times.
Recently
> > > > However David and Christoph are beavering away on the r-o-bind-mounts
> > > > patches and I expect that there will be overlaps with unprivileged
> > > > mounts.
> > > >
> > > > Could we coordinate things a bit please? Decide who goes first, review
> > > > and maybe even test each others