> bh = btrfs_read_dev_super(fs_devices->latest_bdev);
> if (!bh) {
> err = -EINVAL;
> goto fail_alloc;
> }
>
> memcpy(&fs_info->super_copy, bh->b_data, sizeof(fs_info->super_copy));
> memcpy(&fs_info->super_for_commit, &fs_info->super_copy,
>        sizeof(fs_info->super_for_commit));

Excerpts from Dave Chinner's message of 2011-06-01 21:11:39 -0400:
> Hi Folks,
>
> Running on 3.0-rc1 on an 8p/4G RAM VM with a 16TB filesystem (12
> disk DM stripe) a 50 million inode 8-way fsmark creation workload
> via:
>
> $ /usr/bin/time ./fs_mark -D 1 -S0 -n 10 -s 0 -L 63 \
> > -d /…

On Thu, 02 Jun 2011 13:17:55 -0700
Andi Kleen wrote:
> Sergei Trofimovich writes:
> >
> > Am I too paranoid about the issue?
>
> It sounds weird, because if the kernel would really checksum
> mutexes on disk you would have a lot of on disk
> format incompatibility between different kernel versions
> (e.g. between lockdep and normal kernels or kernels
> running on di…

Hi, Kathleen,
On Thu, Jun 02, 2011 at 04:20:55PM -0400, kathleen.ho...@emc.com wrote:
> Hi Hugo, I don't seem to have mkfs.btrfs. Here is the Red Hat 6.0 version
> I'm running, should it be included?
>
> Linux LN164088.LSS.EMC.COM 2.6.32-71.el6.s390x #1 SMP Wed Sep 1 01:38:33 EDT
> 2010 s390x…

Sergei Trofimovich writes:
>
> Am I too paranoid about the issue?
It sounds weird, because if the kernel would really checksum
mutexes on disk you would have a lot of on disk
format incompatibility between different kernel versions
(e.g. between lockdep and normal kernels or kernels
running on di…

On Thu, Jun 02, 2011 at 03:45:07PM -0400, kathleen.ho...@emc.com wrote:
> Hello,
> I'm trying to use mod3 ckds which are already RAID10 protection. (most of the
> doc I'm looking at uses fba instead of ckd, so I didn't know if this was a
> limitation)
> I'm addressing the head device and able to use these devices with no problem
> as ext3. I've been reading that cache may reme…

Hello,
I'm trying to use mod3 ckds which are already RAID10 protection. (most of the
doc I'm looking at uses fba instead of ckd, so I didn't know if this was a
limitation)
I'm addressing the head device and able to use these devices with no problem as
ext3. I've been reading that cache may reme…

Dave Chinner writes:
>
> Also, there is massive lock contention while running these workloads.
> perf top shows this for the create after about 5m inodes have been
> created:
We saw pretty much the same thing in some simple tests on large systems
(extent io tree locking and higher level b*tree lo…

On Thu, 2 Jun 2011 18:13:22 +0200
David Sterba wrote:
> fs_info is now ~9kb, more than fits into one page. This will cause
> mount failure when memory is too fragmented. Top space consumers are
> super block structures super_copy and super_for_commit, ~2.8kb each.
> Allocate them dynamically. fs_info will be ~3.5kb. (measured on x86_64)
> Add a wrapper for fre…

On Thursday 05 May 2011 22:32:42 Chris Mason wrote:
> Excerpts from Konstantinos Skarlatos's message of 2011-05-05 16:27:54 -0400:
> > I think i made some progress. When i tried to remove the directory that
> > i suspect contains the problematic file, i got this on the console
> >
> > rm -rf serve…

On Thu, Jun 2, 2011 at 6:40 AM, Geoff Ritter wrote:
> On Thu, 2011-06-02 at 04:20 -0500, C Anthony Risinger wrote:
>>
>> i tried with loop devices at first, then "real" devices -- this is all
>> under KVM/QEMU, and with FSs that are/will be smaller than 1G.
>
> I have tried the seed option as well. I was able to successfully mount
> the read write partition…

fs_info is now ~9kb, more than fits into one page. This will cause
mount failure when memory is too fragmented. Top space consumers are
super block structures super_copy and super_for_commit, ~2.8kb each.
Allocate them dynamically. fs_info will be ~3.5kb. (measured on x86_64)
Add a wrapper for fre…

After creating the initial LSM security extended attribute, call
evm_inode_post_init_security() to create the 'security.evm'
extended attribute.
Signed-off-by: Mimi Zohar
---
fs/btrfs/xattr.c | 39 +++++++++++++++++++++++++++++----------
1 files changed, 29 insertions(+), 10 deletions(-)
diff…

On Thu, 2011-06-02 at 04:20 -0500, C Anthony Risinger wrote:
>
> i tried with loop devices at first, then "real" devices -- this is all
> under KVM/QEMU, and with FSs that are/will be smaller than 1G.
I have tried the seed option as well. I was able to successfully mount
the read write partition…

hello,
i'm trying to setup a seeded FS -- was only able to find this:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/10529
... and announcement-like info from 2009 or so. i keep hitting
bugs/oops, and even though the FS *appears* to work correctly
afterwards, sometimes mount/strace/etc w…

On Thu, Jun 02, 2011 at 03:31:16PM +0700, Fajar A. Nugraha wrote:
> On Thu, Jun 2, 2011 at 6:20 AM, Hugo Mills wrote:
> > Over the last few weeks, I've been playing with a foolish idea,
> > mostly triggered by a cluster of people being confused by btrfs's free
> > space reporting (df vs btrfs fi df vs btrfs fi show). I also wanted an
> > excuse, and some code, to mess a…

On Thu, Jun 2, 2011 at 6:20 AM, Hugo Mills wrote:
> Over the last few weeks, I've been playing with a foolish idea,
> mostly triggered by a cluster of people being confused by btrfs's free
> space reporting (df vs btrfs fi df vs btrfs fi show). I also wanted an
> excuse, and some code, to mess a…

17 matches