In this patch, I adjust the sequence of the if-conditions
that assess page->private.
First, we make sure page->private is not NULL,
and only then do we operate on it.
---
Signed-off-by: Liuwenyi
Cc: Chris Mason
Cc: Yan Zheng
Cc: Josef Bacik
Cc: Jens Axboe
Cc: l
On Fri, Jan 22, 2010 at 07:00:29PM +0100, Mattias Säteri wrote:
> Hi,
>
> I ran into a bug when trying to replace a failed device in a RAID-1
> configuration. I did the following:
>
> 1. Create raid1 fs following the description on the btrfs wiki
> (http://btrfs.wiki.kernel.org/index.php/Using_Bt
On Tue, Jan 26, 2010 at 10:16:42PM +0100, briaeros007 wrote:
> Hello,
>
> I have a btrfs volume on two 1TB disks, and I've been trying to add a
> new disk (1.5 TB) to this volume. I've done btrfsctl -a /dev/sdb /mnt/btrfs
> without trouble, and btrfs-show sees the third disk.
> But I've got a segfault wh
On Tue, Jan 26, 2010 at 01:20:30PM +0100, Mr. Tux wrote:
> On Monday 25 January 2010 23:08:39 Josef Bacik wrote:
>
> Hi Josef
>
> >
> > Hmm well I'm trying to reproduce that here and it's working fine, course I
> > can't pull a cable on my disk since I only have one disk to test with.
> > Would
If you have a disk failure in RAID1 and then add a new disk to the array, and
then try to remove the missing volume, it will fail. The reason is that the
sanity check only looks at the total number of rw devices, which is just 2
because we have 2 good disks and 1 bad one. Instead check the total numbe
Hit this problem while testing RAID1 failure stuff. open_bdev_exclusive returns
ERR_PTR(), not NULL, so check the return value properly. This is important: if
you accidentally specify a device that doesn't exist when trying to add a new
device to an array, you will panic the box dereferencing bdev
If a RAID setup has chunks that span multiple disks, and one of those disks has
failed, btrfs_chunk_readonly will return 1 since one of the disks in that
chunk's stripes is dead and therefore not writeable. So instead, if we are in
degraded mode, return 0 so we can go ahead and allocate stuff. Wit
I'm using btrfs for a distribution specialized to be used with USB pen drives.
btrfs is the first file system (besides nilfs) that is fast enough to be used
with media (flash memory) that has a severe restriction on the number of writes
per second (to be more precise: the number of page deletes p
On Tue, Dec 15, 2009 at 02:54:17PM +0800, Miao Xie wrote:
> This patch removes tree_search() in extent_map.c because it is not called by
> anything.
Thank you, I've queued this one up.
-chris
On Tue, Dec 15, 2009 at 02:54:15PM +0800, Miao Xie wrote:
> It is unnecessary to get the prev node by getting the next node first, because
> we can get the prev node directly by rb_prev(). And it is also unnecessary to
> use a while loop to get the prev node.
>
> This patch cleans up those unneces
Hello,
I have a btrfs volume on two 1TB disks, and I've been trying to add a
new disk (1.5 TB) to this volume. I've done btrfsctl -a /dev/sdb /mnt/btrfs
without trouble, and btrfs-show sees the third disk.
But I've got a segfault when I try to balance the data.
And when I try an fsck, it does absolutely not
On Tue, Jan 26, 2010 at 03:15:08PM +0800, Zhu Yanhai wrote:
>Hi Chris,
>According to my understanding, COW in Btrfs serves 1) snapshot
>capabilities and 2) keeping checksums consistent with FS data,
>that's why nodatacow implies nodatasum. And COW will still happen for
>snapshot
This patch reverts commit
6c090a11e1c403b727a6a8eff0b97d5fb9e95cb5
since it introduces this problem where we can run orphan cleanup on a volume
that can have orphan entries re-added. Instead of my original fix, Yan Zheng
pointed out that we can just revert my original fix and then run the orpha
On Monday 25 January 2010 23:08:39 Josef Bacik wrote:
Hi Josef
>
> Hmm well I'm trying to reproduce that here and it's working fine, course I
> can't pull a cable on my disk since I only have one disk to test with.
> Would you mind sending me your dmesg after you try
>
> mount -o degraded /dev/s