IWT-VM ~ # umount /media/backup/
umount: /media/backup/: not mounted
IWT-VM ~ # btrfs fi show -d
Label: 'Fast'  uuid: c41dc6db-6f00-4d60-a2f7-acbceb25e4e7
        Total devices 3 FS bytes used 332.62GiB
        devid    1 size 471.93GiB used 423.02GiB path /dev/sdh1
        devid    2 size 412.00Gi
Can you run
  btrfs fi show -d
when the device is unmounted?
Some function called by __open_ctree_fd() is failing; we need to figure out
which.
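A rough sketch of one way to narrow that down, assuming btrfs-progs is built
locally with debug symbols (the strace filter and the breakpoint below are
only illustrative):

  # see which syscall on the device fails during the unmounted scan
  strace -f -e trace=open,openat,read,pread64,ioctl btrfs fi show -d 2>&1 | tail -n 40

  # or step through the open_ctree path directly
  gdb --args ./btrfs fi show -d
  (gdb) break __open_ctree_fd
  (gdb) run
  (gdb) next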
On 11/04/2014 11:27 AM, Paul Jones wrote:
Thanks for the help.
I tried btrfs-progs 3.16, same results.
IWT-VM ~ # blkid
/dev/sdb1: LABEL="Fast" UUID="c4
On Mon, Nov 03, 2014 at 10:11:18AM -0700, Chris Murphy wrote:
>
> On Nov 2, 2014, at 8:43 PM, Zygo Blaxell wrote:
> > btrfs seems to assume the data is correct on both disks (the generation
> > numbers and checksums are OK) but gets confused by equally plausible but
> > different metadata on each
Thanks for the help.
I tried btrfs-progs 3.16, same results.
IWT-VM ~ # blkid
/dev/sdb1: LABEL="Fast" UUID="c41dc6db-6f00-4d60-a2f7-acbceb25e4e7"
UUID_SUB="0d26e72e-3848-455f-a250-56b442aa3bec" TYPE="btrfs"
PARTUUID="000f11d6-01"
/dev/sdb2: LABEL="Root" UUID="61f6ce80-6d05-414f-9f0f-3d540fa82f2e
On 11/04/2014 11:12 AM, Anand Jain wrote:
Very strange. I have no clue yet. Also a bit concerned that this
might turn out to be a real issue. Hope you can help to narrow it down.
Can you send the `blkid` output from your system?
And
can you go back to 3.16 and check if you have the same issue?
Very strange. I have no clue yet. Also a bit concerned that this
might turn out to be a real issue. Hope you can help to narrow it down.
Can you send the `blkid` output from your system?
And
can you go back to 3.16 and check if you have the same issue?
thanks, anand
IWT-VM ~ # uname -a
Linux IWT-VM 3
Add externs and don't use a reserved keyword.
Signed-off-by: David Sterba
---
 rbtree-utils.h     |  8
 rbtree.h           | 10 +-
 rbtree_augmented.h |  8
 3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/rbtree-utils.h b/rbtree-utils.h
index 7298c72eba3d.
On Nov 3, 2014, at 12:48 PM, Florian Lindner wrote:
>
> Ok, problem is that I need to organise another hard disk for that. ;-)
>
> I tried restore for a test run, it gave a lot of messages about wrong
> compression length. I found some discussion about that, but I don't know if
> its indicate
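For reference, a minimal sketch of that kind of restore run, assuming a spare
disk is mounted at /mnt/rescue (device names and the target path are made up):

  mount /dev/sdc1 /mnt/rescue              # spare disk with enough free space
  btrfs restore -v /dev/sdb1 /mnt/rescue/  # -v: verbose; add -i to ignore errors and keep going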
Chris Murphy wrote:
>
> On Nov 2, 2014, at 8:18 AM, Florian Lindner wrote:
>
>> Hello,
>>
>> all of a sudden I can't mount my btrfs home partition anymore. System is
>> Arch with kernel 3.17.2, but I use snapper which does snapshots
>> regularly and I had 3.17.1 before, which afaik had some
On Nov 2, 2014, at 8:18 AM, Florian Lindner wrote:
> Hello,
>
> all of a sudden I can't mount my btrfs home partition anymore. System is
> Arch with kernel 3.17.2, but I use snapper which does snapshots regularly
> and I had 3.17.1 before, which afaik had some problems with snapshots.
It w
On Nov 2, 2014, at 8:43 PM, Zygo Blaxell wrote:
> On Sun, Nov 02, 2014 at 02:57:22PM -0700, Chris Murphy wrote:
>>
>> For example if I have a two device Btrfs raid1 for both data and
>> metadata, and one device is removed and I mount -o degraded,rw one
>> of them and make some small changes, un
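A rough sketch of how to reproduce that scenario with loop devices, so
nothing real is at risk (file names, sizes and the mount point are arbitrary):

  truncate -s 2G /tmp/a.img /tmp/b.img
  losetup /dev/loop0 /tmp/a.img; losetup /dev/loop1 /tmp/b.img
  mkfs.btrfs -d raid1 -m raid1 /dev/loop0 /dev/loop1
  mount /dev/loop0 /mnt && touch /mnt/base && umount /mnt

  # mount each half alone, degraded,rw, and make a different small change on each
  losetup -d /dev/loop1
  mount -o degraded,rw /dev/loop0 /mnt && touch /mnt/only-on-a && umount /mnt
  losetup /dev/loop1 /tmp/b.img; losetup -d /dev/loop0
  mount -o degraded,rw /dev/loop1 /mnt && touch /mnt/only-on-b && umount /mnt

  # reattach both: generations and checksums look fine on each device,
  # but the metadata has diverged, which is the confusion described above
  losetup /dev/loop0 /tmp/a.img
  mount /dev/loop0 /mnt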
Robert White wrote:
> On 11/02/2014 07:18 AM, Florian Lindner wrote:
>> # btrfsck /dev/sdb1
>> # btrfsck --init-extent-tree /dev/sdb1
>> # btrfsck --init-csum-tree /dev/sdb1
>
> Notably missing from all these commands is "--repair"...
>
> I don't know that's your problem for sure, but it's where
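If --repair does get run, a minimal sketch of the usual order of operations
(the image path is made up); since --repair rewrites metadata, taking a
metadata-only dump first is cheap insurance:

  btrfs-image -c9 /dev/sdb1 /root/sdb1-meta.img   # compressed metadata dump, for post-mortem
  btrfsck /dev/sdb1                               # read-only pass first, note the errors
  btrfsck --repair /dev/sdb1                      # only then let it write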
We try to allocate an extent state structure before acquiring the extent
state tree's spinlock, as we might need a new one later; this avoids doing
an atomic allocation later while holding the tree's spinlock. However, we
returned -ENOMEM if that initial non-atomic allocation failed, which is
Due to ignoring errors returned by clear_extent_bits (at the moment only
-ENOMEM is possible), we can end up freeing an extent that is actually in
use (i.e. return the extent to the free space cache).
The sequence of steps that lead to this:
1) Cleaner thread starts execution and calls btrfs_dele
Thanks for the nice "replicate at home yourself" example. On my machine it
behaves precisely like in yours:
root@blackdawn:/home/luvar# sync; sysctl vm.drop_caches=1
vm.drop_caches = 1
root@blackdawn:/home/luvar# time cat
/home/luvar/programs/adt-bundle-linux/sdk/system-images/android-L/defa
Our gluster boxes get several thousand statfs() calls per second, which begins
to suck hardcore with all of the lock contention on the chunk mutex and dev list
mutex. We don't really need to hold these things, if we have transient
weirdness with statfs() because of the chunk allocator we don't car
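A crude sketch of that kind of load, for anyone who wants to reproduce the
contention locally (the mount point is made up); on a btrfs mount each
stat -f call ends up in the filesystem's statfs handler:

  for i in $(seq 8); do
      ( while :; do stat -f /mnt/gluster-brick > /dev/null; done ) &
  done
  # watch the lock contention with e.g. `perf top`; stop with: kill $(jobs -p)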
I noticed my data partition had some inode inconsistencies, so I ran
btrfs check --repair.
However, it did nothing and the errors are still there:
[root@sandiskFit juha]# btrfs check --repair /dev/mapper/HDD--total-data
enabling repair mode
Fixed 0 roots.
Checking filesystem on /dev/mapper/HDD--tot
Hi,
As the topic says, btrfs check - Couldn't open file system. Check runs fine on
all btrfs volumes except one - "Backup". There is nothing special about it; it
uses the same options as all the other ones (raid1, compress).
As you can see in the output below, I double-check that the filesystem is unmo
Robert White posted on Sun, 02 Nov 2014 14:31:36 -0800 as excerpted:
> On 11/02/2014 07:18 AM, Florian Lindner wrote:
>> # btrfsck /dev/sdb1
>> # btrfsck --init-extent-tree /dev/sdb1
>> # btrfsck --init-csum-tree /dev/sdb1
>
> Notably missing from all these commands is "--repair"...
>
> I don't
Robert White posted on Sun, 02 Nov 2014 03:08:46 -0800 as excerpted:
> Confusing bit, for example, from wiki
>
> [QUOTE]
> If you are getting out of space errors due to metadata being full, try
>
> btrfs balance start -v -dusage=0 /mnt/btrfs
> [/QUOTE]
>
> Combined with "Balances only block group
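For context, a small sketch of how that wiki advice is usually applied (the
mount point is made up); -dusage=0 only removes data block groups that are
completely empty, so it is quick and gives the freed space back to the
allocator:

  btrfs fi df /mnt/btrfs                       # compare Data "total" vs "used"
  btrfs balance start -v -dusage=0 /mnt/btrfs  # drop only 0%-used data block groups
  btrfs fi df /mnt/btrfs                       # Data total should shrink toward used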