On Mon, May 01, 2017 at 10:56:06PM -0600, Chris Murphy wrote:
> > Right, of course, I was being way over optimistic here. I kind of forgot
> > that metadata wasn't COW, my bad.
>
> Well it is COW. But there's more to the file system than fs trees, and
> just because an fs tree gets snapshot
Marc MERLIN posted on Mon, 01 May 2017 20:23:46 -0700 as excerpted:
> Also, how is --mode=lowmem being useful?
FWIW, I just watched your talk that's linked from the wiki, and wondered
what you were doing these days as I hadn't seen any posts from you here
for a while.
Well, that you're asking
On Mon, May 1, 2017 at 9:23 PM, Marc MERLIN wrote:
> Hi Chris,
>
> Thanks for the reply, much appreciated.
>
> On Mon, May 01, 2017 at 07:50:22PM -0600, Chris Murphy wrote:
>> What about btrfs check (no repair), without and then also with --mode=lowmem?
>>
>> In theory I like the
Hi Chris,
Thanks for the reply, much appreciated.
On Mon, May 01, 2017 at 07:50:22PM -0600, Chris Murphy wrote:
> What about btrfs check (no repair), without and then also with --mode=lowmem?
>
> In theory I like the idea of a 24-hour rollback, but in normal usage
> Btrfs will eventually free up
On 04/30/2017 01:47 PM, Andrei Borzenkov wrote:
I'm chasing an issue with btrfs mounts under systemd
(https://github.com/systemd/systemd/issues/5781) - to summarize, systemd
waits for the final device that makes btrfs complete and mounts it using
this device name.
But in
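The wait described above is implemented through udev's btrfs builtin. A paraphrased sketch of the stock 64-btrfs.rules shipped with systemd (treat the installed file as authoritative; comments are mine):

```
SUBSYSTEM!="block", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
# ask the kernel whether all member devices of this filesystem have appeared
IMPORT{builtin}="btrfs ready $devnode"
# until they have, keep systemd from considering the device unit ready
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
LABEL="btrfs_end"
```

This is why systemd only mounts once the final member device shows up, and why it uses that last device's name for the mount.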
fsck/003-shift-offsets makes valgrind complain about memory leaks.
==5910==
==5910== HEAP SUMMARY:
==5910== in use at exit: 1,112 bytes in 11 blocks
==5910== total heap usage: 161 allocs, 150 frees, 164,800 bytes allocated
==5910==
==5910== 216 (72 direct, 144 indirect) bytes in 1 blocks
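A leak summary like the one above typically comes from wrapping the test's btrfs invocation in valgrind. A hedged sketch of such a run (the binary path and the test image name are illustrative, not taken from the report; guarded so it is a no-op where the tools are absent):

```shell
# Only run when valgrind and a locally built btrfs binary are present.
if command -v valgrind >/dev/null 2>&1 && [ -x ./btrfs ]; then
    # --leak-check=full expands the HEAP SUMMARY into per-allocation
    # records like the "216 (72 direct, 144 indirect) bytes" entry above.
    valgrind --leak-check=full --show-leak-kinds=all \
        ./btrfs check tests/fsck-tests/003-shift-offsets/test.img || true
fi
demo_status=done
```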
On 05/01/2017 02:52 PM, Filipe Manana wrote:
On Mon, May 1, 2017 at 4:17 PM, J. Hart wrote:
Just use "btrfs-image -c 9 /dev/whatever image_file"; it will create a
compressed image where the data is replaced with zeroes (the data is not
needed to debug this problem anyway). Then
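As a concrete sketch of that workflow, using a scratch loopback image in place of /dev/whatever so nothing real is touched (assumes btrfs-progs is installed; all paths are illustrative):

```shell
# Scratch filesystem image standing in for the real device.
truncate -s 256M /tmp/btrfs-src.img
if command -v btrfs-image >/dev/null 2>&1 && command -v mkfs.btrfs >/dev/null 2>&1; then
    mkfs.btrfs -q -f /tmp/btrfs-src.img
    # Create the compressed metadata dump; data blocks are zeroed out.
    btrfs-image -c 9 /tmp/btrfs-src.img /tmp/btrfs-meta.image
    # Restore it into a fresh image for offline inspection (-r).
    btrfs-image -r /tmp/btrfs-meta.image /tmp/btrfs-restored.img
fi
```

On a real filesystem the source should be unmounted when the image is taken.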
On 05/02/2017 08:20 AM, Qu Wenruo wrote:
At 05/01/2017 06:21 PM, Dmitrii Tcvetkov wrote:
+bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info)
+{
+	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map *em;
+	u64 next_start = 0;
+	bool ret =
What about btrfs check (no repair), without and then also with --mode=lowmem?
In theory I like the idea of a 24-hour rollback, but in normal usage
Btrfs will eventually free up space containing stale and no longer
necessary metadata. Like the chunk tree, it's always changing, so you
get to a
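For anyone wanting to try both check modes safely, a minimal sketch on a throwaway loopback image (assumes btrfs-progs is installed; the image path is illustrative and no root is needed for an image file):

```shell
# Build a scratch filesystem image to check.
truncate -s 256M /tmp/btrfs-demo.img
if command -v mkfs.btrfs >/dev/null 2>&1; then
    mkfs.btrfs -q -f /tmp/btrfs-demo.img
    # Read-only checks: default (original) mode, then the low-memory mode,
    # which trades speed for a much smaller memory footprint.
    btrfs check /tmp/btrfs-demo.img
    btrfs check --mode=lowmem /tmp/btrfs-demo.img
fi
```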
At 05/01/2017 06:21 PM, Dmitrii Tcvetkov wrote:
+bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info)
+{
+	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map *em;
+	u64 next_start = 0;
+	bool ret = true;
+
+	read_lock(&map_tree->map_tree.lock);
+
At 04/28/2017 04:47 PM, Christophe de Dinechin wrote:
On 28 Apr 2017, at 02:45, Qu Wenruo wrote:
At 04/26/2017 01:50 AM, Christophe de Dinechin wrote:
Hi,
I've been trying to run btrfs as my primary work filesystem for about 3-4
months now on Fedora 25 systems.
It did it again:
shrapnel share # touch test.txt
touch: cannot touch 'test.txt': No space left on device
shrapnel share # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        35G   19G   15G  56% /
devtmpfs         10M     0   10M   0% /dev
tmpfs           3.2G  1.2M  3.2G   1%
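When df shows free space but writes still fail with ENOSPC, the usual cause on btrfs is that space is allocated in chunks and the metadata chunks are full even though data space remains. A hedged sketch of the usual diagnosis (the mountpoint is a placeholder; the commands are standard btrfs-progs, guarded so this is harmless on other systems):

```shell
mnt=/   # placeholder for the affected btrfs mountpoint
if command -v btrfs >/dev/null 2>&1; then
    btrfs filesystem df "$mnt" 2>/dev/null || true     # per-type data/metadata/system usage
    btrfs filesystem usage "$mnt" 2>/dev/null || true  # allocated vs unallocated overview
fi
# If metadata is exhausted while data chunks sit mostly empty, a filtered
# balance (as root) can return nearly-empty data chunks to the unallocated pool:
#   btrfs balance start -dusage=5 "$mnt"
```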
On Mon, May 1, 2017 at 4:17 PM, J. Hart wrote:
> I've got more information on the following error :
>
> At subvol /mnt/ArchPri/backup/primary/thinkcentre/root/backup.0.2
> At snapshot backup.0.2017.04.21.03.11.40
> ERROR: rename o3528-7220-0 -> usr failed: Directory not empty
Hi Christian, thanks for fixing it quickly :) I don't have permission
to change the bug state to fixed; if you or Nazar have those rights, I
think this bug can be closed.
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org
On Mon, May 1, 2017 at 6:39 PM, Christian Brauner
So, I forgot to mention that it's my main media and backup server that got
corrupted. Yes, I do actually have a backup of a backup server, but it's
going to take days to recover due to the amount of data to copy back, not
counting lots of manual typing due to the number of subvolumes, btrfs
I have a filesystem that sadly got corrupted by a SAS card I just installed
yesterday.
In a case like this, I don't think there is a way to roll back all
writes across all subvolumes in the last 24h, correct?
Is the best thing to go into each subvolume, delete the recent snapshots, and
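A hedged sketch of the per-subvolume rollback being asked about (the subvolume and snapshot names are invented for illustration; on a real pool both commands need root, and the guard keeps the sketch inert elsewhere):

```shell
vol=/mnt/pool/home                        # damaged live subvolume (illustrative)
snap=/mnt/pool/snapshots/home.2017-04-30  # last snapshot before the bad writes
if command -v btrfs >/dev/null 2>&1 && [ -d "$snap" ]; then
    btrfs subvolume delete "$vol"             # drop the corrupted live copy
    btrfs subvolume snapshot "$snap" "$vol"   # writable copy of the snapshot replaces it
fi
```

Repeated per subvolume, this rewinds each one to its newest pre-corruption snapshot, which is the closest btrfs gets to a whole-filesystem rollback.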
I've got more information on the following error :
At subvol /mnt/ArchPri/backup/primary/thinkcentre/root/backup.0.2
At snapshot backup.0.2017.04.21.03.11.40
ERROR: rename o3528-7220-0 -> usr failed: Directory not empty
I've filed a bug report with additional details at:
Hi,
The original bug-reporter verified that my patch fixes the bug. See
https://bugzilla.kernel.org/show_bug.cgi?id=195597
Christian
On Sat, Apr 29, 2017 at 11:54:05PM +0200, Christian Brauner wrote:
> Returning -ENODATA is only considered invalid on the first run of the loop.
>
>
> >> +bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info)
> >> +{
> >> +	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
> >> +struct extent_map *em;
> >> +u64 next_start = 0;
> >> +bool ret = true;
> >> +
> >> +	read_lock(&map_tree->map_tree.lock);
> >> +em