On Tue, 14 Nov 2017 10:14:55 +0300
Marat Khalili wrote:
> Don't keep snapshots under rsync target, place them under ../snapshots
> (if snapper supports this):
> Or, specify them in --exclude and avoid using --delete-excluded.
Both are good suggestions; in my case each system does have its own
On Mon, 13 Nov 2017 22:39:44 -0500
Dave wrote:
> I have my live system on one block device and a backup snapshot of it
> on another block device. I am keeping them in sync with hourly rsync
> transfers.
>
> Here's how this system works in a little more detail:
>
> 1. I establish the baseline by
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Martin Steigerwald
> Sent: Tuesday, 14 November 2017 6:35 PM
> To: dste...@suse.cz; linux-btrfs@vger.kernel.org
> Subject: Re: Read before you deploy btrfs + zstd
>
> H
Hello,
After a controller firmware bug/failure I have a broken btrfs.
# parent transid verify failed on 181846016 wanted 143404 found 143399
running repair, fsck or zero-log always results in the same failure message:
extent-tree.c:2725: alloc_reserved_tree_block: BUG_ON `ret` triggered,
value -
Sorry, I was thinking that I could test that and send you some feedback,
but for now I have no time.
I will check it later and try to add memory reuse.
So just ignore the patches for now.
Thanks
2017-10-10 20:36 GMT+03:00 David Sterba:
> On Tue, Oct 03, 2017 at 06:06:04PM +0300, Timofey Titovets wrote:
On Mon, Nov 13, 2017 at 10:25:41AM +0800, Anand Jain wrote:
> Make sure missing device is included in the alloc list when it is
> scanned on a mounted FS.
>
> This test case needs btrfs kernel patch which is in the ML
> [PATCH] btrfs: handle dynamically reappearing missing device
> Without the k
On 2017-11-14 02:34, Martin Steigerwald wrote:
Hello David.
David Sterba - 13.11.17, 23:50:
while 4.14 is still fresh, let me address some concerns I've seen on linux
forums already.
The newly added ZSTD support is a feature that has broader impact than
just the runtime compression. The btrfs-
On Tue, Nov 14, 2017 at 10:36:22AM +0200, Klaus Agnoletti wrote:
> I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the
^
> 2TB disks started giving me I/O errors in dmesg like this:
>
> [388659.188988] Add. Sense: Unrecovered read error -
On Tue, 14 Nov 2017 10:36:22 +0200
Klaus Agnoletti wrote:
> Obviously, I want /dev/sdd emptied and deleted from the raid.
* Unmount the RAID0 FS
* copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
(noting how much of it is actually unreadable -- chances are it's mostl
On 14 November 2017 at 09:36, Klaus Agnoletti wrote:
>
> How do you guys think I should go about this?
I'd clone the disk with GNU ddrescue.
https://www.gnu.org/software/ddrescue/
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger
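The cloning suggestion above can be sketched as a two-pass GNU ddrescue recipe (hedged: the device name, target paths, and retry count are illustrative assumptions, not from the thread):

```shell
# Image the failing 2TB member onto the 6TB drive before any repair attempt.
# Pass 1: fast copy of everything readable, skipping error areas (-n).
ddrescue -f -n /dev/sdd /mnt/6tb/sdd.img /mnt/6tb/sdd.map
# Pass 2: return to the bad areas and retry each sector up to 3 times.
ddrescue -f -r3 /dev/sdd /mnt/6tb/sdd.img /mnt/6tb/sdd.map
```

The map file records which regions were unrecoverable, so both passes are resumable and it is easy to see how much of the disk was actually unreadable.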
On 2017-11-14 07:48, Roman Mamedov wrote:
On Tue, 14 Nov 2017 10:36:22 +0200
Klaus Agnoletti wrote:
Obviously, I want /dev/sdd emptied and deleted from the raid.
* Unmount the RAID0 FS
* copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
(noting how much of it i
On 2017-11-14 03:36, Klaus Agnoletti wrote:
Hi list
I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the
2TB disks started giving me I/O errors in dmesg like this:
[388659.173819] ata5.00: exception Emask 0x0 SAct 0x7fff SErr 0x0 action 0x0
[388659.175589] ata5.00: irq_stat
Hi Roman
I almost understand :-) - however, I need a bit more information:
How do I copy the image file to the 6TB without screwing the existing
btrfs up when the fs is not mounted? Should I remove it from the raid
again?
Also, as you might have noticed, I have a bit of an issue with the
entire
Hi Austin
Good points. Thanks a lot.
/klaus
On Tue, Nov 14, 2017 at 2:14 PM, Austin S. Hemmelgarn wrote:
> On 2017-11-14 03:36, Klaus Agnoletti wrote:
>>
>> Hi list
>>
>> I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the
>> 2TB disks started giving me I/O errors in dmesg lik
On Tue, 14 Nov 2017 17:48:56 +0500, Roman Mamedov wrote:
> [1] Note that "ddrescue" and "dd_rescue" are two different programs
> for the same purpose, one may work better than the other. I don't
> remember which. :)
One is a Perl implementation, and that is the one that works worse. ;-)
--
Regards,
K
On Tue, 14 Nov 2017 15:09:52 +0100
Klaus Agnoletti wrote:
> Hi Roman
>
> I almost understand :-) - however, I need a bit more information:
>
> How do I copy the image file to the 6TB without screwing the existing
> btrfs up when the fs is not mounted? Should I remove it from the raid
> again?
On Tue, Nov 14, 2017 at 07:39:11AM +0800, Qu Wenruo wrote:
> > - extend mount options to specify zlib compression level, -o compress=zlib:9
>
> However, the support for it has a big problem: it will cause wild memory
> access for the "-o compress" mount option.
>
> Kernel ASAN can detect it easily and
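For reference, the option syntax under discussion looks like this (a sketch; the device and mount point are made up, and at the time of this thread the level suffix was a freshly proposed extension, not an established interface):

```shell
# Existing form: zlib compression at the default level
mount -o compress=zlib /dev/sdb1 /mnt
# Proposed extension discussed here: explicit zlib level 9
mount -o compress=zlib:9 /dev/sdb1 /mnt
```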
Hi,
a pre-release has been tagged.
Changes:
* build: libzstd now required by default
* check: more lowmem mode repair enhancements
* subvol set-default: also accept path
* prop set: compression accepts no/none, same as ""
* filesystem usage: enable for filesystem on top of a seed device
Hi Roman,
If you look at the output of the 'show' command, the failing disk is sort of
out of the fs, so maybe removing the 6TB disk again will redistribute the
data already on the 6TB disk (which isn't more than 300-something gigs) to
the two well-functioning disks.
Still, as putting the dd-image of the 2TB disk on the
14.11.2017 12:56, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> after a controller firmware bug / failure i've a broken btrfs.
>
> # parent transid verify failed on 181846016 wanted 143404 found 143399
>
> running repair, fsck or zero-log always results in the same failure message:
> extent-t
On Tue, Nov 14, 2017 at 08:34:37AM +0100, Martin Steigerwald wrote:
> Hello David.
>
> David Sterba - 13.11.17, 23:50:
> > while 4.14 is still fresh, let me address some concerns I've seen on linux
> > forums already.
> >
> > The newly added ZSTD support is a feature that has broader impact than
On Mon, Nov 13, 2017 at 11:50:46PM +0100, David Sterba wrote:
> Up to now, there are no bootloaders supporting ZSTD.
I've tried to implement support in GRUB; it is still incomplete and hacky
but most of the code is there. The ZSTD implementation is copied from the
kernel. The allocators need to be prop
David Sterba - 14.11.17, 19:49:
> On Tue, Nov 14, 2017 at 08:34:37AM +0100, Martin Steigerwald wrote:
> > Hello David.
> >
> > David Sterba - 13.11.17, 23:50:
> > > while 4.14 is still fresh, let me address some concerns I've seen on
> > > linux
> > > forums already.
> > >
> > > The newly added Z
On 14.11.2017 at 18:45, Andrei Borzenkov wrote:
> 14.11.2017 12:56, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> after a controller firmware bug / failure i've a broken btrfs.
>>
>> # parent transid verify failed on 181846016 wanted 143404 found 143399
>>
>> running repair, fsck or zero-lo
On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov wrote:
>
> On Mon, 13 Nov 2017 22:39:44 -0500
> Dave wrote:
>
> > I have my live system on one block device and a backup snapshot of it
> > on another block device. I am keeping them in sync with hourly rsync
> > transfers.
> >
> > Here's how this sy
From: Josef Bacik
These are counters that constantly go up in order to do bandwidth calculations.
It isn't important what the units are in, as long as they are consistent between
the two of them, so convert them to count bytes written/dirtied, and allow the
metadata accounting stuff to change the
From: Josef Bacik
The flexible proportions were all page based, but now that we are doing
metadata writeout that can be smaller or larger than page size we need
to account for this in bytes instead of number of pages.
Signed-off-by: Josef Bacik
---
mm/backing-dev.c | 2 +-
mm/page-writebac
From: Josef Bacik
The only reason we pass in the mapping is to get the inode in order to see if
writeback cgroups is enabled, and even then it only checks the bdi and a super
block flag. balance_dirty_pages() doesn't even use the mapping. Since
balance_dirty_pages*() works on a bdi level, just
From: Josef Bacik
This helper allows us to add an arbitrary amount to the fprop
structures.
Signed-off-by: Josef Bacik
---
include/linux/flex_proportions.h | 11 +--
lib/flex_proportions.c | 9 +
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/include
From: Josef Bacik
We use this in btrfs for metadata writeback.
Acked-by: Matthew Wilcox
Signed-off-by: Josef Bacik
---
lib/radix-tree.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 8b1feca1230a..0c1cde9fcb69 100644
--- a/lib/radix-tree.c
+++ b/
From: Josef Bacik
Now that we have metadata counters in the VM, we need to provide a way to kick
writeback on dirty metadata. Introduce super_operations->write_metadata. This
allows file systems to deal with writing back any dirty metadata we need based
on the writeback needs of the system. Si
From: Josef Bacik
Btrfs has no bounds except memory on the amount of dirty memory that we have in
use for metadata. Historically we have used a special inode so we could take
advantage of the balance_dirty_pages throttling that comes with using pagecache.
However as we'd like to support differen
From: Josef Bacik
The flexible proportion stuff has been used to track how many pages we
are writing out over a period of time, so it counts everything in single
increments. If we want to use another base value we need to be able
to adjust the batch size to fit the units we'll be using for th
From: Josef Bacik
In order to more efficiently support sub-page blocksizes we need to stop
allocating pages from pagecache for our metadata. Instead switch to using the
account_metadata* counters for making sure we are keeping the system aware of
how much dirty metadata we have, and use the ->fr
From: Josef Bacik
Now that the only things keeping ebs alive are io_pages and the refcount,
we need to hold the eb ref for the entire end io call so it doesn't get
removed out from underneath us. Also, the hooks make no sense for us now,
so rework this to be cleaner.
Signed-off-by: Josef Baci
Hi all
I've been following this project on and off for quite a few years, and I wonder
if anyone has looked into tiered storage on it. By tiered storage, I mean hot
data lying on fast storage and cold data on slow storage. I'm not talking about
caching (where you just keep a copy of the hot d
On Tue, Nov 14, 2017 at 5:38 AM, Adam Borowski wrote:
> On Tue, Nov 14, 2017 at 10:36:22AM +0200, Klaus Agnoletti wrote:
>> I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the
> ^
>> 2TB disks started giving me I/O errors in dmesg like thi
On Tue, Nov 14, 2017 at 5:48 AM, Roman Mamedov wrote:
> On Tue, 14 Nov 2017 10:36:22 +0200
> Klaus Agnoletti wrote:
>
>> Obviously, I want /dev/sdd emptied and deleted from the raid.
>
> * Unmount the RAID0 FS
>
> * copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
> (n
On Tue, Nov 14, 2017 at 1:36 AM, Klaus Agnoletti wrote:
> Btrfs v3.17
Unrelated to the problem but this is pretty old.
> Linux box 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19)
Also pretty old kernel.
> x86_64 GNU/Linux
> klaus@box:~$ sudo btrfs --version
> Btrfs v3.17
> klaus@
Make sure missing device is included in the alloc list when it is
scanned on a mounted FS.
This test case needs btrfs kernel patch which is in the ML
[PATCH] btrfs: handle dynamically reappearing missing device
Without the kernel patch, the test will run, but reports as
failed, as the device sca
On 11/14/2017 08:12 PM, Eryu Guan wrote:
On Mon, Nov 13, 2017 at 10:25:41AM +0800, Anand Jain wrote:
Make sure missing device is included in the alloc list when it is
scanned on a mounted FS.
This test case needs btrfs kernel patch which is in the ML
[PATCH] btrfs: handle dynamically reapp
On Wed, Nov 15, 2017 at 11:05:15AM +0800, Anand Jain wrote:
> Make sure missing device is included in the alloc list when it is
> scanned on a mounted FS.
>
> This test case needs btrfs kernel patch which is in the ML
> [PATCH] btrfs: handle dynamically reappearing missing device
> Without the k
As a regular BTRFS user I can tell you that there is no such thing as
hot data tracking yet. Some people seem to use bcache together with
btrfs and come asking for help on the mailing list.
Raid5/6 have received a few fixes recently, and it *may* soon be worth
trying out raid5/6 for data, but
Hi Anand,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on btrfs/next]
[also build test ERROR on v4.14 next-20171114]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits