-Original Message-
> From: David C. Partridge [mailto:david.partri...@perdrix.co.uk]
> Sent: 29 April 2018 10:55
> To: 'Qu Wenruo'; 'linux-btrfs@vger.kernel.org'
> Subject: RE: Problems with btrfs
>
> Yes I did use seek=
>
> I attach the new dump-tree - it seems very short
>
> Dave
>
> -Original Message-
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 29 April 2018 10:36
> To: David C. Partridge; linux-btrfs@vger.kernel.org
> Subject: Re: Problems with btrfs
>
>
>
> On 2018-04-29 17:20, David C. Partridge wrote:
ect.
# btrfs inspect dump-tree -t extent
Thanks,
Qu
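For reference, the full invocation takes the device as an argument (the long subcommand name is inspect-internal), and the output is easiest to attach when redirected to a file; the device name below is only a placeholder:
# btrfs inspect-internal dump-tree -t extent /dev/sdX > extent-tree.txt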
>
> Dave
> -Original Message-
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 29 April 2018 09:36
> To: David C. Partridge
> Subject: Re: Problems with btrfs
>
> Here is the patched binary tree block.
# btrfs inspect dump-tree -t extent
Thanks,
Qu
>
> Dave
> -Original Message-
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 29 April 2018 09:36
> To: David C. Partridge
> Subject: Re: Problems with btrfs
>
> Here is the patched binary tree block.
>
>
Here is the result of btrfs check after applying the patch
Dave
-Original Message-
From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
Sent: 29 April 2018 09:36
To: David C. Partridge
Subject: Re: Problems with btrfs
Here is the patched binary tree block.
You could apply them
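Writing such a block back is typically done with dd at the same byte offset the block was dumped from; the file name, device and offset below are placeholders, not values from this thread:
# bs=1 makes seek a byte offset; conv=notrunc avoids truncating the device
dd if=patched.bin of=/dev/sdX bs=1 seek=BYTE_OFFSET conv=notrunc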
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 29 April 2018 02:35
> To: David C. Partridge; linux-btrfs@vger.kernel.org
> Subject: Re: Problems with btrfs
>
>
>
> On 2018-04-29 00:02, David C. Partridge wrote:
>> Here are the dumps you requested.
>
>
Anyway, feel free to use btrfs-restore
to recover your data.
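If it comes to that, a typical read-only salvage run looks roughly like this; the device and the target directory are placeholders:
# copy whatever is still readable without writing to the damaged filesystem
btrfs restore -v /dev/sdX /mnt/recovery/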
Thanks,
Qu
>
> -Original Message-
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 28 April 2018 15:23
> To: David C. Partridge; linux-btrfs@vger.kernel.org
> Subject: Re: Problems with btrfs
>
>
Here are the dumps you requested.
-Original Message-
From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
Sent: 28 April 2018 15:23
To: David C. Partridge; linux-btrfs@vger.kernel.org
Subject: Re: Problems with btrfs
On 2018-04-28 22:06, David C. Partridge wrote:
> Here's the log
of=copy2.dump bs=1 count=16k skip=23456415744
And attach copy1.img and copy2.img.
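The elided part of that command is just the source device; the general shape of such a dump, with /dev/sdX as a placeholder, is:
dd if=/dev/sdX of=copy2.dump bs=1 count=16k skip=23456415744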
Thanks,
Qu
>
> -Original Message-
> From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
> Sent: 28 April 2018 14:54
> To: David C. Partridge
> Subject: Re: Problems with btrfs
>
>
Here's the log you asked for ...
David
-Original Message-
From: Qu Wenruo [mailto:quwenruo.bt...@gmx.com]
Sent: 28 April 2018 14:54
To: David C. Partridge
Subject: Re: Problems with btrfs
On 2018-04-28 21:38, David C. Partridge wrote:
> Oh! doing a private build from source a
On Fri, Apr 27, 2018 at 6:20 PM, Qu Wenruo wrote:
>
>
> On 2018-04-28 02:38, David C. Partridge wrote:
>> I'm running Ubuntu 16.04. I rebooted my server today as it wasn't
>> responding.
>>
>> When I rebooted the root FS was read only.
>>
>> I booted a live Ubuntu CD and checked the drive with the results shown in
>> attachment btrfs-check.log.
On 2018-04-28 02:38, David C. Partridge wrote:
> I'm running Ubuntu 16.04. I rebooted my server today as it wasn't
> responding.
>
> When I rebooted the root FS was read only.
>
> I booted a live Ubuntu CD and checked the drive with the results shown in
> attachment btrfs-check.log.
>
> The error was still there after completing the btrfs check --repair :(
I'm running Ubuntu 16.04. I rebooted my server today as it wasn't
responding.
When I rebooted the root FS was read only.
I booted a live Ubuntu CD and checked the drive with the results shown in
attachment btrfs-check.log.
The error was still there after completing the btrfs check --repair :(
On Wed, Sep 13, 2017 at 08:21:01AM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 17:13, Adam Borowski wrote:
> > On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-09-12 16:00, Adam Borowski wrote:
> > > > Noted. Both Marat's and my use cases, though, involve VMs that are off
> > > > most of the time, and at least for me, turned on only to test something.
On 09/13/2017 04:15 PM, Marat Khalili wrote:
> On 13/09/17 16:23, Chris Murphy wrote:
>> Right, known problem. To use o_direct implies also using nodatacow (or
>> at least nodatasum), e.g. xattr +C is set, done by qemu-img -o
>> nocow=on
>> https://www.spinics.net/lists/linux-btrfs/msg68244.html
>
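In practice that means either marking the image directory NOCOW before the images are created, or letting qemu-img set the flag on a new raw image; the paths below are only examples:
chattr +C /var/lib/libvirt/images              # new files created here inherit NOCOW (+C only affects new/empty files)
qemu-img create -f raw -o nocow=on /var/lib/libvirt/images/vm.img 40G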
On 2017-09-13 10:47, Martin Raiber wrote:
Hi,
On 12.09.2017 23:13 Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time, and
Hi,
On 12.09.2017 23:13 Adam Borowski wrote:
> On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
>> On 2017-09-12 16:00, Adam Borowski wrote:
>>> Noted. Both Marat's and my use cases, though, involve VMs that are off most
>>> of the time, and at least for me, turned on only
On 13/09/17 16:23, Chris Murphy wrote:
Right, known problem. To use o_direct implies also using nodatacow (or
at least nodatasum), e.g. xattr +C is set, done by qemu-img -o
nocow=on
https://www.spinics.net/lists/linux-btrfs/msg68244.html
Can you please elaborate? I don't have exactly the same
On Tue, Sep 12, 2017 at 10:02 AM, Marat Khalili wrote:
> (3) it is possible that it uses O_DIRECT or something, and btrfs raid1 does
> not fully protect this kind of access.
Right, known problem. To use o_direct implies also using nodatacow (or
at least nodatasum), e.g. xattr +C is
On 2017-09-12 20:52, Timofey Titovets wrote:
No, no, no, no...
No new ioctl, no change in fallocate.
First: a VM can punch holes itself; if you use qemu -> qemu knows how to do it.
A Windows guest also knows how to do it.
Different hypervisor? -> google -> file an issue asking for support; all of
Linux/Windows/Mac OS
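Concretely, that means exposing discard to the guest and trimming from inside it, so trimmed ranges become holes in the host file; the image path and device model below are illustrative, not from this thread:
qemu-system-x86_64 \
  -drive file=vm.raw,format=raw,if=none,id=disk0,discard=unmap \
  -device virtio-scsi-pci \
  -device scsi-hd,drive=disk0
# then, inside a Linux guest:
fstrim -av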
On 2017-09-12 17:13, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time, and at least for me, turned on only to test
2017-09-13 0:13 GMT+03:00 Adam Borowski :
> On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
>> On 2017-09-12 16:00, Adam Borowski wrote:
>> > Noted. Both Marat's and my use cases, though, involve VMs that are off
>> > most
>> > of the time, and at least
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 16:00, Adam Borowski wrote:
> > Noted. Both Marat's and my use cases, though, involve VMs that are off most
> > of the time, and at least for me, turned on only to test something.
> > Touching mtime makes rsync
On 2017-09-12 16:00, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 03:11:52PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d, but
On Tue, Sep 12, 2017 at 03:11:52PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 14:43, Adam Borowski wrote:
> > On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-09-12 13:21, Adam Borowski wrote:
> > > > There's fallocate -d, but that for some reason touches
On 2017-09-12 14:47, Christoph Hellwig wrote:
On Tue, Sep 12, 2017 at 08:43:59PM +0200, Adam Borowski wrote:
For now, though, I wonder -- should we send fine folks at util-linux a patch
to make fallocate -d restore mtime, either always or on an option?
Don't do that. Please just add a new
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d, but that for some reason touches mtime which makes
rsync go again. This can be handled manually but is still not nice.
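The manual handling is just saving the timestamp before digging holes and putting it back afterwards; the file name is a placeholder:
touch -r vm.img /tmp/mtime.ref   # remember the current mtime
fallocate -d vm.img              # punch holes in already-zeroed ranges
touch -r /tmp/mtime.ref vm.img   # restore the original mtime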
On Tue, Sep 12, 2017 at 08:43:59PM +0200, Adam Borowski wrote:
> For now, though, I wonder -- should we send fine folks at util-linux a patch
> to make fallocate -d restore mtime, either always or on an option?
Don't do that. Please just add a new ioctl or fallocate command
that punches a hole
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 13:21, Adam Borowski wrote:
> > There's fallocate -d, but that for some reason touches mtime which makes
> > rsync go again. This can be handled manually but is still not nice.
> It touches mtime because it
On 2017-09-12 13:21, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 02:26:39PM +0300, Marat Khalili wrote:
On 12/09/17 14:12, Adam Borowski wrote:
Why would you need support in the hypervisor if cp --reflink=always is
enough?
+1 :)
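(For a powered-off image, a space-sharing copy is then a one-liner; the paths are examples:)
cp --reflink=always vm.raw /backup/vm.raw.$(date +%F)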
But I've already found one problem: I use rsync snapshots for
On Tue, Sep 12, 2017 at 02:26:39PM +0300, Marat Khalili wrote:
> On 12/09/17 14:12, Adam Borowski wrote:
> > Why would you need support in the hypervisor if cp --reflink=always is
> > enough?
> +1 :)
>
> But I've already found one problem: I use rsync snapshots for backups, and
> although rsync
On 12/09/17 14:12, Adam Borowski wrote:
Why would you need support in the hypervisor if cp --reflink=always is
enough?
+1 :)
But I've already found one problem: I use rsync snapshots for backups,
and although rsync does have --sparse argument, apparently it conflicts
with --inplace. You
2017-09-12 14:12 GMT+03:00 Adam Borowski :
> On Tue, Sep 12, 2017 at 02:01:53PM +0300, Timofey Titovets wrote:
>> > On 12/09/17 13:32, Adam Borowski wrote:
>> >> Just use raw -- btrfs already has every feature that qcow2 has, and
>> >> does it better. This doesn't mean btrfs
On Tue, Sep 12, 2017 at 02:01:53PM +0300, Timofey Titovets wrote:
> > On 12/09/17 13:32, Adam Borowski wrote:
> >> Just use raw -- btrfs already has every feature that qcow2 has, and
> >> does it better. This doesn't mean btrfs is the best choice for hosting
> >> VM files, just that
On Tue, 12 Sep 2017 12:32:14 +0200
Adam Borowski wrote:
> discard in the guest (not supported over ide and virtio, supported over scsi
> and virtio-scsi)
IDE does support discard in QEMU, I use that all the time.
It got broken briefly in QEMU 2.1 [1], but then fixed again.
2017-09-12 13:39 GMT+03:00 Marat Khalili :
> On 12/09/17 13:01, Duncan wrote:
>>
>> AFAIK that's wrong -- the only time the app should see the error on btrfs
>> raid1 is if the second copy is also bad
>
> So thought I, but...
>
>> IIRC from what I've read on-list, qcow2 isn't the best
On 12/09/17 13:01, Duncan wrote:
AFAIK that's wrong -- the only time the app should see the error on btrfs
raid1 is if the second copy is also bad
So thought I, but...
IIRC from what I've read on-list, qcow2 isn't the best alternative for hosting
VMs on
top of btrfs.
Yeah, I've seen
On Tue, Sep 12, 2017 at 10:01:07AM +, Duncan wrote:
> BTW, I am most definitely /not/ a VM expert, and won't pretend to
> understand the details or be able to explain further, but IIRC from what
> I've read on-list, qcow2 isn't the best alternative for hosting VMs on
> top of btrfs.
Marat Khalili posted on Tue, 12 Sep 2017 11:42:52 +0300 as excerpted:
> On 12/09/17 11:25, Timofey Titovets wrote:
>> AFAIK, if BTRFS gets a read error while reading in RAID1, the application will
>> also see that error, and if the application can't handle it -> you've got a
>> problem
2017-09-12 12:29 GMT+03:00 Marat Khalili :
> On 12/09/17 12:21, Timofey Titovets wrote:
>>
>> Can't reproduce that on latest kernel: 4.13.1
>
> Great! Thank you very much for the test. Do you know if it's fixed in 4.10?
> (or what particular version does?)
> --
>
> With Best Regards,
On 12/09/17 12:21, Timofey Titovets wrote:
Can't reproduce that on latest kernel: 4.13.1
Great! Thank you very much for the test. Do you know if it's fixed in
4.10? (or what particular version does?)
--
With Best Regards,
Marat Khalili
2017-09-12 11:42 GMT+03:00 Marat Khalili <m...@rqc.ru>:
> On 12/09/17 11:25, Timofey Titovets wrote:
>>
>> AFAIK, if BTRFS gets a read error while reading in RAID1, the application will
>> also see that error, and if the application can't handle it -> you've got a
>> problem
On 12/09/17 11:25, Timofey Titovets wrote:
AFAIK, if BTRFS gets a read error while reading in RAID1, the application will
also see that error, and if the application can't handle it -> you've got a
problem
So Btrfs RAID1 ONLY protects the data, not the application (qemu in your case).
That's news to me! Why does
AFAIK, if BTRFS gets a read error while reading in RAID1, the application will
also see that error, and if the application can't handle it -> you've got a
problem
So Btrfs RAID1 ONLY protects the data, not the application (qemu in your case).
--
Have a nice day,
Timofey.
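For the bad copy underneath, the usual follow-up is to check the per-device error counters and let scrub rewrite the damaged mirror from the good one; the mount point below is a placeholder:
btrfs device stats /srv/vm-store     # per-device read/write/corruption counters
btrfs scrub start -B /srv/vm-store   # verify checksums, repair from the intact raid1 copy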
Thanks to the help from the list I've successfully replaced part of a
btrfs raid1 filesystem. However, while I waited for the best opinions on the
course of action, the root filesystem of one of the qemu-kvm VMs went
read-only, and this root was of course based in a qcow2 file on the
problematic btrfs
That seems to do the trick, thanks
On 29.12.2016 at 17:53, Roman Mamedov wrote:
> On Thu, 29 Dec 2016 16:42:09 +0100
> Michał Zegan wrote:
>
>> I have odroid c2, processor architecture aarch64, linux kernel from
>> master as of today from
On Thu, 29 Dec 2016 16:42:09 +0100
Michał Zegan wrote:
> I have odroid c2, processor architecture aarch64, linux kernel from
> master as of today from http://github.com/torwalds/linux.git.
> It seems that the btrfs module cannot be loaded. The only thing that
>
Resending to btrfs list
--- Forwarded message ---
To: linux-fsde...@vger.kernel.org
From: Michał Zegan <webczat_...@poczta.onet.pl>
Subject: problems with btrfs filesystem loading
Message-ID: <05893a24-2bf7-d485-1f9c-b10650419...@poczta.onet.pl>
Date: Thu, 29 Dec 2016 16
Feb 14 18:30:21 specialbrew kernel: [27576201.178630] BTRFS: bdev /dev/sdh
errs: wr 128, rd 8, flush 2, corrupt 0, gen 0
Feb 14 18:30:21 specialbrew kernel: [27576201.309583] BTRFS: lost page write
due to I/O error on /dev/sdh
Feb 14 18:30:21 specialbrew kernel: [27576201.315761] BTRFS: bdev
one:
# mount -oremount,degraded /srv/tank
and tried again, but it produces the same response ("mount" now does
show "degraded" as one of the mount flags, however).
I have not yet tried completely unmounting it and mounting it again.
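If the goal is to swap out the dead disk, a clean unmount followed by a fresh degraded mount and btrfs replace is the usual route; the device names and devid below are placeholders:
umount /srv/tank
mount -o degraded /dev/sdb /srv/tank
btrfs replace start 3 /dev/sdj /srv/tank   # 3 = devid of the missing drive, /dev/sdj = new disk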
> it really doesn't make sense to me you'd want to increase risk of more
> Btrfs problems when such known things are now fixed.
hundreds of
bug fixes, many thousands of insertions and deletions in Btrfs since
then, so it really doesn't make sense to me you'd want to increase
risk of more Btrfs problems when such known things are now fixed.
Consider 4.1.15 if you want a stable long-term yet currently
supported kernel.
Hi,
One of my drives died earlier in a fairly emphatic way, in that not
only did it show IO errors and get removed as a device by the
kernel, but it was also making audible grinding/screeching noises
until I hot-unplugged it.
Feb 14 18:29:36 specialbrew kernel: [27576156.070961] ata6.15: SATA
Here is a simplified excerpt of my backup bash script:
CURRENT_TIME=$(date +%Y-%m-%d_%H:%M-%S)
# LAST_TIME variable contains the timestamp of the last backup in the same
format as $CURRENT_TIME
btrfs subvolume snapshot -r /mnt/root/@home /mnt/root/@home-backup-$CURRENT_TIME
sync
# Define
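The excerpt stops here; a script of this shape usually continues with an incremental send against the previous snapshot, roughly as below; the destination path and the use of btrfs send -p are assumptions, not part of the original script:
# send only the changes since the last backup to another btrfs filesystem
btrfs send -p /mnt/root/@home-backup-$LAST_TIME \
    /mnt/root/@home-backup-$CURRENT_TIME | btrfs receive /mnt/backup/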
On 15.10.2012 at 22:14, Alex Lyakas wrote:
Stefan,
the second issue you're seeing was discussed here:
http://www.spinics.net/lists/linux-btrfs/msg19672.html
You can apply the patch I sent there meanwhile, but as Miao pointed
out, I will need to make a better patch (hope will do it soon,
together with this one).
On Thu, 11 Oct 2012 21:54:48 +0200, Stefan Priebe wrote:
On 11.10.2012 at 21:43, David Sterba wrote:
On Thu, Oct 11, 2012 at 09:33:54PM +0200, Stefan Priebe wrote:
[server: /btrfs/target]# btrfs send -i /btrfs/src/\@snapshot/1
/btrfs/src/\@snapshot/2 | btrfs receive
On 15.10.2012 at 12:16, Miao Xie wrote:
On Thu, 11 Oct 2012 21:54:48 +0200, Stefan Priebe wrote:
On 11.10.2012 at 21:43, David Sterba wrote:
On Thu, Oct 11, 2012 at 09:33:54PM +0200, Stefan Priebe wrote:
[server: /btrfs/target]# btrfs send -i /btrfs/src/\@snapshot/1
/btrfs/src/\@snapshot/2
Hi Stefan,
Is /btrfs/target/\@snapshot/ a subvolume or a directory?
can you pls try the patch that I posted here:
http://www.spinics.net/lists/linux-btrfs/msg19583.html
I feel that you're hitting a similar issue here. Before you apply the
patch, please verify that you have /etc/mtab on your
On 15.10.2012 at 21:42, Alex Lyakas wrote:
Is /btrfs/target/\@snapshot/ a subvolume or a directory?
A simple directory.
can you pls try the patch that I posted here:
http://www.spinics.net/lists/linux-btrfs/msg19583.html
I feel that you're hitting a similar issue here. Before you apply the
Stefan,
the second issue you're seeing was discussed here:
http://www.spinics.net/lists/linux-btrfs/msg19672.html
You can apply the patch I sent there meanwhile, but as Miao pointed
out, I will need to make a better patch (hope will do it soon,
together with this one).
Thanks,
Alex.
On Mon,
Hello list,
I wanted to try out btrfs send and restore but I'm failing on a
simple step:
[server: /btrfs/target]# btrfs send /btrfs/src/\@snapshot/1 | btrfs
receive /btrfs/target/\@snapshot/
At subvol /btrfs/src/@snapshot/1
At subvol 1
[server: /btrfs/target]# ls -la
On Thu, Oct 11, 2012 at 09:33:54PM +0200, Stefan Priebe wrote:
[server: /btrfs/target]# btrfs send -i /btrfs/src/\@snapshot/1
/btrfs/src/\@snapshot/2 | btrfs receive /btrfs/target/\@snapshot/
At subvol /btrfs/src/@snapshot/2
At subvol 2
ERROR: failed to open
On 11.10.2012 at 21:43, David Sterba wrote:
On Thu, Oct 11, 2012 at 09:33:54PM +0200, Stefan Priebe wrote:
[server: /btrfs/target]# btrfs send -i /btrfs/src/\@snapshot/1
/btrfs/src/\@snapshot/2 | btrfs receive /btrfs/target/\@snapshot/
At subvol /btrfs/src/@snapshot/2
At subvol 2
ERROR: failed
I have three hard-drives attached to a media pc via USB (yes, this is
far from ideal and I'm procuring the hardware to resolve this). As USB
seems to be crap in general (or something is not working) the drives
get dropped from time to time. In past times this hasn't been an
issue. I was able to