On Wed, Sep 13, 2017 at 08:21:01AM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 17:13, Adam Borowski wrote:
> > On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-09-12 16:00, Adam Borowski wrote:
> > > > Noted. Both Marat's and my use cases, though, involve VMs that are off most
> > > > of the time, and at least for me, turned on only to test something.
On 09/13/2017 04:15 PM, Marat Khalili wrote:
> On 13/09/17 16:23, Chris Murphy wrote:
>> Right, known problem. To use o_direct implies also using nodatacow (or
>> at least nodatasum), e.g. xattr +C is set, done by qemu-img -o
>> nocow=on
>> https://www.spinics.net/lists/linux-btrfs/msg68244.html
>
On 2017-09-13 10:47, Martin Raiber wrote:
Hi,
On 12.09.2017 23:13 Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time, and at least for me, turned on only to test something.
Hi,
On 12.09.2017 23:13 Adam Borowski wrote:
> On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
>> On 2017-09-12 16:00, Adam Borowski wrote:
>>> Noted. Both Marat's and my use cases, though, involve VMs that are off most
>>> of the time, and at least for me, turned on only to test something.
On 13/09/17 16:23, Chris Murphy wrote:
Right, known problem. To use o_direct implies also using nodatacow (or
at least nodatasum), e.g. xattr +C is set, done by qemu-img -o
nocow=on
https://www.spinics.net/lists/linux-btrfs/msg68244.html
Can you please elaborate? I don't have exactly the same problem.
On Tue, Sep 12, 2017 at 10:02 AM, Marat Khalili wrote:
> (3) it is possible that it uses O_DIRECT or something, and btrfs raid1 does
> not fully protect this kind of access.
Right, known problem. To use o_direct implies also using nodatacow (or
at least nodatasum), e.g. xattr +C is set, done by qemu-img -o nocow=on
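For reference, the nodatacow setup described here can be applied either per-directory (new files inherit the flag) or per-image via qemu-img. A minimal sketch with example names; the chattr call is hedged because +C is btrfs-specific, and the qemu-img form is shown commented since it needs qemu installed:

```shell
# Sketch only: directory and image names are examples. On btrfs, files
# created inside a +C directory inherit No_COW; the flag only takes
# effect while the file is still empty.
mkdir -p vm-images
chattr +C vm-images 2>/dev/null || echo "note: chattr +C needs btrfs"

# Equivalent per-image form (commented out; requires qemu-img):
#   qemu-img create -f raw -o nocow=on vm-images/guest.raw 20G
```

Note the trade-off from the linked thread: with nodatacow/nodatasum there are no checksums, so btrfs raid1 cannot tell which mirror is the good one.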
On 2017-09-12 20:52, Timofey Titovets wrote:
No, no, no, no...
No new ioctl, no change in fallocate.
First: the VM can punch holes itself; if you use qemu, qemu knows how to do it.
A Windows guest also knows how to do it.
A different hypervisor? Search for it, or file an issue asking for support;
Linux, Windows, and Mac OS guests all support it.
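A sketch of what "qemu knows how to do it" looks like in practice. The flags below are a command template rather than something to run as-is; the drive id and file names are examples:

```shell
# Expose a discard-capable disk to the guest via virtio-scsi:
#
#   qemu-system-x86_64 \
#     -drive file=guest.raw,if=none,id=vd0,format=raw,discard=unmap \
#     -device virtio-scsi-pci -device scsi-hd,drive=vd0
#
# Then, inside the guest, TRIM the free space; each trimmed range becomes
# a hole in the host-side raw file:
#
#   fstrim -v /        # or mount the guest filesystems with -o discard
```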
On 2017-09-12 17:13, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time, and at least for me, turned on only to test something.
2017-09-13 0:13 GMT+03:00 Adam Borowski :
> On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
>> On 2017-09-12 16:00, Adam Borowski wrote:
>> > Noted. Both Marat's and my use cases, though, involve VMs that are off
>> > most
>> > of the time, and at least for me, turned on only to test something.
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 16:00, Adam Borowski wrote:
> > Noted. Both Marat's and my use cases, though, involve VMs that are off most
> > of the time, and at least for me, turned on only to test something.
> > Touching mtime makes rsync r
On 2017-09-12 16:00, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 03:11:52PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d, but that for some reason touches mtime which makes
rsync go again. This can be handled manually but is still not nice.
On Tue, Sep 12, 2017 at 03:11:52PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 14:43, Adam Borowski wrote:
> > On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-09-12 13:21, Adam Borowski wrote:
> > > > There's fallocate -d, but that for some reason touches mtime which
> > > > makes rsync go again.
On 2017-09-12 14:47, Christoph Hellwig wrote:
On Tue, Sep 12, 2017 at 08:43:59PM +0200, Adam Borowski wrote:
For now, though, I wonder -- should we send fine folks at util-linux a patch
to make fallocate -d restore mtime, either always or on an option?
Don't do that. Please just add a new ioctl or fallocate command
that punches a hole if
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d, but that for some reason touches mtime which makes
rsync go again. This can be handled manually but is still not nice.
On Tue, Sep 12, 2017 at 08:43:59PM +0200, Adam Borowski wrote:
> For now, though, I wonder -- should we send fine folks at util-linux a patch
> to make fallocate -d restore mtime, either always or on an option?
Don't do that. Please just add a new ioctl or fallocate command
that punches a hole if
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
> On 2017-09-12 13:21, Adam Borowski wrote:
> > There's fallocate -d, but that for some reason touches mtime which makes
> > rsync go again. This can be handled manually but is still not nice.
> It touches mtime because it upda
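Until util-linux grows such an option, the mtime problem can be worked around by saving and restoring the timestamp around the hole punch. A minimal sketch; `disk.raw` is a stand-in name, a small demo file is fabricated so the snippet is self-contained, and the fallocate call is hedged since hole punching isn't supported on every filesystem:

```shell
# Fabricate a small "image" so the sketch is self-contained.
dd if=/dev/zero of=disk.raw bs=1024 count=64 status=none
touch -d '2020-01-01 00:00:00' disk.raw   # pretend it's an old, unchanged file

ref=$(mktemp)                  # scratch file that remembers the timestamp
touch -r disk.raw "$ref"       # copy the image's mtime onto it
fallocate -d disk.raw 2>/dev/null || true  # dig holes (bumps mtime when supported)
touch -r "$ref" disk.raw       # put the original mtime back
rm -f "$ref"

stat -c 'mtime is still %y' disk.raw
```

rsync then sees the same size and mtime and skips the file, which is what the quoted messages are after.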
On 2017-09-12 13:21, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 02:26:39PM +0300, Marat Khalili wrote:
On 12/09/17 14:12, Adam Borowski wrote:
Why would you need support in the hypervisor if cp --reflink=always is
enough?
+1 :)
But I've already found one problem: I use rsync snapshots for backups, and
although rsync does have --sparse argument, apparently it conflicts with
--inplace.
On Tue, Sep 12, 2017 at 02:26:39PM +0300, Marat Khalili wrote:
> On 12/09/17 14:12, Adam Borowski wrote:
> > Why would you need support in the hypervisor if cp --reflink=always is
> > enough?
> +1 :)
>
> But I've already found one problem: I use rsync snapshots for backups, and
> although rsync does have --sparse argument, apparently it conflicts
> with --inplace.
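One way to see the effect being discussed: compare a file's apparent size with its allocated blocks. A copy that isn't sparse-aware (which is what `--inplace` without `--sparse` amounts to) writes the zeros out for real, and that shows up immediately in the allocation numbers. A small sketch; the file name is an example:

```shell
# A 1 MiB file with no data written: full apparent size, (almost) nothing allocated.
truncate -s 1M sparse.img
stat -c 'apparent=%s bytes  allocated=%b blocks x %B bytes' sparse.img

# Overwriting it with literal zeros -- what a non-sparse-aware copy does --
# allocates the whole megabyte:
dd if=/dev/zero of=sparse.img bs=1M count=1 conv=notrunc status=none
stat -c 'apparent=%s bytes  allocated=%b blocks x %B bytes' sparse.img
```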
On 12/09/17 14:12, Adam Borowski wrote:
Why would you need support in the hypervisor if cp --reflink=always is
enough?
+1 :)
But I've already found one problem: I use rsync snapshots for backups,
and although rsync does have --sparse argument, apparently it conflicts
with --inplace. You cannot use both at the same time.
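The reflink approach from earlier in the thread sidesteps the --sparse/--inplace conflict entirely: snapshot the image with a reflink copy and let the filesystem share the extents. Sketch with example names; `--reflink=auto` is used here so the snippet degrades to a plain copy on filesystems without reflink support, whereas `--reflink=always` (as in the thread) fails loudly instead:

```shell
dd if=/dev/zero of=base.raw bs=1024 count=16 status=none   # stand-in image
cp --reflink=auto base.raw base.raw.snap                   # instant on btrfs/XFS
cmp base.raw base.raw.snap && echo "copies are identical"
```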
2017-09-12 14:12 GMT+03:00 Adam Borowski :
> On Tue, Sep 12, 2017 at 02:01:53PM +0300, Timofey Titovets wrote:
>> > On 12/09/17 13:32, Adam Borowski wrote:
>> >> Just use raw -- btrfs already has every feature that qcow2 has, and
>> >> does it better. This doesn't mean btrfs is the best choice for hosting
>> >> VM files, just that raw-over-btrfs
On Tue, Sep 12, 2017 at 02:01:53PM +0300, Timofey Titovets wrote:
> > On 12/09/17 13:32, Adam Borowski wrote:
> >> Just use raw -- btrfs already has every feature that qcow2 has, and
> >> does it better. This doesn't mean btrfs is the best choice for hosting
> >> VM files, just that raw-over-btrfs
On Tue, 12 Sep 2017 12:32:14 +0200
Adam Borowski wrote:
> discard in the guest (not supported over ide and virtio, supported over scsi
> and virtio-scsi)
IDE does support discard in QEMU, I use that all the time.
It got broken briefly in QEMU 2.1 [1], but then fixed again.
[1] https://bugs.deb
2017-09-12 13:39 GMT+03:00 Marat Khalili :
> On 12/09/17 13:01, Duncan wrote:
>>
>> AFAIK that's wrong -- the only time the app should see the error on btrfs
>> raid1 is if the second copy is also bad
>
> So thought I, but...
>
>> IIRC from what I've read on-list, qcow2 isn't the best alternative for
>> hosting VMs on top of btrfs.
On 12/09/17 13:01, Duncan wrote:
AFAIK that's wrong -- the only time the app should see the error on btrfs
raid1 is if the second copy is also bad
So thought I, but...
IIRC from what I've read on-list, qcow2 isn't the best alternative for
hosting VMs on top of btrfs.
Yeah, I've seen discussions.
On Tue, Sep 12, 2017 at 10:01:07AM +0000, Duncan wrote:
> BTW, I am most definitely /not/ a VM expert, and won't pretend to
> understand the details or be able to explain further, but IIRC from what
> I've read on-list, qcow2 isn't the best alternative for hosting VMs on
> top of btrfs. Somethi
Marat Khalili posted on Tue, 12 Sep 2017 11:42:52 +0300 as excerpted:
> On 12/09/17 11:25, Timofey Titovets wrote:
>> AFAIK, if btrfs gets a read error on RAID1, the application will also
>> see that error, and if the application can't handle it, you've got a
>> problem.
>>
>> So btrfs RAID1 only protects the data, not the application (qemu in your case).
2017-09-12 12:29 GMT+03:00 Marat Khalili :
> On 12/09/17 12:21, Timofey Titovets wrote:
>>
>> Can't reproduce that on latest kernel: 4.13.1
>
> Great! Thank you very much for the test. Do you know if it's fixed in 4.10?
> (or which version fixed it?)
> --
>
> With Best Regards,
> Marat Khalili
On 12/09/17 12:21, Timofey Titovets wrote:
Can't reproduce that on latest kernel: 4.13.1
Great! Thank you very much for the test. Do you know if it's fixed in
4.10? (or which version fixed it?)
--
With Best Regards,
Marat Khalili
2017-09-12 11:42 GMT+03:00 Marat Khalili :
> On 12/09/17 11:25, Timofey Titovets wrote:
>>
>> AFAIK, if btrfs gets a read error on RAID1, the application will also
>> see that error, and if the application can't handle it, you've got a
>> problem.
>>
>> So btrfs RAID1 only protects the data, not the application (qemu in your case).
On 12/09/17 11:25, Timofey Titovets wrote:
AFAIK, if btrfs gets a read error on RAID1, the application will also see
that error, and if the application can't handle it, you've got a problem.
So btrfs RAID1 only protects the data, not the application (qemu in your case).
That's news to me! Why doesn't it try the other copy?
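For completeness: when one RAID1 copy is bad, the usual way to repair it (rather than relying on the reading application) is a scrub, which rewrites corrupted blocks from the good mirror. Command template with an example mount point, not meant to run as-is:

```shell
# Rewrite corrupted copies from the good mirror (run as root on the mount):
#
#   btrfs scrub start -Bd /mnt     # -B: wait for completion, -d: per-device stats
#   btrfs device stats /mnt        # per-device error counters
```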
2017-09-12 11:02 GMT+03:00 Marat Khalili :
> Thanks to the help from the list I've successfully replaced part of btrfs
> raid1 filesystem. However, while I waited for best opinions on the course of
> actions, the root filesystem of one of the qemu-kvm VMs went read-only, and
> this root was of course based in a qcow2 file on the problematic btrfs
> filesystem.
Thanks to the help from the list I've successfully replaced part of
btrfs raid1 filesystem. However, while I waited for best opinions on the
course of actions, the root filesystem of one of the qemu-kvm VMs went
read-only, and this root was of course based in a qcow2 file on the
problematic btrfs filesystem.