If you don't need image portability, use an LVM logical volume as the
backing for the VM. That LV gets partitioned as if it were a disk, and
you can use Btrfs for root, home, data, or whatever.
If you need image portability, e.g. qcow2, then I'd put it on ext4 or
XFS, and you can use Btrfs within the VM.
Since we do a per-chunk missing-device count check at read_one_chunk()
time, the global missing-device count check is no longer needed.
Just remove it.
Now btrfs can handle the following case:
# mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc
The data chunk will be located on sdb, so we should be
Btrfs supports different raid profiles for metadata/data/system chunks,
and since different profiles tolerate different numbers of missing
devices, it's better to check whether a degraded mount is possible on a
per-chunk basis.
So this patch adds a check in read_one_chunk() against the chunk's
profile, rather than checking it
Austin S Hemmelgarn posted on Tue, 15 Sep 2015 14:46:28 -0400 as
excerpted:
> On 2015-09-15 14:42, Tyler Williams wrote:
>> So I only had qgroups enabled because at some point it seemed like it
>> gave me the size of individual snapshots. Would it be likely that if I
>> just removed qgroups from
Gert Menke posted on Tue, 15 Sep 2015 23:34:04 +0200 as excerpted:
> I'm not 100% sure if this is the right place to ask[.]
It is. =:^)
> I want to build a virtualization server to replace my current home
> server. I'm thinking about a Debian system with libvirt/KVM. The system
> will have one
Stéphane Lesimple posted on Tue, 15 Sep 2015 23:47:01 +0200 as excerpted:
> Le 2015-09-15 16:56, Josef Bacik a écrit :
>> On 09/15/2015 10:47 AM, Stéphane Lesimple wrote:
I've been experiencing repeated "kernel BUG" occurrences in the past
few days trying to balance a raid5 filesystem
Le 2015-09-15 16:56, Josef Bacik a écrit :
On 09/15/2015 10:47 AM, Stéphane Lesimple wrote:
I've been experiencing repeated "kernel BUG" occurrences in the past
few days trying to balance a raid5 filesystem after adding a new
drive.
It occurs on both 4.2.0 and 4.1.7, using 4.2 userspace
On Mon, Sep 14, 2015 at 09:29:06AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> If a file has a range pointing to a compressed extent, followed by
> another range that points to the same compressed extent and a read
> operation attempts to read both ranges
I've received several kernel warnings over the last few weeks. I
checked on the #BTRFS irc channel and it was suggested that I post the
relevant information here to see if this was something that I should
be worried about.
[root@tawilliams ~]# uname -a
Linux tawilliams.williamstlr.net
On 2015-09-15 14:42, Tyler Williams wrote:
So I only had qgroups enabled because at some point it seemed like it
gave me the size of individual snapshots. Is it likely that removing
qgroups from that volume would prevent that message in the future?
Maybe, I'm not entirely
On 09/15/15 17:50, Holger Hoffstätte wrote:
> This V2 does indeed seem to fix the issues I reported with snapshot
> deletion & concurrent sync. I've now created/filled/deleted countless
> snapshots while issuing sync(s) in parallel, and the problem that I
> saw fairly frequently with V1 no longer
On 2015-09-15 14:53, Tyler Williams wrote:
I'll give that a shot. This will be a lame question, but what address
do I need to reply to for these messages to make it to the mailing
list? It looks like I'm replying to you instead of to the mailing list
itself. Thanks
It's not a lame question at
On 09/15/2015 03:08 PM, Holger Hoffstätte wrote:
On 09/15/15 17:50, Holger Hoffstätte wrote:
This V2 does indeed seem to fix the issues I reported with snapshot
deletion & concurrent sync. I've now created/filled/deleted countless
snapshots while issuing sync(s) in parallel, and the problem
On 2015-09-15 14:13, Tyler Williams wrote:
I've received several kernel warnings over the last few weeks. I
checked on the #BTRFS irc channel and it was suggested that I post the
relevant information here to see if this was something that I should
be worried about.
[root@tawilliams ~]# uname
A fsid can be mounted multiple times, with different subvolids,
and we don't have to scan a mount point if we already have
it in the scanned list.
This nicely avoids the following warning with multiple
subvolume mounts on older kernels like 2.6.32, where the
BTRFS_IOC_GET_FSLABEL ioctl does not exist.
Old kernels like 2.6.32 do not provide the BTRFS_IOC_GET_FSLABEL ioctl,
so we need fail-safe logic for btrfs-progs running on those kernels.
In this patch, when get_label_mounted() fails on an old kernel,
it falls back to the old method and uses get_label_unmounted(),
where it
To fix the following bug:
# ./convert-tests.sh
[TEST] ext2 4k nodesize, btrfs defaults
failed: mount /root/btrfsprogs/tests/test.img /root/btrfsprogs/tests/mnt
# tail convert-tests-results.txt
...
### mount /root/btrfsprogs/tests/test.img
/root/btrfsprogs/tests/mnt
mount:
On 09/14/2015 11:32 PM, Darrick J. Wong wrote:
> On Fri, Sep 11, 2015 at 04:30:21PM -0400, Anna Schumaker wrote:
>> The NFS server will need some kind of fallback for filesystems that don't
>> have any kind of copy acceleration, and it should be generally useful to
>> have an in-kernel copy to
On Tue, Sep 15, 2015 at 11:58:04AM -0400, Anna Schumaker wrote:
> On 09/14/2015 11:32 PM, Darrick J. Wong wrote:
> > On Fri, Sep 11, 2015 at 04:30:21PM -0400, Anna Schumaker wrote:
> >> The NFS server will need some kind of fallback for filesystems that don't
> >> have any kind of copy
On Tue, Sep 15, 2015 at 04:22:08PM +0200, Juergen Sauer wrote:
> Hi!
>
> Due to a hibernation event my BTRFS Raid56 failed and is not mountable
> anymore. :(
>
> For debugging I moved the devices to a test machine and booted this
> system from an Arch Linux ISO, which I created for this purpose.
On 09/15/2015 11:50 AM, Holger Hoffstätte wrote:
On 09/15/15 16:07, Josef Bacik wrote:
When dropping a snapshot we need to account for the qgroup changes. If we drop
the snapshot all in one go then the backref code will fail to find blocks from
the snapshot we dropped, since it won't be able to
Hi all,
What is the intended destination of a symlink inside a subvolume after
a snapshot?
When I take a snapshot of a subvolume that contains a symlink, the
symlink points outside the snapshot and into the original subvolume.
Is this the intended behaviour? Or should the symlinks be patched up
Hello Filipe,
your mail comes just in time as I was typing a mail about this patch:
On 09/15/15 04:22, fdman...@kernel.org wrote:
> Btrfs: remove unnecessary locking of cleaner_mutex to avoid deadlock
While it might seem to fix this particular problem, it seems there is either a
new one
From: Filipe Manana
Hi Chris,
Please consider the following fixes for the 4.3 kernel release candidates.
One of them addresses a deadlock introduced in 4.3, another is for a false
enospc condition (which I introduced in a 4.2 commit) that can happen either
on empty
On 2015-09-15 12:38, Darrick J. Wong wrote:
On Tue, Sep 15, 2015 at 11:58:04AM -0400, Anna Schumaker wrote:
On 09/14/2015 11:32 PM, Darrick J. Wong wrote:
On Fri, Sep 11, 2015 at 04:30:21PM -0400, Anna Schumaker wrote:
The NFS server will need some kind of fallback for filesystems that don't
On 09/15/15 16:07, Josef Bacik wrote:
> When dropping a snapshot we need to account for the qgroup changes. If we
> drop the snapshot all in one go then the backref code will fail to find blocks
> from the snapshot we dropped, since it won't be able to find the root in the
> fs root cache.
Hi list,
I've caught an I/O error caused by a csum mismatch.
Can I force the fs to read the data?
It's really not cool if the only way is to use btrfs restore.
# Info: a VM machine; after a power failure I got 2 blocks with errors, and one
MySQL table that can't be read by MySQL (and I also can't just dump it)
--
Have
Fantastic. Thanks a ton
On Tue, Sep 15, 2015 at 1:03 PM, Austin S Hemmelgarn
wrote:
> On 2015-09-15 14:53, Tyler Williams wrote:
>>
>> I'll give that a shot. This will be a lame question, but what address
>> do I need to reply to for these messages to make it to the
On 09/15/15 21:15, Josef Bacik wrote:
> On 09/15/2015 03:08 PM, Holger Hoffstätte wrote:
>> On 09/15/15 17:50, Holger Hoffstätte wrote:
>>> This V2 does indeed seem to fix the issues I reported with snapshot
>>> deletion & concurrent sync. I've now created/filled/deleted countless
>>> snapshots
Hi everybody,
first off, I'm not 100% sure if this is the right place to ask, so if
it's not, I apologize and I'd appreciate a pointer in the right direction.
I want to build a virtualization server to replace my current home
server. I'm thinking about a Debian system with libvirt/KVM. The
Thanks for the tip, Hugo *_*
2015-09-15 23:38 GMT+03:00 Hugo Mills :
> On Tue, Sep 15, 2015 at 10:59:48PM +0300, Timofey Titovets wrote:
>> Hi list,
>> I've caught an I/O error caused by a csum mismatch.
>> Can I force the fs to read the data?
>> It's really not cool if the only way is to use
On Tue, Sep 15, 2015 at 10:59:48PM +0300, Timofey Titovets wrote:
> Hi list,
> I've caught an I/O error caused by a csum mismatch.
> Can I force the fs to read the data?
> It's really not cool if the only way is to use btrfs restore.
>
> # Info: a VM machine; after a power failure I got 2 blocks with errors, and one
>
Hi!
Due to a hibernation event my BTRFS Raid56 failed and is not mountable
anymore. :(
For debugging I moved the devices to a test machine and booted this
system from an Arch Linux ISO, which I created for this purpose.
The Problem is:
[ 1086.714109] BTRFS (device sdd1): parent transid verify
On 09/15/2015 09:43 AM, Holger Hoffstätte wrote:
On 09/15/15 14:58, Filipe Manana wrote:
On Tue, Sep 15, 2015 at 12:49 PM, Holger Hoffstätte
wrote:
Hello Filipe,
your mail comes just in time as I was typing a mail about this patch:
On 09/15/15 04:22,
btrfs_raid_array[] holds the attributes of all raid types.
Using btrfs_raid_array[].devs_min is the best way to handle the request
in btrfs_reduce_alloc_profile(), instead of using a complex
condition for each raid type.
Signed-off-by: Zhao Lei
---
fs/btrfs/extent-tree.c | 48
This array is used to record the attributes of each raid type.
Make it public, and many functions will benefit from this array.
For example, in num_tolerated_disk_barrier_failures() we can
avoid complex conditions in that function, and get raid attributes
simply by accessing the array.
It can also
This array is used to record the attributes of each raid type.
Make it public, and many functions will benefit from this array.
For example, in num_tolerated_disk_barrier_failures() we can
avoid complex conditions in that function, and get raid attributes
simply by accessing the array.
It can also
btrfs_raid_array[] is used to define all raid attributes. Use it
to get tolerated_failures in btrfs_get_num_tolerated_disk_barrier_failures(),
instead of the complex condition in that function.
This makes the code simpler and automatically supports other possible
raid types in the future.
Signed-off-by: Zhao Lei
On 09/15/15 14:58, Filipe Manana wrote:
> On Tue, Sep 15, 2015 at 12:49 PM, Holger Hoffstätte
> wrote:
>> Hello Filipe,
>>
>> your mail comes just in time as I was typing a mail about this patch:
>>
>> On 09/15/15 04:22, fdman...@kernel.org wrote:
>>>
On 15 September 2015 at 13:38, Kai Krakow wrote:
> Marc O'Morain schrieb:
>
>> Hi all,
>>
>> What is the intended destination of a symlink inside a subvolume after
>> a snapshot?
>>
>> When I take a snapshot of a subvolume that contains a symlink, the
>>
Marc O'Morain schrieb:
> Hi all,
>
> What is the intended destination of a symlink inside a subvolume after
> a snapshot?
>
> When I take a snapshot of a subvolume that contains a symlink, the
> symlink points outside the snapshot and into the original subvolume.
>
> Is
On Tue, Sep 15, 2015 at 12:49 PM, Holger Hoffstätte
wrote:
> Hello Filipe,
>
> your mail comes just in time as I was typing a mail about this patch:
>
> On 09/15/15 04:22, fdman...@kernel.org wrote:
>> Btrfs: remove unnecessary locking of cleaner_mutex to
I've been experiencing repeated "kernel BUG" occurrences in the past
few days trying to balance a raid5 filesystem after adding a new drive.
It occurs on both 4.2.0 and 4.1.7, using 4.2 userspace tools.
I've run a scrub on this filesystem after the crash happened twice, and
it found no
On 09/15/2015 10:47 AM, Stéphane Lesimple wrote:
I've been experiencing repeated "kernel BUG" occurrences in the past
few days trying to balance a raid5 filesystem after adding a new drive.
It occurs on both 4.2.0 and 4.1.7, using 4.2 userspace tools.
I've run a scrub on this filesystem after