In walk_down_tree(), we may call btrfs_lookup_extent_info() for the same
tree block many times, which is obviously unnecessary. Here we define a
simple struct to record whether we have already fetched a tree block's refs:
struct node_refs {
	u64 bytenr[BTRFS_MAX_LEVEL];
	u64 refs[BTRFS_MAX_LEVEL];
};
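A minimal sketch of how such a cache might be consulted during the walk; the
helper and the exact lookup call are illustrative assumptions, not the patch
itself:

/* Illustrative helper: return cached refs for the block at @level, or
 * look them up once and remember the result.  Assumes @nrefs was zeroed
 * before the walk; the btrfs_lookup_extent_info() call mirrors, from
 * memory, how btrfs-progs invokes it for metadata. */
static int get_tree_block_refs(struct btrfs_root *root,
			       struct node_refs *nrefs,
			       u64 bytenr, int level, u64 *refs)
{
	int ret;

	if (nrefs->bytenr[level] == bytenr) {
		*refs = nrefs->refs[level];	/* cache hit, skip the lookup */
		return 0;
	}
	ret = btrfs_lookup_extent_info(NULL, root, bytenr, level, 1,
				       refs, NULL);
	if (!ret) {
		nrefs->bytenr[level] = bytenr;	/* remember for repeat visits */
		nrefs->refs[level] = *refs;
	}
	return ret;
}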
On 08/16/2016 12:10 AM, Jeff Mahoney wrote:
The qgroup_flags field is overloaded such that it reflects both the on-disk
status of qgroups and the runtime state. The BTRFS_QGROUP_STATUS_FLAG_RESCAN
flag is used to indicate that a rescan operation is in progress, but if
the file system is unmounted while a rescan is running, the rescan
operation is paused.
On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comment is welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
Personally, I agree that we should warn when trying to do this,
On 08/16/2016 03:11 AM, Rakesh Sankeshi wrote:
yes, subvol level.
qgroupid         rfer         excl     max_rfer     max_excl  parent  child
--------         ----         ----     --------     --------  ------  -----
0/5          16.00KiB     16.00KiB         none         none  ---     ---
0/258       119.48GiB    119.48GiB    200.00GiB
On Mon, Aug 15, 2016 at 5:12 PM, Ronan Chagas wrote:
> Hi guys!
>
> It happened again. The computer was completely unusable. The only useful
> message I saw was this one:
>
> http://img.ctrlv.in/img/16/08/16/57b24b0bb2243.jpg
>
> Does it help?
>
> I decided to format and reinstall tomorrow. This i
On Mon, Aug 15, 2016 at 8:30 PM, Hugo Mills wrote:
> On Mon, Aug 15, 2016 at 10:32:25PM +0800, Anand Jain wrote:
>>
>>
>> On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
>> >On 2016-08-15 10:08, Anand Jain wrote:
>> >>
>> >>
>> IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
On 15/08/16 at 10:16, "Austin S. Hemmelgarn" wrote:
ASH> With respect to databases, you might consider backing them up separately
ASH> too. In many cases for something like an SQL database, it's a lot more
ASH> flexible to have a dump of the database as a backup than it is to have
ASH> the database files themselves.
On 03/08/16 22:55, Graham Cobb wrote:
> On 03/08/16 21:37, Adam Borowski wrote:
>> On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote:
>>> Are there any btrfs commands (or APIs) to allow a script to create a
>>> list of all the extents referred to within a particular (mounted)
>>> subvolume?
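As far as I know there is no single btrfs command for this, but the generic
FIEMAP ioctl exposes per-file extents; a rough sketch of the per-file half of
such a script (a real one would walk every file in the subvolume and
de-duplicate physical ranges):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Print the extents backing one file.  256 extents per call is an
 * arbitrary limit for this sketch; error handling is minimal. */
static int dump_extents(const char *path)
{
	struct fiemap *fm;
	unsigned int i;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	fm = calloc(1, sizeof(*fm) + 256 * sizeof(struct fiemap_extent));
	if (!fm) {
		close(fd);
		return -1;
	}
	fm->fm_length = ~0ULL;			/* map the whole file */
	fm->fm_extent_count = 256;
	fm->fm_flags = FIEMAP_FLAG_SYNC;	/* flush delalloc first */

	if (ioctl(fd, FS_IOC_FIEMAP, fm) == 0)
		for (i = 0; i < fm->fm_mapped_extents; i++)
			printf("%s: logical %llu physical %llu len %llu\n",
			       path,
			       (unsigned long long)fm->fm_extents[i].fe_logical,
			       (unsigned long long)fm->fm_extents[i].fe_physical,
			       (unsigned long long)fm->fm_extents[i].fe_length);
	free(fm);
	close(fd);
	return 0;
}

Note that FIEMAP won't tell you which extents are shared with other
subvolumes; comparing physical offsets across snapshots is the script's job.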
yes, subvol level.
qgroupid         rfer         excl     max_rfer     max_excl  parent  child
--------         ----         ----     --------     --------  ------  -----
0/5          16.00KiB     16.00KiB         none         none  ---     ---
0/258       119.48GiB    119.48GiB    200.00GiB
On Mon, Aug 15, 2016 at 10:32:25PM +0800, Anand Jain wrote:
>
>
> On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
> >On 2016-08-15 10:08, Anand Jain wrote:
> >>
> >>
> IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
>
> Any comment is welcome.
>
> >>
On Sat, Aug 13, 2016 at 03:48:12PM -0700, Deepa Dinamani wrote:
> The series is aimed at getting rid of CURRENT_TIME and CURRENT_TIME_SEC
> macros.
> The macros are not y2038 safe. There is no plan to transition them into being
> y2038 safe.
> The ktime_get_* APIs can be used in their place. And, the
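The typical conversion in fs code looks like this; current_time() takes the
inode so the result can be clamped to what the inode's superblock supports
(a generic sketch, not a specific patch from the series):

/* Before: CURRENT_TIME expands to a raw timespec, not y2038 safe. */
inode->i_mtime = inode->i_ctime = CURRENT_TIME;

/* After: current_time() returns a timestamp truncated to the
 * timestamp granularity of the inode's superblock. */
inode->i_mtime = inode->i_ctime = current_time(inode);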
The qgroup_flags field is overloaded such that it reflects both the on-disk
status of qgroups and the runtime state. The BTRFS_QGROUP_STATUS_FLAG_RESCAN
flag is used to indicate that a rescan operation is in progress, but if
the file system is unmounted while a rescan is running, the rescan
operation is paused.
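One way out, sketched here as an assumption about the approach rather than
quoted from the actual fix, is to keep the worker's runtime state in its own
field, so the on-disk flag only means "a rescan still needs to finish":

/* Hypothetical shape of the fix: runtime state lives beside, not
 * inside, the on-disk flags word. */
struct qgroup_state {
	u64 qgroup_flags;	/* mirrors the on-disk status item */
	bool rescan_running;	/* set only while the worker is active */
};

static bool rescan_should_resume(const struct qgroup_state *s)
{
	/* On-disk flag set but no worker running: the rescan was
	 * interrupted by an unmount and should be restarted at mount. */
	return (s->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) &&
	       !s->rescan_running;
}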
On 2016-08-15 10:32, Anand Jain wrote:
On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comment is welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comment is welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
Personally, I agree that we should warn when trying to do this,
On 2016-08-15 10:06, Daniel Caillibaud wrote:
On 15/08/16 at 08:32, "Austin S. Hemmelgarn" wrote:
ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote:
ASH> > I'm a newbie with btrfs, and I have a problem with high load after each
btrfs subvolume delete.
[…]
ASH> Before I start explaining possible solutions, it helps to explain what's
Have a look at this:
http://www.spinics.net/lists/linux-btrfs/msg54779.html
--
RAID5&6 devs_min values are in the context of a degraded volume.
RAID1&10 devs_min values are in the context of a healthy volume.
RAID56 is correct. We already have devs_max to know the number
of devices in
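For reference, the kernel keeps these per-profile minimums in a table; the
sketch below paraphrases btrfs's btrfs_raid_array from memory, so the exact
fields and values should be double-checked against volumes.c:

/* Paraphrased per-profile table: for RAID5/6, devs_min is the floor
 * for a *degraded* array; for RAID1/10, the floor for a healthy one. */
struct raid_attr {
	int devs_min;		/* minimum devices to keep the volume going */
	int parity_devs;	/* devices' worth of parity */
};

static const struct raid_attr raid_table[] = {
	/* RAID1  */ { .devs_min = 2, .parity_devs = 0 },
	/* RAID10 */ { .devs_min = 4, .parity_devs = 0 },
	/* RAID5  */ { .devs_min = 2, .parity_devs = 1 }, /* degraded floor */
	/* RAID6  */ { .devs_min = 3, .parity_devs = 2 }, /* degraded floor */
};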
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comment is welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
Personally, I agree that we should warn when trying to do this,
IMHO it's better to warn the user about 2-device RAID5 or 3-device RAID6.
Any comment is welcome.
Based on looking at the code, we do in fact support 2/3 devices for
raid5/6 respectively.
Personally, I agree that we should warn when trying to do this, but I
absolutely don't think we should stop supporting it.
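A "warn but allow" check at mkfs time could look something like this; all
identifiers below are hypothetical, just to show where the check would sit:

#include <stdio.h>

/* Hypothetical mkfs-side check: permit 2-device raid5 / 3-device raid6,
 * but say plainly what the user is giving up. */
static void warn_small_parity_raid(unsigned long long profile, int ndevs)
{
	if ((profile & BTRFS_BLOCK_GROUP_RAID5) && ndevs == 2)
		fprintf(stderr,
			"WARNING: raid5 on 2 devices has no striping benefit over raid1\n");
	if ((profile & BTRFS_BLOCK_GROUP_RAID6) && ndevs == 3)
		fprintf(stderr,
			"WARNING: raid6 on 3 devices has no striping benefit\n");
}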
On 15/08/16 at 08:32, "Austin S. Hemmelgarn" wrote:
ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote:
ASH> > I'm a newbie with btrfs, and I have a problem with high load after each
btrfs subvolume delete.
[…]
ASH> Before I start explaining possible solutions, it helps to explain what's
ASH> actually
On 2016-08-15 09:39, Martin wrote:
That really is the case; there's currently no way to do this with BTRFS.
You have to keep in mind that the raid5/6 code only went into the mainline
kernel a few versions ago, and it's still pretty immature as far as kernel
code goes. I don't know when (if ever)
On Mon, Aug 15, 2016 at 7:38 AM, Martin wrote:
>> Looking at the kernel log itself, you've got a ton of write errors on
>> /dev/sdap. I would suggest checking that particular disk with smartctl, and
>> possibly checking the other hardware involved (the storage controller and
>> cabling).
>>
>> I
On 2016-08-15 09:38, Martin wrote:
Looking at the kernel log itself, you've got a ton of write errors on
/dev/sdap. I would suggest checking that particular disk with smartctl, and
possibly checking the other hardware involved (the storage controller and
cabling).
I would kind of expect BTRFS to crash with that many write errors.
> That really is the case; there's currently no way to do this with BTRFS.
> You have to keep in mind that the raid5/6 code only went into the mainline
> kernel a few versions ago, and it's still pretty immature as far as kernel
> code goes. I don't know when (if ever) such a feature might get put
> Looking at the kernel log itself, you've got a ton of write errors on
> /dev/sdap. I would suggest checking that particular disk with smartctl, and
> possibly checking the other hardware involved (the storage controller and
> cabling).
>
> I would kind of expect BTRFS to crash with that many write errors.
On Mon, Aug 15, 2016 at 6:19 AM, Martin wrote:
>
> I have now had the first crash, can you take a look if I have provided
> the needed info?
>
> https://bugzilla.kernel.org/show_bug.cgi?id=153141
[337406.626175] BTRFS warning (device sdq): lost page write due to IO
error on /dev/sdap
Anytime th
On Sat, Aug 13, 2016 at 03:48:22PM -0700, Deepa Dinamani wrote:
> btrfs_root_item maintains the ctime for root updates.
> This is not part of vfs_inode.
>
> Since current_time() uses struct inode* as an argument,
> as Linus suggested, this cannot be used to update root
> times unless we modify the
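Since a root item has no inode to hand to current_time(), the obvious
alternative is a superblock-based helper; a sketch of the idea, assuming
current_fs_time(sb), the sb-based helper available at the time, rather than
whatever the series actually settled on:

/* Sketch: stamp a root item's ctime without an inode, going through
 * the superblock so granularity clamping still applies. */
static void update_root_ctime(struct btrfs_root *root,
			      struct btrfs_root_item *item)
{
	struct timespec now = current_fs_time(root->fs_info->sb);

	btrfs_set_stack_timespec_sec(&item->ctime, now.tv_sec);
	btrfs_set_stack_timespec_nsec(&item->ctime, now.tv_nsec);
}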
On 2016-08-15 08:19, Martin wrote:
I'm not sure what Arch does any differently to their kernels from
kernel.org kernels. But bugzilla.kernel.org offers a Mainline and
Fedora drop down for identifying the kernel source tree.
IIRC, they're pretty close to mainline kernels. I don't think they have
On 2016-08-15 08:19, Martin wrote:
The smallest disk of the 122 is 500GB. Is it possible to have btrfs
see each disk as only e.g. 10GB? That way I can corrupt and resilver
more disks over a month.
Well, at least you can easily partition the devices for that to happen.
Can it be done with btrfs or should I do it with gdisk?
On 2016-08-15 06:39, Daniel Caillibaud wrote:
Hi,
I'm a newbie with btrfs, and I have a problem with high load after each btrfs
subvolume delete.
I use snapshots on lxc hosts under debian jessie with
- kernel 4.6.0-0.bpo.1-amd64
- btrfs-progs 4.6.1-1~bpo8
For backup, each day and for each subvolume, I run
>> The smallest disk of the 122 is 500GB. Is it possible to have btrfs
>> see each disk as only e.g. 10GB? That way I can corrupt and resilver
>> more disks over a month.
>
> Well, at least you can easily partition the devices for that to happen.
Can it be done with btrfs or should I do it with gdisk?
>> I'm not sure what Arch does any differently to their kernels from
>> kernel.org kernels. But bugzilla.kernel.org offers a Mainline and
>> Fedora drop down for identifying the kernel source tree.
>
> IIRC, they're pretty close to mainline kernels. I don't think they have any
> patches in the filesystem
On 2016-08-15 03:50, Qu Wenruo wrote:
Hi,
Recently I found that the mkfs manpage says the minimal device number
for RAID5 and RAID6 is 2 and 3.
Personally speaking, although I understand that RAID5/6 only requires
1/2 devices for parity stripes, it is still quite strange behavior.
In most cases, users use raid5/6 for striping
On 2016-08-12 11:06, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 12 Aug 2016 08:04:42 -0400 as
excerpted:
On a file server? No, I'd ensure proper physical security is
established and make sure it's properly secured against network based
attacks and then not worry about it. Unless you ha
Hi,
I'm a newbie with btrfs, and I have a problem with high load after each btrfs
subvolume delete.
I use snapshots on lxc hosts under debian jessie with
- kernel 4.6.0-0.bpo.1-amd64
- btrfs-progs 4.6.1-1~bpo8
For backup, each day and for each subvolume, I run
btrfs subvolume snapshot -r $subvol $snap
# th
Hi,
Recently I found that the mkfs manpage says the minimal device number
for RAID5 and RAID6 is 2 and 3.
Personally speaking, although I understand that RAID5/6 only requires
1/2 devices for parity stripes, it is still quite strange behavior.
In most cases, users use raid5/6 for striping
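To make the oddity concrete, a quick capacity check, assuming N equal devices
and usable space = (N - parity devices) x device size:

2-device raid5: (2 - 1) = 1 device of usable space, 1 of parity
                -> same usable space as 2-device raid1, with no striping benefit
3-device raid6: (3 - 2) = 1 device of usable space, 2 of parity
                -> behaves like a 3-copy mirror, again with nothing striped

So the minimums are technically valid, but below 3 (raid5) or 4 (raid6)
devices the striping that users actually choose these profiles for never
materializes.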