On 06/03/2016 09:39 PM, Justin Brown wrote:
Here are some thoughts:
Assume a CD sized (680MB) /boot
Some distros carry patches for grub that allow booting from Btrfs,
so no separate /boot file system is required. (Fedora does not;
Ubuntu -- and therefore probably all Debians -- does.)
On Fri, Jun 3, 2016 at 8:13 PM, Christoph Anton Mitterer
wrote:
> If there would be e.g. an kept-up-to-date wiki page about the status
> and current perils of e.g. RAID5/6, people (like me) wouldn't ask every
> weeks, saving the devs' time.
Well up until 4.6, there was a
On Sat, 2016-06-04 at 00:22 +0200, Brendan Hide wrote:
> > - RAID5/6 seems far from being stable or even usable,... not to talk
> > about higher parity levels, whose earlier posted patches (e.g.
> > http://thread.gmane.org/gmane.linux.kernel/1654735) seem to have
> > been given up.
> I'm
On Fri, 2016-06-03 at 15:50 -0400, Austin S Hemmelgarn wrote:
> There's no point in trying to do higher parity levels if we can't get
> regular parity working correctly. Given the current state of things,
> it might be better to break even and just rewrite the whole parity
> raid thing from
On Fri, Jun 3, 2016 at 6:48 PM, Nicholas D Steeves wrote:
> On 3 June 2016 at 11:33, Austin S. Hemmelgarn wrote:
>> On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is greater than the driver
error recovery
Here's some thoughts:
> Assume a CD sized (680MB) /boot
Some distros carry patches for grub that allow booting from Btrfs, so
no separate /boot file system is required. (Fedora does not; Ubuntu --
and therefore probably all Debians -- does.)
> perhaps a 200MB (?) sized EFI partition
Way bigger
On Fri, Jun 3, 2016 at 8:11 AM, Martin wrote:
>> Make certain the kernel command timer value is greater than the driver
>> error recovery timeout. The former is found in sysfs, per block
>> device, the latter can be get and set with smartctl. Wrong
>> configuration is
On Fri, Jun 03, 2016 at 05:41:42PM -0700, Liu Bo wrote:
> We set the uptodate flag on pages in the temporary sys_array eb,
> but do not clear the flag after freeing the eb. As the special
> btree inode may still hold a reference on those pages, the
> uptodate flag can remain alive in them.
>
> If
On 06/03/2016 08:41 PM, Liu Bo wrote:
We set the uptodate flag on pages in the temporary sys_array eb,
but do not clear the flag after freeing the eb. As the special
btree inode may still hold a reference on those pages, the
uptodate flag can remain alive in them.
If btrfs_super_chunk_root has been
On 3 June 2016 at 11:33, Austin S. Hemmelgarn wrote:
> On 2016-06-03 10:11, Martin wrote:
>>>
>>> Make certain the kernel command timer value is greater than the driver
>>> error recovery timeout. The former is found in sysfs, per block
>>> device, the latter can be get and
We set the uptodate flag on pages in the temporary sys_array eb,
but do not clear the flag after freeing the eb. As the special
btree inode may still hold a reference on those pages, the
uptodate flag can remain alive in them.
If btrfs_super_chunk_root has been intentionally changed to the
offset of this
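The stale-flag problem described in this patch can be sketched with a toy model (class and function names are invented for illustration, not btrfs internals):

```python
# Toy model of the stale-uptodate bug: pages live in a shared cache; the
# temporary eb marks them uptodate but never clears the flag, so a later
# reader trusts stale contents instead of re-reading from disk.
class Page:
    def __init__(self):
        self.uptodate = False

cache = {0: Page()}  # pages shared via the btree inode's mapping

def use_temporary_eb(page, clear_on_free=False):
    page.uptodate = True        # set while the temporary eb reads it
    if clear_on_free:
        page.uptodate = False   # the fix: clear the flag when freeing the eb

use_temporary_eb(cache[0])
print(cache[0].uptodate)  # True: the flag leaks past the eb's lifetime
use_temporary_eb(cache[0], clear_on_free=True)
print(cache[0].uptodate)  # False: a later reader re-reads from disk
```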
Hi David,
Sorry for the delay. Yes, at this point I feel it would be best to
continue this discussion off-list, or perhaps to shift it to the
debian-doc list. Apologies to linux-btrfs if this should have been
shifted sooner! I'll follow-up with a PM reply momentarily.
Cheers,
Nicholas
On 3
Hi Linus,
My for-linus-4.7 branch has some fixes:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.7
I realized as I was prepping this pull that my tip commit still had
Facebook task numbers and other internal metadata in it. So I had to
reword the description,
Hallo. I'm continuing to sink into btrfs, so pointers to concise
help articles appreciated. I've got a couple new home systems, so
perhaps it's time to investigate encryption, and given the bit rot I've
seen here, perhaps time to mirror volumes so the wonderful btrfs
self-healing
> Mitchell wrote:
> With RAID10, there's still only 1 other copy, but the entire "original"
> disk is mirrored to another one, right?
No, full disks are never mirrored in any configuration.
Here's how I understand Btrfs' non-parity redundancy profiles:
single: only a single instance of a file
On 2016-06-03 13:38, Christoph Anton Mitterer wrote:
> Hey..
>
> Hm... so the overall btrfs state seems to be still pretty worrying,
> doesn't it?
>
> - RAID5/6 seems far from being stable or even usable,... not to talk
> about higher parity levels, whose earlier posted patches (e.g.
>
Hey.
Does anyone know whether the write hole issues have been fixed already?
https://btrfs.wiki.kernel.org/index.php/RAID56 still mentions it.
Cheers,
Chris.
eb->io_pages is set in read_extent_buffer_pages().
In case of readpage failure, for pages that have been added to the bio,
it calls bio_endio and later readpage_io_failed_hook() does the work.
When one of this eb's pages (which cannot be the first page) fails to be
added to the bio due to a failure in merge_bio(), it
To prevent fuzzed filesystem images from panicking the whole system,
we need various validation checks so that btrfs refuses to mount such
an image if it finds any invalid value while loading chunks, including
both sys_array and regular chunks.
Note that these checks may not be sufficient to cover all corner
This adds validity checks for super_total_bytes, super_bytes_used,
super_stripesize, and super_num_devices.
Reported-by: Vegard Nossum
Reported-by: Quentin Casasnovas
Signed-off-by: Liu Bo
---
v2:
- Check
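The kind of superblock sanity check this patch describes can be sketched as follows (field names follow the text; the thresholds and the power-of-two constraint are illustrative assumptions, not the patch's actual logic):

```python
# Hedged sketch of superblock validity checks: reject obviously
# impossible values before trusting the image.
def validate_super(sb):
    if sb["total_bytes"] == 0 or sb["bytes_used"] > sb["total_bytes"]:
        return False            # used space cannot exceed total space
    if sb["num_devices"] == 0:
        return False            # a filesystem must span at least one device
    s = sb["stripesize"]
    if s == 0 or (s & (s - 1)) != 0:
        return False            # illustrative: stripesize must be a power of two
    return True

good = {"total_bytes": 1 << 30, "bytes_used": 1 << 20,
        "num_devices": 1, "stripesize": 4096}
bad = dict(good, stripesize=3000)
print(validate_super(good), validate_super(bad))  # True False
```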
On Fri, 2016-06-03 at 13:42 -0500, Mitchell Fossen wrote:
> Thanks for pointing that out, so if I'm thinking correctly, with
> RAID1
> it's just that there is a copy of the data somewhere on some other
> drive.
>
> With RAID10, there's still only 1 other copy, but the entire
> "original"
> disk
Thanks for pointing that out, so if I'm thinking correctly, with RAID1
it's just that there is a copy of the data somewhere on some other
drive.
With RAID10, there's still only 1 other copy, but the entire "original"
disk is mirrored to another one, right?
On Fri, 2016-06-03 at 20:13 +0200,
On Fri, 2016-06-03 at 13:10 -0500, Mitchell Fossen wrote:
> Are there any caveats between RAID1 on all 6 vs RAID10?
Just to be safe: RAID1 in btrfs does not mean what RAID1 means in
traditional RAID terminology.
The former keeps only two copies of each block; the latter means full
mirroring of all devices.
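The capacity difference can be sketched with a toy model, using the six 6 TB drives from this thread (function names and the capacity formulas are illustrative simplifications):

```python
# btrfs raid1: every block has exactly 2 copies regardless of device
# count, so usable space is roughly half the total (capped so one huge
# device cannot contribute more than the sum of the others).
def btrfs_raid1_usable(sizes):
    total = sum(sizes)
    return min(total // 2, total - max(sizes))

# classic raid1: all N devices are full mirrors of each other,
# so usable space is only the smallest device.
def traditional_raid1_usable(sizes):
    return min(sizes)

disks = [6000] * 6  # six 6 TB drives, in GB
print(btrfs_raid1_usable(disks))        # 18000 GB usable
print(traditional_raid1_usable(disks))  # 6000 GB usable
```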
Hello,
I have 6 WD Red Pro drives, each 6TB in space. My question is, what is
the best way to set these up?
The system drive (and root) are on a 500GB SSD, so these drives will
only be used for /home and file storage.
Are there any caveats between RAID1 on all 6 vs RAID10?
Thanks for the help,
Hey..
Hm... so the overall btrfs state seems to be still pretty worrying,
doesn't it?
- RAID5/6 seems far from being stable or even usable,... not to talk
about higher parity levels, whose earlier posted patches (e.g.
http://thread.gmane.org/gmane.linux.kernel/1654735) seem to have
been
On Thu, Jun 02, 2016 at 07:45:49PM +, Omari Stephens wrote:
> [Note: not on list; please reply-all]
>
> I've read everything I can find about running out of space on btrfs, and it
> hasn't helped. I'm currently dead in the water.
>
> Everything I do seems to make the problem monotonically
On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is greater than the driver
error recovery timeout. The former is found in sysfs, per block
device, the latter can be get and set with smartctl. Wrong
configuration is common (it's actually the default) when using
On 04/01/2016 02:34 AM, Qu Wenruo wrote:
This patchset can be fetched from github:
https://github.com/adam900710/linux.git wang_dedupe_20160401
In this patchset, we're proud to bring a completely new storage backend:
Khala backend.
With the Khala backend, all dedupe hashes will be stored in the
On 04/01/2016 02:35 AM, Qu Wenruo wrote:
The on-disk backend can now add hashes.
Since all needed on-disk backend functions are added, also allow the
on-disk backend to be used, by changing DEDUPE_BACKEND_COUNT from 1
(inmemory only) to 2 (inmemory + ondisk).
Signed-off-by: Wang Xiaoguang
On 04/01/2016 02:35 AM, Qu Wenruo wrote:
The on-disk backend should now be able to search hashes.
Signed-off-by: Wang Xiaoguang
Signed-off-by: Qu Wenruo
---
fs/btrfs/dedupe.c | 167 --
On 04/01/2016 02:35 AM, Qu Wenruo wrote:
Since we will introduce a new on-disk dedupe method, introduce new
interfaces to resume a previous dedupe setup.
And since we introduce a new tree for status, also add a disable
handler for it.
Signed-off-by: Wang Xiaoguang
On 04/01/2016 02:35 AM, Qu Wenruo wrote:
Core implementation of inband de-duplication.
It reuses the async_cow_start() facility to calculate the dedupe hash,
and uses that hash to do inband de-duplication at the extent level.
The work flow is as below:
1) Run delalloc range for an inode
2) Calculate
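The hash-then-reference flow this workflow describes can be sketched with an invented in-memory store (class and method names are not btrfs internals):

```python
import hashlib

# Illustrative sketch of hash-based inband dedupe at extent granularity:
# hash each extent's payload; on a hash hit, reference the existing
# extent instead of writing a new one.
class DedupeStore:
    def __init__(self):
        self.by_hash = {}   # payload hash -> extent id
        self.extents = []   # extents actually written

    def write_extent(self, data: bytes) -> int:
        h = hashlib.sha256(data).digest()
        if h in self.by_hash:            # duplicate: point at existing extent
            return self.by_hash[h]
        self.extents.append(data)        # unique: allocate a new extent
        eid = len(self.extents) - 1
        self.by_hash[h] = eid
        return eid

store = DedupeStore()
a = store.write_extent(b"A" * 128)
b = store.write_extent(b"B" * 128)
c = store.write_extent(b"A" * 128)   # same payload as the first write
print(a, b, c, len(store.extents))   # 0 1 0 2
```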
> I would say it is, but I also don't have quite as much experience with it as
> with BTRFS raid1 mode. The one thing I do know for certain about it is that
> even if it theoretically could recover from two failed disks (ie, if they're
> from different positions in the striping of each mirror),
On 06/01/2016 09:12 PM, Qu Wenruo wrote:
At 06/02/2016 06:08 AM, Mark Fasheh wrote:
On Fri, Apr 01, 2016 at 02:35:00PM +0800, Qu Wenruo wrote:
Core implementation of inband de-duplication.
It reuses the async_cow_start() facility to calculate the dedupe hash.
And uses that hash to do inband
On 2016-06-03 09:31, Martin wrote:
In general, avoid Ubuntu LTS versions when dealing with BTRFS, as well
as most enterprise distros; they all tend to back-port patches instead
of using newer kernels, which means it's functionally impossible to
provide good support for them here (because we
> Make certain the kernel command timer value is greater than the driver
> error recovery timeout. The former is found in sysfs, per block
> device, the latter can be get and set with smartctl. Wrong
> configuration is common (it's actually the default) when using
> consumer drives, and inevitably
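As a concrete sketch of that advice (the device name /dev/sda and the timeout values are assumptions for illustration; smartctl's scterc values are in tenths of a second):

```shell
# Illustrative commands only; adjust the device and values for your system.
cat /sys/block/sda/device/timeout      # kernel command timer, in seconds (default 30)
smartctl -l scterc /dev/sda            # show the drive's error recovery control
smartctl -l scterc,70,70 /dev/sda      # set read/write ERC to 7.0 seconds
# If the drive does not support SCT ERC, raise the kernel timer instead
# so it outlasts the drive's (possibly minutes-long) internal recovery:
echo 180 > /sys/block/sda/device/timeout
```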
On Fri, Jun 3, 2016 at 6:55 AM, Austin S. Hemmelgarn
wrote:
>
> That said, there are other options. If you have enough disks, you can run
> BTRFS raid1 on top of LVM or MD RAID5 or RAID6, which provides you with the
> benefits of both.
There is a trade off. Either mdadm
On 06/01/2016 01:48 AM, Lu Fengqi wrote:
check_shared identified an extent as shared only in the case of a
different root_id or a different object_id. However, if an extent is
referred to by different offsets of the same file, it should also be
identified as shared.
In addition, check_shared's loop scale
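A toy model of the fixed rule (the backref tuples and values here are invented for illustration, not btrfs's actual representation):

```python
# An extent counts as shared when it has more than one distinct backref,
# even if root_id and object_id match but the file offset differs.
def is_shared(backrefs):
    # backrefs: list of (root_id, object_id, offset) tuples
    return len(set(backrefs)) > 1

# Two files referencing one extent -> shared
print(is_shared([(5, 257, 0), (5, 258, 0)]))     # True
# Same file, same extent, two offsets -> also shared (the fixed case)
print(is_shared([(5, 257, 0), (5, 257, 4096)]))  # True
print(is_shared([(5, 257, 0)]))                  # False
```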
On 06/03/2016 03:31 PM, Martin wrote:
In general, avoid Ubuntu LTS versions when dealing with BTRFS, as well
as most enterprise distros; they all tend to back-port patches instead
of using newer kernels, which means it's functionally impossible to
provide good support for them here (because we
> In general, avoid Ubuntu LTS versions when dealing with BTRFS, as well as
> most enterprise distros; they all tend to back-port patches instead of using
> newer kernels, which means it's functionally impossible to provide good
> support for them here (because we can't know for sure what exactly
On Thu, Jun 02, 2016 at 05:06:37PM +0900, Satoru Takeuchi wrote:
> Remove the following build error.
>
>
>$ make btrfs-crc
>[CC] btrfs-crc.o
>[LD] btrfs-crc
>btrfs-crc.o: In function `usage':
>
On 2016-06-03 05:49, Martin wrote:
Hello,
We would like to use urBackup to make laptop backups, and they mention
btrfs as an option.
https://www.urbackup.org/administration_manual.html#x1-8400010.6
So if we go with btrfs and we need 100TB usable space in raid6, and to
have it replicated each
On 2016-06-02 18:45, Henk Slager wrote:
On Thu, Jun 2, 2016 at 3:55 PM, MegaBrutal wrote:
2016-06-02 0:22 GMT+02:00 Henk Slager :
What is the kernel version used?
Is the fs on a mechanical disk or SSD?
What are the mount options?
How old is the fs?
> Before trying RAID5/6 in production, be sure to read posts like these:
>
> http://www.spinics.net/lists/linux-btrfs/msg55642.html
Very interesting post and very recent even.
If I decide to try raid6 and of course everything is replicated each
day (for a bit of a safety net), and disks begin to
Hi Martin,
On 06/03/2016 11:49 AM, Martin wrote:
We would like to use urBackup to make laptop backups, and they mention
btrfs as an option.
[...]
And a bonus question: How stable is raid6 and detecting and replacing
failed drives?
Before trying RAID5/6 in production, be sure to read posts
> Do you plan to use Snapshots? How many of them?
Yes, minimum 7 for each day of the week.
Nice to have would be 4 extra for each week of the month and then 12
for each month of the year.
On Fri, Jun 03, 2016 at 11:49:09AM +0200, Martin wrote:
> We would like to use urBackup to make laptop backups, and they mention
> btrfs as an option.
>
> https://www.urbackup.org/administration_manual.html#x1-8400010.6
>
> So if we go with btrfs and we need 100TB usable space in raid6, and to
>
Hello,
We would like to use urBackup to make laptop backups, and they mention
btrfs as an option.
https://www.urbackup.org/administration_manual.html#x1-8400010.6
So if we go with btrfs and we need 100TB usable space in raid6, and to
have it replicated each night to another btrfs server for
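A back-of-envelope sizing for the 100 TB raid6 requirement above (the 10 TB drive size is an assumption, not from the thread; raid6 spends two drives' worth of capacity on parity):

```python
# Simplified raid6 usable capacity for n equal drives.
def raid6_usable(n_drives, drive_tb):
    return (n_drives - 2) * drive_tb   # two drives' worth go to parity

drive_tb = 10
for n in range(4, 16):
    if raid6_usable(n, drive_tb) >= 100:
        print(n)   # smallest drive count meeting 100 TB usable
        break
```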