On 2017-02-08 16:45, Peter Grandi wrote:
[ ... ]
The issue isn't total size, it's the difference between total
size and the amount of data you want to store on it, and how
well you manage chunk usage. If you're balancing regularly to
compact chunks that are less than 50% full, [ ... ] BTRFS on
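The regular compaction described above can be scripted. A minimal sketch, printed as a dry run; the mount point is a placeholder and the 50% threshold simply matches the advice in the quote:

```shell
# Sketch: repack data chunks that are at most 50% full so their extents are
# consolidated and the emptied chunks return to the unallocated pool.
# MNT is a placeholder mount point, not a path from the thread.
MNT="${MNT:-/mnt}"
BALANCE_CMD="btrfs balance start -dusage=50 $MNT"
echo "would run: $BALANCE_CMD"   # drop the echo to actually run it
```

Lower `-dusage=` values touch fewer chunks and finish faster; scheduled runs often step the value up gradually.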
On 2017-02-08 20:42, Ian Kelling wrote:
I had a file read fail repeatably; in syslog, lines like this:
kernel: BTRFS warning (device dm-5): csum failed ino 2241616 off
51580928 csum 4redacted expected csum 2redacted
I rmed the file.
Another error more recently, 5 instances which look like
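The warning above names an inode number (2241616); btrfs-progs can resolve an inode back to a path, so you know which file to delete or restore from backup. A hedged sketch, with the mount point as a placeholder:

```shell
# Sketch: map the inode from the csum warning back to a file path.
# MNT is a placeholder for wherever the dm-5 filesystem is mounted.
MNT="${MNT:-/mnt}"
RESOLVE_CMD="btrfs inspect-internal inode-resolve 2241616 $MNT"
echo "would run: $RESOLVE_CMD"   # when run for real, prints the path(s)
```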
On 2017-02-09 06:49, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 02:21:13PM -0500, Austin S. Hemmelgarn wrote:
- maybe deduplication (cyrus does it by hardlinking of same content messages
now) later
Deduplication beyond what Cyrus does is probably not worth it. In most
cases about 10
On 2017-02-16 15:13, E V wrote:
It would be nice if there was an easy way to tell btrfs to allocate
another metadata chunk. For example, the below fs is full due to
exhausted metadata:
Device size:1013.28GiB
Device allocated: 1013.28GiB
Device unallocated:
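One common way out of the fully-allocated state above is a usage-filtered data balance: freeing nearly-empty data chunks gives the allocator room to create a new metadata chunk. A sketch, assuming some data chunks are mostly empty; the mount point is a placeholder:

```shell
# Sketch: step the usage filter up gradually; low values are cheap because
# they only rewrite nearly-empty chunks. Printed as a dry run for safety.
MNT="${MNT:-/mnt}"
cmds=""
for pct in 0 5 10; do
  cmd="btrfs balance start -dusage=$pct $MNT"
  echo "would run: $cmd"
  cmds="$cmds$cmd "
done
```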
On 2017-02-16 15:36, Chris Murphy wrote:
Hi,
This man page contains a list for pretty much every other file system,
with a one-liner description: ext4 and XFS are in there, and even NTFS, but
not Btrfs.
Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either,
or both, ought to contain
On 2017-02-09 08:25, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote:
Just don't believe the vanilla df output for btrfs.
For btrfs, which unlike other filesystems like ext4/xfs allocates chunks
dynamically and has different metadata/data profiles, we can only get a clear
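The point about vanilla df can be illustrated with invented numbers (illustrative only, not figures from the thread): a df-style view conflates space that is truly unallocated with slack trapped inside already-allocated chunks.

```shell
# Illustrative arithmetic, all figures invented: a 100GiB device with 90GiB
# allocated to chunks, of which 60GiB actually holds data.
size=100; allocated=90; used=60        # GiB
df_free=$((size - used))               # naive size-minus-used "free" figure
unallocated=$((size - allocated))      # what the chunk allocator really has
slack=$((allocated - used))            # reusable only inside existing chunks
echo "naive free: ${df_free}GiB, unallocated: ${unallocated}GiB, chunk slack: ${slack}GiB"
```

`btrfs filesystem usage` reports the allocated/used split directly, which is why it is preferred over plain df here.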
On 2017-02-09 22:58, Andrei Borzenkov wrote:
07.02.2017 23:47, Austin S. Hemmelgarn wrote:
...
Sadly, freezefs (the generic interface based off of xfs_freeze) only
works for block device snapshots. Filesystem level snapshots need the
application software to sync all its data and then stop
On 2017-02-10 09:21, Peter Zaitsev wrote:
Hi,
As I have been reading the btrfs whitepaper, it speaks about autodefrag in very
generic terms: once a random write in the file is detected, it is put in the
queue to be defragmented. Yet I could not find any specifics about this
process described
On 2017-02-17 03:26, Duncan wrote:
Imran Geriskovan posted on Thu, 16 Feb 2017 13:42:09 +0200 as excerpted:
Oops.. I mean 4.9/4.10 Experiences
On 2/16/17, Imran Geriskovan wrote:
What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
I'm still on
On 2017-01-16 06:10, Christoph Groth wrote:
Hi,
I’ve been using a btrfs RAID1 of two hard disks since early 2012 on my
home server. The machine has been working well overall, but recently
some problems with the file system surfaced. Since I do have backups, I
do not worry about the data, but
On 2017-01-16 10:42, Christoph Groth wrote:
Austin S. Hemmelgarn wrote:
On 2017-01-16 06:10, Christoph Groth wrote:
root@mim:~# btrfs fi df /
Data, RAID1: total=417.00GiB, used=344.62GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=40.00MiB, used=68.00KiB
System, single
On 2017-01-16 23:50, Janos Toth F. wrote:
BTRFS uses a 2 level allocation system. At the higher level, you have
chunks. These are just big blocks of space on the disk that get used for
only one type of lower level allocation (Data, Metadata, or System). Data
chunks are normally 1GB, Metadata
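A back-of-envelope illustration of the two-level scheme, assuming the common defaults of 1GiB data chunks and 256MiB metadata chunks (both sizes can vary with device size and profile; the workload figures are invented):

```shell
# Sketch: how many upper-level chunks a hypothetical workload occupies.
data_mib=$((350 * 1024))   # 350GiB of file data, expressed in MiB
meta_mib=$((3 * 1024))     # 3GiB of metadata, expressed in MiB
data_chunks=$(( (data_mib + 1023) / 1024 ))   # 1GiB per data chunk
meta_chunks=$(( (meta_mib + 255) / 256 ))     # 256MiB per metadata chunk
echo "data chunks: $data_chunks, metadata chunks: $meta_chunks"
```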
On 2017-01-17 04:18, Christoph Groth wrote:
Austin S. Hemmelgarn wrote:
There's not really much in the way of great documentation that I know
of. I can however cover the basics here:
(...)
Thanks for this explanation. I'm sure it will be also useful to others.
Glad I could help
On 2017-01-18 09:21, Steven Hum wrote:
Added 2 drives to my RAID10, then ran btrfs balance. The system appears
to have crashed after several hours (I was ssh'd in at the time on my
local network). When I rebooted the Arch system, I ran btrfs check and no
errors were reported.
However, attempting
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. This is
On 2017-01-19 11:39, Alejandro R. Mosteo wrote:
Hello list,
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. The idea is to have some protection against the
typical aging HDD that
On 2016-08-15 21:32, Qu Wenruo wrote:
At 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
On 2016-08-15 10:08, Anand Jain wrote:
IMHO it's better to warn the user about 2-device RAID5 or 3-device
RAID6.
Any comment is welcomed.
Based on looking at the code, we do in fact support 2/3
On 2017-02-27 14:15, John Marrett wrote:
Liubo correctly identified direct IO as a solution for my test
performance issues, with it in use I achieved 908 read and 305 write,
not quite as fast as ZFS but more than adequate for my needs. I then
applied Peter's recommendation of switching to raid10
On 2016-08-25 05:38, Holger Hoffstätte wrote:
Automatically resuming an interrupted balance has repeatedly caused all
sorts of problems because it creates a possible failure mode when a user
can least use it: after a crash/power loss/sudden reboot (which, like it
or not, is the de facto "fix
On 2016-09-01 12:44, Kyle Gates wrote:
-Original Message-
From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
ow...@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn
Sent: Thursday, September 01, 2016 6:18 AM
To: linux-btrfs@vger.kernel.org
Subject: Re: your mail
On 2016-09-01
On 2016-09-05 05:59, Graham Cobb wrote:
Does anyone know of a security analysis of btrfs receive?
I'm not a developer, and definitely not a security specialist, just a
security minded sysadmin who has some idea what's going on, but I can at
least try and answer this.
I assume that just using
On 2016-09-02 06:55, Duncan wrote:
Kai Krakow posted on Thu, 01 Sep 2016 21:45:19 +0200 as excerpted:
On Sat, 20 Aug 2016 06:30:11 + (UTC),
Duncan <1i5t5.dun...@cox.net> wrote:
There's at least three other options to try to get what you mention,
however. FWIW, I'm a gentooer and thus
On 2016-09-01 13:12, Jeff Mahoney wrote:
On 9/1/16 1:04 PM, Austin S. Hemmelgarn wrote:
On 2016-09-01 12:34, Ronan Arraes Jardim Chagas wrote:
On Thu, 2016-09-01 at 09:21 -0400, Austin S. Hemmelgarn wrote:
Yes, you can just run `btrfs quota disable /` and it should
work. This
ironically
On 2016-09-01 12:34, Ronan Arraes Jardim Chagas wrote:
On Thu, 2016-09-01 at 09:21 -0400, Austin S. Hemmelgarn wrote:
Yes, you can just run `btrfs quota disable /` and it should
work. This
ironically reiterates that one of the bigger problems with BTRFS is
that
distros are enabling unstable
On 2016-08-31 19:04, Gareth Pye wrote:
ro,degraded has mounted it nicely and my rsync of the more useful data
is progressing at the speed of WiFi.
There are repeated read errors from one drive still but the rsync
hasn't bailed yet, which I think means there aren't any overlapping
errors in any
On 2016-09-01 03:44, M G Berberich wrote:
On Wednesday, 31 August, Fennec Fox wrote:
Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC
2016 x86_64 GNU/Linux
btrfs-progs v4.7
Data, single: total=30.01GiB, used=18.95GiB
System, single: total=4.00MiB, used=16.00KiB
On 2016-09-01 08:57, Ronan Arraes Jardim Chagas wrote:
Hi!
On Wed, 2016-08-31 at 17:09 -0600, Chris Murphy wrote:
OK so Ronan, I'm gonna guess the simplest work around for your
problem
is to disable quota support, and see if the problem happens again.
Look at the output of the command
On 2016-09-06 13:20, Graham Cobb wrote:
Thanks to Austin and Duncan for their replies.
On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
On 2016-09-05 05:59, Graham Cobb wrote:
Does the "path" argument of btrfs-receive mean that *all* operations are
confined to that path? For example,
On 2016-09-07 15:34, Chris Murphy wrote:
On Wed, Sep 7, 2016 at 1:08 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I think I covered it already in the last thread on this, but the best way I
see to fix the whole auto-assembly issue is:
1. Stop the damn auto-scanning of new d
On 2016-09-07 10:44, Christoph Anton Mitterer wrote:
On Wed, 2016-09-07 at 07:58 -0400, Austin S. Hemmelgarn wrote:
if you want proper security you should
be
using a real container system
Won't these probably use the same filesystems?
That depends on how it's set up. Most container software
On 2016-09-07 10:41, Christoph Anton Mitterer wrote:
On Tue, 2016-09-06 at 18:20 +0100, Graham Cobb wrote:
they know the UUID of the subvolume?
Unfortunately, btrfs seems to be pretty problematic when anyone knows
your UUIDs...
This is an issue with any filesystem; it is just a bigger issue
On 2016-09-07 07:58, Austin S. Hemmelgarn wrote:
On 2016-09-06 13:20, Graham Cobb wrote:
Thanks to Austin and Duncan for their replies.
On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
On 2016-09-05 05:59, Graham Cobb wrote:
Does the "path" argument of btrfs-receive mean that *all*
On 2016-09-09 12:18, David Sterba wrote:
On Wed, Sep 07, 2016 at 07:58:30AM -0400, Austin S. Hemmelgarn wrote:
On 2016-09-06 13:20, Graham Cobb wrote:
Thanks to Austin and Duncan for their replies.
On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
On 2016-09-05 05:59, Graham Cobb wrote:
Does
On 2016-09-09 14:58, Chris Murphy wrote:
On Thu, Sep 8, 2016 at 5:48 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-09-07 15:34, Chris Murphy wrote:
I like the idea of matching WWN as part of the check, with a couple of
caveats:
1. We need to keep in mind that i
On 2016-09-09 15:23, moparisthebest wrote:
On 09/09/2016 02:47 PM, Austin S. Hemmelgarn wrote:
On 2016-09-09 12:12, moparisthebest wrote:
Hi,
I'm hoping to get some help with mounting my btrfs array which quit
working yesterday. My array was in the middle of a balance, about 50%
remaining
On 2016-09-11 09:02, Hugo Mills wrote:
On Sun, Sep 11, 2016 at 02:39:14PM +0200, Waxhead wrote:
Martin Steigerwald wrote:
On Sunday, 11 September 2016, 13:43:59 CEST, Martin Steigerwald wrote:
Thing is: this just seems to be a "when has a feature been implemented"
matrix.
Not when it is
On 2016-09-11 13:11, Duncan wrote:
Martin Steigerwald posted on Sun, 11 Sep 2016 14:05:03 +0200 as excerpted:
Just add another column called "Production ready". Then research / ask
about production stability of each feature. The only challenge is: Who
is authoritative on that? I'd certainly
On 2016-09-12 08:59, Michel Bouissou wrote:
On Monday, 12 September 2016, 08:20:20, Austin S. Hemmelgarn wrote:
FWIW, here's a list of what I personally consider stable (as in, I'm
willing to bet against reduced uptime to use this stuff on production
systems at work and personal systems at home
On 2016-09-11 15:21, Martin Steigerwald wrote:
On Sunday, 11 September 2016, 21:56:07 CEST, Imran Geriskovan wrote:
On 9/11/16, Duncan <1i5t5.dun...@cox.net> wrote:
Martin Steigerwald posted on Sun, 11 Sep 2016 17:32:44 +0200 as excerpted:
What is the smallest recommended fs size for
On 2016-09-11 15:51, Martin Steigerwald wrote:
On Sunday, 11 September 2016, 19:46:32 CEST, Hugo Mills wrote:
On Sun, Sep 11, 2016 at 09:13:28PM +0200, Martin Steigerwald wrote:
On Sunday, 11 September 2016, 16:44:23 CEST, Duncan wrote:
* Metadata, and thus mixed-bg, defaults to DUP
On 2016-09-12 08:33, Jeff Mahoney wrote:
On 9/9/16 8:47 PM, Austin S. Hemmelgarn wrote:
A couple of other things to comment about on this:
1. 'can_overcommit' (the function that the Arch kernel choked on) is
from the memory management subsystem. The fact that that's throwing a
null pointer
On 2016-09-12 08:54, Imran Geriskovan wrote:
On 9/11/16, Chris Murphy wrote:
Something else that's screwy in that bug that I just realized: why is
it not defaulting to mixed-block groups on a 100MiB fallocated file? I
thought mixed-bg was the default below a certain
On 2016-09-13 16:39, Cesar Strauss wrote:
On 13-09-2016 16:49, Austin S. Hemmelgarn wrote:
I'd be kind of curious to see the results from btrfs check run without
repair, but I doubt that will help narrow things down any further.
Attached.
As of right now, the absolute first thing I'd do
On 2016-09-13 15:20, Cesar Strauss wrote:
Hello,
I have a BTRFS filesystem that is reverting to read-only after a few
moments of use. There is a stack trace visible in the kernel log, which
is attached.
Here is my system information:
# uname -a
Linux rescue 4.7.2-1-ARCH #1 SMP PREEMPT Sat
On 2016-09-09 12:12, moparisthebest wrote:
Hi,
I'm hoping to get some help with mounting my btrfs array which quit
working yesterday. My array was in the middle of a balance, about 50%
remaining, when it hit an error and remounted itself read-only [1].
btrfs fi show output [2], btrfs df output
On 2016-09-09 14:32, moparisthebest wrote:
On 09/09/2016 01:51 PM, Chris Murphy wrote:
On Fri, Sep 9, 2016 at 10:12 AM, moparisthebest
wrote:
Hi,
I'm hoping to get some help with mounting my btrfs array which quit
working yesterday. My array was in the middle of a
On 2016-09-09 12:33, David Sterba wrote:
On Wed, Sep 07, 2016 at 03:08:18PM -0400, Austin S. Hemmelgarn wrote:
On 2016-09-07 14:07, Christoph Anton Mitterer wrote:
On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:
This is an issue with any filesystem,
Not really... any other
On 2016-09-12 14:46, Imran Geriskovan wrote:
Wait wait wait a second:
This is 256 MB SINGLE created
by GPARTED, which is the replacement of MANUALLY
CREATED 127MB DUP which is now non-existent..
Which I was not aware it was a DUP at the time..
Peeww... Small btrfs is full of surprises.. ;)
On 2016-09-12 12:27, David Sterba wrote:
On Mon, Sep 12, 2016 at 04:27:14PM +0200, David Sterba wrote:
I therefore would like to propose that some sort of feature / stability
matrix for the latest kernel is added to the wiki preferably somewhere
where it is easy to find. It would be nice to
On 2016-09-12 12:51, David Sterba wrote:
On Mon, Sep 12, 2016 at 10:54:40AM -0400, Austin S. Hemmelgarn wrote:
Somebody has put that table on the wiki, so it's a good starting point.
I'm not sure we can fit everything into one table, some combinations do
not bring new information and we'd need
On 2016-09-12 13:29, Filipe Manana wrote:
On Mon, Sep 12, 2016 at 5:56 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-09-12 12:27, David Sterba wrote:
On Mon, Sep 12, 2016 at 04:27:14PM +0200, David Sterba wrote:
I therefore would like to propose that some sort of f
On 2016-09-12 16:44, Chris Murphy wrote:
On Mon, Sep 12, 2016 at 2:35 PM, Martin Steigerwald wrote:
On Monday, 12 September 2016, 23:21:09 CEST, Pasi Kärkkäinen wrote:
On Mon, Sep 12, 2016 at 09:57:17PM +0200, Martin Steigerwald wrote:
On Monday, 12 September 2016,
On 2016-09-12 16:08, Chris Murphy wrote:
On Mon, Sep 12, 2016 at 10:56 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
Things listed as TBD status:
1. Seeding: Seems to work fine the couple of times I've tested it, however
I've only done very light testing, and the whole f
On 2016-09-13 04:38, Timofey Titovets wrote:
https://btrfs.wiki.kernel.org/index.php/Status
I suggest marking RAID1/10 as 'mostly OK',
as btrfs RAID1/10 is safe for data, but not for the applications that use it,
i.e. it does not hide an I/O error even when it could be masked.
On 2016-09-12 16:25, Chris Murphy wrote:
On Mon, Sep 12, 2016 at 5:24 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
After device discovery, specify UUID= instead of a device node.
Oh yeah good point, -U --uuid is also doable. I'm not sure what the
benefit is of using sysfs to
On 2016-09-15 10:06, Anand Jain wrote:
Thanks for comments.
Pls see inline as below.
On 09/15/2016 07:37 PM, Austin S. Hemmelgarn wrote:
On 2016-09-13 09:39, Anand Jain wrote:
This patchset adds btrfs encryption support.
The main objective of this series is to have bugs fixed and stability
On 2016-09-15 14:01, Chris Murphy wrote:
On Tue, Sep 13, 2016 at 5:35 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-09-12 16:08, Chris Murphy wrote:
- btrfsck status
e.g. btrfs-progs 4.7.2 still warns against using --repair, and lists
it under dangerous options also;
On 2016-09-12 10:51, Chris Murphy wrote:
On Mon, Sep 12, 2016 at 8:09 AM, Henk Slager wrote:
FWIW, I use BTRFS for /boot, but it's not for snapshotting or even the COW,
it's for DUP mode and the error recovery it provides. Most people don't
think about this if it hasn't
On 2016-09-12 10:09, Henk Slager wrote:
FWIW, I use BTRFS for /boot, but it's not for snapshotting or even the COW,
it's for DUP mode and the error recovery it provides. Most people don't
think about this if it hasn't happened to them, but if you get a bad read
from /boot when loading the
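A sketch of creating such a /boot filesystem; /dev/sdX1 is a placeholder device, and -M (mixed block groups) is an assumption that often suits very small filesystems rather than something from the thread:

```shell
# Sketch: DUP metadata and DUP data so a single bad read can be satisfied
# from the second copy. Printed as a dry run; DEV is a placeholder.
DEV="${DEV:-/dev/sdX1}"
MKFS_CMD="mkfs.btrfs -M -m dup -d dup $DEV"
echo "would run: $MKFS_CMD"
```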
On 2016-09-12 10:27, David Sterba wrote:
Hi,
first, thanks for choosing a catchy subject, this always helps. While it
will serve as another beating stick to those who enjoy bashing btrfs,
I'm glad to see people answer in a constructive way.
On Sun, Sep 11, 2016 at 10:55:21AM +0200, Waxhead
On 2016-09-12 09:27, Jeff Mahoney wrote:
On 9/12/16 2:54 PM, Austin S. Hemmelgarn wrote:
On 2016-09-12 08:33, Jeff Mahoney wrote:
On 9/9/16 8:47 PM, Austin S. Hemmelgarn wrote:
A couple of other things to comment about on this:
1. 'can_overcommit' (the function that the Arch kernel choked
On 2016-10-09 19:12, Charles Zeitler wrote:
Is there any advantage to using NAS drives
under RAID levels, as opposed to regular
'desktop' drives for BTRFS?
Before I answer the question, it is worth explaining the differences
between the marketing terms 'desktop', 'enterprise', 'NAS', and
On 2016-09-18 13:28, Chris Murphy wrote:
On Sun, Sep 18, 2016 at 2:34 AM, Anand Jain wrote:
(updated the subject, was [1])
IMO the hot-spare feature makes most sense with the raid56,
Why. ?
Raid56 is not scalable, and has less redundancy in almost all
configurations,
On 2016-09-18 22:57, Zygo Blaxell wrote:
On Fri, Sep 16, 2016 at 08:00:44AM -0400, Austin S. Hemmelgarn wrote:
To be entirely honest, both zero-log and super-recover could probably be
pretty easily integrated into btrfs check such that it detects when they
need to be run and does so. zero-log
On 2016-09-18 23:47, Zygo Blaxell wrote:
On Mon, Sep 12, 2016 at 12:56:03PM -0400, Austin S. Hemmelgarn wrote:
4. File Range Cloning and Out-of-band Dedupe: Similarly, work fine if the FS
is healthy.
I've found issues with OOB dedup (clone/extent-same):
1. Don't dedup data that has not been
On 2016-09-13 09:39, Anand Jain wrote:
This patchset adds btrfs encryption support.
The main objective of this series is to have bugs fixed and stability.
I have verified with fstests to confirm that there is no regression.
A design write-up is coming next, however here below is the quick
On 2016-09-15 05:49, Hans van Kranenburg wrote:
On 09/15/2016 04:14 AM, Christoph Anton Mitterer wrote:
Hey.
As for the stability matrix...
In general:
- I think another column should be added, which tells when and for
which kernel version the feature-status of each row was
On 2016-09-18 22:25, Anand Jain wrote:
Chris Murphy,
Thanks for writing in detail, it makes sense..
Generally a hot spare is there to reduce the risk of double disk failures
leading to data loss at the data center before the data is
reconstructed again for redundancy.
On 09/19/2016 01:28
On 2016-09-19 11:27, David Sterba wrote:
Hi,
On Thu, Sep 15, 2016 at 04:14:04AM +0200, Christoph Anton Mitterer wrote:
In general:
- I think another column should be added, which tells when and for
which kernel version the feature-status of each row was
revised/updated the last time and
On 2016-09-19 00:08, Zygo Blaxell wrote:
On Thu, Sep 15, 2016 at 01:02:43PM -0600, Chris Murphy wrote:
Right, well I'm vaguely curious why ZFS, as different as it is,
basically takes the position that if the hardware went so batshit that
they can't unwind it on a normal mount, then an fsck
On 2016-09-19 14:27, Chris Murphy wrote:
On Mon, Sep 19, 2016 at 11:38 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
ReiserFS had no working fsck for all of the 8 years I used it (and still
didn't last year when I tried to use it on an old disk). "Not working"
here
necessary, I only listed it as that will provide
automatic recovery of things the FEC support in dm-verity can't fix. In
a situation where I can be relatively sure that the errors will be
infrequent and probably not co-located, I would probably skip it myself.
On Fri, Sep 16, 2016 at 7:45 AM, Austin S
On 2016-09-15 17:23, Christoph Anton Mitterer wrote:
On Thu, 2016-09-15 at 14:20 -0400, Austin S. Hemmelgarn wrote:
3. Fsck should be needed only for un-mountable filesystems. Ideally,
we
should be handling things like Windows does: perform slightly
better
checking when reading data
On 2016-09-15 22:58, Duncan wrote:
E V posted on Thu, 15 Sep 2016 11:48:13 -0400 as excerpted:
I'm investigating using btrfs for archiving old data and offsite
storage, essentially put 2 drives in btrfs RAID-1, copy the data to the
filesystem and then unmount, remove a drive and take it to an
On 2016-09-15 16:26, Chris Murphy wrote:
On Thu, Sep 15, 2016 at 2:16 PM, Hugo Mills <h...@carfax.org.uk> wrote:
On Thu, Sep 15, 2016 at 01:02:43PM -0600, Chris Murphy wrote:
On Thu, Sep 15, 2016 at 12:20 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
2. We're developing
On 2016-09-07 14:07, Christoph Anton Mitterer wrote:
On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:
This is an issue with any filesystem,
Not really... any other filesystem I know of (not sure about ZFS) keeps
working when there are UUID collisions... or at least it won't cause
On 2016-09-07 12:10, Graham Cobb wrote:
On 07/09/16 16:20, Austin S. Hemmelgarn wrote:
I should probably add to this that you shouldn't be accepting
send/receive data streams from untrusted sources anyway. While it
probably won't crash your system, it's not intended for use as something
like
On 2016-08-25 18:32, Gert Menke wrote:
Hi,
On 2016-08-25 20:26, Justin Kilpatrick wrote:
I'm not sure why you want to avoid a balance,
I didn't check, but I imagined it would slow down my rsync significantly.
It will slow it down, but I can't tell you exactly how much (there are
too many
On 2016-08-29 07:18, Imran Geriskovan wrote:
I can't find any fstab setting for systemd to higher this timeout.
There's just the x-systemd.device-timeout but this controls how long to
wait for the device and not for the mount command.
Is there any solution for big btrfs volumes and systemd?
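One workaround I'm aware of (an assumption on my part, not something confirmed in the thread): mount units accept a per-unit TimeoutSec via a drop-in, and newer systemd versions also document an x-systemd.mount-timeout= fstab option. A sketch of the drop-in form, with the unit name as a placeholder that must match the actual mount point:

```
# /etc/systemd/system/mnt-data.mount.d/timeout.conf
# ("mnt-data.mount" is an example; systemd derives the unit name from the
#  mount path, e.g. /mnt/data -> mnt-data.mount)
[Mount]
TimeoutSec=15min
```

After adding it, `systemctl daemon-reload` picks up the drop-in.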
On 2016-08-22 22:43, Chris Murphy wrote:
On Mon, Aug 22, 2016 at 5:06 PM, Darrick J. Wong
wrote:
[add Dave and Christoph to cc]
On Mon, Aug 22, 2016 at 04:14:19PM -0400, Jeff Mahoney wrote:
On 8/21/16 2:59 PM, Tomokhov Alexander wrote:
Btrfs wiki FAQ gives a link to
On 2016-11-08 11:57, Darrick J. Wong wrote:
On Tue, Nov 08, 2016 at 08:26:02AM -0500, Austin S. Hemmelgarn wrote:
On 2016-11-07 21:40, Christoph Anton Mitterer wrote:
On Mon, 2016-11-07 at 15:02 +0100, David Sterba wrote:
I think adding a whole-file dedup mode to duperemove would be better
On 2016-11-08 18:15, Ian Kelling wrote:
On Tue, Nov 8, 2016, at 03:00 PM, Hugo Mills wrote:
On Tue, Nov 08, 2016 at 02:48:56PM -0800, Ian Kelling wrote:
It seems to be an artificially imposed limitation which hurts
its usefulness. Let me know if this makes sense. If so, perhaps it
speed, I'd upgrade RAM before upgrading the CPU most of the
time for most systems).
--
Tom Arild Naess
On 03. nov. 2016 12:51, Austin S. Hemmelgarn wrote:
On 2016-11-02 17:55, Tom Arild Naess wrote:
Hello,
I have been running btrfs on a file server and backup server for a
couple of years now
On 2016-11-09 12:30, Tom Arild Naess wrote:
On 09. nov. 2016 14:04, Austin S. Hemmelgarn wrote:
On 2016-11-09 07:40, Tom Arild Naess wrote:
Thanks for your lengthy answer. Just after posting my question I
realized that the last reboot I did resulted in the filesystem being
mounted RO. I
On 2016-11-07 21:40, Christoph Anton Mitterer wrote:
On Mon, 2016-11-07 at 15:02 +0100, David Sterba wrote:
I think adding a whole-file dedup mode to duperemove would be better
(from user's POV) than writing a whole new tool
What would IMO be really good from a user's POV is if one of the
On 2016-11-09 21:29, Qu Wenruo wrote:
At 11/10/2016 06:57 AM, Andreas Dilger wrote:
On Nov 9, 2016, at 1:56 PM, Jaegeuk Kim wrote:
This patch implements multiple devices support for f2fs.
Given multiple devices by mkfs.f2fs, f2fs shows them entirely as one big
volume
On 2016-10-20 05:29, Timofey Titovets wrote:
Hi, i use btrfs for NFS VM replica storage and for NFS shared VM storage.
Right now I have a small problem where VM image deletion takes too long
and the NFS client shows a timeout on deletion
(ESXi Storage migration as example).
Kernel: Linux nfs05
On 2016-10-13 17:21, Alberto Bursi wrote:
Hi, I'm using OpenSUSE on a btrfs volume spanning 2 disks (set as raid1
for both metadata and data), no separate /home partition.
The distro loves to create dozens of subvolumes for various things and
makes snapshots, see:
alby@openSUSE-xeon:~> sudo
On 2016-10-14 06:11, Hiroshi Honda wrote:
That's the proper answer. In practice... all hope isn't yet lost.
I understood the proper answer.
I'll take care of it in the future.
Is there something step/method can I do from this situation?
You should probably look at `btrfs restore`. I'm not sure
On 2016-10-14 02:28, Stefan Priebe - Profihost AG wrote:
Hello list,
while running the same workload on two machines (single xeon and a dual
xeon) both with 64GB RAM.
I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to
keep the speed as good as on the non-NUMA system. I'm not
On 2016-10-20 11:26, Roman Mamedov wrote:
On Thu, 20 Oct 2016 08:09:14 -0400
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
So, is it possible to return from unlink() early? Or is this a bad idea (and why)?
I may be completely off about this, but I could have sworn that unlin
On 2016-10-17 23:23, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it should
have 6 devices, which is how it knows a device is missing.
Missing - means missing at the time of mount. So how
On 2016-10-17 16:40, Chris Murphy wrote:
May be better to use /sys/fs/btrfs/<UUID>/devices to find the device
to monitor, and then monitor them with blktrace - maybe there's some
coarser granularity available there, I'm not sure. The thing is, as
far as Btrfs alone is concerned, a drive can be "bad"
On 2016-10-18 11:02, Stefan Malte Schumacher wrote:
Hello
One of the drives which I added to my array two days ago was most
likely already damaged when I bought it - 312 read errors while
scrubbing and lots of SMART errors. I want to take the drive out, go
to my hardware vendor and have it
On 2016-10-20 13:33, ronnie sahlberg wrote:
On Thu, Oct 20, 2016 at 7:44 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-10-20 09:47, Timofey Titovets wrote:
2016-10-20 15:09 GMT+03:00 Austin S. Hemmelgarn <ahferro...@gmail.com>:
On 2016-10-20 05:29, Timofey Ti
On 2016-10-21 18:13, Peter Becker wrote:
if you have >750 GB free you can simply remove one of the drives.
btrfs device delete /dev/sd[x] /mnt
#power off, replace device
btrfs device add /dev/sd[y] /mnt
Make sure to balance afterwards if you do this; the new disk will be
pretty much unused
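If the failing disk can stay attached while the new one is connected, `btrfs replace` migrates the data in a single pass and avoids the delete/add plus balance sequence quoted above. A dry-run sketch with placeholder device paths:

```shell
# Sketch: one-pass device replacement; OLD, NEW and MNT are placeholders.
OLD="${OLD:-/dev/sdx}"; NEW="${NEW:-/dev/sdy}"; MNT="${MNT:-/mnt}"
REPLACE_CMD="btrfs replace start $OLD $NEW $MNT"
echo "would run: $REPLACE_CMD"
echo "would run: btrfs replace status $MNT"   # shows progress while running
```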
On 2016-10-19 09:06, Anand Jain wrote:
On 10/19/16 19:15, Austin S. Hemmelgarn wrote:
On 2016-10-18 17:36, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it
should
have 6 devices, which
On 2016-10-18 17:36, Anand Jain wrote:
I would like to monitor my btrfs-filesystem for missing drives.
This is actually correct behavior, the filesystem reports that it
should
have 6 devices, which is how it knows a device is missing.
Missing - means missing at the time of mount. So
On 2016-11-28 14:01, Christoph Anton Mitterer wrote:
On Mon, 2016-11-28 at 19:45 +0100, Goffredo Baroncelli wrote:
I understand that the status of the RAID5/6 code is so badly
Just some random thought:
If the code for raid56 is really as bad as it's often claimed (I
haven't read it, to be
On 2016-11-16 05:55, Martin Steigerwald wrote:
On Wednesday, 16 November 2016, 15:43:36 CET, Roman Mamedov wrote:
On Wed, 16 Nov 2016 11:25:00 +0100
Martin Steigerwald wrote:
merkaba:~> mount -o degraded,clear_cache /dev/satafp1/backup /mnt/zeit
mount: