On 2014-11-26 08:38, Brendan Hide wrote:
On 2014/11/25 18:47, David Sterba wrote:
We could provide an interface for external applications that would make
use of the strong checksums. Eg. external dedup, integrity db. The
benefit here is that the checksum is always up to date, so there's no
need
On 2014-11-29 16:21, John Williams wrote:
On Sat, Nov 29, 2014 at 1:07 PM, Alex Elsayed eternal...@gmail.com wrote:
I'd suggest looking more closely at the crypto api section of menuconfig -
it already has crc32c, among others. Just because it's called the crypto
api doesn't mean it only has
On 2014-11-30 20:58, Qu Wenruo wrote:
[BACKGROUND]
I'm trying to implement the function to repair missing inode item.
In that case, the inode type must be salvaged (although it can fall back to
FILE).
One case would be: if there is any dir_item/index or inode_ref that refers to the
inode as parent,
On 2014-11-29 23:23, Marc MERLIN wrote:
On Sun, Nov 30, 2014 at 09:03:14AM +0530, Shriramana Sharma wrote:
IIUC with BtrFS while it is possible to easily undelete a file or
ordinary directory if a snapshot of the containing subvol exists, it
seems that it's not elementary to undelete a subvol
On 2014-12-01 08:38, MegaBrutal wrote:
2014-12-01 14:12 GMT+01:00 Austin S Hemmelgarn ahferro...@gmail.com:
We might want to consider adding an option to btrfs subvol del to ask for
confirmation (or make it do so by default and add an option to disable
asking for confirmation).
I've also
On 2014-12-01 08:54, MegaBrutal wrote:
2014-12-01 14:47 GMT+01:00 Roman Mamedov r...@romanrm.net:
On Mon, 1 Dec 2014 14:38:16 +0100
MegaBrutal megabru...@gmail.com wrote:
I've also noticed, a subvolume can just be deleted with an rm -r,
just like an ordinary directory. I'd consider to only
On 2014-12-01 12:22, John Williams wrote:
On Mon, Dec 1, 2014 at 4:39 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Just because it's a filesystem doesn't always mean that speed is the most
important thing. Personally, I can think of multiple cases where using a
cryptographically strong
On 2014-12-01 13:37, David Sterba wrote:
On Wed, Nov 26, 2014 at 08:58:50AM -0500, Austin S Hemmelgarn wrote:
On 2014-11-26 08:38, Brendan Hide wrote:
On 2014/11/25 18:47, David Sterba wrote:
We could provide an interface for external applications that would make
use of the strong checksums
On 2014-12-02 06:54, Anand Jain wrote:
On 02/12/2014 19:14, Goffredo Baroncelli wrote:
I further investigate this issue.
MegaBrutal, reported the following issue: doing a lvm snapshot of the
device of a
mounted btrfs fs, the new snapshot device name replaces the name of
the original
device
On 2014-12-02 10:11, Shriramana Sharma wrote:
On Tue, Dec 2, 2014 at 6:58 PM, David Sterba dste...@suse.cz wrote:
A subvolume is also a snapshotting barrier, so it's convenient to create
subvolumes in well-known paths that contain data that should not be
rolled back (/var/log, /srv,
On 2014-12-04 08:53, Shriramana Sharma wrote:
I observe that whenever I create a BtrFS instance using mkfs.btrfs,
there is always the leftover cruft of two System/Metadata-Single
allocation profiles:
btrfs fi df /run/media/samjnaa/BRIHATII/
Data, single: total=460.01GiB, used=458.47GiB
System,
On 2014-12-04 09:06, Shriramana Sharma wrote:
On Thu, Dec 4, 2014 at 12:23 AM, David Sterba dste...@suse.cz wrote:
On Tue, Dec 02, 2014 at 08:45:10PM +0530, Shriramana Sharma wrote:
On Tue, Dec 2, 2014 at 6:26 PM, David Sterba dste...@suse.cz wrote:
Works for me without the root password on
On 2014-12-04 09:13, Austin S Hemmelgarn wrote:
On 2014-12-04 08:53, Shriramana Sharma wrote:
I observe that whenever I create a BtrFS instance using mkfs.btrfs,
there is always the leftover cruft of two System/Metadata-Single
allocation profiles:
btrfs fi df /run/media/samjnaa/BRIHATII/
Data
I've recently noticed on some of my systems, that btrfs fi df doesn't
consistently show all of the chunk types. I'll occasionally not see the
GlobalReserve, or even anything but System, although the behavior seems
to be consistent for a given filesystem. I'm using btrfs-progs 3.17.1
and
On 2014-12-04 09:25, Shriramana Sharma wrote:
On Thu, Dec 4, 2014 at 7:43 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
SuSE may have an old version of btrfs-progs then (which wouldn't surprise
me, it is an 'enterprise' distribution after all), because I haven't seen
this on anything
On 2014-12-05 02:42, Satoru Takeuchi wrote:
Hi Austin,
(2014/12/04 23:31), Austin S Hemmelgarn wrote:
I've recently noticed on some of my systems, that btrfs fi df
doesn't consistently show all of the chunk types.
I'll occasionally not see the GlobalReserve, or even anything
but System
On 2014-12-05 07:19, Austin S Hemmelgarn wrote:
On 2014-12-05 02:42, Satoru Takeuchi wrote:
Hi Austin,
(2014/12/04 23:31), Austin S Hemmelgarn wrote:
I've recently noticed on some of my systems, that btrfs fi df
doesn't consistently show all of the chunk types.
I'll occasionally not see
On 2014-12-05 13:11, Shriramana Sharma wrote:
OK so from https://forums.opensuse.org/showthread.php/440209-ifconfig
I learnt that it's because /sbin, /usr/sbin etc is not on the normal
user's path on openSUSE (they are, on Kubuntu). Adding them to PATH
fixes the situation. (I wasn't even able to
On 2014-12-08 09:16, Shriramana Sharma wrote:
On Mon, Dec 8, 2014 at 6:31 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Personally, I prefer a somewhat hybrid approach where everyone has *sbin in
their path, but file permissions are used to control what non-administrators
can run
On 2014-12-08 09:47, Martin Steigerwald wrote:
Hi,
Am Sonntag, 7. Dezember 2014, 21:32:01 schrieb Robert White:
On 12/07/2014 07:40 AM, Martin Steigerwald wrote:
Well what would be possible I bet would be a kind of system call like
this:
I need to write 5 GB of data in 100 of files to
On 2014-12-13 21:59, Ali AlipourR wrote:
Hi,
1- Is setting the compression flag per subvolume implemented?
(I did read on wiki that it is not implemented, but I can set it via
btrfs property)
AFAIK, it's the compression related mount options that don't work
per-subvolume. Using chattr +c or
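The per-path mechanisms mentioned above can be sketched as follows; the paths are hypothetical, and the property interface requires a btrfs-progs with `btrfs property` support:

```shell
# Set the compression property on a subvolume (or any file/directory);
# data written there afterwards is compressed with the chosen algorithm:
btrfs property set /mnt/subvol1 compression lzo
btrfs property get /mnt/subvol1 compression

# Alternatively, flag an individual file or directory with chattr:
chattr +c /mnt/subvol1/somefile
```

As the mail notes, it is only the compression *mount options* that cannot be set per-subvolume; the per-path flag and property work independently of them.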
On 2014-12-17 11:49, David Sterba wrote:
On Sat, Dec 13, 2014 at 03:35:09PM +0100, Merlijn Wajer wrote:
[snip]
Please let me know if musl-libc (or any other libc) is a supported
platform, and if so, if and how I can improve on said patches.
I'm not aware of non-glibc users, but I don't see
On 2014-12-21 17:53, Charles Cazabon wrote:
Hi, Robert,
My performance issues with btrfs are more-or-less resolved now -- the
performance under btrfs still seems quite variable compared to other
filesystems -- my rsync speed is now varying between 40MB/s and ~90MB/s, with
occasional intervals
On 2014-12-19 21:07, Richard Sharpe wrote:
Hi folks,
I need a Linux file system that supports XATTRs up to 64K.
Can BTRFS support that or is XFS the only Linux file system with such support?
At the moment, BTRFS is limited to xattrs that fit inline in the
metadata nodes (so ~3900 bytes for a
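As a rough illustration of that limit (the path is hypothetical, and the exact cutoff depends on the metadata node size), storing an oversized xattr value simply fails:

```shell
# Build a 60000-byte value and try to store it as an xattr; on btrfs
# this is rejected because it exceeds the inline metadata limit:
VALUE="$(head -c 60000 /dev/zero | tr '\0' 'x')"
setfattr -n user.big -v "$VALUE" /mnt/testfile || echo "xattr too large"
getfattr -d /mnt/testfile
```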
On 2014-12-22 12:27, Richard Sharpe wrote:
On Mon, Dec 22, 2014 at 6:28 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-12-19 21:07, Richard Sharpe wrote:
Hi folks,
I need a Linux file system that supports XATTRs up to 64K.
Can BTRFS support that or is XFS the only Linux file
On 2014-12-22 13:43, Chris Murphy wrote:
On Mon, Dec 22, 2014 at 11:09 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Personally, I'd love to see unlimited length xattr's like NTFS and HFS+ do,
as that would greatly improve interoperability (both Windows and OS X use
xattrs, although
On 2014-12-22 15:04, Richard Sharpe wrote:
On Mon, Dec 22, 2014 at 10:09 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-12-22 12:27, Richard Sharpe wrote:
On Mon, Dec 22, 2014 at 6:28 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-12-19 21:07, Richard Sharpe wrote
On 2014-12-22 15:06, Richard Sharpe wrote:
On Mon, Dec 22, 2014 at 10:43 AM, Chris Murphy li...@colorremedies.com wrote:
On Mon, Dec 22, 2014 at 11:09 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Personally, I'd love to see unlimited length xattr's like NTFS and HFS+ do,
as that would
On 2014-12-22 19:08, Robert White wrote:
On 12/22/2014 02:55 PM, Richard Sharpe wrote:
On Mon, Dec 22, 2014 at 2:52 PM, Robert White rwh...@pobox.com wrote:
So skipping the full ADS, what's the current demand/payoff for large
XATTR space?
Windows Security Descriptors (sometimes incorrectly
On 2014-12-29 16:53, Chris Murphy wrote:
On Sat, Dec 27, 2014 at 8:12 PM, Phillip Susi ps...@ubuntu.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
On 12/23/2014 05:09 PM, Chris Murphy wrote:
The timer in /sys is a kernel command timer, it's not a device
timer even though it's
On 2013-11-24 22:45, Jim Salter wrote:
TL;DR scrub's ioprio argument isn't really helpful - a scrub murders
system performance til it's done.
My system:
3.11 kernel (from Ubuntu Saucy)
btrfs-tools from 2013-07 (from Debian Sid)
Opteron 8-core CPU
32GB RAM
4 WD 1TB Black drives in a
pv's bw-limiting feature to make btrfs send | btrfs receive
tolerable.
On 11/25/2013 07:25 AM, Austin S Hemmelgarn wrote:
2. allow the user to set a reasonable I/O bandwidth limit on the scrub
processes (you could already do this with the BIO cgroup, but it would
be nice to not need
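The pv rate-limiting trick and the scrub priority knob mentioned above look roughly like this; the mountpoints, snapshot name, and the 50 MB/s cap are examples:

```shell
# Cap a send/receive pipeline at 50 MB/s with pv:
btrfs send /mnt/src/@snapshot | pv -L 50m | btrfs receive /mnt/backup

# Start a scrub in the idle I/O priority class (-c 3):
btrfs scrub start -c 3 /mnt/src
```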
On 12/29/2013 04:11 PM, Kai Krakow wrote:
Hello list!
I'm planning to buy a small SSD (around 60GB) and use it for bcache in front
of my 3x 1TB HDD btrfs setup (mraid1+draid0) using write-back caching. Btrfs
is my root device, thus the system must be able to boot from bcache using
init
On 12/30/2013 09:24 PM, Aastha Mehta wrote:
Hello,
I have some questions regarding caching in BTRFS. When a file
system is unmounted and mounted again, would all the previously
cached content be removed from the cache after flushing to disk?
After remounting, would the initial requests
On 12/30/2013 11:02 AM, Austin S Hemmelgarn wrote:
As an alternative to using bcache, you might try something simmilar to
the following:
64G SSD with /boot, /, and /usr
Other HDD with /var, /usr/portage, /usr/src, and /home
tmpfs or ramdisk for /tmp and /var/tmp
On 2014-01-03 03:39, Sander wrote:
Austin S Hemmelgarn wrote (ao):
The data is probably still cached in the block layer, so after
unmounting, you could try 'echo 1 > /proc/sys/vm/drop_caches'
before mounting again, but make sure to run sync right before
doing that, otherwise you might lose
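Spelled out, the cache-dropping sequence referred to above is the following (root is required, and it affects the whole system's page cache, not just btrfs):

```shell
# Write out dirty pages first so nothing is lost, then drop
# the now-clean page cache:
sync
echo 1 > /proc/sys/vm/drop_caches
```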
On 2014-01-09 07:41, Duncan wrote:
Hugo Mills posted on Thu, 09 Jan 2014 10:42:47 + as excerpted:
On Thu, Jan 09, 2014 at 11:26:26AM +0100, Clemens Eisserer wrote:
Hi,
I am running write-intensive (well sort of, one write every 10s)
workloads on cheap flash media which proved to be
On 2014-01-09 12:31, Chris Murphy wrote:
On Jan 9, 2014, at 5:52 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Just a thought, you might consider running btrfs on top of LVM in
the interim, it isn't quite as efficient as btrfs by itself, but
it does allow N-way mirroring
On 2014-01-09 13:08, Chris Murphy wrote:
On Jan 9, 2014, at 5:41 AM, Duncan 1i5t5.dun...@cox.net wrote:
Having checksumming is good, and a second
copy in case one fails the checksum is nice, but what if they BOTH do?
I'd love to have the choice of (at least) three-way-mirroring, as for me
On 01/17/2014 01:33 PM, valleysmail-l...@yahoo.de wrote:
I'd like to know if there are drawbacks in using btrfs with non-ECC
RAM instead of using ext4 with non-ECC RAM. I know that some
features of btrfs may rely on ECC RAM but is the chance of data
corruption or even a damaged filesystem
On 01/19/2014 07:17 PM, George Eleftheriou wrote:
I have been wondering the same thing for quite some time after
having read this post (which makes a pretty clear case in favour of
ECC RAM)...
hxxp://forums.freenas.org/threads/ecc-vs-non-ecc-ram-and-zfs.15449/
... and the ZFS on Linux FAQ
On 2014-01-16 14:23, Toggenburger Lukas wrote:
Hi all
I'm a student of ICT currently doing my master's degree besides working as a
research assistant. Currently I'm looking for topics for my master thesis.
One of my ideas was to work on Btrfs. I studied the list of project ideas at
On 2014-01-20 10:36, Bob Marley wrote:
On 20/01/2014 15:57, Ian Hinder wrote:
i.e. that there is parity information stored with every piece of data,
and ZFS will correct errors automatically from the parity information.
So this is not just parity data to check correctness but there are many
On 2014-01-21 01:42, Sandy McArthur wrote:
On Mon, Jan 20, 2014 at 7:20 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-01-16 14:23, Toggenburger Lukas wrote:
3. Improving subvolume handling regarding taking recursive snapshots (
https://btrfs.wiki.kernel.org/index.php
On 2014-01-21 11:52, Hugo Mills wrote:
On Tue, Jan 21, 2014 at 07:25:43AM -0500, Austin S Hemmelgarn wrote:
On 2014-01-21 01:42, Sandy McArthur wrote:
On Mon, Jan 20, 2014 at 7:20 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-01-16 14:23, Toggenburger Lukas wrote:
3. Improving
I just recently discovered something about btrfs filesystem balance that
(as far as I can see) isn't documented anywhere, and doesn't necessarily
have an obvious (to the average user) explanation.
Apparently, trying to use -mconvert=dup or -sconvert=dup on a
multi-device filesystem using one of
On 2014-02-10 08:41, Brendan Hide wrote:
On 2014/02/10 04:33 AM, Austin S Hemmelgarn wrote:
snip
Apparently, trying to use -mconvert=dup or -sconvert=dup on a
multi-device filesystem using one of the RAID profiles for metadata
fails with a statement to look at the kernel log, which doesn't
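For reference, the failing conversion discussed above was of this form (mountpoint hypothetical); on kernels of that era the only hint about why it failed landed in the kernel log:

```shell
# Try to convert metadata and system chunks to dup on a multi-device
# filesystem; -f is needed because dup reduces redundancy versus RAID1:
btrfs balance start -mconvert=dup -sconvert=dup -f /mnt/multidev
# The actual reason for a failure ends up in the kernel log:
dmesg | tail
```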
On 2014-12-31 12:27, ashf...@whisperpc.com wrote:
Phillip
I had a similar question a year or two ago (
specifically about raid10 ) so I both experimented and read the code
myself to find out. I was disappointed to find that it won't do
raid10 on 3 disks since the chunk metadata describes
On 2015-01-02 12:45, Brendan Hide wrote:
On 2015/01/02 15:42, Austin S Hemmelgarn wrote:
On 2014-12-31 12:27, ashf...@whisperpc.com wrote:
I see this as a CRITICAL design flaw. The reason for calling it
CRITICAL
is that System Administrators have been trained for 20 years that
RAID-10
can
On 2015-01-25 23:22, Zygo Blaxell wrote:
It seems that the rate of spurious I/O errors varies most according to
the vm.vfs_cache_pressure sysctl. At '10' the I/O errors occur so often
that building a kernel is impossible. At '100' I can't reproduce even
a single I/O error.
I guess this is own
On 2015-02-05 06:04, Juergen Fitschen wrote:
Hey,
It’s me again.
First of all: Thanks for the reply, Duncan :)
After detecting the deadlock and posting the stack trace yesterday evening, I
left the machine alone and didn't reboot it. The monitoring told me that the
whole server (including
On 2015-02-05 10:24, Juergen Fitschen wrote:
On 05 Feb 2015, at 13:47, Austin S Hemmelgarn ahferro...@gmail.com wrote:
I've actually seen similar behavior without the virtualization when doing large
filesystem intensive operations with compression enabled.
I don't know if this is significant
On 2015-02-11 23:33, Kai Krakow wrote:
Duncan 1i5t5.dun...@cox.net schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use --direct=1 parameter for fio which basically does
O_DIRECT on target file. The O_DIRECT should guarantee that the
filesystem cache
On 2015-01-04 15:26, Jérôme Poulin wrote:
Happy holiday everyone,
TL;DR: Hardware corruption is really bad; if btrfs-restore works,
kernel Btrfs can!
I'm cross-posting this message since the root cause for this problem
is the Ceph RBD device however, my main concern is data loss from a
BTRFS
On 2015-01-05 06:31, Lennart Poettering wrote:
On Mon, 05.01.15 10:46, Harald Hoyer (har...@redhat.com) wrote:
We have BTRFS_IOC_DEVICES_READY to report, if all devices are present, so that
a udev rule can report ID_BTRFS_READY and SYSTEMD_READY.
I think we need a third state here for a
On 2015-01-12 08:51, P. Remek wrote:
Hello,
we are currently investigating the possibilities and performance limits of
the Btrfs filesystem. Now it seems we are getting pretty poor
performance for the writes and I would like to ask if our results
make sense and if it is a result of some well
On 2015-01-12 10:35, P. Remek wrote:
Another thing to consider is that the kernel's default I/O scheduler and the
default parameters for that I/O scheduler are almost always suboptimal for SSD's,
and this tends to show far more with BTRFS than anything else. Personally
I've found that using
On 2015-01-12 10:11, Patrik Lundquist wrote:
On 12 January 2015 at 15:54, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Another thing to consider is that the kernel's default I/O scheduler and the
default parameters for that I/O scheduler are almost always suboptimal for
SSD's
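The scheduler tweak being discussed is per-device, along these lines; sdX is a placeholder, and noop and deadline were the usual SSD choices on kernels of that era:

```shell
# Show the active scheduler (the bracketed entry), then switch it:
cat /sys/block/sdX/queue/scheduler
echo noop > /sys/block/sdX/queue/scheduler
```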
On 2015-02-09 12:26, P. Remek wrote:
Hello,
I am benchmarking Btrfs and when benchmarking random writes with fio
utility, I noticed following two things:
Based on what I know about BTRFS, I think that these issues actually
have distinct causes.
1) On first run when target file doesn't exist
On 2015-01-06 23:11, Jérôme Poulin wrote:
On Mon, Jan 5, 2015 at 6:59 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Secondly, I would highly recommend not using ANY non-cluster-aware FS on top
of a clustered block device like RBD
For my use-case, this is just a single server using
On 2015-01-07 13:55, Kyle Gates wrote:
What issues would arise if ssd mode is activated because of a block layer
setting the rotational flag to zero? This happens for me running btrfs on
bcache. Would it be beneficial to pass the no_ssd flag?
Thanks,
Kyle
In theory, it would result in a
On 2015-03-16 07:46, Russell Coker wrote:
On Sun, 15 Mar 2015, peer@gmx.net wrote:
Following common recommendations [1], I use these mount options on my
main developing machine: noatime,autodefrag. This is desktop machine and
it works well so far. Now, I'm also going to install several KVM
On 2015-03-13 23:26, Robert White wrote:
Is there any practical reason to prefer bind mounts or separately
mounting a subvolume?
e.g. assuming /locationA and /locationB are arbitrarily far apart in the
file system tree, is there any reason to prefer one of the following
over the other
On 2015-02-20 21:56, Theodore Ts'o wrote:
On Fri, Feb 20, 2015 at 09:49:34AM -0600, Eric Sandeen wrote:
This mount option significantly reduces writes to the
inode table for workloads that perform frequent random
writes to preallocated files.
On 2015-04-22 07:19, sri wrote:
Hi,
I btrfs file system created with one device /dev/sdb and mounted under
/btrfs1.
created one file /btrfs1/errno.h, one directory /btrfs1/dir1, and
2 subvolumes /btrfs1/subvol1 and /btrfs1/subvol2, then
created directories and files under subvolume /btrfs1/subvol1.
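The setup described above corresponds to roughly these commands (assuming the second subvolume was meant to live at /btrfs1/subvol2, matching the first; the names dir2 and file2 are hypothetical):

```shell
mkfs.btrfs /dev/sdb
mount /dev/sdb /btrfs1
touch /btrfs1/errno.h
mkdir /btrfs1/dir1
btrfs subvolume create /btrfs1/subvol1
btrfs subvolume create /btrfs1/subvol2
# Contents inside the first subvolume:
mkdir /btrfs1/subvol1/dir2
touch /btrfs1/subvol1/file2
```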
On 2015-04-21 05:38, Russell Coker wrote:
On Tue, 21 Apr 2015, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Although we may add extra check for such problem to improve robustness,
but IMHO it's not a real world problem.
Some of the ReiserFS developers gave a similar reaction to some of my bug
On 2015-04-24 10:26, Lentes, Bernd wrote:
Hi,
it should be just a small problem, but it is one. How can I rollback to a
snapshot of my root filesystem ?
Googling, I found a lot of solutions, each different.
I finally chose this one:
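One commonly cited approach (not necessarily the one ultimately chosen in this thread) is to point the default subvolume at the snapshot and reboot:

```shell
# Find the snapshot's subvolume ID, make it the default, then reboot;
# the ID 257 below is only an example:
btrfs subvolume list /
btrfs subvolume set-default 257 /
reboot
```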
On 2015-04-14 08:28, David Sterba wrote:
On Tue, Apr 14, 2015 at 01:44:32PM +0300, Lauri Võsandi wrote:
This patch forces btrfs receive to issue chroot before
parsing the btrfs stream to confine the process and
minimize damage that could be done via malicious
btrfs stream.
Thanks.
As we've
On 2015-04-16 14:48, Miguel Negrão wrote:
Hello,
I'm running a laptop, macbook pro 8,2, with ubuntu, on kernel
3.13.0-49-lowlatency. I have a USB enclosure containing two harddrives
(Icydock JBOD). Each harddrive runs their own btrfs file system, on top of
luks partitions. I backup one
On 2015-04-06 21:28, 인정식 wrote:
Hello BTRFS developers,
I am requesting your opinion.
I am planning to design and implement a DFS version of BTRFS.
Roughly it will be done by
1. Extending current DeviceID to NodeID:DeviceID to support multi-node,
and
2. Implementing inter-node data
On 2015-04-07 19:57, 인정식 wrote:
Thank you for the information.
I just found that btrfs-progs includes several files that seem modified from
btrfs kernel source.
I am not sure exactly what they are.
Web pages say libbtrfs provides an interface for apps that use btrfs.
Why should there be
On 2015-06-25 08:52, David Sterba wrote:
On Wed, Jun 24, 2015 at 04:17:32PM -0400, Zygo Blaxell wrote:
Is there any sane use case where we would _want_ EXTENT_SAME to change
the mtime? We do a lot of work to make sure that none of the files
involved have any sort of content change. Why do we
On 2015-06-16 09:13, Holger Hoffstätte wrote:
Forking from the other thread..
On Tue, 16 Jun 2015 12:25:45 +, Hugo Mills wrote:
Yes. It's an artefact of the way that mkfs works. If you run a
balance on those chunks, they'll go away. (btrfs balance start
-dusage=0 -musage=0
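Spelled out, the cleanup Hugo describes is a filtered balance (mountpoint is an example):

```shell
# Remove completely empty data and metadata chunks left behind by mkfs;
# the usage=0 filters mean no live data is rewritten, so this is fast:
btrfs balance start -dusage=0 -musage=0 /mnt
```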
On 2015-06-15 10:44, Tovo Rabemanantsoa wrote:
On 06/15/2015 03:29 PM, David Sterba wrote:
On Mon, Jun 15, 2015 at 02:47:24PM +0200, Tovo Rabemanantsoa wrote:
Hi all,
By browsing this list's archive, I've found a thread initiated by
Charles Cazabon entitled: Oddly slow read performance with
On 2015-06-17 11:40, Christian wrote:
On 06/17/2015 11:28 AM, Chris Murphy wrote:
However, fstrim still gives me 0 B (0 bytes) trimmed, so that may be
another problem. Is there a way to check if trim works?
That sounds like maybe your SSD is blacklisted for trim, is all I can
think of. So
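A quick way to check discard support through the whole stack before suspecting a blacklist (mountpoint is an example):

```shell
# Non-zero DISC-GRAN/DISC-MAX values mean the device stack
# advertises trim support:
lsblk --discard
# Verbose fstrim reports how many bytes were actually trimmed:
fstrim -v /mnt
```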
On 2015-06-16 12:58, Hugo Mills wrote:
On Tue, Jun 16, 2015 at 06:43:23PM +0200, Arnaud Kapp wrote:
Hello,
Consider the following situation: I have a RAID 1 array with 4 drives.
I want to replace one of the drives with a new one of greater capacity.
However, let's say I only have 4 HDD slots so
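If the old drive can stay attached while the copy runs, btrfs replace avoids needing a free slot at all (device names, mountpoint, and the devid are examples):

```shell
# Copy everything from the old drive onto the new, larger one:
btrfs replace start /dev/sdd /dev/sde /mnt/array
btrfs replace status /mnt/array
# Afterwards, grow that device to its full size; get the devid
# from 'btrfs filesystem show':
btrfs filesystem resize 4:max /mnt/array
```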
On 2015-06-01 09:03, Neal Becker wrote:
So I think what I need to do is:
1. boot off some rescue media
In theory, you may be able to do the equivalent of this from single
user mode or by logging in as root, although it may be safer to do it
from rescue media (just make sure the rescue
On 2015-06-30 17:45, Dave Chinner wrote:
On Tue, Jun 30, 2015 at 09:32:20AM -0700, Omar Sandoval wrote:
In some cases, we may not want to enable automatic defragmentation for
the whole filesystem with the autodefrag mount option but we still
want to defragment specific files or directories. Add
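For comparison, the existing one-shot mechanism for specific paths (path is an example) is:

```shell
# Recursively defragment a directory once, without needing the
# autodefrag mount option:
btrfs filesystem defragment -r /mnt/vmimages
```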
On 2015-07-03 13:51, Chris Murphy wrote:
On Fri, Jul 3, 2015 at 9:05 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I did some more digging and found that I had a lot of errors basically
every drive.
Ick. Sucks for you but then makes this less of a Btrfs problem because
it can really
A couple of observations:
1. BTRFS currently has no knowledge of multipath or anything like that.
In theory it should work fine as long as the multiple device instances
all point to the same storage directly (including having identical block
addresses), but we still need to add proper
On 2015-08-12 15:30, Chris Murphy wrote:
On Wed, Aug 12, 2015 at 12:44 PM, Konstantin Svist fry@gmail.com wrote:
On 08/06/2015 04:10 AM, Austin S Hemmelgarn wrote:
On 2015-08-05 17:45, Konstantin Svist wrote:
Hi,
I've been running btrfs on Fedora for a while now, with bedup --defrag
On 2015-08-16 23:35, Tyler Bletsch wrote:
I just wanted to drop you guys a line to say that I am stunned with how
excellent btrfs is. I did some testing, and the things that it did were
amazing. I took a 4-disk RAID 5 and walked it all the way down to a
one-disk volume and back again, mixed in
On 2015-08-15 02:30, Duncan wrote:
Austin S Hemmelgarn posted on Fri, 14 Aug 2015 15:58:30 -0400 as
excerpted:
FWIW, running BTRFS on top of MDRAID actually works very well,
especially for BTRFS raid1 on top of MD-RAID0 (I get an almost 50%
performance increase for this usage over BTRFS raid10
On 2015-08-15 17:46, Timothy Normand Miller wrote:
To those of you who have been helping out with my 4-drive RAID1
situation, is there anything further we should do to investigate this,
in case we can uncover any more bugs, or should I just wipe everything
out and restore from backup?
If you
On 2015-08-17 15:18, Tyler Bletsch wrote:
Thanks. I will be trying raid5 in production, but production in this
case just means my home file server, with btrfs snapshot+sync for all
data and appropriate offsite non-btrfs backups for critical data. If it
hoses up, I'll post a bug report.
So far,
On 2015-08-17 14:52, Timothy Normand Miller wrote:
I'm not sure if I'm doing this wrong. Here's what I'm seeing:
# btrfs-image -c9 -t4 -w /mnt/btrfs ~/btrfs_dump.z
Superblock bytenr is larger than device size
Open ctree failed
create failed (No such file or directory)
For the source, you
On 2015-08-17 19:06, Duncan wrote:
Austin S Hemmelgarn posted on Mon, 17 Aug 2015 07:38:13 -0400 as
excerpted:
I've also found that BTRFS raid5/6 on top of MD RAID0 mitigates (to a
certain extent that is) the performance penalty of doing raid5/6 if you
aren't on ridiculously fast storage
On 2015-08-18 13:36, Timothy Normand Miller wrote:
Maybe this is a dumb question, but there are always corner cases.
I have a subvolume where I want to disable CoW for VM disks. Maybe
that's a dumb idea, but that's a recommendation I've seen here and
there. Now, in the docs I've seen, +C
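A sketch of the usual +C workflow (path hypothetical): since the attribute only takes full effect on empty files, set it on the directory so newly created images inherit it:

```shell
# Mark the directory NOCOW; files created in it afterwards inherit +C:
chattr +C /mnt/vmimages
# Verify the attribute (a capital C appears in the lsattr output):
lsattr -d /mnt/vmimages
```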
On 2015-08-18 22:55, Timothy Normand Miller wrote:
On Tue, Aug 18, 2015 at 10:48 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Timothy Normand Miller wrote on 2015/08/18 22:46 -0400:
On Tue, Aug 18, 2015 at 9:32 PM, Qu Wenruo quwen...@cn.fujitsu.com
wrote:
Hi Timothy,
Although I have
On 2015-08-20 00:40, Jonathan Panozzo wrote:
Zhao,
Thank you for your response. Two quick follow-up questions:
1: What happens on an unrecoverable data error case? Does the volume get put
into read-only mode?
Yes.
2: Out of curiosity, why is data checksumming tied to COW?
There's no
On 2015-08-20 07:52, Austin S Hemmelgarn wrote:
On 2015-08-19 13:24, Tyler Bletsch wrote:
Thanks. I'd consider raid6, but since I'll be backing up to a second
btrfs raid5 array, I think I have sufficient redundancy, since
equivalent to raid 5+1 on paper. I'm doing that rather than something
On 2015-08-18 17:09, Chris Murphy wrote:
On Tue, Aug 18, 2015 at 5:21 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2015-08-17 14:52, Timothy Normand Miller wrote:
I'm not sure if I'm doing this wrong. Here's what I'm seeing:
# btrfs-image -c9 -t4 -w /mnt/btrfs ~/btrfs_dump.z
On 2015-08-19 13:24, Tyler Bletsch wrote:
Thanks. I'd consider raid6, but since I'll be backing up to a second
btrfs raid5 array, I think I have sufficient redundancy, since
equivalent to raid 5+1 on paper. I'm doing that rather than something
like raid10 in a single box because I want the
On 2015-08-20 12:44, Chris Murphy wrote:
On Wed, Aug 19, 2015 at 9:43 PM, Russell Coker russ...@coker.com.au wrote:
On Thu, 20 Aug 2015 11:55:43 AM Chris Murphy wrote:
Question 1: If I apply the NOCOW attribute to a file or directory, how
does that affect my ability to run btrfs scrub?
On 2015-08-20 09:08, Timothy Normand Miller wrote:
On Thu, Aug 20, 2015 at 7:38 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Just for reference, I've found that it is usually safer to delete the
missing device first if possible, then add the new one and re-balance. There
seem to be some
On 2015-08-18 11:10, Timothy Normand Miller wrote:
I ran the following command. It spent a lot of time creating a
1672450048 byte file. Then it stopped writing to the file and started
using 100% CPU. It's currently doing no I/O, and it's been doing that
for a while now. Is that supposed to
On 2015-08-04 00:58, John Ettedgui wrote:
On Mon, Aug 3, 2015 at 8:01 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Although the best practice is staying away from such converted fs, either
using pure, newly created btrfs, or convert back to ext* before any balance.
Unfortunately I don't have
On 2015-08-04 13:36, John Ettedgui wrote:
On Tue, Aug 4, 2015 at 4:28 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2015-08-04 00:58, John Ettedgui wrote:
On Mon, Aug 3, 2015 at 8:01 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Although the best practice is staying away from
On 2015-07-22 07:00, Russell Coker wrote:
On Tue, 23 Jun 2015 02:52:43 AM Chris Murphy wrote:
OK I actually don't know what the intended block layer behavior is
when unplugging a device, if it is supposed to vanish, or change state
somehow so that thing that depend on it can know it's missing
On 2015-08-11 07:08, Juan Orti Alcaine wrote:
Hello,
I have added a new disk to my filesystem and I'm doing a balance right
now, but I'm a bit worried that the disk usage does not get updated as
it should. I remember from earlier versions that you could see the
disk usage being balanced across