Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-12 Thread Chris Murphy
On Fri, Aug 12, 2016 at 1:37 PM, Chris Murphy  wrote:
> On Fri, Aug 12, 2016 at 1:00 PM, Ronan Arraes Jardim Chagas
>  wrote:

>
> d. Run journalctl -f from a 2nd computer.

Hopefully it's obvious I mean run journalctl -f on the affected
computer remotely via ssh.

>
>> Do you
>> think that if I reinstall my openSUSE it will be fixed?
>
> Probably, but the nature of this problem isn't well understood as far
> as I know. It's not that common, or it'd be easy for a dev to reproduce
> and then figure out what's going on.

Since this file system has relatively small metadata, just under
2GiB, it might be useful to take a btrfs-image of it and put it up
somewhere like a Google Drive, or wherever it can remain for a while.
Options -t4 -c9 -s are fairly standard; -s sanitizes file names. Data
itself is not included in the image. From this I think a dev might be
able to figure out what's unique about this file system that results
in the bogus enospc. If you do this, I recommend filing a
bugzilla.kernel.org bug that includes the URL to the image and the URL
to this thread, and then posting the bugzilla URL on this thread, so
that everything is cross-referenced.
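For reference, a minimal sketch of the capture, assuming the
filesystem is offline (e.g. booted from live media), with the device
and output paths as placeholders:

# btrfs-image -t4 -c9 -s /dev/sda6 /mnt/usb/sda6.img

where -t4 uses four threads, -c9 is the highest compression level, and
-s sanitizes file names.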


-- 
Chris Murphy


Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-12 Thread Chris Murphy
On Fri, Aug 12, 2016 at 1:00 PM, Ronan Arraes Jardim Chagas
 wrote:
> Em Sex, 2016-08-12 às 12:02 -0600, Chris Murphy escreveu:
>> Tons of unallocated space. What kernel messages do you get for the
>> enospc? It sounds like this will be one of the mystery -28 error file
>> systems. So far as I recall, the only workaround is recreating the
>> file system. There are two additional things you can try: mount with
>> the enospc_debug mount option and see if you can gather more
>> information about the problem. Or try a 4.8-rc1 kernel, which has a
>> large number of enospc changes.
>>
>>
>
> Unfortunately no log was written due to the lack of space :)

a. journalctl -f in a terminal window or tab should still record
everything. So long as the OS isn't totally face-planting when the
enospc happens, you may still be able to copy-paste the output into a
file that you can save on another file system volume. There might be
some noisy messages from systemd-journald being unable to flush to
disk, but the enospc messages themselves should all be in the window
even though they don't get committed to disk.

b. Modify /etc/systemd/journald.conf so that Storage=volatile; the
journal is then kept only in memory, and you can flush it to another
file system yourself with something like 'journalctl -b -o
short-monotonic > journal.log'.

c. create a ~1GiB separate file system and mount it at /var/log/

d. Run journalctl -f from a 2nd computer.
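A rough sketch of options b through d, with device names, paths, and
host names as placeholders:

# after setting Storage=volatile in journald.conf (b):
systemctl restart systemd-journald
journalctl -b -o short-monotonic > /mnt/other/journal.log

# a dedicated ~1GiB filesystem for /var/log on a spare partition (c):
mkfs.ext4 /dev/sdXN
mount /dev/sdXN /var/log

# following the journal from a second machine (d):
ssh user@affected-host journalctl -f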



> Next time it happens, I will take a screenshot of the message.

Maybe. enospc_debug tends to spit out more output than will fit on a
single screen.

> Do you
> think that if I reinstall my openSUSE it will be fixed?

Probably, but the nature of this problem isn't well understood as far
as I know. It's not that common, or it'd be easy for a dev to reproduce
and then figure out what's going on.


-- 
Chris Murphy


Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-12 Thread Ronan Arraes Jardim Chagas
Em Sex, 2016-08-12 às 12:02 -0600, Chris Murphy escreveu:
> Tons of unallocated space. What kernel messages do you get for the
> enospc? It sounds like this will be one of the mystery -28 error file
> systems. So far as I recall, the only workaround is recreating the
> file system. There are two additional things you can try: mount with
> the enospc_debug mount option and see if you can gather more
> information about the problem. Or try a 4.8-rc1 kernel, which has a
> large number of enospc changes.
> 
> 

Unfortunately no log was written due to the lack of space :)
Next time it happens, I will take a screenshot of the message. Do you
think that if I reinstall my openSUSE it will be fixed?

Regards,
Ronan Arraes


Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-12 Thread Chris Murphy
On Fri, Aug 12, 2016 at 11:36 AM, Ronan Arraes Jardim Chagas
 wrote:
> Hi guys,
>
> I'm facing a daily problem with BTRFS. Almost every day, I get the
> message "No space left on device". Sometimes I can recover by
> balancing the system, but sometimes even balancing does not work due
> to the lack of space. In that case, only a hard reset works if I can't
> delete some files. The problem is that I have a huge amount of
> unallocated space, as you can see here:
>
> # btrfs fi usage /
> Overall:
> Device size:           1.26TiB
> Device allocated:    119.07GiB
> Device unallocated:    1.14TiB

Tons of unallocated space. What kernel messages do you get for the
enospc? It sounds like this will be one of the mystery -28 error file
systems. So far as I recall, the only workaround is recreating the
file system. There are two additional things you can try: mount with
the enospc_debug mount option and see if you can gather more
information about the problem. Or try a 4.8-rc1 kernel, which has a
large number of enospc changes.
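For what it's worth, enabling that option doesn't require a reboot; a
minimal sketch, assuming the affected filesystem is the root
filesystem:

# add enospc_debug without unmounting:
mount -o remount,enospc_debug /
# then watch kernel messages for the extra detail on the next enospc:
dmesg -w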


-- 
Chris Murphy


BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-12 Thread Ronan Arraes Jardim Chagas
Hi guys,

I'm facing a daily problem with BTRFS. Almost every day, I get the
message "No space left on device". Sometimes I can recover by balancing
the system (a typical filtered balance is sketched after the usage
output below), but sometimes even balancing does not work due to the
lack of space. In that case, only a hard reset works if I can't delete
some files. The problem is that I have a huge amount of unallocated
space, as you can see here:

# btrfs fi usage /
Overall:
    Device size:           1.26TiB
    Device allocated:    119.07GiB
    Device unallocated:    1.14TiB
    Device missing:          0.00B
    Used:                115.08GiB
    Free (estimated):      1.14TiB  (min: 586.21GiB)
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)

Data,single: Size:113.01GiB, Used:111.19GiB
   /dev/sda6  113.01GiB

Metadata,DUP: Size:3.00GiB, Used:1.94GiB
   /dev/sda6    6.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda6   64.00MiB

Unallocated:
   /dev/sda6    1.14TiB
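For anyone hitting the same thing, the filtered balance typically used
as this workaround looks something like the following; the 5% usage
thresholds are illustrative, not from the original report:

# rewrite only mostly-empty chunks, returning them to unallocated space
btrfs balance start -dusage=5 -musage=5 /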

It is not easy to trigger the problem, but I do see a correlation with
two things:

1) Since I started creating jails to build openSUSE packages locally,
the problem has happened more often. In these jails, some directories
like /dev, /dev/pts, and /proc are mounted inside the jail.

2) When I run my KVM guest, I also see this problem more often. Notice,
however, that the KVM disk image is stored on another ext4 partition.

I would be glad if anyone could help me fix it. Below is more
information about my system:

# uname -a
Linux ronanarraes-osd 4.7.0-1-default #1 SMP PREEMPT Mon Jul 25
08:42:47 UTC 2016 (89a2ada) x86_64 x86_64 x86_64 GNU/Linux

# btrfs --version
btrfs-progs v4.6.1+20160714

# btrfs fi show
Label: none  uuid: 80381f7f-8cef-4bd8-bdbc-3487253ee566
        Total devices 1 FS bytes used 113.13GiB
        devid    1 size 1.26TiB used 119.07GiB path /dev/sda6

# btrfs fi df /
Data, single: total=113.01GiB, used=111.19GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=3.00GiB, used=1.94GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Regards,
Ronan Arraes


Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Chris Murphy
On Fri, Aug 12, 2016 at 6:04 AM, Austin S. Hemmelgarn
 wrote:
> On 2016-08-11 16:23, Dave T wrote:

>> 5. Would most of you guys use btrfs + dm-crypt on a production file
>> server (with spinning disks in JBOD configuration -- i.e., no RAID)?
>> In this situation, the data is very important, of course. My past
>> experience indicates that RAID only improves uptime, which is not so
>> critical in our environment. Our main criterion is that we should
>> never ever have data loss. As far as I understand it, we do have to
>> use encryption.
>
> On a file server?  No, I'd ensure proper physical security is established
> and make sure it's properly secured against network based attacks and then
> not worry about it.  Unless you have things you want to hide from law
> enforcement or your government (which may or may not be legal where you
> live) or can reasonably expect someone to steal the system, you almost
> certainly don't actually need whole disk encryption.

Sure, but then you need a fairly strict handling policy for those
drives when they leave the environment: e.g. for an RMA if the drive
dies under warranty, or when the drive is being retired. First, there's
the actual physical handling (even interception) and accounting of all
of the drives, which has to be rather strict. And second, the fallback
for wiping a dead drive must be physical destruction. For any data not
worth physically destroying the drive over at disposal time, you can
probably forgo full disk encryption.


-- 
Chris Murphy


Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Patrik Lundquist
On 10 August 2016 at 23:21, Chris Murphy  wrote:
>
> I'm using LUKS, aes-xts-plain64, on six devices. One is a single
> device using mixed block groups. One is data single, metadata DUP.
> And then two two-device filesystems with metadata raid1 and data
> raid1. I've had zero problems. The two computers these run on do have
> AES-NI support. Aging-wise, they're all at least a year old. But I've
> been using Btrfs on LUKS for much longer than that.

FWIW:
I've had 5 spinning disks with LUKS + Btrfs raid1 for 1.5 years.
Also xts-plain64, with AES-NI acceleration.
No problems so far. Not using Btrfs compression.
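For anyone wanting to replicate this kind of setup, a minimal sketch;
the device path, key size, and single-device layout are placeholders:

# LUKS with AES in XTS mode, then Btrfs on top
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdX
cryptsetup luksOpen /dev/sdX cryptdisk
mkfs.btrfs /dev/mapper/cryptdisk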


Re: btrfs quota issues

2016-08-12 Thread Rakesh Sankeshi
Thanks for your inputs.

Another question I had: is there any way to check directory/file sizes
prior to compression, and how much compression btrfs achieved?
Basically, some stats around compression and/or dedupe from btrfs.


On Thu, Aug 11, 2016 at 12:13 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Rakesh Sankeshi posted on Thu, 11 Aug 2016 10:32:03 -0700 as excerpted:
>
>> I set 200GB limit to one user and 100GB to another user.
>>
>> as soon as I reached 139GB and 53GB each, hitting the quota errors.
>> anyway to workaround quota functionality on btrfs LZO compressed
>> filesystem?
>
> The btrfs quota subsystem remains somewhat buggy and unstable.  A lot of
> work has gone into it to fix the problems, including rewrites of the
> entire subsystem, and it's much better than it used to be, but it's still
> a feature that I would recommend not using on btrfs.
>
> My general position is this.  Either you need quotas for your use-case or
> you don't.  If you truly need them, you're far better off using a more
> mature filesystem with proven quota subsystem reliability.  If you don't
> really need them, simply keep the feature off for now, and for however
> long it takes to stabilize the feature, which could be some time.
>
> Of course if you're specifically testing quotas in order to report
> issues and test bugfixes, that's a specific case of needing quota
> functionality, and your work is greatly appreciated as it'll help to
> eventually make that feature stable and workable for all. =:^)
>
>
>
> --
> Duncan - List replies preferred.   No HTML msgs.
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master."  Richard Stallman
>


Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Duncan
Austin S. Hemmelgarn posted on Fri, 12 Aug 2016 08:04:42 -0400 as
excerpted:

> On a file server?  No, I'd ensure proper physical security is
> established and make sure it's properly secured against network based
> attacks and then not worry about it.  Unless you have things you want to
> hide from law enforcement or your government (which may or may not be
> legal where you live) or can reasonably expect someone to steal the
> system, you almost certainly don't actually need whole disk encryption.
> There are two specific exceptions to this though:
> 1. If your employer requires encryption on this system, that's their
> call.
> 2. Encrypted swap is a good thing regardless, because it prevents
> security credentials from accidentally being written unencrypted to
> persistent storage.

In the US, medical records are pretty well protected under penalty of
law (HIPAA, IIRC?).  Anyone storing medical records here would do well
to have full filesystem encryption for that reason.

Of course financial records are sensitive as well, or even just forum
login information, and then there are the various industrial spies from
various countries (China being the one most frequently named) who would
pay good money for unencrypted devices from the right sources.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: [PATCH v3 2/3] btrfs: relocation: Fix leaking qgroups numbers on data extents

2016-08-12 Thread Filipe Manana
On Tue, Aug 9, 2016 at 9:30 AM, Qu Wenruo  wrote:
> When balancing data extents, qgroup will leak all its numbers for
> relocated data extents.
>
> The relocation is done in the following steps for data extents:
> 1) Create data reloc tree and inode
> 2) Copy all data extents to data reloc tree
>    And commit transaction
> 3) Create tree reloc tree (special snapshot) for any related subvolumes
> 4) Replace file extents in tree reloc tree with new extents in data
>    reloc tree
>    And commit transaction
> 5) Merge tree reloc tree with original fs, by swapping tree blocks
>
> For 1)~4), since the tree reloc tree and data reloc tree don't count
> toward qgroup, everything is OK.
>
> But for 5), the swapping of tree blocks will only inform qgroup to
> track metadata extents.
>
> If metadata extents contain file extents, the qgroup numbers for those
> file extents will get lost, leading to corrupted qgroup accounting.
>
> The fix is, before the transaction commit of step 5), manually inform
> qgroup to track all file extents in the data reloc tree.
> Since by transaction commit time the tree swapping is done, qgroup
> will account these data extents correctly.

Hi Qu,

This changelog should mention that this fixes a regression introduced
in the 4.2 kernel.
It's especially important for people responsible for backporting fixes
to earlier kernel releases.

>
> Cc: Mark Fasheh 
> Reported-by: Mark Fasheh 
> Reported-by: Filipe Manana 
> Signed-off-by: Qu Wenruo 
> ---
>  fs/btrfs/relocation.c | 114 +++---
>  1 file changed, 108 insertions(+), 6 deletions(-)
>
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index b26a5ae..a6ace8a 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -31,6 +31,7 @@
>  #include "async-thread.h"
>  #include "free-space-cache.h"
>  #include "inode-map.h"
> +#include "qgroup.h"
>
>  /*
>   * backref_node, mapping_node and tree_block start with this
> @@ -3916,6 +3917,95 @@ int prepare_to_relocate(struct reloc_control *rc)
> return 0;
>  }
>
> +/*
> + * Qgroup fixer for data chunk relocation.
> + * The data relocation is done in the following steps
> + * 1) Copy data extents into data reloc tree
> + * 2) Create tree reloc tree(special snapshot) for related subvolumes
> + * 3) Modify file extents in tree reloc tree
> + * 4) Merge tree reloc tree with original fs tree, by swapping tree blocks
> + *
> + * The problem is, data and tree reloc tree are not accounted to qgroup,
> + * and 4) will only info qgroup to track tree blocks change, not file extents
> + * in the tree blocks.
> + *
> + * The good news is, related data extents are all in data reloc tree, so we
> + * only need to info qgroup to track all file extents in data reloc tree
> + * before commit trans.
> + */
> +static int qgroup_fix_relocated_data_extents(struct btrfs_trans_handle *trans,
> +                                             struct reloc_control *rc)
> +{
> +   struct btrfs_fs_info *fs_info = rc->extent_root->fs_info;
> +   struct inode *inode = rc->data_inode;
> +   struct btrfs_root *data_reloc_root = BTRFS_I(inode)->root;
> +   struct btrfs_path *path;
> +   struct btrfs_key key;
> +   int ret = 0;
> +
> +   if (!fs_info->quota_enabled)
> +   return 0;
> +
> +   /*
> +* Only for stage where we update data pointers the qgroup fix is
> +* valid.
> +* For MOVING_DATA stage, we will miss the timing of swapping tree
> +* blocks, and won't fix it.
> +*/
> +   if (!(rc->stage == UPDATE_DATA_PTRS && rc->extents_found))
> +   return 0;
> +
> +   path = btrfs_alloc_path();
> +   if (!path)
> +   return -ENOMEM;
> +   key.objectid = btrfs_ino(inode);
> +   key.type = BTRFS_EXTENT_DATA_KEY;
> +   key.offset = 0;
> +
> +   ret = btrfs_search_slot(NULL, data_reloc_root, &key, path, 0, 0);
> +   if (ret < 0)
> +   goto out;
> +
> +   lock_extent(&BTRFS_I(inode)->io_tree, 0, (u64)-1);
> +   while (1) {
> +   struct btrfs_file_extent_item *fi;
> +
> +   btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
> +   if (key.objectid > btrfs_ino(inode))
> +   break;
> +   if (key.type != BTRFS_EXTENT_DATA_KEY)
> +   goto next;
> +   fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
> +   struct btrfs_file_extent_item);
> +   if (btrfs_file_extent_type(path->nodes[0], fi) !=
> +   BTRFS_FILE_EXTENT_REG)
> +   goto next;
> +   /*
> +   pr_info("disk bytenr: %llu, num_bytes: %llu\n",
> +   btrfs_file_extent_disk_bytenr(path->nodes[0], fi),
> +   btrfs_file_extent_disk_num_bytes(path->nodes[0], fi));
> +   */

Please remove this debugging pr_info.

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Austin S. Hemmelgarn

On 2016-08-11 16:23, Dave T wrote:

> What I have gathered so far is the following:
>
> 1. my RAM is not faulty and I feel comfortable ruling out a memory
> error as having anything to do with the reported problem.
>
> 2. my storage device does not seem to be faulty. I have not figured
> out how to do more definitive testing, but smartctl reports it as
> healthy.
Is this just based on smartctl -H, or is it based on looking at all the 
info available from smartctl?  Based on everything you've said so far, 
it sounds to me like there was a group of uncorrectable errors on the 
disk, and the sectors in question have now been remapped by the device's 
firmware.  Such a situation is actually more common than people think 
(this is part of the whole 'reinstall to speed up your system' mentality 
in the Windows world).  I've actually had this happen before (and 
correlated the occurrences with spikes in readings from the data-logging 
Geiger counter I have next to my home server).  Most disks don't start 
to report as failing until they get into pretty bad condition (on most 
hard drives, it takes a pretty insanely large count of reallocated 
sectors to mark the disk as failed in the drive firmware, and on SSD's 
you pretty much have to run it out of spare blocks (which takes a _long_ 
time on many SSD's)).
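To see the full picture rather than just the overall health flag,
something like the following for an ATA drive (the device path is a
placeholder; NVMe devices expose a different, smaller health log):

# full SMART dump: attributes, error log, self-test log
smartctl -x /dev/sda

Reallocated_Sector_Ct, Current_Pending_Sector, and
Offline_Uncorrectable are the attributes worth watching in a case like
this.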


> 3. this problem first happened on a normally running system in light
> use. It had not recently crashed. But the root fs went read-only for
> an unknown reason.
>
> 4. the aftermath of the initial problem may have been exacerbated by
> hard resetting the system, but that's only a guess
>
>> The compression-related problem is this: Btrfs is considerably less
>> tolerant of checksum-related errors on btrfs-compressed data


> I'm an unsophisticated user. The argument in support of this statement
> sounds convincing to me. Therefore, I think I should discontinue using
> compression. Anyone disagree?
>
> Is there anything else I should change? (Do I need to provide
> additional information?)
>
> What can I do to find out more about what caused the initial problem?
> I have heard memory errors mentioned, but that's apparently not the
> case here. I have heard crash recovery mentioned, but that isn't how
> my problem initially happened.
>
> I also have a few general questions:
>
> 1. Can one discontinue using the compress mount option if it has been
> used previously? What happens to existing data if the compress mount
> option is 1) added when it wasn't used before, or 2) dropped when it
> had been used?
Yes, it just affects newly written data.  If you want to convert 
existing data to be uncompressed, you'll need to run 'btrfs filesystem 
defrag -r ' on the filesystem to convert things.
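A sketch of that conversion, assuming the filesystem is mounted
without any compress option, with the mountpoint as a placeholder:

# rewrite existing files; with no compression in effect, they come
# back uncompressed
btrfs filesystem defrag -r /mnt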


> 2. I understand that the compress option generally improves btrfs
> performance (via a Phoronix article I read in the past; I can't find
> the link). Since encryption has some characteristics in common with
> compression, would one expect any decrease in performance from
> dropping compression when using btrfs on dm-crypt? (For more context,
> with an i7 6700K, which has AES-NI, CPU performance should not be a
> bottleneck on my computer.)
I would expect a change in performance in that case, but not necessarily 
a decrease.  The biggest advantage of compression is that it trades time 
spent using the disk for time spent using the CPU.  In many cases, this 
is a favorable trade-off when your storage is slower than your memory 
(because memory speed is really the big limiting factor here, not 
processor speed).  In your case, the encryption is hardware accelerated, 
but the compression isn't, so you should in theory actually get better 
performance by turning off compression.


> 3. How do I find out if it is appropriate to use dup metadata on a
> Samsung 950 Pro NVMe drive? I don't see deduplication mentioned in the
> drive's datasheet:
> http://www.samsung.com/semiconductor/minisite/ssd/downloads/document/Samsung_SSD_950_PRO_Data_Sheet_Rev_1_2.pdf
Whether or not it does deduplication is hard to answer.  If it does, 
then you obviously should avoid dup metadata.  If it doesn't, then it's 
a complex question as to whether or not to use dup metadata.  The short 
explanation for why is that the SSD firmware maintains a somewhat 
arbitrary mapping between LBA's and actual location of the data in 
flash, and it tends to group writes from around the same time together 
in the flash itself.  The argument against dup on SSD's in general takes 
this into account, arguing that because the data is likely to be in the 
same erase block for both copies, it's not as well protected. 
Personally, I run dup on non-deduplicating SSD's anyway, because I 
don't trust higher layers to not potentially mess up one of the copies, 
and I still get better performance than most hard disks.
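If one does decide on dup metadata on such a drive, it's chosen at
mkfs time or converted later with balance; a minimal sketch, with the
device and mountpoint as placeholders:

# at creation time
mkfs.btrfs -m dup /dev/nvme0n1p2
# or converting an existing filesystem
btrfs balance start -mconvert=dup /mnt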


> 4. Given that my drive is not reporting problems, does it seem
> reasonable to re-use this drive after the errors I reported? If so,
> how should I do that? Can I simply make a new btrfs filesystem and
> copy my data back? Should I start at a lower lev

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Adam Borowski
On Thu, Aug 11, 2016 at 04:23:45PM -0400, Dave T wrote:
> 1. Can one discontinue using the compress mount option if it has been
> used previously?

The mount option applies only to newly written blocks, and even then
only to files that don't say otherwise (via chattr +c or +C, btrfs
property, etc.). You can change it on the fly (mount -o remount,...).
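A quick sketch of those per-file and on-the-fly controls; the paths
and the lzo choice are just examples:

# force compression of new writes under a directory
chattr +c /mnt/dir
# or per file, via properties
btrfs property set /mnt/file compression lzo
# or change the filesystem-wide default on the fly
mount -o remount,compress=lzo /mnt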

> What happens to existing data if the compress mount option is 1) added
> when it wasn't used before, or 2) dropped when it had been used.

That data stays compressed or uncompressed, as it was when written. You
can defrag files to change that; balance moves extents without changing
their compression.
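Going the other way — compressing data written before the option was
enabled — can be done explicitly; a minimal sketch with a placeholder
path:

# rewrite and compress existing data regardless of the mount option
btrfs filesystem defrag -r -clzo /mnt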

> 2. I understand that the compress option generally improves btrfs
> performance (via Phoronix article I read in the past; I don't find the
> link). Since encryption has some characteristics in common with
> compression, would one expect any decrease in performance from
> dropping compression when using btrfs on dm-crypt? (For more context,
> with an i7 6700K which has aes-ni, CPU performance should not be a
> bottleneck on my computer.)

As said elsewhere, compression can drastically help or hurt
performance; this depends on your CPU-to-I/O ratio, and on whether you
do small random writes inside files (compress has to rewrite a whole
128KiB block).

An extreme data point: on an Odroid-U2 with eMMC doing Debian archive
rebuilds, compression improves overall throughput by a factor of around
two! On the other hand, the same task on typical machines tends to be
CPU bound.

-- 
An imaginary friend squared is a real enemy.