merge status of per-chunk degradable check [was Re: Which device is missing ?]

2018-10-08 Thread Nicholas D Steeves
On Mon, Oct 08, 2018 at 04:10:55PM +, Hugo Mills wrote:
> On Mon, Oct 08, 2018 at 03:49:53PM +0200, Pierre Couderc wrote:
> > I am trying to make a "RAID1" with /dev/sda2 and /dev/sdb (or similar).
> > 
> > But I have strange statuses or errors about "missing devices" and I
> > do not understand the current situation:
[...]
>Note that, since the main FS is missing a device, it will probably
> need to be mounted in degraded mode (-o degraded), and that on kernels
> earlier than (IIRC) 4.14, this can only be done *once* without the FS
> becoming more or less permanently read-only. On recent kernels, it
> _should_ be OK.
> 
> *WARNING ENDS*

I think this was the patch that addressed this?:
  https://www.spinics.net/lists/linux-btrfs/msg47283.html
  https://patchwork.kernel.org/patch/7226931/

In my notes it wasn't present in <= 4.14.15, but my notes might be
wrong.  Does this patch resolve the one-shot -o degraded, reboot,
forever read-only behaviour, or is something else required?  When was
it merged?  Has it been or will it be backported to 4.14.x?  I'm
guessing 4.9.x is too far back, but it would be really nice to see it
there too :-)

Also, will this issue be resolved for linux-4.19?  If so I'd like to
update the Debian btrfs wiki with this good news :-)

Finally, is the following a valid workaround for users who don't have
access to a kernel containing this fix:

1. Make a raid1 profile volume (both data and metadata) with >= 3 disks.
2. Lose one disk.
3. The allocator continues to write raid1 chunks instead of single ones,
because it can still place the two copies of each chunk on two disks.
4. Thus reboot twice -> forever read-only averted?
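If that reasoning holds, the setup would look roughly like this.  The
device names are placeholders and the commands are an untested sketch,
not a recipe (and mkfs destroys everything on the named devices):

```shell
# Create a filesystem with raid1 data and metadata across three
# devices, so chunks can still be mirrored after losing any one device.
# WARNING: destroys all data on the (placeholder) devices below.
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdx /dev/sdy /dev/sdz
mount /dev/sdx /mnt

# ...later, /dev/sdz dies.  With two healthy devices remaining, the
# allocator can still place both copies of every new raid1 chunk, so
# no single-profile chunks should be created while degraded:
mount -o degraded /dev/sdx /mnt

# Replace the dead device (devid 3 in this example) as soon as possible:
btrfs replace start 3 /dev/sdw /mnt
```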


Kind regards,
Nicholas

P.S. Please let me know if you'd prefer for me to shift this
documentation effort to btrfs.wiki.kernel.org.


signature.asc
Description: PGP signature


Re: Problem with BTRFS

2018-09-14 Thread Nicholas D Steeves
Hi,

On Fri, Sep 14, 2018 at 10:13:06PM +0200, Rafael Jesús Alcántara Pérez wrote:
> Hi,
> 
> It seems that btrfs-progs_4.17-1 from Sid, includes that feature (at
> least, it says so in the manual page). I don't know if I can install it
> on Stretch but I'll try.
> 
> Greets and thank you very much to both of you ;)
> Rafael J. Alcántara Pérez.

Please do not install btrfs-progs from sid on stretch, it's likely to
break your system.  If you can't wait, here is a link to what I
uploaded.  It includes both the source and the binary packages (see
gpg signed changes file, and please take care to verify the binaries
weren't tampered with):

https://drive.google.com/file/d/1WflwBEn-QN_btrKPiefz7Kxz58VT8kIQ/view?usp=sharing
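For the verification step, something like the following should work,
assuming the signing key is in a keyring you trust and devscripts is
installed (the filename is an example):

```shell
# Check the maintainer signature on the .changes file, then confirm the
# .deb files match the checksums it lists.  dscverify (from the
# devscripts package) performs both steps, provided the signing key is
# in a keyring it checks.
gpg --verify btrfs-progs_4.17-1~bpo9+1_amd64.changes
dscverify btrfs-progs_4.17-1~bpo9+1_amd64.changes
```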

Of course, this package is unofficial.  The official one will soon
become available through the regular channels.

For best results, set up a local file:// apt repository so that apt
update && apt upgrade will work properly.  The official backport will
automatically supersede these packages in any case.
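A minimal flat-layout local repository can be built like this (paths
are examples; trusted=yes is acceptable only because you verified the
.changes file yourself):

```shell
# Put the downloaded .deb files in one directory and generate a
# Packages index for it.  apt-ftparchive is in the apt-utils package.
mkdir -p /srv/local-apt
cp ./*.deb /srv/local-apt/
cd /srv/local-apt
apt-ftparchive packages . > Packages
gzip -k Packages

# Register the flat repository and refresh the package lists:
echo 'deb [trusted=yes] file:/srv/local-apt ./' \
    > /etc/apt/sources.list.d/local.list
apt update
```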

Cheers,
Nicholas




Re: Problem with BTRFS

2018-09-14 Thread Nicholas D Steeves
Hi,

On Fri, Sep 14, 2018 at 10:46:12PM +0500, Roman Mamedov wrote:
> On Fri, 14 Sep 2018 19:27:04 +0200
> Rafael Jesús Alcántara Pérez  wrote:
> 
> > BTRFS info (device sdc1): use lzo compression, level 0
> > BTRFS warning (device sdc1): 'recovery' is deprecated, use
> > 'usebackuproot' instead
> > BTRFS info (device sdc1): trying to use backup root at mount time
> > BTRFS info (device sdc1): disk space caching is enabled
> > BTRFS info (device sdc1): has skinny extents
> > BTRFS error (device sdc1): super_total_bytes 601020864 mismatch with
> > fs_devices total_rw_bytes 601023424
> 
> There is a recent feature added to "btrfs rescue" to fix this kind of
> condition: https://patchwork.kernel.org/patch/10011399/
> 
> You need a recent version of the Btrfs tools for it, not sure which, I see
> that it's not in version 4.13 but is present in 4.17.

btrfs-progs_4.17-1~bpo9+1_amd64.changes is waiting to pass through NEW
(due to the new library packages in the backport).  It will take
between 24h and several weeks to pass to stretch-backports.  The
variability in expected delivery is because a Debian FTP Master will
need to manually ACK the changes.

If 4.17.1 is required, please 'reportbug btrfs-progs' and take care to
set the "found" version to 4.17-1, plus provide a citation for why
4.17.1 is required--without this justification the bug severity could
be downgraded to wishlist.  This hypothetical bug would be for the
"unstable" suite (packages migrate to testing after a period of
testing, and then become eligible for backporting).
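For reference, a minimal sketch of filing that report (reportbug asks
for the found version and severity interactively, so I won't guess at
flags here):

```shell
# File a bug against btrfs-progs in unstable; when prompted, set the
# "found in" version to 4.17-1 and justify the requested severity.
reportbug btrfs-progs
```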

Take care!
Nicholas




Re: btrfs-convert missing in btrfs-tools v4.15.1

2018-08-23 Thread Nicholas D Steeves
Hi Qu,

On Sun, Jul 29, 2018 at 07:44:05AM +0800, Qu Wenruo wrote:
> 
> 
> On 2018年07月29日 05:34, Nicholas D Steeves wrote:
> > Resending because I forgot to CC list.
> > 
> > Hi jkexcel,
> > 
> > On 28 July 2018 at 16:50, jkexcel  wrote:
> >>
> >> I'm an end user trying to use btrfs-convert but when I installed
> >> btrfs-tools and its dependency btrfs-progs on kubuntu 18.04, the
> >> installation was successful, and it shows that v4.15.1-1build1 was
> >> installed.
> >>
> >> However, when using the command  # brtfs-convert  /dev/sda4 (with the
> >> drive unmounted) the resulting error appears "command not found"
> >> I also tried command "btrfs convert" in case this was folded into the
> >> main tool, but this also failed.
> >>
> >> 1. Is btrfs-convert still available?
> >>
> >> 2. Where can I find it?
> >>
> >> 3. Has btrfs-convert been replaced? what is it's new name?
> >>
> >> 4. Is it safe to use a downgraded version of btrfs-tools ie: 4.14 or
> >> older?
> > 
> > You can blame me for that.  In Debian several users had reported
> > release-critical issues in btrfs-convert, so I submitted a patch to
> > disable it for the foreseeable future, eg:
> > 
> >   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864798
> >   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=854489
> 
> Both reports look pretty old (4.7~4.9).
> 
> In fact, just in v4.10 we had a big rework for convert.
> It should work much better after that.
> 
> Furthermore, we have newer (but smaller) fixes to remove a lot of
> BUG_ON(), and do graceful exit for ENOSPC case since then.
> 
> And the design of btrfs-convert (at least for the latest btrfs-convert)
> is to ensure until everything goes well, we won't touch any bytes of the
> ext* fs (in fact we open the ext* fs in RO mode).
> So it at least won't corrupt the ext* fs.
> 
> > 
> > Also, please consider the official status "As of 4.0 kernels this feature
> > is not often used or well tested anymore, and there have been some reports
> > that the conversion doesn't work reliably. Feel free to try it out, but
> > make sure you have backups" (
> > https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3 ).
> 
> The wiki page needs to be updated.
> 
> Both btrfs-convert and base btrfs-progs are improving, especially after
> v4.10 btrfs-convert goes through a big rework and works well so far, and
> even added support for reiserfs under the same framework.
> 
> So IMHO it's at least worth trying.
> 
> Thanks,
> Qu

Thank you for sharing the cut-off where success became more likely :-)
Debian 9 could have had 4.9.1 at the newest, so it wouldn't have had
the reworked btrfs-convert.  So it sounds like btrfs-convert could have been
enabled for the experimental suite (which is almost only used by
Debian developers and not users) for 4.10.  Looking at the changelog
it seems we might have had to disable it again before 4.14.1 or 4.16.

I'm happy we're having this conversation now, because the time to give
it another try is probably sometime in the next month :-)  See my long
email to David for the caveats.

Cheers,
Nicholas




Re: btrfs-convert missing in btrfs-tools v4.15.1

2018-08-23 Thread Nicholas D Steeves
Hi everyone,

Sorry for the delay replying, I've been busy with other work.  Reply
follows inline.  P.S. sorry about the table in this email that is 99
columns wide.

On Thu, Aug 09, 2018 at 01:50:46PM +0200, David Sterba wrote:
> On Sat, Jul 28, 2018 at 05:34:49PM -0400, Nicholas D Steeves wrote:
> > On 28 July 2018 at 16:50, jkexcel  wrote:
> > > I'm an end user trying to use btrfs-convert but when I installed
> > > btrfs-tools and its dependency btrfs-progs on kubuntu 18.04, the
> > > installation was successful, and it shows that v4.15.1-1build1 was
> > > installed.
> > >
> > > However, when using the command  # brtfs-convert  /dev/sda4 (with the
> > > drive unmounted) the resulting error appears "command not found"
> > > I also tried command "btrfs convert" in case this was folded into the
> > > main tool, but this also failed.
> > >
> > > 1. Is btrfs-convert still available?
> > >
> > > 2. Where can I find it?
> > >
> > > 3. Has btrfs-convert been replaced? what is it's new name?
> > >
> > > 4. Is it safe to use a downgraded version of btrfs-tools ie: 4.14 or
> > > older?
> > 
> > You can blame me for that.  In Debian several users had reported
> > release-critical issues in btrfs-convert, so I submitted a patch to
> > disable it for the foreseeable future, eg:
> > 
> >   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864798
> >   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=854489
> 
> The reports are against version 4.7.3 released one year before the time
> of the bug reports. The fix for the reported bug happened in 4.10, that
> was half a year before the bug.

Debian stable will always have an old version, which will be in use
for two to four years.  Btrfs-progs 4.7.3 will be in use in Debian 9
until at least 2021, possibly longer.  Btw, I strongly believe Debian 9
should have shipped with btrfs-progs 4.9.1, but alas the primary
maintainer didn't upload it on time.

For "buster" (Debian 10), which will probably be released in mid 2019,
the newest possible btrfs-progs version that could be included is
whatever is current at the end of January 2019.  Exceptions are
sometimes granted for an unblock of a newer version.  For example, if
an LTS kernel won't be released until March, and the release,
technical committee, and kernel team decide that's the version we
want, then a preapproved exception will be granted.  At that time an
argument can be made for preapproval of a newer btrfs-progs as well.

That said, I try to keep a backported newer version of btrfs-progs for
the stable Debian release reasonably up-to-date (backports are
available to users on a per-package opt-in basis).  That's the channel
for feature enablement.  Also, my apologies: at the moment this
backport is very much out of date--it stalled while I investigated
which packages would be broken by the library reorganisation, though
from what I can tell only snapper would be affected.

> > Also, please consider the official status "As of 4.0 kernels this feature
> > is not often used or well tested anymore, and there have been some reports
> > that the conversion doesn't work reliably. Feel free to try it out, but
> > make sure you have backups" (
> > https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3 ).
> 
> Sorry that you take it as official status. The wiki is open to
> edits and such claims appear there from time to time. I've removed it;
> it's been there since 2015, when it possibly was accurate, but it's not
> anymore.

Is there a more authoritative and up-to-date location for various
statuses?  It would be nice to have something in btrfs-progs as a
table like this, or exportable in some kind of human-friendly format:

+---------+------------------------------------+--------------+----------------+--------------+
| Feature | 1st mainline version where feature | LTS kernel 1 | LTS kernel 2   | LTS kernel 3 |
|         | appeared                           | eg: 4.4      | eg: 4.9        | eg: 4.14     |
+---------+------------------------------------+--------------+----------------+--------------+
| convert | Assume -progs and kernel versions  | exp?         | mostly?        | stable?      |
| from    | are strongly associated, for       |              | amend status   |              |
| ext3/4  | simplicity.                        |              | with:          |              |
|         |                                    |              | 4.9.z: testing |              |
+---------+------------------------------------+--------------+----------------+--------------+
| foo     | 3.16:danger||exp||mostly||         | exp          | mostly         | testing      |
|         | testing||stable                    |              |                |              |
+---------+------------------------------------+--------------+----------------+--------------+

Re: btrfs-convert missing in btrfs-tools v4.15.1

2018-07-28 Thread Nicholas D Steeves
Resending because I forgot to CC list.

Hi jkexcel,

On 28 July 2018 at 16:50, jkexcel  wrote:
>
> I'm an end user trying to use btrfs-convert but when I installed
> btrfs-tools and its dependency btrfs-progs on kubuntu 18.04, the
> installation was successful, and it shows that v4.15.1-1build1 was
> installed.
>
> However, when using the command  # brtfs-convert  /dev/sda4 (with the
> drive unmounted) the resulting error appears "command not found"
> I also tried command "btrfs convert" in case this was folded into the
> main tool, but this also failed.
>
> 1. Is btrfs-convert still available?
>
> 2. Where can I find it?
>
> 3. Has btrfs-convert been replaced? what is it's new name?
>
> 4. Is it safe to use a downgraded version of btrfs-tools ie: 4.14 or
> older?

You can blame me for that.  In Debian several users had reported
release-critical issues in btrfs-convert, so I submitted a patch to
disable it for the foreseeable future, eg:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864798
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=854489

Also, please consider the official status "As of 4.0 kernels this feature
is not often used or well tested anymore, and there have been some reports
that the conversion doesn't work reliably. Feel free to try it out, but
make sure you have backups" (
https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3 ).

I'm happy to hear it is still disabled in Ubuntu, where many more
users would be affected.  IIRC openSUSE Leap and SLED 15 re-enabled it
(it was previously disabled there), so maybe it needs specific kernel
versions or patches to not trigger RC bugs, and/or very specific
btrfs-progs versions, and/or very specific e2fslibs, and/or specific
combinations?  While I very much look forward to the day when
btrfs-convert can be relied on in Debian, I don't think we're there
yet.  Please take this as an opportunity to test that your backups are
restorable, mkfs.btrfs, and then restore from backup.  P.S. I have no
idea if Ubuntu has additional btrfs support.

Cheers,
Nicholas




Re: [PATCH 1/1] btrfs-progs: Fix typos in docs and user-facing strings

2018-03-17 Thread Nicholas D Steeves
Hi Nikolay,

Thank you for nit-picking, I'm definitely not above reproach and often
make errors--especially in my own writing!  Reply follows below.

On Fri, Mar 16, 2018 at 10:00:34AM +0200, Nikolay Borisov wrote:
> 
> 
> On 16.03.2018 02:39, Nicholas D Steeves wrote:
> > Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
> 
> 
> All look fine except one nit, see below.
> 
> > ---
> >  Documentation/btrfs-balance.asciidoc | 2 +-
> >  Documentation/btrfs-check.asciidoc   | 2 +-
> >  Documentation/btrfs-man5.asciidoc| 8 
> >  cmds-subvolume.c | 2 +-
> >  4 files changed, 7 insertions(+), 7 deletions(-)
> > 
> > diff --git a/Documentation/btrfs-balance.asciidoc 
> > b/Documentation/btrfs-balance.asciidoc
> > index 7017bed7..536243bc 100644
> > --- a/Documentation/btrfs-balance.asciidoc
> > +++ b/Documentation/btrfs-balance.asciidoc
> > @@ -204,7 +204,7 @@ The way balance operates, it usually needs to 
> > temporarily create a new block
> >  group and move the old data there, before the old block group can be 
> > removed.
> >  For that it needs the work space, otherwise it fails for ENOSPC reasons.
> >  This is not the same ENOSPC as if the free space is exhausted. This refers 
> > to
> > -the space on the level of block groups, which are bigger parts of the 
> > filesytem
> > +the space on the level of block groups, which are bigger parts of the 
> > filesystem
> >  that contain many file extents.
> >  
> >  The free work space can be calculated from the output of the *btrfs 
> > filesystem show*
> > diff --git a/Documentation/btrfs-check.asciidoc 
> > b/Documentation/btrfs-check.asciidoc
> > index cc76d846..b963eae5 100644
> > --- a/Documentation/btrfs-check.asciidoc
> > +++ b/Documentation/btrfs-check.asciidoc
> > @@ -122,7 +122,7 @@ NOTE: 'lowmem' mode does not work with '--repair' yet, 
> > and is still considered
> >  experimental.
> >  
> >  --force::
> > -allow to work on a mounted filesystem. Note that this should work fine on a
> > +allow work on a mounted filesystem. Note that this should work fine on a
> Shouldn't we use the continuous aspect of the verb here, i.e.
> s/work/working ? (I'm not a native speaker so take it with a grain of salt)

I'm not sure, but if the format is:
"[implied subject] verb, other stuff", then:

[The --force argument] allows work on a mounted filesystem. [1]
  or alternatively:
    allows operations on a mounted filesystem. [2]
  or
    allows operations to work on a mounted filesystem. [3]

but if we're in imperative/declarative then:

    [Allow] Work on a mounted filesystem. [4]
and not
    [Allow] Working on a mounted filesystem. [5]

Because "Working on a mounted filesystem" [5] is a sentence fragment
(also see related discussing at [6] below).  I chose [4] because it
required the fewest changes ;-)

In "Allow work on a mounted filesystem" [4]

"Allow" is the verb and "work" is the object, but if "Allow" is
dropped and the phrase becomes "work on a mounted filesystem" then
"work" becomes the verb, using its identically spelled verb form. The
following is also grammatically correct if the rule is:

"(argument_name functioning as imperative verb), [participial phrase]"
eg: --force, allowing work on a mounted filesystem. [6]

but IIRC [6] is considered poor style for science writing and
documentation--not to mention it also might make translation
difficult, and finally it breaks whenever --argument cannot function
as a verb.  It's also tricky to analyse ing-verbs to tell if they're
gerund or participial when dealing with the truncated grammatical
context that is conventional in manpages.

As ever, I might be wrong, but this is my reasoning :-)

Cheers,
Nicholas





Re: [PATCH 1/1] btrfs-progs: docs: annual typo, clarity, & grammar review & fixups

2018-03-15 Thread Nicholas D Steeves
Hi Qu,

So sorry for the incredibly delayed reply [it got lost in my drafts
folder], I sincerely appreciate the time you took to respond.  There
is a lot in your responses that I suspect would benefit readers of the
btrfs wiki, so I've drawn attention to them by replying inline.  I've
omitted the sections David resolved with his merge.

P.S. Even graduate-level native speakers struggle with the
multitude of special cases in English!

On Sun, Oct 22, 2017 at 06:54:16PM +0800, Qu Wenruo wrote:
> Hi Nicholas,
> 
> Thanks for the documentation update.
> Since I'm not a native English speaker, I may not help much to organize
> the sentence, but I can help to explain the question noted in the
> modification.
> 
> On 2017年10月22日 08:00, Nicholas D Steeves wrote:
> > In one big patch, as requested
[...]
> > --- a/Documentation/btrfs-balance.asciidoc
> > +++ b/Documentation/btrfs-balance.asciidoc
> > @@ -21,7 +21,7 @@ filesystem.
> >  The balance operation is cancellable by the user. The on-disk state of the
> >  filesystem is always consistent so an unexpected interruption (eg. system 
> > crash,
> >  reboot) does not corrupt the filesystem. The progress of the balance 
> > operation
> > -is temporarily stored and will be resumed upon mount, unless the mount 
> > option
> > +is temporarily stored (EDIT: where is it stored?) and will be 
> > resumed upon mount, unless the mount option
> 
> To be specific, they are stored in data reloc tree and tree reloc tree.
> 
> Data reloc tree stores the data/metadata written to new location.
> 
> And tree reloc tree is kind of special snapshot for each tree whose tree
> block is get relocated during the relocation.

Is there already a document on the btrfs allocation?  This seems like
it might be a nice addition for the wiki.  I'm guessing it would fit
under
https://btrfs.wiki.kernel.org/index.php/Main_Page#Developer_documentation

> > @@ -200,11 +200,11 @@ section 'PROFILES'.
> >  ENOSPC
> >  --
> >  
> > -The way balance operates, it usually needs to temporarily create a new 
> > block
> > +The way balance operates, it usually needs to temporarily create a new 
> > block
> >  group and move the old data there. For that it needs work space, otherwise
> >  it fails for ENOSPC reasons.
> >  This is not the same ENOSPC as if the free space is exhausted. This refers 
> > to
> > -the space on the level of block groups.
> > +the space on the level of block groups. (EDIT: What is the 
> > relationship between the new block group and the work space?  Is the "old 
> > data" removed from the new block group?  Please say something about block 
> > groups to clarify)
> 
> Here I think we're talking about allocating a new block group, so it's
> using unallocated space.
> 
> While for normal space usage, we're allocating from *allocated* block
> group space.
> 
> So there are two levels of space allocation:
> 
> 1) Extent level
>Always allocated from existing block group (or chunk).
>Data extent, tree block allocation are all happening at this level.
> 
> 2) Block group (or chunk, which are the same) level
>Always allocated from free device space.
> 
> I think the original sentence just wants to address this.

Also seems like a good fit for a btrfs allocation document.

> >  
> >  The free work space can be calculated from the output of the *btrfs 
> > filesystem show*
> >  command:
> > @@ -227,7 +227,7 @@ space. After that it might be possible to run other 
> > filters.
> >  
> >  Conversion to profiles based on striping (RAID0, RAID5/6) require the work
> >  space on each device. An interrupted balance may leave partially filled 
> > block
> > -groups that might consume the work space.
> > +groups that might (EDIT: is this 2nd level of uncertainty 
> > necessary?) consume the work space.
> >  
[...]
> > @@ -3,7 +3,7 @@ btrfs-filesystem(8)
[...]
> >  SYNOPSIS
> >  
> > @@ -53,8 +53,8 @@ not total size of filesystem.
> >  when the filesystem is full. Its 'total' size is dynamic based on the
> >  filesystem size, usually not larger than 512MiB, 'used' may fluctuate.
> >  +
> > -The global block reserve is accounted within Metadata. In case the 
> > filesystem
> > -metadata are exhausted, 'GlobalReserve/total + Metadata/used = 
> > Metadata/total'.
> > +The global block reserve is accounted within Metadata. In case the 
> > filesystem
> > +metadata are exhausted, 'GlobalReserve/total + Metadata/used = 
> > Metadata/total'. (EDIT: s/are/is/? And please write more for clarity. 
>

[PATCH 1/1] btrfs-progs: Fix typos in docs and user-facing strings

2018-03-15 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 Documentation/btrfs-balance.asciidoc | 2 +-
 Documentation/btrfs-check.asciidoc   | 2 +-
 Documentation/btrfs-man5.asciidoc| 8 
 cmds-subvolume.c | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/Documentation/btrfs-balance.asciidoc 
b/Documentation/btrfs-balance.asciidoc
index 7017bed7..536243bc 100644
--- a/Documentation/btrfs-balance.asciidoc
+++ b/Documentation/btrfs-balance.asciidoc
@@ -204,7 +204,7 @@ The way balance operates, it usually needs to temporarily 
create a new block
 group and move the old data there, before the old block group can be removed.
 For that it needs the work space, otherwise it fails for ENOSPC reasons.
 This is not the same ENOSPC as if the free space is exhausted. This refers to
-the space on the level of block groups, which are bigger parts of the filesytem
+the space on the level of block groups, which are bigger parts of the 
filesystem
 that contain many file extents.
 
 The free work space can be calculated from the output of the *btrfs filesystem 
show*
diff --git a/Documentation/btrfs-check.asciidoc 
b/Documentation/btrfs-check.asciidoc
index cc76d846..b963eae5 100644
--- a/Documentation/btrfs-check.asciidoc
+++ b/Documentation/btrfs-check.asciidoc
@@ -122,7 +122,7 @@ NOTE: 'lowmem' mode does not work with '--repair' yet, and 
is still considered
 experimental.
 
 --force::
-allow to work on a mounted filesystem. Note that this should work fine on a
+allow work on a mounted filesystem. Note that this should work fine on a
 quiescent or read-only mounted filesystem but may crash if the device is
 changed externally, eg. by the kernel module.  Repair without mount checks is
 not supported right now.
diff --git a/Documentation/btrfs-man5.asciidoc 
b/Documentation/btrfs-man5.asciidoc
index b20abf05..0529496a 100644
--- a/Documentation/btrfs-man5.asciidoc
+++ b/Documentation/btrfs-man5.asciidoc
@@ -210,7 +210,7 @@ system at that point.
 Enable discarding of freed file blocks.  This is useful for SSD devices, thinly
 provisioned LUNs, or virtual machine images; however, every storage layer must
 support discard for it to work. if the backing device does not support
-asynchronous queued TRIM, then this operation can severly degrade performance,
+asynchronous queued TRIM, then this operation can severely degrade performance,
 because a synchronous TRIM operation will be attempted instead. Queued TRIM
 requires newer than SATA revision 3.1 chipsets and devices.
 
@@ -223,7 +223,7 @@ of actually discarding the blocks.
 
 If discarding is not necessary to be done at the block freeing time, there's
 `fstrim`(8) tool that lets the filesystem discard all free blocks in a batch,
-possibly not much interfering with other operations. Also, the the device may
+possibly not much interfering with other operations. Also, the device may
 ignore the TRIM command if the range is too small, so running the batch discard
 can actually discard the blocks.
 
@@ -289,7 +289,7 @@ checksums don't fit inside a single page.
 +
 Don't use this option unless you really need it. The inode number limit
 on 64bit system is 2^64^, which is practically enough for the whole filesystem
-lifetime. Due to implemention of linux VFS layer, the inode numbers on 32bit
+lifetime. Due to implementation of linux VFS layer, the inode numbers on 32bit
 systems are only 32 bits wide. This lowers the limit significantly and makes
 it possible to reach it. In such case, this mount option will help.
 Alternatively, files with high inode numbers can be copied to a new subvolume
@@ -415,7 +415,7 @@ will disable all SSD options.
 
 *subvol='path'*::
 Mount subvolume from 'path' rather than the toplevel subvolume. The
-'path' is always treated as relative to the the toplevel subvolume.
+'path' is always treated as relative to the toplevel subvolume.
 This mount option overrides the default subvolume set for the given filesystem.
 
 *subvolid='subvolid'*::
diff --git a/cmds-subvolume.c b/cmds-subvolume.c
index ba57eaa0..45363a5a 100644
--- a/cmds-subvolume.c
+++ b/cmds-subvolume.c
@@ -338,7 +338,7 @@ again:
error("unable to get fsid for '%s': %s",
path, strerror(-res));
error(
-   "delete suceeded but commit may not be done in the 
end");
+   "delete succeeded but commit may not be done in the 
end");
ret = 1;
goto out;
}
-- 
2.14.2

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 0/1] btrfs-progs: Fix typos in docs and user-facing strings

2018-03-15 Thread Nicholas D Steeves
Found with lintian and Emacs' writegood-mode.

Cheers!

Nicholas D Steeves (1):
  Fix typos in docs and user-facing strings

 Documentation/btrfs-balance.asciidoc | 2 +-
 Documentation/btrfs-check.asciidoc   | 2 +-
 Documentation/btrfs-man5.asciidoc| 8 
 cmds-subvolume.c | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

-- 
2.14.2



Re: updatedb does not index /home when /home is Btrfs

2017-11-04 Thread Nicholas D Steeves
On 4 November 2017 at 14:55, Chris Murphy  wrote:
> On Sat, Nov 4, 2017 at 12:27 PM, Andrei Borzenkov  wrote:
>> 04.11.2017 10:05, Adam Borowski wrote:
>>> On Sat, Nov 04, 2017 at 09:26:36AM +0300, Andrei Borzenkov wrote:
 04.11.2017 07:49, Adam Borowski wrote:
> On Fri, Nov 03, 2017 at 06:15:53PM -0600, Chris Murphy wrote:
>> Ancient bug, still seems to be a bug.
>> https://bugzilla.redhat.com/show_bug.cgi?id=906591
>>
>> The issue is that updatedb by default will not index bind mounts, but
>> by default on Fedora and probably other distros, put /home on a
>> subvolume and then mount that subvolume which is in effect a bind
>> mount.
>>
>> There's a lot of early discussion in 2013 about it, but then it's
>> dropped off the radar as nobody has any ideas how to fix this in
>> mlocate.
>
> I don't see how this would be a bug in btrfs.  The same happens if you
> bind-mount /home (or individual homes), which is a valid and non-rare 
> setup.

 It is the problem *on* btrfs because - as opposed to normal bind mount -
 those mount points do *not* refer to the same content.
>>>
>>> Neither do they refer to in a "normal" bind mount.
>>>
 As was commented in mentioned bug report:

 mount -o subvol=root /dev/sdb1 /root
 mount -o subvol=foo /dev/sdb1 /root/foo
 mount -o subvol bar /dev/sdb1 /bar/bar

 Both /root/foo and /root/bar, will be skipped even though they are not
 accessible via any other path (on mounted filesystem)
>>>
>>> losetup -D
>>> truncate -s 4G junk
>>> losetup -f junk
>>> mkfs.ext4 /dev/loop0
>>> mkdir -p foo bar
>>> mount /dev/loop0 foo
>>> mkdir foo/bar
>>> touch foo/fileA foo/bar/fileB
>>> mount --bind foo/bar bar
>>> umount foo
>>>
>>
>> Indeed. I can build the same configuration on non-btrfs and updatedb
>> would skip non-overlapping mounts just as it would on btrfs. It is just
>> that it is rather more involved on other filesystems (and as you
>> mentioned this requires top-level to be mounted at some point), while on
>> btrfs it is much easier to get (and is the default on a number of distributions).
>>
>> So yes, it really appears that updatedb check for duplicated mounts is
>> wrong in general and needs rethinking.
>
> Yes, even if it's not a Btrfs bug, I think it's useful to get a
> different set of eyes on this than just the mlocate folks. Maybe it
> should get posted to fs-devel?

How is this not a configuration issue?  For btrfs users, why not just
recommend PRUNE_BIND_MOUNTS="no" and a particular set of PRUNEPATHS?

I have each top-level subvolume (id=5 or subvol=/) mounted at
/btrfs-admin/$LABEL, where /btrfs-admin is root:sudo 750, and this is
what I use in /etc/updatedb.conf:

PRUNE_BIND_MOUNTS="no"
PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media /btrfs-admin /var/cache /var/lib/lxc"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs
iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre tmpfs
usbfs udf fuse.glusterfs fuse.sshfs curlftpfs"

With the exception of LXC root filesystems, I have a flat subvolume
structure under each subvol=/.  These subvolumes are mounted at
specific mountpoints using fstab.  Given that updatedb and locate work
flawlessly, and that I've only had two issues (free space cache) while
using LTS kernels, I'm inclined to conclude that this is the least
disruptive configuration.  If I used snapper I'd add it to PRUNEPATHS
and rely on its facilities to find files that had been deleted,
because I don't want to see n-duplicates-for-file when I use locate.
A user who wanted to see those duplicates could remove the path from
PRUNEPATHS.
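For completeness, a sketch of the corresponding fstab layout (the
UUIDs and the label are placeholders for whatever your filesystems
actually use):

```
# top-level subvolume (subvol=/, id 5) at an admin-only mountpoint
UUID=xxxx-xxxx  /btrfs-admin/mylabel  btrfs  subvol=/,noatime      0  0
# individual subvolumes at their normal locations
UUID=xxxx-xxxx  /home                 btrfs  subvol=/home,noatime  0  0
```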

Sincerely,
Nicholas


[PATCH 1/1] btrfs-progs: docs: annual typo, clarity, & grammar review & fixups

2017-10-21 Thread Nicholas D Steeves
In one big patch, as requested

Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 Documentation/btrfs-balance.asciidoc  | 12 -
 Documentation/btrfs-check.asciidoc| 17 ++---
 Documentation/btrfs-convert.asciidoc  | 21 
 Documentation/btrfs-device.asciidoc   |  6 ++---
 Documentation/btrfs-filesystem.asciidoc   | 18 +++---
 Documentation/btrfs-find-root.asciidoc|  2 +-
 Documentation/btrfs-inspect-internal.asciidoc |  8 +++---
 Documentation/btrfs-man5.asciidoc | 36 ++-
 Documentation/btrfs-map-logical.asciidoc  |  2 +-
 Documentation/btrfs-qgroup.asciidoc   |  8 +++---
 Documentation/btrfs-quota.asciidoc| 18 +++---
 Documentation/btrfs-receive.asciidoc  |  2 +-
 Documentation/btrfs-replace.asciidoc  |  6 ++---
 Documentation/btrfs-rescue.asciidoc   |  2 +-
 Documentation/btrfs-scrub.asciidoc|  4 +--
 Documentation/btrfs-send.asciidoc | 11 
 Documentation/btrfs-subvolume.asciidoc| 15 +--
 Documentation/btrfs.asciidoc  | 14 +--
 Documentation/btrfstune.asciidoc  | 13 +-
 Documentation/fsck.btrfs.asciidoc |  8 +++---
 Documentation/mkfs.btrfs.asciidoc | 10 
 21 files changed, 119 insertions(+), 114 deletions(-)

diff --git a/Documentation/btrfs-balance.asciidoc 
b/Documentation/btrfs-balance.asciidoc
index cc81de91..f471be3b 100644
--- a/Documentation/btrfs-balance.asciidoc
+++ b/Documentation/btrfs-balance.asciidoc
@@ -21,7 +21,7 @@ filesystem.
 The balance operation is cancellable by the user. The on-disk state of the
 filesystem is always consistent so an unexpected interruption (eg. system 
crash,
 reboot) does not corrupt the filesystem. The progress of the balance operation
-is temporarily stored and will be resumed upon mount, unless the mount option
+is temporarily stored (EDIT: where is it stored?) and will be resumed 
upon mount, unless the mount option
 'skip_balance' is specified.
 
 WARNING: running balance without filters will take a lot of time as it 
basically
@@ -200,11 +200,11 @@ section 'PROFILES'.
 ENOSPC
 --
 
-The way balance operates, it usually needs to temporarily create a new block
+The way balance operates, it usually needs to temporarily create a new 
block
 group and move the old data there. For that it needs work space, otherwise
 it fails for ENOSPC reasons.
 This is not the same ENOSPC as if the free space is exhausted. This refers to
-the space on the level of block groups.
+the space on the level of block groups. (EDIT: What is the relationship 
between the new block group and the work space?  Is the "old data" removed from 
the new block group?  Please say something about block groups to clarify)
 
 The free work space can be calculated from the output of the *btrfs filesystem 
show*
 command:
@@ -227,7 +227,7 @@ space. After that it might be possible to run other filters.
 
 Conversion to profiles based on striping (RAID0, RAID5/6) require the work
 space on each device. An interrupted balance may leave partially filled block
-groups that might consume the work space.
+groups that might (EDIT: is this 2nd level of uncertainty necessary?) 
consume the work space.
 
 EXAMPLES
 
@@ -238,7 +238,7 @@ can be found in section 'TYPICAL USECASES' of 
`btrfs-device`(8).
 MAKING BLOCK GROUP LAYOUT MORE COMPACT
 ~~
 
-The layout of block groups is not normally visible, most tools report only
+The layout of block groups is not normally visible; most tools report only
 summarized numbers of free or used space, but there are still some hints
 provided.
 
@@ -298,7 +298,7 @@ data to the remaining blockgroups, ie. the 6GiB are now 
free of filesystem
 structures, and can be reused for new data or metadata block groups.
 
 We can do a similar exercise with the metadata block groups, but this should
-not be typically necessary, unless the used/total ration is really off. Here
+not typically be necessary, unless the used/total ratio is really off. Here
 the ratio is roughly 50% but the difference as an absolute number is "a few
 gigabytes", which can be considered normal for a workload with snapshots or
 reflinks updated frequently.
diff --git a/Documentation/btrfs-check.asciidoc 
b/Documentation/btrfs-check.asciidoc
index fbf48847..b9f8bd60 100644
--- a/Documentation/btrfs-check.asciidoc
+++ b/Documentation/btrfs-check.asciidoc
@@ -22,10 +22,10 @@ by the option '--readonly'.
 
 *btrfsck* is an alias of *btrfs check* command and is now deprecated.
 
-WARNING: Do not use '--repair' unless you are advised to by a developer, an
-experienced user or accept the fact that 'fsck' cannot possibly fix all sorts
-of damage that could happen to a filesystem because of software and hardware
-

[PATCH 0/1] btrfs-progs: docs: annual typo, clarity, & grammar review & fixups

2017-10-21 Thread Nicholas D Steeves
Hi David,

I've marked the parts that require review like this: This paragraph
explains something but something is opaque (EDIT: reason why
it's unclear or needs to be updated).  Here is what I found this
year (@664dbbff6e72ffbdeb451b4f03f98dab3eabb1d1):

Nicholas D Steeves (1):
  btrfs-progs: docs: annual typo, clarity, & grammar review & fixups

 Documentation/btrfs-balance.asciidoc  | 12 -
 Documentation/btrfs-check.asciidoc| 17 ++---
 Documentation/btrfs-convert.asciidoc  | 21 
 Documentation/btrfs-device.asciidoc   |  6 ++---
 Documentation/btrfs-filesystem.asciidoc   | 18 +++---
 Documentation/btrfs-find-root.asciidoc|  2 +-
 Documentation/btrfs-inspect-internal.asciidoc |  8 +++---
 Documentation/btrfs-man5.asciidoc | 36 ++-
 Documentation/btrfs-map-logical.asciidoc  |  2 +-
 Documentation/btrfs-qgroup.asciidoc   |  8 +++---
 Documentation/btrfs-quota.asciidoc| 18 +++---
 Documentation/btrfs-receive.asciidoc  |  2 +-
 Documentation/btrfs-replace.asciidoc  |  6 ++---
 Documentation/btrfs-rescue.asciidoc   |  2 +-
 Documentation/btrfs-scrub.asciidoc|  4 +--
 Documentation/btrfs-send.asciidoc | 11 
 Documentation/btrfs-subvolume.asciidoc| 15 +--
 Documentation/btrfs.asciidoc  | 14 +--
 Documentation/btrfstune.asciidoc  | 13 +-
 Documentation/fsck.btrfs.asciidoc |  8 +++---
 Documentation/mkfs.btrfs.asciidoc | 10 
 21 files changed, 119 insertions(+), 114 deletions(-)

-- 
2.11.0



Re: [PATCH] btrfs-progs: add option to only list parent subvolumes

2017-09-30 Thread Nicholas D Steeves
On Sat, Sep 30, 2017 at 07:56:25PM +0200, Holger Hoffstätte wrote:
> On 09/30/17 18:37, Graham Cobb wrote:
> > On 30/09/17 14:08, Holger Hoffstätte wrote:
> >> A "root" subvolume is identified by a null parent UUID, so adding a new
> >> subvolume filter and flag -P ("Parent") does the trick. 
> > 
> > I don't like the naming. The flag you are proposing is really nothing to
> > do with whether a subvolume is a parent or not: it is about whether it
> > is a snapshot or not (many subvolumes are both snapshots and also
> > parents of other snapshots, and many non-snapshots are not the parent of
> > any subvolumes).
> 
> You're completely right. I wrote that patch about a year ago because I
> needed it for my own homegrown backup program and spent approx. 5 seconds
> finding a free option letter; ultimately I ended up using the mentioned
> shell hackery as alternative. Anyway, I was sure that at the time the
> other letters sounded even worse/were taken, but that may just have been
> in my head. ;-)
> 
> I just rechecked and -S is still available, so that's good.

Hi Holger,

On the topic of shell hackery, someone on #btrfs and I discussed this
two days ago.  The method I decided to use was blacklisting snapshots
(subvols with a parent UUID) that also had the ro property set.

The rationale was that a rw snapshot (with a parent UUID but no ro
property) could contain live data.  eg: subvol containerA, containerB,
and containerC are all rw children of subvol containerX.  My backup
script would include containerA, containerB, and containerC in the
automatically generated list of live-data subvols that need to be
snapshotted and backed up.  Because containerX has no parent, it is
part of this list, regardless of whether or not it's ro.

Would you please consider addressing rw child subvols (eg: containers
cloned from a base subvol) in your patch?

> > I have two suggestions:
> > 
> > 1) Use -S (meaning "not a snapshot", the inverse of -s). Along with this
> > change. I would change the usage text to say something like:
> > 
> >  -s list subvolumes originally created as snapshots
> >  -S list subvolumes originally created not as snapshots
> > 
> > Presumably specifying both -s and -S should be an error.
> 
> That sounds much better indeed and is a quick fix. While I agree
> that the "-P /none" filter would be useful too, it's also
> a different feature and more work than I want to invest in this
> right now. Maybe later "-S" can simply delegate to "-P none".

Proposed -s will exclude both containerX and container{A,B,C}
Proposed -S will add containerX (base subvolume) to list,
and will exclude container{A,B,C}

I believe the quickest way to solve this would be to add a check for
ro property to:
-s list read-only subvolumes originally created as snapshots

Then in the backup script:  1) get the full subvolume list  2) get the
ro snapshot list  3) Use #2 to remove ro snapshots from #1  4) Backup
subvols from #1 list
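
A minimal sketch of step 3 follows.  The subvolume names are made up
for illustration; in a real script the full list would come from
`btrfs subvolume list`, and the ro list from checking each subvolume's
`ro` property:

```shell
#!/bin/sh
# Hypothetical lists; in practice:
#   ALL: paths from `btrfs subvolume list /mnt`
#   RO:  the subset whose 'ro' property is set to true
ALL="containerX
containerA
containerB
containerC
containerX-snap1"
RO="containerX-snap1"

# Step 3: drop exact (whole-line) matches of the ro-snapshot list
# from the full list, leaving the subvols to snapshot and back up.
backup_targets=$(printf '%s\n' "$ALL" | grep -vxF "$RO")
printf '%s\n' "$backup_targets"
```

`grep -vxF` treats each line of `$RO` as a fixed-string, whole-line
pattern, so partial name collisions (eg. containerX vs
containerX-snap1) don't cause false exclusions.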

Thank you for working on this! :-)

Sincerely,
Nicholas

P.S. Please CC me for any replies


signature.asc
Description: PGP signature


Re: [PATCH 2/2] Remove misleading BCP 78 boilerplate

2017-09-28 Thread Nicholas D Steeves
Hi David,

On 18 September 2017 at 10:40, David Sterba <dste...@suse.cz> wrote:
> On Sun, Sep 17, 2017 at 07:52:27PM -0400, Nicholas D Steeves wrote:
>> BCP 78 applies to RFC 6234, but sha224-256.c is Simplified BSD.
>>
>> This causes the following lintian error when building on Debian and
>> Debian derivatives:
>>
>> E: btrfs-progs source: license-problem-non-free-RFC-BCP78
>>tests/sha224-256.c
>>
>> Please consult the following email from debian-le...@lists.debian.org
>> for more information:
>>
>> https://lists.debian.org/debian-legal/2017/08/msg4.html
>
> Thanks, this looks like I've copied too much from the RFC and was not
> aware of the BCP license issues. I believe the copyright notice(s) past
> the line mentioning the filename(s) should be enough to satisfy the
> licensing requirements and also the debian license checker.

Thank you for applying these so quickly, and for the new release :-)

Sincerely,
Nicholas


[PATCH 1/2] Add required IETF Trust copyright

2017-09-17 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 tests/sha-private.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/tests/sha-private.h b/tests/sha-private.h
index 6e9c4520..d94d4374 100644
--- a/tests/sha-private.h
+++ b/tests/sha-private.h
@@ -1,5 +1,9 @@
 / sha-private.h /
 /* See RFC 6234 for details. ***/
+/* Copyright (c) 2011 IETF Trust and the persons identified as */
+/* authors of the code.  All rights reserved.  */
+/* See sha.h for terms of use and redistribution.  */
+
 #ifndef _SHA_PRIVATE__H
 #define _SHA_PRIVATE__H
 /*
-- 
2.11.0



[PATCH 0/2] btrfs-progs: tests/sha* copyright clarity and compliance fixes

2017-09-17 Thread Nicholas D Steeves
Hi,

Thank you very much for fixing up the tests!  A couple of releases ago
I noticed that they were reliably passing, but I had to exclude the
tests from my uploads because the BCP 78 license is explicitly banned
from Debian.  I plan to study Debian's autopkgtest framework and then
configure the package to automatically run the tests in a VM on one of
the buildservers for every version update.

The first patch adds what I believe is a necessary header for license
compliance, and the second removes the misleading boilerplate.  Please
consult the annotated commit messages for more information.

Thank you,
Nicholas

Nicholas D Steeves (2):
  Add required IETF Trust copyright
  Remove misleading BCP 78 boilerplate

 tests/sha-private.h |  4 
 tests/sha224-256.c  | 20 
 2 files changed, 4 insertions(+), 20 deletions(-)

-- 
2.11.0



[PATCH 2/2] Remove misleading BCP 78 boilerplate

2017-09-17 Thread Nicholas D Steeves
BCP 78 applies to RFC 6234, but sha224-256.c is Simplified BSD.

This causes the following lintian error when building on Debian and
Debian derivatives:

E: btrfs-progs source: license-problem-non-free-RFC-BCP78
   tests/sha224-256.c

Please consult the following email from debian-le...@lists.debian.org
for more information:

https://lists.debian.org/debian-legal/2017/08/msg4.html
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 tests/sha224-256.c | 20 
 1 file changed, 20 deletions(-)

diff --git a/tests/sha224-256.c b/tests/sha224-256.c
index 2d963e65..82124a03 100644
--- a/tests/sha224-256.c
+++ b/tests/sha224-256.c
@@ -1,23 +1,3 @@
-/*
-RFC 6234SHAs, HMAC-SHAs, and HKDF   May 2011
-
-
-Copyright Notice
-
-   Copyright (c) 2011 IETF Trust and the persons identified as the
-   document authors.  All rights reserved.
-
-   This document is subject to BCP 78 and the IETF Trust's Legal
-   Provisions Relating to IETF Documents
-   (http://trustee.ietf.org/license-info) in effect on the date of
-   publication of this document.  Please review these documents
-   carefully, as they describe your rights and restrictions with respect
-   to this document.  Code Components extracted from this document must
-   include Simplified BSD License text as described in Section 4.e of
-   the Trust Legal Provisions and are provided without warranty as
-   described in the Simplified BSD License.
-*/
-
 /* sha224-256.c /
 /* See RFC 6234 for details. ***/
 /* Copyright (c) 2011 IETF Trust and the persons identified as */
-- 
2.11.0



[PATCH 1/1] btrfs-progs: docs: fix many typos, plus three edits for clarity

2017-02-21 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 Documentation/btrfs-balance.asciidoc  |  2 +-
 Documentation/btrfs-check.asciidoc|  8 
 Documentation/btrfs-device.asciidoc   |  6 +++---
 Documentation/btrfs-filesystem.asciidoc   |  6 +++---
 Documentation/btrfs-inspect-internal.asciidoc |  6 +++---
 Documentation/btrfs-man5.asciidoc | 15 ---
 Documentation/btrfs-quota.asciidoc|  8 
 Documentation/btrfs-receive.asciidoc  |  4 ++--
 Documentation/btrfs-restore.asciidoc  |  2 +-
 Documentation/btrfs-scrub.asciidoc|  9 -
 Documentation/btrfs-send.asciidoc |  6 +++---
 11 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/Documentation/btrfs-balance.asciidoc 
b/Documentation/btrfs-balance.asciidoc
index c456898e..0b687eaf 100644
--- a/Documentation/btrfs-balance.asciidoc
+++ b/Documentation/btrfs-balance.asciidoc
@@ -89,7 +89,7 @@ warned and has a few seconds to cancel the operation before 
it starts. The
 warning and delay can be skipped with '--full-balance' option.
 +
 Please note that the filters must be written together with the '-d', '-m' and
-'-s' options, because they're optional and bare '-d' etc alwo work and mean no
+'-s' options, because they're optional and bare '-d' etc also work and mean no
 filters.
 +
 `Options`
diff --git a/Documentation/btrfs-check.asciidoc 
b/Documentation/btrfs-check.asciidoc
index 633cbbf6..28ed9dd7 100644
--- a/Documentation/btrfs-check.asciidoc
+++ b/Documentation/btrfs-check.asciidoc
@@ -30,11 +30,11 @@ data structures satisfy the constraints, point to the right 
objects or are
 correctly connected together.
 
 There are several cross checks that can detect wrong reference counts of shared
-extents, backrefrences, missing extents of inodes, directory and inode
+extents, backreferences, missing extents of inodes, directory and inode
 connectivity etc.
 
 The amount of memory required can be high, depending on the size of the
-filesystem, smililarly the run time.
+filesystem, similarly the run time.
 
 SAFE OR ADVISORY OPTIONS
 
@@ -49,7 +49,7 @@ verify checksums of data blocks
 +
 This expects that the filesystem is otherwise
 OK, so this is basically and offline 'scrub' but does not repair data from
-spare coipes.
+spare copies.
 
 --chunk-root ::
 use the given offset 'bytenr' for the chunk tree root
@@ -111,7 +111,7 @@ NOTE: Do not use unless you know what you're doing.
 select mode of operation regarding memory and IO
 +
 The 'MODE' can be one of 'original' and 'lowmem'. The original mode is mostly
-unoptimized regarding memory consumpption and can lead to out-of-memory
+unoptimized regarding memory consumption and can lead to out-of-memory
 conditions on large filesystems. The possible workaround is to export the block
 device over network to a machine with enough memory. The low memory mode is
 supposed to address the memory consumption, at the cost of increased IO when it
diff --git a/Documentation/btrfs-device.asciidoc 
b/Documentation/btrfs-device.asciidoc
index eedcac85..b7f27c44 100644
--- a/Documentation/btrfs-device.asciidoc
+++ b/Documentation/btrfs-device.asciidoc
@@ -24,14 +24,14 @@ similarity, the RAID terminology is widely used in the 
documentation.  See
 constraints.
 
 The device management works on a mounted filesystem. Devices can be added,
-removed or replaced, by commands profided by *btrfs device* and *btrfs 
replace*.
+removed or replaced, by commands provided by *btrfs device* and *btrfs 
replace*.
 
 The profiles can be also changed, provided there's enough workspace to do the
 conversion, using the *btrfs balance* command and namely the filter 'convert'.
 
 Profile::
 A profile describes an allocation policy based on the redundancy/replication
-constrants in connection with the number of devices. The profile applies to
+constraints in connection with the number of devices. The profile applies to
 data and metadata block groups separately.
 
 RAID level::
@@ -182,7 +182,7 @@ blocks, the disk seeking is the key factor affecting 
performance.
 
 You'll note that the system block group has been also converted to RAID1, this
 normally happens as the system block group also holds metadata (the physical to
-logial mappings).
+logical mappings).
 
 What changed:
 
diff --git a/Documentation/btrfs-filesystem.asciidoc 
b/Documentation/btrfs-filesystem.asciidoc
index 0f7ea495..d57f28fb 100644
--- a/Documentation/btrfs-filesystem.asciidoc
+++ b/Documentation/btrfs-filesystem.asciidoc
@@ -14,7 +14,7 @@ DESCRIPTION
 *btrfs filesystem* is used to perform several whole filesystem level tasks,
 including all the regular filesystem operations like resizing, space stats,
 label setting/getting, and defragmentation. There are other whole filesystem
-taks like scrub or balance that are grouped in separate commands.
+tasks like scrub or balance that are grouped in separate commands.

[PATCH] btrfs-progs: docs: fix many typos, plus three edits for clarit

2017-02-21 Thread Nicholas D Steeves
Hi David,

Please see attached a reasonably thorough patch for all the typos I
could find in btrfs-progs documentation.  The three edits are very
minor and look larger than they are because I had to reflow the
paragraphs.  I'm confident in the quality of the work, with the
exception of the following, where I'm not sure what to do:

Documentation/btrfs-receive.asciidoc
@@ -66,7 +66,7 @@ tell us where this filesystem is mounted.
 --dump::
 dump the stream metadata, one line per operation
 +
-Does not require the 'path' parameter. The filesystem chanded.
+Does not require the 'path' parameter. The filesystem changed.


Sincerely,
Nicholas

Nicholas D Steeves (1):
  btrfs-progs: docs: fix many typos, plus three edits for clarity

 Documentation/btrfs-balance.asciidoc  |  2 +-
 Documentation/btrfs-check.asciidoc|  8 
 Documentation/btrfs-device.asciidoc   |  6 +++---
 Documentation/btrfs-filesystem.asciidoc   |  6 +++---
 Documentation/btrfs-inspect-internal.asciidoc |  6 +++---
 Documentation/btrfs-man5.asciidoc | 15 ---
 Documentation/btrfs-quota.asciidoc|  8 
 Documentation/btrfs-receive.asciidoc  |  4 ++--
 Documentation/btrfs-restore.asciidoc  |  2 +-
 Documentation/btrfs-scrub.asciidoc|  9 -
 Documentation/btrfs-send.asciidoc |  6 +++---
 11 files changed, 36 insertions(+), 36 deletions(-)

-- 
2.11.0



user_subvol_rm_allowed? Is there a user_subvol_create_deny|allowed?

2017-02-07 Thread Nicholas D Steeves
Dear btrfs community,

Please accept my apologies in advance if I missed something in recent
btrfs development; my MUA tells me I'm ~1500 unread messages
out-of-date. :/

I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while
doing reading up on LXC handling of snapshots with the btrfs backend.
Is this mount option per-subvolume, or per volume?

Also, what mechanisms exist to restrict a user's ability to create an
arbitrarily large number of snapshots?  Is there a
user_subvol_create_deny|allowed?  Given what I've read about the
inverse correlation between the number of subvols and performance, a
potentially hostile user could cause an IO denial of service or
potentially even trigger an ENOSPC.

From what I gather, the following will reproduce the hypothetical
issue related to my question:

# as root
btrfs sub create /some/dir/subvol
chown some-user /some/dir/subvol

# as some-user
cd /some/dir/subvol
cp -ar --reflink=always /some/big/files ./
COUNT=1
while [ 0 -lt 1 ]; do
  btrfs sub snap ./ ./snapshot-$COUNT
  COUNT=$((COUNT+1))
  sleep 2   # --maybe unnecessary
done

--

I hope there's something I've misunderstood or failed to read!

Please CC me so your reply will hit my main inbox :-)
Nicholas


signature.asc
Description: Digital signature


[PATCH] Fix spelling/typos in user-facing strings.

2017-01-05 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 Documentation/btrfs-device.asciidoc | 2 +-
 Documentation/btrfs-quota.asciidoc  | 4 ++--
 image/main.c| 2 +-
 mkfs/main.c | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/Documentation/btrfs-device.asciidoc 
b/Documentation/btrfs-device.asciidoc
index 58dc9b0..eedcac8 100644
--- a/Documentation/btrfs-device.asciidoc
+++ b/Documentation/btrfs-device.asciidoc
@@ -186,7 +186,7 @@ logial mappings).
 
 What changed:
 
-* available data space decreased by 3GiB, usable rougly (50 - 3) + (100 - 3) = 
144 GiB
+* available data space decreased by 3GiB, usable roughly (50 - 3) + (100 - 3) 
= 144 GiB
 * metadata redundancy increased
 
 IOW, the unequal device sizes allow for combined space for data yet improved
diff --git a/Documentation/btrfs-quota.asciidoc 
b/Documentation/btrfs-quota.asciidoc
index 33c3bfd..77d4c68 100644
--- a/Documentation/btrfs-quota.asciidoc
+++ b/Documentation/btrfs-quota.asciidoc
@@ -16,7 +16,7 @@ of a btrfs filesystem. The quota groups (qgroups) are managed 
by the subcommand
 `btrfs qgroup`(8).
 
 NOTE: the qgroups are different than the traditional user quotas and designed
-to track shared and exlusive data per-subvolume.  Plese refer to the section
+to track shared and exclusive data per-subvolume.  Please refer to the section
 'HIERARCHICAL QUOTA GROUP CONCEPTS' for a detailed description.
 
 PERFORMANCE IMPLICATIONS
@@ -91,7 +91,7 @@ Qgroups of level 0 get created automatically when a 
subvolume/snapshot gets
 created.  The ID of the qgroup corresponds to the ID of the subvolume, so 0/5
 is the qgroup for the root subvolume.
 For the *btrfs qgroup* command, the path to the subvolume can also be used
-instead of '0/ID'.  For all higher levels, the ID can be choosen freely.
+instead of '0/ID'.  For all higher levels, the ID can be chosen freely.
 
 Each qgroup can contain a set of lower level qgroups, thus creating a hierarchy
 of qgroups. Figure 1 shows an example qgroup tree.
diff --git a/image/main.c b/image/main.c
index c464b65..58dcecb 100644
--- a/image/main.c
+++ b/image/main.c
@@ -2533,7 +2533,7 @@ static int restore_metadump(const char *input, FILE *out, 
int old_restore,
ret = mdrestore_init(, in, out, old_restore, num_threads,
 fixup_offset, info, multi_devices);
if (ret) {
-   error("failed to intialize metadata restore state: %d", ret);
+   error("failed to initialize metadata restore state: %d", ret);
goto failed_cluster;
}
 
diff --git a/mkfs/main.c b/mkfs/main.c
index 5756a72..8cdc74b 100644
--- a/mkfs/main.c
+++ b/mkfs/main.c
@@ -366,7 +366,7 @@ static void print_usage(int ret)
printf("\t-V|--versionprint the mkfs.btrfs version and 
exit\n");
printf("\t--help  print this help and exit\n");
printf("  deprecated:\n");
-   printf("\t-A|--alloc-start START  the offset to start the filesytem\n");
+   printf("\t-A|--alloc-start START  the offset to start the 
filesystem\n");
printf("\t-l|--leafsize SIZE  deprecated, alias for nodesize\n");
exit(ret);
 }
-- 
2.9.3



[PATCH] Fix user-facing typos/spelling in user-facing strings

2017-01-05 Thread Nicholas D Steeves
Hi,

This is just another trivial patch for typos/spelling in user facing strings.

Sincerely,
Nicholas

Nicholas D Steeves (1):
  Fix spelling/typos in user-facing strings.

 Documentation/btrfs-device.asciidoc | 2 +-
 Documentation/btrfs-quota.asciidoc  | 4 ++--
 image/main.c| 2 +-
 mkfs/main.c | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

-- 
2.9.3



Re: raid levels and NAS drives

2016-10-11 Thread Nicholas D Steeves
On Mon, Oct 10, 2016 at 08:07:53AM -0400, Austin S. Hemmelgarn wrote:
> On 2016-10-09 19:12, Charles Zeitler wrote:
> >Is there any advantage to using NAS drives
> >under RAID levels, as opposed to regular
> >'desktop' drives for BTRFS?
[...]
> So, as for what you should use in a RAID array, here's my specific advice:
> 1. Don't worry about enterprise drives unless you've already got a system
> that has them.  They're insanely overpriced for relatively minimal benefit
> when compared to NAS drives.
> 2. If you can afford NAS drives, use them, they'll get you the best
> combination of energy efficiency, performance, and error recovery.
> 3. If you can't get NAS drives, most desktop drives work fine, but you will
> want to bump up the scsi_command_timer attribute in the kernel for them (200
> seconds is reasonable, just make sure you have good cables and a good
> storage controller).
> 4. Avoid WD Green drives, without special effort, they will get worse
> performance and have shorter lifetimes than any other hard disk I've ever
> seen.
> 5. Generally avoid drives with a capacity over 1TB from manufacturers other
> than WD, HGST, and Seagate, most of them are not particularly good quality
> (especially if it's an odd non-power-of-two size like 5TB).

+1 !  Additionally, is it still the case that it is generally safer to
buy the largest capacity disks offered by the previous generation of
technology rather than the current largest capacity?  eg: right now
that would be 4TB or 6TB, and not 8TB or 10TB.

Cheers,
Nicholas


Re: Is stability a joke?

2016-09-14 Thread Nicholas D Steeves
On Mon, Sep 12, 2016 at 07:36:57PM +0200, Zoiled wrote:
> Ok good to know , however from the Debian wiki as well as the link to the
> mailing list only LZO compression are mentioned (as far as I remember) and I
> have no idea myself how much difference there is between LZO and the ZLIB
> code,

I tried my best to not make any over-claims, and to always have
supporting citations, which is why only LZO compression is mentioned.
If anyone sees any inaccuracies, please let me know and I'll address
them without hesitation.

Sincerely,
Nicholas


signature.asc
Description: Digital signature


Re: Is stability a joke?

2016-09-14 Thread Nicholas D Steeves
On Mon, Sep 12, 2016 at 01:31:42PM -0400, Austin S. Hemmelgarn wrote:
> In general yes in this case, but performance starts to degrade exponentially
> beyond a certain point.  The difference between (for example) 10 and 20
> snapshots is not as much as between 1000 and 1010. The problem here is that
> we don't really have a BCP document that anyone ever reads.  A lot of stuff
> that may seem obvious to us after years of working with BTRFS isn't going to
> be to a newcomer, and it's a lot more likely that some random person will
> get things right if we have a good, central BCP document than if it stays as
> scattered tribal knowledge.

"Scattered tribal knowledge"...exactly!  :-D

Cheers,
Nicholas


signature.asc
Description: Digital signature


Re: Is stability a joke?

2016-09-14 Thread Nicholas D Steeves
On Mon, Sep 12, 2016 at 08:20:20AM -0400, Austin S. Hemmelgarn wrote:
> On 2016-09-11 09:02, Hugo Mills wrote:
> >On Sun, Sep 11, 2016 at 02:39:14PM +0200, Waxhead wrote:
> >>Martin Steigerwald wrote:
> >>>Am Sonntag, 11. September 2016, 13:43:59 CEST schrieb Martin Steigerwald:
> >>Thing is: This just seems to be when has a feature been implemented
> >>matrix.
> >>Not when it is considered to be stable. I think this could be done with
> >>colors or so. Like red for not supported, yellow for implemented and
> >>green for production ready.
> >Exactly, just like the Nouveau matrix. It clearly shows what you can
> >expect from it.
> >>>I mentioned this matrix as a good *starting* point. And I think it would be
> >>>easy to extent it:
> >>>
> >>>Just add another column called "Production ready". Then research / ask 
> >>>about
> >>>production stability of each feature. The only challenge is: Who is
> >>>authoritative on that? I´d certainly ask the developer of a feature, but 
> >>>I´d
> >>>also consider user reports to some extent.
> >>>
> >>>Maybe thats the real challenge.
> >>>
> >>>If you wish, I´d go through each feature there and give my own estimation. 
> >>>But
> >>>I think there are others who are deeper into this.
> >>That is exactly the same reason I don't edit the wiki myself. I
> >>could of course get it started and hopefully someone will correct
> >>what I write, but I feel that if I start this off I don't have deep
> >>enough knowledge to do a proper start. Perhaps I will change my mind
> >>about this.
> >
> >   Given that nobody else has done it yet, what are the odds that
> >someone else will step up to do it now? I would say that you should at
> >least try. Yes, you don't have as much knowledge as some others, but
> >if you keep working at it, you'll gain that knowledge. Yes, you'll
> >probably get it wrong to start with, but you probably won't get it
> >*very* wrong. You'll probably get it horribly wrong at some point, but
> >even the more knowledgable people you're deferring to didn't identify
> >the problems with parity RAID until Zygo and Austin and Chris (and
> >others) put in the work to pin down the exact issues.
> FWIW, here's a list of what I personally consider stable (as in, I'm willing
> to bet against reduced uptime to use this stuff on production systems at
> work and personal systems at home):
> 1. Single device mode, including DUP data profiles on single device without
> mixed-bg.
> 2. Multi-device raid0, raid1, and raid10 profiles with symmetrical devices
> (all devices are the same size).
> 3. Multi-device single profiles with asymmetrical devices.
> 4. Small numbers (max double digit) of snapshots, taken at infrequent
> intervals (no more than once an hour).  I use single snapshots regularly to
> get stable images of the filesystem for backups, and I keep hourly ones of
> my home directory for about 48 hours.
> 5. Subvolumes used to isolate parts of a filesystem from snapshots.  I use
> this regularly to isolate areas of my filesystems from backups.
> 6. Non-incremental send/receive (no clone source, no parents, no
> deduplication).  I use this regularly for cloning virtual machines.
> 7. Checksumming and scrubs using any of the profiles I've listed above.
> 8. Defragmentation, including autodefrag.
> 9. All of the compat_features, including no-holes and skinny-metadata.
> 
> Things I consider stable enough that I'm willing to use them on my personal
> systems but not systems at work:
> 1. In-line data compression with compress=lzo.  I use this on my laptop and
> home server system.  I've never had any issues with it myself, but I know
> that other people have, and it does seem to make other things more likely to
> have issues.
> 2. Batch deduplication.  I only use this on the back-end filesystems for my
> personal storage cluster, and only because I have multiple copies as a
> result of GlusterFS on top of BTRFS.  I've not had any significant issues
> with it, and I don't remember any reports of data loss resulting from it,
> but it's something that people should not be using if they don't understand
> all the implications.
> 
> Things that I don't consider stable but some people do:
> 1. Quotas and qgroups.  Some people (such as SUSE) consider these to be
> stable.  There are a couple of known issues with them still however (such as
> returning the wrong errno when a quota is hit (should be returning -EDQUOT,
> instead returns -ENOSPC)).
> 2. RAID5/6.  There are a few people who use this, but it's generally agreed
> to be unstable.  There are still at least 3 known bugs which can cause
> complete loss of a filesystem, and there's also a known issue with rebuilds
> taking insanely long, which puts data at risk as well.
> 3. Multi device filesystems with asymmetrical devices running raid0, raid1,
> or raid10.  The issue I have here is that it's much easier to hit errors
> regarding free space than a reliable system should be.  It's possible to
> 

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-11 Thread Nicholas D Steeves
Why is the combination of dm-crypt|luks+btrfs+compress=lzo so
overlooked as a potential cause?  Other than the "raid56 ate my data"
I've noticed a bunch of "luks+btrfs+compress=lzo ate my data" threads.

On 10 August 2016 at 15:46, Austin S. Hemmelgarn  wrote:
>
> As far as dm-crypt goes, it looks like BTRFS is stable on top in the
> configuration I use (aes-xts-plain64 with a long key using plain dm-crypt
> instead of LUKS).  I have heard rumors of issues when using LUKS without
> hardware acceleration, but I've never seen any conclusive proof, and what
> little I've heard sounds more like it was just race conditions elsewhere
> causing the issues.
>

Austin, I'm very curious if they were also using compress=lzo, because
my informal hypothesis is that the encryption+btrfs+compress=lzo
combination precipitates these issues.  Maybe the combo is more likely
to trigger these race conditions?  It might also be neat to mine the
archive to see whether these seem more likely to occur with fast SSDs vs
slow rotational disks.  Do you use compress=lzo?

On 10 August 2016 at 18:52, Dave T  wrote:
> On Wed, Aug 10, 2016 at 5:15 PM, Chris Murphy  wrote:
>
>> 1. Report 'btrfs check' without --repair, let's see what it complains
>> about and if it might be able to plausibly fix this.
>
> First, a small part of the dmesg output:
>
> [  172.772283] Btrfs loaded
> [  172.772632] BTRFS: device label top_level devid 1 transid 103495 /dev/dm-0
> [  274.320762] BTRFS info (device dm-0): use lzo compression

Compress=lzo confirmed.  Corruption occurred on an SSD.

On 10 August 2016 at 17:21, Chris Murphy  wrote:
> I'm using LUKS, aes xts-plain64, on six devices. One is using mixed-bg
> single device. One is dsingle mdup. And then 2x2 mraid1 draid1. I've
> had zero problems. The two computers these run on do have aesni
> support. Aging wise, they're all at least a  year old. But I've been
> using Btrfs on LUKS for much longer than that.
>

Chris, do you use compress=lzo?  SSDs or rotational disks?

If a bunch of people are using this combo without issue, I'll drop the
informal hypothesis as "just a suspicion informed by sloppy pattern
recognition" ;-)

Thank you!
Nicholas
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Unmountable and unrepairable BTRFS

2016-08-07 Thread Nicholas D Steeves
> On Sat, Aug 06, 2016 at 07:09:40PM -0400, james harvey wrote:
>> On Sat, Aug 6, 2016 at 6:36 PM, Chris McFaul  wrote:
>> Hi, if anyone is able to help with a rather large (34TB of data on it)
>> BTRFS RAID 6 I would be very grateful (pastebins below) - at this
>> point I am only interested in recovery since I am obviously switching
>> to RAID 1 once/if I get my data back.
>>
>> Many thanks in advance
>>
>> Chris
>>
> Depending on how important the data is, wanted to throw out the most
> prudent first step is to get another set of drives equal to or bigger
> than the ones of the bad volume, and image them using dd one by one as
> block devices.  That gives you an undo button if recovery attempts go
> wrong.  Always the best first step in data recovery, if there's not a
> hardware failure involved.
> 
> Depending on the value of the data, it might not be practical as
> you're looking at an expensive set of new drives.  Just wanted to
> throw that out there, in case.
>
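The imaging step quoted above might look roughly like the following sketch.  The
device names are placeholder assumptions (double-check with lsblk before running
anything against real disks):

```shell
# image_disk SRC DST: clone SRC onto DST block-for-block, continuing
# past read errors (bad sectors come out as zeros on the copy).
image_disk() {
    src=$1; dst=$2
    dd if="$src" of="$dst" bs=1M conv=noerror,sync status=progress
    sync
}

# typical use before any btrfs repair attempt -- /dev/sdb (failing
# array member) and /dev/sdf (equal-or-larger spare) are placeholders:
#   image_disk /dev/sdb /dev/sdf
```

For disks that are actively failing, ddrescue is often preferred over plain dd,
since it retries and logs bad regions, but the principle is the same.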

Has anyone ever rented a server with massive storage for this?  From what I 
understand it's possible to rent them short-term...  How much did it cost?  The 
reason I ask is I imagine it would be cheaper than buying a second set of disks 
for a huge array.

Cheers,
Nicholas


Re: btrfs fi defrag does not defrag files >256kB?

2016-07-28 Thread Nicholas D Steeves
On 28 July 2016 at 06:55, David Sterba <dste...@suse.cz> wrote:
> On Wed, Jul 27, 2016 at 01:19:01PM -0400, Nicholas D Steeves wrote:
>> > In that regard a defrag -t 32M recommendation is reasonable for a
>> > converted filesystem, tho you can certainly go larger... to 1 GiB as I
>> > said.
>>
>> I only mentioned btrfs-convert.asciidoc, because that's what led me to
>> the discrepancy between the default target extent size value, and a
>> recommended value.  I was searching for everything I could find on
>> defrag, because I had begun to suspect that it wasn't functioning as
>> expected.
>
> Historically, the 256K size is from kernel. Defrag can create tons of
> data to write and this is noticeable on the system. However the results
> of defragmentation are not satisfactory to the user so the recommended
> value is 32M. I'd rather not change the kernel default but we can
> increase the default threshold (-t) in the userspace tools.

Thank you, I just saw that commit too!  For the purposes of minimizing
the impact btrfs fi defrag in a background cron or systemd.trigger job
has on a running system, I've read that "-f" (flush data after defrag
of each file) is beneficial.  Would it be even more beneficial to
ionice -c idle the defragmentation?

>> Is there any reason why defrag without -t cannot detect and default to
>> the data chunk size, or why it does not default to 1 GiB?
>
> The 1G value wouldn't be reached on an average filesystem where the free
> space is fragmented, besides there are some smaller internal limits on
> extent sizes that may not reach the user target size.  The value 32M has
> been experimentally found and tested on various systems and it proved to
> work well. With 64M the defragmentation was less successful but as it's
> only a hint, it's not wrong to use it.

Thank you for sharing these results :-)

>> In the same
>> way that balance's default behaviour is a full balance, shouldn't
>> defrag's default behaviour defrag whole chunks?  Does it not default
>> to 1 GiB because that would increase the number of cases where defrag
>> unreflinks and duplicates files--leading to an ENOSPC?
>
> Yes, this would also happen, unless the '-f' option is given (flush data
> after defragmenting each file).

When flushing data after defragmenting each file, one might still hit
an ENOSPC, right?  But because the writes are more atomic it will be
easier to recover from?

Additionally, I've read that -o autodefrag doesn't yet work well for
large databases.  Would a supplementary targeted defrag policy be
useful here?  For example: a general cron/systemd.trigger default of
"-t 32M", and then another job for /var/lib/mysql/ with a policy of
"-f -t 1G"?  Or did your findings also show that large databases did
not benefit from larger target extent defrags?
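Such a two-tier policy might be sketched as a maintenance script like the
following; the paths, priorities, and -t values here are illustrative
assumptions for the sake of the question, not tested recommendations:

```shell
#!/bin/sh
# Hypothetical defrag policy, run from cron or a systemd timer.
# Assumptions: /home holds general-purpose data, /var/lib/mysql holds
# large database files.  Adjust paths and targets for the real system.

# idle I/O class plus low CPU priority so the defrag yields to the
# normal workload
IONICE="ionice -c3 nice -n19"

# general case: recursive defrag with the recommended 32M target
$IONICE btrfs filesystem defrag -r -t 32M /home

# database case: flush after each file (-f) and aim for larger extents
$IONICE btrfs filesystem defrag -r -f -t 1G /var/lib/mysql
```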

Best regards,
Nicholas


Re: btrfs fi defrag does not defrag files >256kB?

2016-07-27 Thread Nicholas D Steeves
On 26 July 2016 at 21:10, Duncan <1i5t5.dun...@cox.net> wrote:
> Nicholas D Steeves posted on Tue, 26 Jul 2016 19:03:53 -0400 as excerpted:
>
>> Hi,
>>
>> I've been using btrfs fi defrag without the "-r -t 32M" option for
>> regular maintenance.  I just learned, in
>> Documentation/btrfs-convert.asciidoc, that there is a recommendation
>> to run with "-t 32M" after a conversion from ext2/3/4.  I then
>> cross-referenced this with btrfs-filesystem(8), and found that:
>>
>> Extents bigger than value given by -t will be skipped, otherwise
>> this value is used as a target extent size, but is only advisory
>> and may not be reached if the free space is too fragmented. Use 0
>> to take the kernel default, which is 256kB but may change in the
>> future.
>>
>> I understand the default behaviour of target extent size of 256kB to
>> mean only defragment small files and metadata.  Or does this mean that
>> the default behaviour is to defragment extent tree metadata >256kB,
>> and then defragment the (larger than 256kB) data from many extents
>> into a single extent?  I was surprised to read this!
>>
>> What's really happening with this default behaviour?  Should everyone
>> be using -t with a much larger value to actually defragment their
>> databases?
>
> Something about defrag's -t option should really be in the FAQ, as it is
> known to be somewhat confusing and to come up from time to time, tho this
> is the first time I've seen it in the context of convert.
>
> In general, you are correct in that the larger the value given to -t, the
> more defragging you should ultimately get.  There's a practical upper
> limit, however, the data chunk size, which is nominally 1 GiB (tho on
> tiny btrfs it's smaller and on TB-scale it can be larger, to 8 or 10 GiB
> IIRC).  32-bit btrfs-progs defrag also had a bug at one point that would
> (IIRC) kill the parameter if it was set to 2+ GiB -- that has been fixed
> by hard-coding the 32-bit max to 1 GiB, I believe.  The bug didn't affect
> 64-bit.  In any case, 1 GiB is fine, and often the largest btrfs can do
> anyway, due as I said to that being the normal data chunk size.
>
> And btrfs defrag only deals with data.  There's no metadata defrag, tho
> balance -m (or whole filesystem) will normally consolidate the metadata
> into the fewest (nominally 256 MiB) metadata chunks possible as it
> rewrites them.

Thank you for this metadata consolidation tip!

> In that regard a defrag -t 32M recommendation is reasonable for a
> converted filesystem, tho you can certainly go larger... to 1 GiB as I
> said.
>

I only mentioned btrfs-convert.asciidoc, because that's what led me to
the discrepancy between the default target extent size value, and a
recommended value.  I was searching for everything I could find on
defrag, because I had begun to suspect that it wasn't functioning as
expected.

Is there any reason why defrag without -t cannot detect and default to
the data chunk size, or why it does not default to 1 GiB?  In the same
way that balance's default behaviour is a full balance, shouldn't
defrag's default behaviour defrag whole chunks?  Does it not default
to 1 GiB because that would increase the number of cases where defrag
unreflinks and duplicates files--leading to an ENOSPC?

https://github.com/kdave/btrfsmaintenance/blob/master/btrfs-defrag.sh
uses -t 32M ; if a default target extent size of 1 GiB is too radical,
why not set it to 32M?  If SLED ships btrfsmaintenance, then defrag -t
32M should be well-tested, no?

Thank you,
Nicholas


btrfs fi defrag -r -t 32M? What is actually happening?

2016-07-26 Thread Nicholas D Steeves
Hi,

I've been using btrfs fi defrag without the "-r -t 32M" option for
regular maintenance.  I just learned, in
Documentation/btrfs-convert.asciidoc, that there is a recommendation
to run with "-t 32M" after a conversion from ext2/3/4.  I then
cross-referenced this with btrfs-filesystem(8), and found that:

Extents bigger than value given by -t will be skipped, otherwise
this value is used as a target extent size, but is only advisory
and may not be reached if the free space is too fragmented. Use 0
to take the kernel default, which is 256kB but may change in the
future.

I understand the default behaviour of target extent size of 256kB to
mean only defragment small files and metadata.  Or does this mean that
the default behaviour is to defragment extent tree metadata >256kB,
and then defragment the (larger than 256kB) data from many extents
into a single extent?  I was surprised to read this!

What's really happening with this default behaviour?  Should everyone
be using -t with a much larger value to actually defragment their
databases?

Thanks,
Nicholas


Re: check if hardware checksumming works or not

2016-06-05 Thread Nicholas D Steeves
Hi Alberto,

On 5 June 2016 at 15:37, Alberto Bursi  wrote:
>
> Hi, I'm running Debian ARM on a Marvell Kirkwood-based 2-disk NAS.
>
> Kirkwood SoCs have a XOR engine that can hardware-accelerate crc32c
> checksumming, and from what I see in kernel mailing lists it seems to have a
> linux driver and should be supported.
>
> I wanted to ask if there is a way to test if it is working at all.
>
> How do I force btrfs to use software checksumming for testing purposes?

Is there a mv_xor.ko module you can blacklist?  I'm not familiar with
the platform, but I imagine you'll have to blacklist it and reboot,
because I'm guessing the module can't be removed once it's loaded.
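One way to check, sketched below and untested on Kirkwood hardware: the kernel
lists every registered crc32c implementation and its priority in /proc/crypto,
and blacklisting the suspected driver should force the software fallback.  The
module name "mv_xor" is a guess — verify it with lsmod first:

```shell
# See which crc32c implementations are registered and which one wins
# (the entry with the highest "priority" is the one in use):
grep -B1 -A6 'crc32c' /proc/crypto

# Force software checksumming by blacklisting the hardware driver,
# then rebuild the initramfs and reboot:
echo 'blacklist mv_xor' > /etc/modprobe.d/no-mv-xor.conf
update-initramfs -u
```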

'just a guess,
Nicholas


Re: Recommended why to use btrfs for production?

2016-06-03 Thread Nicholas D Steeves
On 3 June 2016 at 11:33, Austin S. Hemmelgarn  wrote:
> On 2016-06-03 10:11, Martin wrote:
>>>
>>> Make certain the kernel command timer value is greater than the driver
>>> error recovery timeout. The former is found in sysfs, per block
>>> device, the latter can be get and set with smartctl. Wrong
>>> configuration is common (it's actually the default) when using
>>> consumer drives, and inevitably leads to problems, even the loss of
>>> the entire array. It really is a terrible default.
>>
>>
>> Are nearline SAS drives considered consumer drives?
>>
> If it's a SAS drive, then no, especially when you start talking about things
> marketed as 'nearline'.  Additionally, SCT ERC is entirely a SATA thing, I
> forget what the equivalent in SCSI (and by extension SAS) terms is, but I'm
> pretty sure that the kernel handles things differently there.

For the purposes of BTRFS RAID1: For drives that ship with SCT ERC of
7sec, is the default kernel command timeout of 30sec appropriate, or
should it be reduced?  For SATA drives that do not support SCT ERC, is
it true that 120sec is a sane value?  I forget where I got this value
of 120sec; it might have been this list, it might have been an mdadm
bug report.  Also, in terms of tuning, I've been unable to find
whether the ideal kernel timeout value changes depending on RAID
type...is that a factor in selecting a sane kernel timeout value?
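For reference, here is how those two values can be inspected on a given disk.
This is only a sketch: /dev/sda is a placeholder, and the 120s figure is just
the rule of thumb mentioned above:

```shell
# Drive-side error recovery limit, reported in tenths of a second
# (e.g. "70" means 7.0 seconds); drives without SCT ERC will say so:
smartctl -l scterc /dev/sda

# Kernel-side command timeout for the same disk, in seconds:
cat /sys/block/sda/device/timeout

# If the drive has no working SCT ERC, raising the kernel timeout
# (120s is the oft-quoted rule of thumb) avoids premature link resets:
#   echo 120 > /sys/block/sda/device/timeout
```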

Kind regards,
Nicholas


Re: Debian BTRFS/UEFI Documentation

2016-06-03 Thread Nicholas D Steeves
Hi David,

Sorry for the delay.  Yes, at this point I feel it would be best to
continue this discussion off-list, or perhaps to shift it to the
debian-doc list.  Apologies to linux-btrfs if this should have been
shifted sooner!  I'll follow-up with a PM reply momentarily.

Cheers,
Nicholas

On 3 May 2016 at 03:37, David Alcorn  wrote:
> "Honestly, did you read the Debian wiki pages for btrfs and EFI?  If
> you read them, could you please let me know where they were deficient
> so I can fix them?"
>
> I did not use the Debian wiki pages for BTRFS and UEFI as a resource
> in my attempts to answer my questions because I read them in the past
> and they did not address my specific needs.  Technically, I lack the
> skill set required for my perspectives to merit credulity but I am
> willing to give it a shot.  I do not want to take the list off focus:
> if this discussion belongs elsewhere, let me know.
>
> My question about how to recover/replace a failed boot where "/" is
> located in a BTRFS subvolume located on a BTRFS RAID56 array presents
> challenges but it is reasonable to provide sufficient infrastructure
> in the wiki's to let a portion of the readers answer this question
> themselves rather than bother this list.  Am I correct that (i) there
> is no reasonable tool to permit a screen shot of the Grub menu being
> edited using the "e" key as the O/S has not yet loaded?, and (ii) do
> USB flash drives (unlike some SSDs) respect the "dup" data profile?
>
> It is easy to answer my question whether "/boot" may be located on a
> BTRFS RAID56 array somewhere in the UEFI wiki.  I am more comfortable
> with a more comprehensive revision to the wiki as suggested in the
> below draft.  If the editorial comments are excessive or offend
> community standards, scrap em.
>
> Replace the "RAID for the EFI System Partition" section with:
>
> "DRAFT: RAID and LVM for the EFI and /Boot Partitions".  The UEFI
> firmware specification supports several alternative boot strategies
> including PXE boot and boot from an EFI System Partition ("ESP") which
> might be located on a MBR, GPT or El Torito volume on an optical disk.
> The ESP must be partitioned using a supported FAT partition (such as
> FAT32).  A mdadm RAID array (other than perhaps a RAID 1 array
> formatted as FAT32), a LVM partition and a BTRFS RAID array are not
> FAT and can not hold a functional ESP.  Once Grub loads the ESP
> payload, Grub has enhanced abilities to recognize file systems which
> it uses to acquire required information from "/boot".  The Grub
> Manual, which may be viewed with the command "info grub", reports Grub
> (unlike grub-legacy stage 1.5) has some ability to use advanced file
> systems such as LVM and RAID once the ESP payload is loaded.  This
> support appears to exclude BTRFS RAID 56.  Other than the possible
> mdadm RAID 1 exception noted above, ESP always goes in a separate, non
> array, non LVM FAT partition.  For BTRFS RAID56 arrays,  "/boot" also
> requires a separate, non array partition.
>
> Because LVM does not favor a whole disk Physical Volume ("PV") over a
> partition based PV, it is trivial to create a petite ESP on a disk and
> assign the balance of the disk to a LVM PV.  Array capacity of both
> MDADM and BTRFS RAID 56 arrays may be disproportionately reduced when
> the size of a single disk is reduced by, say an ESP.  For
> administrative simplicity and to maximize array capacity, equal sized
> whole disk arrays are favored.
>
> Both the ESP and "/boot" partitions present limited, read dominated
> workloads.  USB flash drives are cheap and tolerate light, read
> dominated workloads well.  For a stand alone server, it is common to
> locate the ESP on a USB flash device.  If you use a BTRFS RAID56
> array, "/boot" and perhaps "/swap" may also go to separate partitions
> on the flash drive.  This permits assignment of whole disks to the
> array.  If you are working with a large number of servers, it may be
> cheaper, more energy efficient, and more reliable to replace whatever
> is on the flash drive with PXE boot.  Frequently, SATA (or IDE) drives
> that are not wholly allocated to the RAID array are scarce.  If you
> have one, the ESP (and "/boot") partitions may be located there.
> Similar concerns affect LILO.
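As a concrete illustration of the layout sketched above, a USB stick could
hypothetically be prepared like this.  The device name and sizes are
assumptions, and note that on GPT, parted's boot flag is what marks the ESP:

```shell
# Sketch: small FAT32 ESP plus a separate ext4 /boot on a USB stick,
# so the whole SATA disks can be given to the btrfs RAID56 array.
# /dev/sdX is a placeholder -- confirm with lsblk before running.
parted --script /dev/sdX \
    mklabel gpt \
    mkpart ESP fat32 1MiB 257MiB \
    set 1 boot on \
    mkpart boot ext4 257MiB 1281MiB
mkfs.vfat -F 32 /dev/sdX1   # the ESP must be FAT
mkfs.ext4 /dev/sdX2         # /boot kept outside the RAID56 array
```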


Re: RFE: 'btrfs' tools machine readable output

2016-05-27 Thread Nicholas D Steeves
On 16 May 2016 at 08:39, Austin S. Hemmelgarn  wrote:
> On 2016-05-16 08:14, Richard W.M. Jones wrote:
>>
>> It would be really helpful if the btrfs tools had a machine-readable
>> output.
>> With machine-readable output, there'd be a flag which would
>> change the output.  eg:
>> $ btrfs --json filesystem show
>> {
>>   "devices": {
>>  "Label": "ROOT",
>>  "uuid": "af471cfc-421a-4d51-8811-ce969f76828a",
>>  /// etc
>>   }
>> }
> I would love to see something like this myself, as it would make integration
> with monitoring tools so much easier.  I'd vote for YAML as the output format
> though, as it's easily human readable as well as machine parseable while
> still being space efficient.
> ---
> - filesystems
>   - label: 'ROOT'
>     uuid: af471cfc-421a-4d51-8811-ce969f76828a
>     devices: 1
>       - devid: 1
>         size: 767.97 MB
>         used: 92.00 MB
>         path: /dev/sda2
>     used: 5.29 MB
> ...

Rather than adding language-specific output/extensions to btrfs-progs,
what does everyone think of an interface which allows arbitrary
ordering similar to the date command, with arbitrary delimiter
support?

eg: date +%a\ %d\ %B\ %H:%M

with something like: %D (device), %L (label), %V (volume), %v
(subvolume), %U (volume UUID), %u (subvol UUID), et al.

'avoids formatting the output to any particular convention, and the
order of what is queried can be in any order...but I think the [unit
to operate on] [named unit] bit would necessarily need to come first.
We already have that in btrfs [sub/fi/dev] [name] :-)
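To make the idea concrete, here is a toy shell sketch of such a format-string
expansion.  The field values are hard-coded stand-ins for illustration only; a
real implementation would query the filesystem instead:

```shell
# Toy expansion of the proposed %-codes (%L label, %U volume UUID,
# %D device).  Values are hard-coded samples, not real queries.
fmt_show() {
    label='ROOT'
    uuid='af471cfc-421a-4d51-8811-ce969f76828a'
    device='/dev/sda2'
    printf '%s\n' "$1" | sed \
        -e "s|%L|$label|g" \
        -e "s|%U|$uuid|g" \
        -e "s|%D|$device|g"
}

fmt_show '%L,%D'   # prints: ROOT,/dev/sda2
```

Printing one such line per filesystem would keep the output trivially
parseable with cut or awk, regardless of the user's chosen delimiter.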

Or should it be a separate command...  Something like "btrfs-machine
[unit to operate on] [named unit] [action eg: print] [cryptic line
with user's choice of delimiter and formatting]".  And then there
might also be the option to "btrfs-machine [unit to operate on] [named
unit] return [scrub status, balance status, etc.]" which returns a
single integer value along the lines of idle|in
progress|success|failure.  And at some point in the future
btrfs-machine [unit to operate on] [named unit] progress [operation]
which functions like "fsck -C [fd]".

Now that, I imagine, would be something libvirt users would like,
whether for web interface, GUI, or wrapper script for for a slick NAS
web interfaces.

For now, maybe start with a proof of concept with only the "print"
action implemented?  And eventually merge this into the main btrfs
command something like this: "btrfs -m unit name action args" or
"btrfs --machine unit name action args"

On 16 May 2016 at 08:21, Martin Steigerwald  wrote:
> How about a libbtrfs so that other tools can benefit from btrfs tools
> functionality?
How much does /usr/lib/x86_64-linux-gnu/libbtrfs.so expose?

What is the path of minimum duplication of work, and minimum long-term
maintenance?  If my assumptions are correct, this is a 2000-level
programming course textbook challenge on writing a program using a
library to query some values...and that might be something I could
manage :-)

Cheers,
Nicholas


Re: [1/1 v2] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-27 Thread Nicholas D Steeves
On 24 May 2016 at 06:50, David Sterba <dste...@suse.cz> wrote:
> On Mon, May 23, 2016 at 02:26:46PM -0400, Nicholas D Steeves wrote:
>> On 23 May 2016 at 13:01, David Sterba <dste...@suse.cz> wrote:
>> > On Thu, May 19, 2016 at 09:30:49PM -0400, Nicholas D Steeves wrote:
>> >> Sorry for the noise.  Please disregard my v1 patch and subsequent
>> >> emails.  This patch is for upstream linux-next.  From now on I think
>> >> that's what I'm going to work from, to keep things simple, because it
>> >> seems I'm still inept with git.
>> >
>> > The patch applies cleanly on top of the current branch that's going to
>> > Linus tree, so I'll queue it for the next pull request. All your inline
>> > notices were addressed. Thanks.
>>
>> You're welcome, and thank you for the assistance.  I don't want to
>> annoy everyone with a regular stream of these patches, so what do you
>> think of the following?:  I'll submit a patch for user-facing
>> typos in btrfs-progs when I find one,
>
> Sending typo fixes for all user visible text is OK and welcome anytime.
>
>> if I find any, and a strings &
>> comments review for both -progs and kernel twice a year, where one
>> review is part of preparing for an LTS kernel.
>
> Typos in comments can be done once in a year I think.

Thank you for the clarification.  When would be the best time?  While
preparing for an LTS kernel, or a certain amount of time before one?
Would you like me to skip 4.10 this year and stick to doing it in the
month of May?

Best regards,
Nicholas


Re: Some ideas for improvements

2016-05-25 Thread Nicholas D Steeves
On 25 May 2016 at 15:03, Duncan <1i5t5.dun...@cox.net> wrote:
> Dmitry Katsubo posted on Wed, 25 May 2016 16:45:41 +0200 as excerpted:
>> btrfs-restore:
>>
>> * It does not restore special files like named pipes and devices.
>> * Hard-linked files are not correctly restored (they all turn into
>> independent replicas).
>> * If the file cannot be read / recovered, it is still created with zero
>> size (I would expect that the file is not created).
>> * I think that the options '-xmS' should be enabled by default
>> (shouldn't it be a goal to restore as much as possible?).
>> * Option that applies (y) to all questions (completely unattended
>> recovery) is missing.
>
> That latter point is a known sore spot that a lot of people have
> complained about.  So it'll almost certainly be addressed eventually, but
> it's a matter of this project vying with all the others so it could be
> awhile.

I'm surprised no one has mentioned, in any of these discussions, what
I believe is the standard method of providing this functionality:
yes | btrfs-restore -options /dev/disk

And if you need it in your initrd on a Debian-like system, put the
following in /etc/initramfs-tools/hooks/yes.hook :

#!/bin/sh

. /usr/share/initramfs-tools/hook-functions

if command -v /usr/bin/yes >/dev/null 2>&1; then
    copy_exec /usr/bin/yes usr/bin/yes
fi

I haven't tested this, but it seems like it would do the trick.

Cheers,
Nicholas


Re: btrfs filesystem usage - Wrong Unallocated indications - RAID10

2016-05-23 Thread Nicholas D Steeves
On 23 May 2016 at 09:34, Marco Lorenzo Crociani
 wrote:
> Hi,
> as I wrote today in IRC, I experienced an issue with 'btrfs filesystem usage'.
> I have a 4 partitions RAID10 btrfs filesystem almost full.
> 'btrfs filesystem usage' reports wrong "Unallocated" indications.
>
> Linux 4.5.3
> btrfs-progs v4.5.3
>
>
> # btrfs fi usage /data/
>
> Overall:
> Device size:   13.93TiB
> Device allocated:   13.77TiB
> Device unallocated: 167.54GiB

I wonder if this is related to whatever caused the free space cache
bug for Ivan Pilipenko and myself (linux 4.4.10, btrfs-progs 4.4.1)?

Cheers,
Nicholas


Re: [1/1 v2] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-23 Thread Nicholas D Steeves
On 23 May 2016 at 13:01, David Sterba <dste...@suse.cz> wrote:
> On Thu, May 19, 2016 at 09:30:49PM -0400, Nicholas D Steeves wrote:
>> Sorry for the noise.  Please disregard my v1 patch and subsequent
>> emails.  This patch is for upstream linux-next.  From now on I think
>> that's what I'm going to work from, to keep things simple, because it
>> seems I'm still inept with git.
>
> The patch applies cleanly on top of the current branch that's going to
> Linus tree, so I'll queue it for the next pull request. All your inline
> notices were addressed. Thanks.

You're welcome, and thank you for the assistance.  I don't want to
annoy everyone with a regular stream of these patches, so what do you
think of the following?:  I'll submit a patch for user-facing
typos in btrfs-progs when I find one, if I find any, and a strings &
comments review for both -progs and kernel twice a year, where one
review is part of preparing for an LTS kernel.

Regards,
Nicholas


Re: [1/1 v2] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
Hi David,

Sorry for the noise.  Please disregard my v1 patch and subsequent
emails.  This patch is for upstream linux-next.  From now on I think
that's what I'm going to work from, to keep things simple, because it
seems I'm still inept with git.

Regards,
Nicholas


[PATCH 0/1 v2] for linux-next! String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
Hi David,

There.  I decided to just check out a fresh copy of linux-next.git from a URL 
provided in a previous email, because I'm clearly still inept with git.  Please 
disregard my last thread.

I took a look at what you merged and what you didn't for my review of 
btrfs-progs, and revised my approach accordingly.  There are six edits that 
require your review.  Please search/grep for "Steeves" to find them.

Cheers!
Nicholas

Nicholas D Steeves (1):
  String and comment review: Fix typos; fix a couple of mandatory
grammatical issues for clarity.

 fs/btrfs/backref.c|  2 +-
 fs/btrfs/btrfs_inode.h|  2 +-
 fs/btrfs/check-integrity.c|  2 +-
 fs/btrfs/ctree.c  | 14 +++---
 fs/btrfs/ctree.h  |  6 +++---
 fs/btrfs/delayed-ref.h|  2 +-
 fs/btrfs/dev-replace.c|  2 +-
 fs/btrfs/disk-io.c| 10 +-
 fs/btrfs/extent-tree.c| 32 
 fs/btrfs/extent_io.c  |  4 ++--
 fs/btrfs/extent_map.c |  2 +-
 fs/btrfs/file.c   |  4 ++--
 fs/btrfs/free-space-cache.c   |  2 +-
 fs/btrfs/free-space-cache.h   |  2 +-
 fs/btrfs/inode.c  | 22 +++---
 fs/btrfs/ioctl.c  | 10 +-
 fs/btrfs/ordered-data.h   |  2 +-
 fs/btrfs/qgroup.c | 16 
 fs/btrfs/raid56.c |  6 +++---
 fs/btrfs/relocation.c | 12 ++--
 fs/btrfs/root-tree.c  |  4 ++--
 fs/btrfs/scrub.c  |  4 ++--
 fs/btrfs/send.c   |  6 +++---
 fs/btrfs/struct-funcs.c   |  2 +-
 fs/btrfs/super.c  |  8 
 fs/btrfs/sysfs.c  |  2 +-
 fs/btrfs/tests/extent-io-tests.c  |  2 +-
 fs/btrfs/tests/free-space-tests.c |  7 ---
 fs/btrfs/tests/inode-tests.c  |  2 +-
 fs/btrfs/tests/qgroup-tests.c |  2 +-
 fs/btrfs/transaction.h|  2 +-
 fs/btrfs/tree-log.c   |  8 
 fs/btrfs/ulist.c  |  2 +-
 fs/btrfs/volumes.c|  8 
 34 files changed, 107 insertions(+), 106 deletions(-)

-- 
2.1.4



[1/1 v2] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 fs/btrfs/backref.c|  2 +-
 fs/btrfs/btrfs_inode.h|  2 +-
 fs/btrfs/check-integrity.c|  2 +-
 fs/btrfs/ctree.c  | 14 +++---
 fs/btrfs/ctree.h  |  6 +++---
 fs/btrfs/delayed-ref.h|  2 +-
 fs/btrfs/dev-replace.c|  2 +-
 fs/btrfs/disk-io.c| 10 +-
 fs/btrfs/extent-tree.c| 32 
 fs/btrfs/extent_io.c  |  4 ++--
 fs/btrfs/extent_map.c |  2 +-
 fs/btrfs/file.c   |  4 ++--
 fs/btrfs/free-space-cache.c   |  2 +-
 fs/btrfs/free-space-cache.h   |  2 +-
 fs/btrfs/inode.c  | 22 +++---
 fs/btrfs/ioctl.c  | 10 +-
 fs/btrfs/ordered-data.h   |  2 +-
 fs/btrfs/qgroup.c | 16 
 fs/btrfs/raid56.c |  6 +++---
 fs/btrfs/relocation.c | 12 ++--
 fs/btrfs/root-tree.c  |  4 ++--
 fs/btrfs/scrub.c  |  4 ++--
 fs/btrfs/send.c   |  6 +++---
 fs/btrfs/struct-funcs.c   |  2 +-
 fs/btrfs/super.c  |  8 
 fs/btrfs/sysfs.c  |  2 +-
 fs/btrfs/tests/extent-io-tests.c  |  2 +-
 fs/btrfs/tests/free-space-tests.c |  7 ---
 fs/btrfs/tests/inode-tests.c  |  2 +-
 fs/btrfs/tests/qgroup-tests.c |  2 +-
 fs/btrfs/transaction.h|  2 +-
 fs/btrfs/tree-log.c   |  8 
 fs/btrfs/ulist.c  |  2 +-
 fs/btrfs/volumes.c|  8 
 34 files changed, 107 insertions(+), 106 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index d309018..8bb3509 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -1939,7 +1939,7 @@ static int inode_to_path(u64 inum, u32 name_len, unsigned 
long name_off,
  * from ipath->fspath->val[i].
  * when it returns, there are ipath->fspath->elem_cnt number of paths available
  * in ipath->fspath->val[]. when the allocated space wasn't sufficient, the
- * number of missed paths in recored in ipath->fspath->elem_missed, otherwise,
+ * number of missed paths is recorded in ipath->fspath->elem_missed, otherwise,
  * it's zero. ipath->fspath->bytes_missing holds the number of bytes that would
  * have been needed to return all paths.
  */
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 1da5753..4919aed 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -313,7 +313,7 @@ struct btrfs_dio_private {
struct bio *dio_bio;
 
/*
-* The original bio may be splited to several sub-bios, this is
+* The original bio may be split to several sub-bios, this is
 * done during endio of sub-bios
 */
int (*subio_endio)(struct inode *, struct btrfs_io_bio *, int);
diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 516e19d..b677a6e 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -1939,7 +1939,7 @@ again:
/*
 * Clear all references of this block. Do not free
 * the block itself even if is not referenced anymore
-* because it still carries valueable information
+* because it still carries valuable information
 * like whether it was ever written and IO completed.
 */
list_for_each_entry_safe(l, tmp, &block->ref_to_list,
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index decd0a3..ba4dc5c 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -156,7 +156,7 @@ struct extent_buffer *btrfs_root_node(struct btrfs_root 
*root)
 
/*
 * RCU really hurts here, we could free up the root node because
-* it was cow'ed but we may not get the new root node yet so do
+* it was COWed but we may not get the new root node yet so do
 * the inc_not_zero dance and if it doesn't work then
 * synchronize_rcu and try again.
 */
@@ -955,7 +955,7 @@ int btrfs_block_can_be_shared(struct btrfs_root *root,
  struct extent_buffer *buf)
 {
/*
-* Tree blocks not in refernece counted trees and tree roots
+* Tree blocks not in reference counted trees and tree roots
 * are never shared. If a block was allocated after the last
 * snapshot and the block was not allocated by tree relocation,
 * we know the block is not shared.
@@ -1270,7 +1270,7 @@ __tree_mod_log_oldest_root(struct btrfs_fs_info *fs_info,
 
 /*
  * tm is a pointer to the first operation to rewind within eb. then, all
- * previous operations will be rewinded (until we reach something older than
+ * previous operations will be rewound (until we reach something older than

Re: [PATCH 0/1] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
On 19 May 2016 at 19:47, Nicholas D Steeves <nstee...@gmail.com> wrote:
> On 19 May 2016 at 19:13, Nicholas D Steeves <nstee...@gmail.com> wrote:
>> Nicholas D Steeves (1):
>>   String and comment review: Fix typos; fix a couple of mandatory
>> grammatical issues for clarity.
>>
>>  fs/btrfs/backref.c|  2 +-
>>  fs/btrfs/btrfs_inode.h|  2 +-
>>  fs/btrfs/check-integrity.c|  2 +-
>>  fs/btrfs/ctree.c  | 14 +++---
>>  fs/btrfs/ctree.h  | 10 +-
>>  fs/btrfs/delayed-ref.h|  2 +-
>>  fs/btrfs/dev-replace.c|  2 +-
>>  fs/btrfs/disk-io.c| 10 +-
>>  fs/btrfs/extent-tree.c| 34 +-
>>  fs/btrfs/extent_io.c  |  4 ++--
>>  fs/btrfs/extent_map.c |  2 +-
>>  fs/btrfs/file.c   |  4 ++--
>>  fs/btrfs/free-space-cache.c   |  2 +-
>>  fs/btrfs/free-space-cache.h   |  2 +-
>>  fs/btrfs/inode.c  | 22 +++---
>>  fs/btrfs/ioctl.c  | 10 +-
>>  fs/btrfs/ordered-data.h   |  2 +-
>>  fs/btrfs/qgroup.c | 16 
>>  fs/btrfs/raid56.c |  6 +++---
>>  fs/btrfs/relocation.c | 12 ++--
>>  fs/btrfs/root-tree.c  |  4 ++--
>>  fs/btrfs/scrub.c  |  4 ++--
>>  fs/btrfs/send.c   |  6 +++---
>>  fs/btrfs/struct-funcs.c   |  2 +-
>>  fs/btrfs/super.c  | 10 +-
>>  fs/btrfs/sysfs.c  |  2 +-
>>  fs/btrfs/tests/extent-io-tests.c  |  2 +-
>>  fs/btrfs/tests/free-space-tests.c |  7 ---
>>  fs/btrfs/tests/inode-tests.c  |  2 +-
>>  fs/btrfs/tests/qgroup-tests.c |  2 +-
>>  fs/btrfs/transaction.h|  2 +-
>>  fs/btrfs/tree-log.c   |  8 
>>  fs/btrfs/ulist.c  |  2 +-
>>  fs/btrfs/volumes.c|  8 
>>  34 files changed, 111 insertions(+), 110 deletions(-)
>>
>> --
>> 2.1.4
>>
>
> Argh.  I'm sorry, I wasn't quite careful enough, and was working off
> of master rather than for-next.  Please disregard this patch.
>
> Sincerely,
> Nicholas

Oh my, rebasing off of for-next includes everyone else's patches too!
Maybe I did it correctly by using master after all?  There were only a
couple of minor rejects affecting the following: ctree.h,
extent-tree.c, and super.c.

Or is it preferred that I delete all patches except my cover letter
and 0119-For-linux-next-String-and-comment-review-Fix-typos-f.patch,
and then git send-email?

Best regards,
Nicholas


Re: [PATCH 0/1] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
On 19 May 2016 at 19:13, Nicholas D Steeves <nstee...@gmail.com> wrote:
> Nicholas D Steeves (1):
>   String and comment review: Fix typos; fix a couple of mandatory
> grammatical issues for clarity.
>
>  fs/btrfs/backref.c|  2 +-
>  fs/btrfs/btrfs_inode.h|  2 +-
>  fs/btrfs/check-integrity.c|  2 +-
>  fs/btrfs/ctree.c  | 14 +++---
>  fs/btrfs/ctree.h  | 10 +-
>  fs/btrfs/delayed-ref.h|  2 +-
>  fs/btrfs/dev-replace.c|  2 +-
>  fs/btrfs/disk-io.c| 10 +-
>  fs/btrfs/extent-tree.c| 34 +-
>  fs/btrfs/extent_io.c  |  4 ++--
>  fs/btrfs/extent_map.c |  2 +-
>  fs/btrfs/file.c   |  4 ++--
>  fs/btrfs/free-space-cache.c   |  2 +-
>  fs/btrfs/free-space-cache.h   |  2 +-
>  fs/btrfs/inode.c  | 22 +++---
>  fs/btrfs/ioctl.c  | 10 +-
>  fs/btrfs/ordered-data.h   |  2 +-
>  fs/btrfs/qgroup.c | 16 
>  fs/btrfs/raid56.c |  6 +++---
>  fs/btrfs/relocation.c | 12 ++--
>  fs/btrfs/root-tree.c  |  4 ++--
>  fs/btrfs/scrub.c  |  4 ++--
>  fs/btrfs/send.c   |  6 +++---
>  fs/btrfs/struct-funcs.c   |  2 +-
>  fs/btrfs/super.c  | 10 +-
>  fs/btrfs/sysfs.c  |  2 +-
>  fs/btrfs/tests/extent-io-tests.c  |  2 +-
>  fs/btrfs/tests/free-space-tests.c |  7 ---
>  fs/btrfs/tests/inode-tests.c  |  2 +-
>  fs/btrfs/tests/qgroup-tests.c |  2 +-
>  fs/btrfs/transaction.h|  2 +-
>  fs/btrfs/tree-log.c   |  8 
>  fs/btrfs/ulist.c  |  2 +-
>  fs/btrfs/volumes.c|  8 
>  34 files changed, 111 insertions(+), 110 deletions(-)
>
> --
> 2.1.4
>

Argh.  I'm sorry, I wasn't quite careful enough, and was working off
of master rather than for-next.  Please disregard this patch.

Sincerely,
Nicholas


[PATCH 1/1] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 fs/btrfs/backref.c|  2 +-
 fs/btrfs/btrfs_inode.h|  2 +-
 fs/btrfs/check-integrity.c|  2 +-
 fs/btrfs/ctree.c  | 14 +++---
 fs/btrfs/ctree.h  | 10 +-
 fs/btrfs/delayed-ref.h|  2 +-
 fs/btrfs/dev-replace.c|  2 +-
 fs/btrfs/disk-io.c| 10 +-
 fs/btrfs/extent-tree.c| 34 +-
 fs/btrfs/extent_io.c  |  4 ++--
 fs/btrfs/extent_map.c |  2 +-
 fs/btrfs/file.c   |  4 ++--
 fs/btrfs/free-space-cache.c   |  2 +-
 fs/btrfs/free-space-cache.h   |  2 +-
 fs/btrfs/inode.c  | 22 +++---
 fs/btrfs/ioctl.c  | 10 +-
 fs/btrfs/ordered-data.h   |  2 +-
 fs/btrfs/qgroup.c | 16 
 fs/btrfs/raid56.c |  6 +++---
 fs/btrfs/relocation.c | 12 ++--
 fs/btrfs/root-tree.c  |  4 ++--
 fs/btrfs/scrub.c  |  4 ++--
 fs/btrfs/send.c   |  6 +++---
 fs/btrfs/struct-funcs.c   |  2 +-
 fs/btrfs/super.c  | 10 +-
 fs/btrfs/sysfs.c  |  2 +-
 fs/btrfs/tests/extent-io-tests.c  |  2 +-
 fs/btrfs/tests/free-space-tests.c |  7 ---
 fs/btrfs/tests/inode-tests.c  |  2 +-
 fs/btrfs/tests/qgroup-tests.c |  2 +-
 fs/btrfs/transaction.h|  2 +-
 fs/btrfs/tree-log.c   |  8 
 fs/btrfs/ulist.c  |  2 +-
 fs/btrfs/volumes.c|  8 
 34 files changed, 111 insertions(+), 110 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 80e8472..b8b5987 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -1939,7 +1939,7 @@ static int inode_to_path(u64 inum, u32 name_len, unsigned 
long name_off,
  * from ipath->fspath->val[i].
  * when it returns, there are ipath->fspath->elem_cnt number of paths available
  * in ipath->fspath->val[]. when the allocated space wasn't sufficient, the
- * number of missed paths in recored in ipath->fspath->elem_missed, otherwise,
+ * number of missed paths is recorded in ipath->fspath->elem_missed, otherwise,
  * it's zero. ipath->fspath->bytes_missing holds the number of bytes that would
  * have been needed to return all paths.
  */
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 61205e3..c0a2018 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -303,7 +303,7 @@ struct btrfs_dio_private {
struct bio *dio_bio;
 
/*
-* The original bio may be splited to several sub-bios, this is
+* The original bio may be split to several sub-bios, this is
 * done during endio of sub-bios
 */
int (*subio_endio)(struct inode *, struct btrfs_io_bio *, int);
diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 516e19d..b677a6e 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -1939,7 +1939,7 @@ again:
/*
 * Clear all references of this block. Do not free
 * the block itself even if is not referenced anymore
-* because it still carries valueable information
+* because it still carries valuable information
 * like whether it was ever written and IO completed.
 */
list_for_each_entry_safe(l, tmp, &block->ref_to_list,
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index ec7928a..6487c9c 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -156,7 +156,7 @@ struct extent_buffer *btrfs_root_node(struct btrfs_root 
*root)
 
/*
 * RCU really hurts here, we could free up the root node because
-* it was cow'ed but we may not get the new root node yet so do
+* it was COWed but we may not get the new root node yet so do
 * the inc_not_zero dance and if it doesn't work then
 * synchronize_rcu and try again.
 */
@@ -955,7 +955,7 @@ int btrfs_block_can_be_shared(struct btrfs_root *root,
  struct extent_buffer *buf)
 {
/*
-* Tree blocks not in refernece counted trees and tree roots
+* Tree blocks not in reference counted trees and tree roots
 * are never shared. If a block was allocated after the last
 * snapshot and the block was not allocated by tree relocation,
 * we know the block is not shared.
@@ -1270,7 +1270,7 @@ __tree_mod_log_oldest_root(struct btrfs_fs_info *fs_info,
 
 /*
  * tm is a pointer to the first operation to rewind within eb. then, all
- * previous operations will be rewinded (until we reach something older than
+ * previous operations will be rewound (until we reach something older than

[PATCH 0/1] String and comment review: Fix typos; fix a couple of mandatory grammatical issues for clarity.

2016-05-19 Thread Nicholas D Steeves
Hi David,

I took a look at what you merged and what you didn't for my review of 
btrfs-progs, and revised my approach accordingly.  There are six edits that 
require your review.  Please search/grep for "Steeves" to find them.

Cheers!
Nicholas

Nicholas D Steeves (1):
  String and comment review: Fix typos; fix a couple of mandatory
grammatical issues for clarity.

 fs/btrfs/backref.c|  2 +-
 fs/btrfs/btrfs_inode.h|  2 +-
 fs/btrfs/check-integrity.c|  2 +-
 fs/btrfs/ctree.c  | 14 +++---
 fs/btrfs/ctree.h  | 10 +-
 fs/btrfs/delayed-ref.h|  2 +-
 fs/btrfs/dev-replace.c|  2 +-
 fs/btrfs/disk-io.c| 10 +-
 fs/btrfs/extent-tree.c| 34 +-
 fs/btrfs/extent_io.c  |  4 ++--
 fs/btrfs/extent_map.c |  2 +-
 fs/btrfs/file.c   |  4 ++--
 fs/btrfs/free-space-cache.c   |  2 +-
 fs/btrfs/free-space-cache.h   |  2 +-
 fs/btrfs/inode.c  | 22 +++---
 fs/btrfs/ioctl.c  | 10 +-
 fs/btrfs/ordered-data.h   |  2 +-
 fs/btrfs/qgroup.c | 16 
 fs/btrfs/raid56.c |  6 +++---
 fs/btrfs/relocation.c | 12 ++--
 fs/btrfs/root-tree.c  |  4 ++--
 fs/btrfs/scrub.c  |  4 ++--
 fs/btrfs/send.c   |  6 +++---
 fs/btrfs/struct-funcs.c   |  2 +-
 fs/btrfs/super.c  | 10 +-
 fs/btrfs/sysfs.c  |  2 +-
 fs/btrfs/tests/extent-io-tests.c  |  2 +-
 fs/btrfs/tests/free-space-tests.c |  7 ---
 fs/btrfs/tests/inode-tests.c  |  2 +-
 fs/btrfs/tests/qgroup-tests.c |  2 +-
 fs/btrfs/transaction.h|  2 +-
 fs/btrfs/tree-log.c   |  8 
 fs/btrfs/ulist.c  |  2 +-
 fs/btrfs/volumes.c|  8 
 34 files changed, 111 insertions(+), 110 deletions(-)

-- 
2.1.4



Re: problems with free space cache

2016-05-17 Thread Nicholas D Steeves
On 16 May 2016 at 07:36, Austin S. Hemmelgarn  wrote:
> On 2016-05-16 02:20, Qu Wenruo wrote:
>>
>>
>>
>> Duncan wrote on 2016/05/16 05:59 +:
>>>
>>> Qu Wenruo posted on Mon, 16 May 2016 10:24:23 +0800 as excerpted:
>>>
 IIRC clear_cache option is fs level option.
 So the first mount with clear_cache, then all subvolume will have
 clear_cache.
>>>
>>>
>>> Question:  Does clear_cache work with a read-only mount?
>>
>> Good question.
>>
>> But easy to check.
>> I just checked it and found even that's possible, it doesn't work.

+1  I had to use my USB flash rescue disk to mount with clear_cache.

>> Free space cache inode bytenr doesn't change and no generation change.
>> While without ro, it did rebuild free space cache for *SOME* chunks, not
>> *ALL* chunks.
>>
>> And that's the problem I'm just chasing today.

+1  Unfortunately, it didn't fix the affected free space cache files.

>> Short conclude: clear_cache mount option will only rebuild free space
>> cache for chunks which we allocated space from, during the mount time of
>> clear_cache.
>> (Maybe I'm just out-of-date and some other devs may already know that)

Does this mean that creating a huge file filled with zeros while
mounted with clear_cache would solve this?  I think that would be faster than
a full rebalance, but I'm not convinced it would work, because of #1
(see below).

One of the following might have caused this situation:

1) I created a new subvolume, and tested a full restore from
(non-btrfs aware) backups to this subvolume; some time later,
verification (rsync -c with other options) completed, and the backup
was confirmed to be usable; I deleted the subvolume and watched the
cleaner get to work and reduce the allocated space, recalling a time
when a balance was necessary for that.  If this is what caused it,
could there be a bug in the cleaner and/or cleaner<->space_cache
interaction?

2) The same week I reorganised several hundred gigabytes of short term
backups, moving them from one subvolume to another.  I used cp -ar
--reflink=always from within /btrfs-admin, which is where I mount the
whole volume, because / is subvol=rootfs.  After the copy I removed
the source, then used the checksums I keep in my short-term backup
directory to verify everything was ok (I could have restored from a
fresh long-term backup if this failed).  As expected, everything was
ok.  If this is what caused the free space cache bug, then could there
be a bug in the inter-subvolume reflink code?
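As an aside, the reflink copy described above can be sketched with
ordinary coreutils.  This example uses --reflink=auto so it also runs
on filesystems without CoW support; the move above used
--reflink=always on btrfs, where the copy shares extents with the
source:

```shell
# Make a file and a reflink-aware copy of it.  On btrfs the copy
# shares extents with the source; with =auto, cp falls back to a
# plain copy on filesystems that lack reflink support.
src=$(mktemp)
printf 'short-term backup data' > "$src"
cp -a --reflink=auto "$src" "$src.copy"
cmp -s "$src" "$src.copy" && echo "copies match"
rm -f "$src" "$src.copy"
```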

>> That behavior makes things a little confusing, which users may continue
>> hitting annoying free space cache warning from kernel, even they try to
>> use clear_cache mount option.

So this warning is totally harmless and will not cause future problems?

>> Anyway, I'll add ability for manually wipe out all/given free space
>> cache to btrfsck, at least creating a solution to really rebuild all v1
>> free space cache.

Nice!  Am I correct in understanding that this is as safe as a full
balance, and not dangerous like btrfs check --repair?  Is it likely
that the v2 free space cache will be the default by linux-4.10?

> FWIW, I think it's possible to do this by mounting with clear_cache and then
> running a full balance on the filesystem.  Having an option to do this on an
> unmounted FS would be preferred of course, as that would almost certainly
> be more efficient on any reasonably sized filesystem.

I'm running a balance now, like it's 2015! ;-)
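For anyone following along, the sequence Austin describes would look
roughly like this (the device and mount point are examples, and a full
balance can take hours on a large filesystem):

```
# One-time cache rebuild: mount with clear_cache, then rewrite all
# chunks so every block group gets a fresh free space cache.
mount -o clear_cache /dev/sdb2 /mnt/btrfs-admin
btrfs balance start /mnt/btrfs-admin
```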

Thank you,
Nicholas


Re: problems with free space cache

2016-05-15 Thread Nicholas D Steeves
On 15 May 2016 at 22:24, Qu Wenruo <quwen...@cn.fujitsu.com> wrote:
>
>
> Nicholas D Steeves wrote on 2016/05/15 22:11 -0400:
>> What is the standard procedure for fixing the cache?  Rootfs is a
>> subvolume and the first entry in fstab.  Second entry is /btrfs-admin,
>> which is where I mount the whole volume.  Is it sufficient to add the
>> clear_cache option to the rootfs entry, or does the /btrfs-admin entry
>> also need it?
>
>
> IIRC clear_cache option is fs level option.
> So the first mount with clear_cache, then all subvolume will have
> clear_cache.
>

So that means that in this case mounting subvol=rootfs with
clear_cache will clear the cache for all subsequently mounted
subvolumes, and for the whole volume, correct?

Thanks,
Nicholas


problems with free space cache

2016-05-15 Thread Nicholas D Steeves
The volume was created with btrfs-progs-4.4.41; I upgraded to
linux-4.4.10 today, and here is what I grepped from my dmesg:

[  +0.002613] Btrfs loaded
[  +0.080621] BTRFS: device label Red devid 1 transid 108234 /dev/sda3
[  +0.000106] BTRFS: device label Red devid 2 transid 108234 /dev/sdb2
[  +0.011888] BTRFS info (device sdb2): disk space caching is enabled
[  +0.01] BTRFS: has skinny extents
[  +0.063532] BTRFS info (device sdb2): disk space caching is enabled
[  +3.615389] BTRFS warning (device sdb2): block group 183639212032
has wrong amount of free space
[  +0.02] BTRFS warning (device sdb2): failed to load free space
cache for block group 183639212032, rebuilding it now
[  +1.718348] BTRFS warning (device sdb2): block group 859022819328
has wrong amount of free space
[  +0.03] BTRFS warning (device sdb2): failed to load free space
cache for block group 859022819328, rebuilding it now
[  +4.357965] BTRFS info (device sdb2): The free space cache file
(1082361118720) is invalid. skip it
[  +0.026489] BTRFS info (device sdb2): The free space cache file
(1094172278784) is invalid. skip it
[  +0.740340] BTRFS info (device sdb2): The free space cache file
(1183292850176) is invalid. skip it
[  +0.610161] BTRFS info (device sdb2): The free space cache file
(1248791101440) is invalid. skip it
[  +0.353670] BTRFS info (device sdb2): The free space cache file
(1284224581632) is invalid. skip it

What is the standard procedure for fixing the cache?  Rootfs is a
subvolume and the first entry in fstab.  Second entry is /btrfs-admin,
which is where I mount the whole volume.  Is it sufficient to add the
clear_cache option to the rootfs entry, or does the /btrfs-admin entry
also need it?

From what I've read in the documentation one modifies fstab, reboots,
removes the modification from fstab, and it's fixed.
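For the record, the temporary fstab entries would look something like
this (the UUID is a placeholder; subvolid=5 is the top-level volume,
and clear_cache is removed again after one boot):

```
# /etc/fstab (temporary): add clear_cache, reboot once, then remove it
UUID=<fs-uuid>  /             btrfs  subvol=rootfs,clear_cache  0  0
UUID=<fs-uuid>  /btrfs-admin  btrfs  subvolid=5,clear_cache     0  0
```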

Cheers,
Nicholas


Re: [PATCH 0/1] btrfs-progs: Typo review of strings and comments

2016-05-12 Thread Nicholas D Steeves
On 12 May 2016 at 07:29, David Sterba <dste...@suse.cz> wrote:
> On Wed, May 11, 2016 at 07:50:35PM -0400, Nicholas D Steeves wrote:
>> There were a couple of instances where I wasn't sure what to do; I've
>> annotated them, and they are long lines now.  To find them, search or
>> grep the diff for 'Steeves'.
>
> Found and updated, most of them real typos, 'strtoull' is the name of a C
> library function.

Hi David,

Thank you for reviewing this patch.  Sorry for missing the context of
the strtoull comment; I should have been able to infer that and am
embarrassed that I failed to, especially since I think I've used it
in some C++ code!

I learned how to use git rebase and git reset today, and can submit a
v2 patch diffed against master at your earliest convenience.  My only
remaining question is this:

mkfs.c: printf("Incompatible features:  %s", features_buf)
  * Should this be left as "Imcompat features"?

Regards,
Nicholas


[PATCH 1/1] Typo review of strings and comments.

2016-05-11 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 CHANGES| 12 -
 backref.c  |  4 +--
 btrfs-convert.c|  4 +--
 btrfs-corrupt-block.c  |  4 +--
 btrfs-debugfs  |  2 +-
 btrfs-image.c  | 22 
 btrfs-list.c   |  4 +--
 chunk-recover.c|  6 ++---
 cmds-balance.c |  4 +--
 cmds-check.c   | 30 +++---
 cmds-fi-du.c   |  2 +-
 cmds-fi-usage.c|  2 +-
 cmds-inspect-tree-stats.c  |  2 +-
 cmds-inspect.c |  2 +-
 cmds-receive.c |  2 +-
 cmds-scrub.c   |  2 +-
 cmds-send.c|  2 +-
 ctree.c|  2 +-
 ctree.h|  6 ++---
 dir-item.c |  2 +-
 disk-io.h  |  4 +--
 extent-cache.h |  2 +-
 file.c |  4 +--
 inode.c|  4 +--
 ioctl.h|  6 ++---
 list_sort.c|  2 +-
 mkfs.c | 12 -
 send-utils.c   |  2 +-
 show-blocks|  4 +--
 super-recover.c|  4 +--
 tests/README.md|  4 +--
 tests/common   |  2 +-
 tests/fsck-tests.sh|  2 +-
 tests/misc-tests.sh|  2 +-
 tests/misc-tests/007-subvolume-sync/test.sh|  4 +--
 .../009-subvolume-sync-must-wait/test.sh   |  2 +-
 tests/misc-tests/013-subvolume-sync-crash/test.sh  |  2 +-
 tests/mkfs-tests.sh|  2 +-
 .../008-secorsize-nodesize-combination/test.sh |  2 +-
 ulist.c|  2 +-
 utils-lib.c|  2 +-
 utils.c|  2 +-
 volumes.c  |  2 +-
 43 files changed, 95 insertions(+), 95 deletions(-)

diff --git a/CHANGES b/CHANGES
index 6186808..aa13d48 100644
--- a/CHANGES
+++ b/CHANGES
@@ -82,8 +82,8 @@ btrfs-progs-4.4 (2016-01-18)
 * check: fix a false alert where extent record has wrong metadata flag
 * improved stability on fuzzed/crafted images when reading sys array in
   superblock
-* build: the 'ar' tool is properly deteced during cross-compilation
-* debug-tree: option -t understands ids for tree root and chnuk tree
+* build: the 'ar' tool is properly detected during cross-compilation
+* debug-tree: option -t understands ids for tree root and chunk tree
 * preparatory work for btrfs-convert rewrite
 * sparse, gcc warning fixes
 * more memory allocation failure handling
@@ -131,7 +131,7 @@ btrfs-progs-4.3 (2015-11-06)
   subvolume
   * other:
 * check: add progress indicator
-* scrub: enahced error message
+* scrub: enhanced error message
 * show-super: read superblock from a given offset
 * add README
 * docs: update manual page for mkfs.btrfs, btrfstune, balance,
@@ -155,7 +155,7 @@ btrfs-progs-4.3 (2015-11-06)
 
 btrfs-progs-4.2.3 (2015-10-19)
   * subvol sync: make it actually work again: it's been broken since 4.1.2,
-due to a reversed condition it returned immediatelly instead of waiting
+due to a reversed condition it returned immediately instead of waiting
   * scanning: do not scan already discovered filesystems (minor optimization)
   * convert: better error message in case the filesystem is not finalized
   * restore: off-by-one symlink path check fix
@@ -226,7 +226,7 @@ btrfs-progs-4.1.1 (2015-07-10) -- Do not use this version!
 * documentation updates
 * debug-tree: print nbytes
 * test: image for corrupted nbytes
-* corupt-block: let it kill nbytes
+* corrupt-block: let it kill nbytes
 
 btrfs-progs-4.1 (2015-06-22)
   Bugfixes:
@@ -267,7 +267,7 @@ btrfs-progs-4.1 (2015-06-22)
 * debug tree: print key names according to their C name
 
   New:
-* rescure zero-log
+* rescue zero-log
 * btrfsune:
   * rewrite uuid on a filesystem image
   * new option to turn on NO_HOLES incompat feature
diff --git a/backref.c b/backref.c
index 9d48a10..7b3b592 

[PATCH 0/1] btrfs-progs: Typo review of strings and comments

2016-05-11 Thread Nicholas D Steeves
Thank you David Sterba for the btrfs-typos.txt which gave me a head start.  
Unfortunately I wasn't able finish before btrfs-progs-4.5.3 was released, 
because I decided to use emacs' ispell-comments-and-strings to do a full 
review.  I had to rebase to kdave's 4.5.2 branch on github, and that is what 
this patch will cleanly apply to.

There were a couple of instances where I wasn't sure what to do; I've annotated 
them, and they are long lines now.  To find them, search or grep the diff for 
'Steeves'.

Best regards,
Nicholas

Nicholas D Steeves (1):
  Typo review of strings and comments.

 CHANGES| 12 -
 backref.c  |  4 +--
 btrfs-convert.c|  4 +--
 btrfs-corrupt-block.c  |  4 +--
 btrfs-debugfs  |  2 +-
 btrfs-image.c  | 22 
 btrfs-list.c   |  4 +--
 chunk-recover.c|  6 ++---
 cmds-balance.c |  4 +--
 cmds-check.c   | 30 +++---
 cmds-fi-du.c   |  2 +-
 cmds-fi-usage.c|  2 +-
 cmds-inspect-tree-stats.c  |  2 +-
 cmds-inspect.c |  2 +-
 cmds-receive.c |  2 +-
 cmds-scrub.c   |  2 +-
 cmds-send.c|  2 +-
 ctree.c|  2 +-
 ctree.h|  6 ++---
 dir-item.c |  2 +-
 disk-io.h  |  4 +--
 extent-cache.h |  2 +-
 file.c |  4 +--
 inode.c|  4 +--
 ioctl.h|  6 ++---
 list_sort.c|  2 +-
 mkfs.c | 12 -
 send-utils.c   |  2 +-
 show-blocks|  4 +--
 super-recover.c|  4 +--
 tests/README.md|  4 +--
 tests/common   |  2 +-
 tests/fsck-tests.sh|  2 +-
 tests/misc-tests.sh|  2 +-
 tests/misc-tests/007-subvolume-sync/test.sh|  4 +--
 .../009-subvolume-sync-must-wait/test.sh   |  2 +-
 tests/misc-tests/013-subvolume-sync-crash/test.sh  |  2 +-
 tests/mkfs-tests.sh|  2 +-
 .../008-secorsize-nodesize-combination/test.sh |  2 +-
 ulist.c|  2 +-
 utils-lib.c|  2 +-
 utils.c|  2 +-
 volumes.c  |  2 +-
 43 files changed, 95 insertions(+), 95 deletions(-)

-- 
2.1.4



Re: [PATCH] Trivial fix for typos in comments.

2016-05-10 Thread Nicholas D Steeves
On 10 May 2016 at 09:33, David Sterba <dste...@suse.cz> wrote:
> On Mon, May 09, 2016 at 08:13:29PM -0400, Nicholas D Steeves wrote:
>> Trivial fix for typos in comments; I hope this patch isn't a nuisance!
>
> No, but I don't see the typos in any of the branches (master or the
> for-next snapshots).

Sorry, I used for-linus, while following this guide:
"git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus"
https://btrfs.wiki.kernel.org/index.php/Writing_patch_for_btrfs

and "git checkout master" confirms that I'm already on 'master', but
"git rebase next" produces the error:

fatal: Needed a single revision
invalid upstream next

I'm very new to git.  Should I have cloned the whole linux-btrfs.git
without "for-linus"? Am I on a master branch of the for-linus
clone, which is separate from the master branch of the main
linux-btrfs?
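The "invalid upstream" failure above just means the name "next" does
not resolve to any branch in the local clone; it is easy to reproduce
in a throwaway repository:

```shell
# Rebasing onto a branch name that does not exist fails up front,
# before git touches any commits.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git rebase next 2>&1 || true   # reports an invalid upstream "next"
```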

On 10 May 2016 at 10:20, David Sterba <dste...@suse.cz> wrote:
Message-ID: <20160510142028.gu29...@twin.jikos.cz>
> On Mon, May 09, 2016 at 08:13:29PM -0400, Nicholas D Steeves wrote:
> I ran ispell on the strings from comments, there are like 90+ typos
> (attached) that seem worth fixing, if you like.

Thank you for the list, I'd be happy to.  Are patches against master
or next preferred (I don't see a for-next branch)?
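For anyone who wants to reproduce this kind of sweep, here is a rough
shell sketch of pulling comment text out of C sources so it can be fed
to a spell checker.  It is deliberately naive (only single-line
/* ... */ comments are handled, and the availability of aspell is an
assumption), but that was good enough for a manual review pass:

```shell
# Collect text from single-line /* ... */ comments in the btrfs
# sources, then list words the spell checker does not recognize.
# Multi-line and // comments are not handled by this sketch.
grep -oh '/\*[^*]*\*/' fs/btrfs/*.c \
  | sed 's@^/\* *@@; s@ *\*/$@@' \
  | aspell list | sort -u
```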

Kind regards,
Nicholas


[PATCH] Trivial fix for typos in comments.

2016-05-09 Thread Nicholas D Steeves
Signed-off-by: Nicholas D Steeves <nstee...@gmail.com>
---
 fs/btrfs/disk-io.c     | 2 +-
 fs/btrfs/extent-tree.c | 2 +-
 fs/btrfs/file.c        | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 50bed6c..c66c752 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -816,7 +816,7 @@ static void run_one_async_done(struct btrfs_work *work)
waitqueue_active(_info->async_submit_wait))
wake_up(_info->async_submit_wait);
 
-   /* If an error occured we just want to clean up the bio and move on */
+   /* If an error occurred we just want to clean up the bio and move on */
if (async->error) {
async->bio->bi_error = async->error;
bio_endio(async->bio);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index e2287c7..7ed61e9 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -5752,7 +5752,7 @@ out_fail:
 
/*
 * This is tricky, but first we need to figure out how much we
-* free'd from any free-ers that occured during this
+* free'd from any free-ers that occurred during this
 * reservation, so we reset ->csum_bytes to the csum_bytes
 * before we dropped our lock, and then call the free for the
 * number of bytes that were freed while we were trying our
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index af782fd..c2c378d 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1825,7 +1825,7 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
/*
 * We also have to set last_sub_trans to the current log transid,
 * otherwise subsequent syncs to a file that's been synced in this
-* transaction will appear to have already occured.
+* transaction will appear to have already occurred.
 */
spin_lock(_I(inode)->lock);
BTRFS_I(inode)->last_sub_trans = root->log_transid;
-- 
2.1.4



[PATCH] Trivial fix for typos in comments.

2016-05-09 Thread Nicholas D Steeves
Trivial fix for typos in comments; I hope this patch isn't a nuisance!

Nicholas D Steeves (1):
  Trivial fix for typos in comments.

 fs/btrfs/disk-io.c     | 2 +-
 fs/btrfs/extent-tree.c | 2 +-
 fs/btrfs/file.c        | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

-- 
2.1.4



Re: commands like "du", "df", and "btrfs fs sync" hang

2016-05-09 Thread Nicholas D Steeves
On May 7, 2016 7:43 AM, "Kai Krakow" <hurikha...@gmail.com> wrote:
>
> Am Thu, 5 May 2016 08:35:37 +0200
> schrieb Kai Krakow <hurikha...@gmail.com>:
>
> > Am Tue, 3 May 2016 08:48:14 +0200
> > schrieb Kai Krakow <hurikha...@gmail.com>:
> >
> > > Am Sun, 1 May 2016 20:39:31 -0400
> > > schrieb Nicholas D Steeves <nstee...@gmail.com>:
> > >
> > > > On 1 May 2016 at 03:00, Kai Krakow <hurikha...@gmail.com>
> > > > wrote:
> >  [...]
> > > >
> > > > Out of curiosity, does this occur if you don't create or delete
> > > > snapshots, or if your backup script doesn't create or delete
> > > > snapshots?  And when it occurs, are you able to go to another
> > > > terminal and run a command that you don't use often, that
> > > > definitely had to be read from disk, but that doesn't query any
> > > > filesystem properties (eg: whois)?
> > >
> > > I only create snapshots in the destination device, not on the
> > > source. I could try disabling the snapshotting and see if it
> > > changes things.
> > >
> > > It seems from my observation, that only programs querying disk free
> > > or disk usage status hang, especially all of the btrfs subcommands
> > > doing it hang, in addition to traditional programs like du and df.
> > > I think also "btrfs sub delete" and friends hang. So your guess may
> > > very well go into the right direction. Let me try. Coming back
> > > later...
> >
> > With the snapshot and sync related bits disabled in my script, I no
> > longer experience freezing df/du/... commands.
>
> Ah well, I still do - it just triggers much less often.
>
> But I can track it down to the automounter now: As soon as I stop the
> automounter of my backup device, du/df/etc no longer hang. Still I'm
> not sure if the problem is originating from btrfs or autofs itself.
>

Hi Kai,

If it's anything like what I've encountered it can be worked around
with careful use of sleep, sync, btrfs sub sync and/or btrfs fi sync.
Also, if it's a similar issue to my own then from what I've read on
this list it's a timing or race issue (or maybe a locking issue).
There's a Debian bug where someone is testing out different
values/strategies.  The values/strategy I use can be found in
btrfs-borg on github.  If I'm correct, you'll probably also need to
script in delays to give things time to settle with your automounter.
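
In case it helps, the settle-before-snapshot idea can be sketched as a tiny wrapper.  This is only an illustration: the delay value is a guess to be tuned per workload, and the snapshot line is commented out because it needs a real btrfs mount (the /mnt/data path is hypothetical):

```shell
#!/bin/sh
# Flush twice with a pause in between so in-flight writes land on disk
# before a snapshot is taken.  The 2-second delay is illustrative.
settle() {
    sync
    sleep 2
    sync
}

settle
# btrfs sub snap -r /mnt/data "/mnt/data/snap-$(date +%F)"  # real snapshot step
echo "settled"
```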

Of course, ideally someone will one day spend the time debugging
what's actually going on.

Cheers,
Nick


Re: commands like "du", "df", and "btrfs fs sync" hang

2016-05-01 Thread Nicholas D Steeves
On 1 May 2016 at 03:00, Kai Krakow  wrote:
> Hello!
>
> I'm not sure what triggeres this, neither if it is btrfs specific. The
> filesystems have been recreated from scratch. Mainly during my rsync
> backup (from btrfs to btrfs), but not necessarily limited to rsync
> usage, my system experiences uninterruptable freezes of commands like
> "df", "du", "btrfs fs sync", and probably more.

Out of curiosity, does this occur if you don't create or delete
snapshots, or if your backup script doesn't create or delete
snapshots?  And when it occurs, are you able to go to another terminal
and run a command that you don't use often, that definitely had to be
read from disk, but that doesn't query any filesystem properties (eg:
whois)?

Thanks,
Nicholas


Re: Install to or Recover RAID Array Subvolume Root?

2016-04-27 Thread Nicholas D Steeves
Hi David,

Thanks for the update, sorry for the delay, this email was sitting in
my drafts folder.  Instructions follow.

On 26 April 2016 at 05:23, David Alcorn  wrote:
>
> Nicholas:
>
> 1. My RAID array is partially filled and backed up.
> 2. I prefer to mount by UUID.
> 3. n/a
> 4. the relevant content of my blkid is:
> "/dev/sda1: UUID="AEF6-E013" TYPE="vfat"
> PARTUUID="b4b9c894-aa4f-4c83-ba27-8919eaaeac49"
> /dev/sda2: UUID="a428a1ea-5174-47b6-a894-521166a7a354" TYPE="ext2"
> PARTUUID="e25bcabb-2fd3-4515-bf74-3a2f0c548fec"
> /dev/sda3: UUID="1b413c7c-d39d-4c10-9f1d-4f3a21791c50"
> UUID_SUB="e2d12168-c216-425f-9a82-6d46dad8ccc8" TYPE="btrfs"
> PARTLABEL="primary" PARTUUID="1651d9f1-35ff-458b-ae64-bb6003c72159"
> /dev/sdb: UUID="c2cf44d3-28e0-492a-9d51-00a41b71428d"
> UUID_SUB="b4c6bbc5-ea80-4a79-89ca-333d351b09e5" TYPE="btrfs"
> /dev/sdc: UUID="c2cf44d3-28e0-492a-9d51-00a41b71428d"
> UUID_SUB="c4ac65e7-45bb-4450-9aa4-80a7fbebed3a" TYPE="btrfs"
> /dev/sdd: UUID="c2cf44d3-28e0-492a-9d51-00a41b71428d"
> UUID_SUB="5f55bf3b-8e82-4b50-8b62-5dc45c892281" TYPE="btrfs"
> /dev/sde: UUID="c2cf44d3-28e0-492a-9d51-00a41b71428d"
> UUID_SUB="5a12cbd5-bdc2-4258-9bc8-23d95cbb1db6" TYPE="btrfs"
>
>
> I omitted /dev/sdf from my blkid content as it is an unused,
> transitory drive that will be redeployed after "/" is moved to the
> array.  /dev/sda is my flash drive with separate partitions for efi
> (vfat), /boot/ (ext2) and "/" (btrfs subvolume).

Note: I'm guessing you read the Debian wiki entry on btrfs before
2016-03-14?  If you read it after this date, I might need to make the
fact that UEFI systems cannot boot from raw drives more explicit.

Ok, here is how you move everything on your raid6 array to a
subvolume--the goal is to have two subvolumes on your RAID6, one for
rootfs, and one for whatever you're using the bulk storage for.  I'm
assuming you haven't yet created one, but I think these commands will
still work, even if you have; you'll just have nested subvolumes in
that case.  Personally, I prefer alphanumeric subvolume naming, but
I'll use the @ convention for clarity.

These instructions assume that you've booted from your flash disk.
Please note that I'm not sure if this will work if your flash rootfs
is installed on subvolid=5...

btrfs sub list /
# If there is no output, make a note of it.

mkdir /tmp/tank
mount -o noatime UUID="c2cf44d3-28e0-492a-9d51-00a41b71428d" /tmp/tank
btrfs sub list /tmp/tank

cd /tmp/tank
btrfs sub create @tank
ls -1 | egrep -v '@tank' | while read d; do mv "$d" @tank/; done
# There!  Now all your data is in a subvolume
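
A variant of that move step that avoids parsing ls output (a classic shell pitfall with unusual filenames) would look like the following.  It is demonstrated on a throwaway directory, not a real btrfs mount; $tmp stands in for the mounted volume:

```shell
# Move every top-level entry except @tank into @tank, using a glob
# instead of parsing `ls`.
tmp=$(mktemp -d)
mkdir "$tmp/@tank"
touch "$tmp/a" "$tmp/b c"          # note: a filename with a space
for entry in "$tmp"/*; do
    [ "$(basename "$entry")" = "@tank" ] && continue
    mv "$entry" "$tmp/@tank/"
done
ls "$tmp/@tank"
```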

btrfs sub snap -r / /rootfs-snapshot
btrfs send /rootfs-snapshot | btrfs receive /tmp/tank
# if this fails for any reason, send the output to this mailing list,
and mention if "btrfs sub list /"
# had any output.
# then umount /tmp/tank && sync

btrfs property set /tmp/tank/rootfs-snapshot ro false && mv rootfs-snapshot @
# There! now everything except for /boot and /boot/efi is on your raid6

# You have a choice between following the instructions below or
# editing your grub command line.  Personally I would choose to edit
# the grub command line.
# If you choose to do it this way, run these commands:
#
# sync
# umount /tmp/tank
# sync
# reboot
#
# edit your grub command line by hitting the "e" key when the menu
# comes up.  Find the bit that says root= and make sure it looks
# something like
# root=/dev/sdb ro rootflags=subvol=@

Boot it, and scroll down to the "IF YOU FOLLOWED" section.

# alternatively #

cd /tmp/tank/@
mount none -t proc proc
mount -o bind /dev dev
mount none -t sysfs sys
mount -o bind /boot boot
mount -o bind /boot/efi boot/efi
chroot ./


 IF YOU FOLLOWED the "edit grub command line" method, start here 

editor /etc/fstab
# change the first line, for /

UUID=c2cf44d3-28e0-492a-9d51-00a41b71428d  /  btrfs  subvol=/@,noatime  0  0

# if you have a line for your raid6 subvolume, find it, and change it to

UUID=c2cf44d3-28e0-492a-9d51-00a41b71428d  /some/location  btrfs  subvol=/@tank,noatime  0  0

# You just need to run the following commands to get grub to use the
# rootfs on your raid6

cp -arx /boot /boot.bak
update-grub
grep '/@/' /boot/grub/grub.cfg
# this should output something!  If it doesn't, seek help on IRC.

# What I would personally do is manually edit /boot/grub/grub.cfg...
# At any rate, if that grep command outputted nothing, your system won't boot.
# Restore booting from your /dev/sda with the following command:
# cp -arx /boot.bak/* /boot
# /\ this /\ should allow your usb stick to continue booting.
# Continuing could trash your grub installation on /dev/sda if
# grep '/@/' /boot/grub/grub.cfg didn't provide any output.

# If that grep verified that you'll be booting to @ as your rootfs,
# then it's safe to do the following:
grub-install /dev/sda
sync

 IF YOU FOLLOWED the "edit grub command line" method, you're done! ###

### alternatively (continued) 
exit
umount proc dev sys 

Re: Install to or Recover RAID Array Subvolume Root?

2016-04-25 Thread Nicholas D Steeves
On 22 April 2016 at 06:44, David Alcorn  wrote:
>
> First, I verified that while the Debian Installer will install to a
> pre set default BTRFS RAID6 subvolume, the Grub install step fails.
> The alternative to restore installation to a RAID6 subvolume requires
> installation to a non RAID6 subvolume and then send|receive the
> snapshotted installation to the array.  To prepare for this attempt, I
> reinstalled BTRFS (Debian stable) to a flash drive using separate
> partitions for efi, /boot/ and / (in a subvolume).  The default
> subvolume was set to 5 for both the flash / partition and also the
> RAID6 array.  I used a separate /boot partition to reduce complexity.
> Both the kernel and btrfs tools were upgraded to 4.4.  I soon
> thereafter got lost.

1. Have you partially filled your RAID6 array?  If so, do you have
current backups for everything you care about?
2. Please indicate whether you prefer to mount by LABEL, UUID, or /dev
3. If it's by /dev, please send the output of: parted -l
4. If it's by LABEL or UUID, please also send the output of: blkid

Sincerely,
Nicholas


Re: primary location of btrfs-progs changelog: The wiki?

2016-04-25 Thread Nicholas D Steeves
On 25 April 2016 at 07:36, David Sterba  wrote:
> The conversion looks relatively ok, indentation could be 2 spaces and
> all bullet lists with '*'. Thanks.

Done.  I also added one line before each new version.  I've attached
it, since it's just one file; however, if you prefer I can clone your
repo on github and submit it that way.

Cheers,
Nick


changelog.gz
Description: GNU Zip compressed data


Re: primary location of btrfs-progs changelog: The wiki?

2016-04-25 Thread Nicholas D Steeves
oops, that gzip -9 shouldn't be there :-/


Re: primary location of btrfs-progs changelog: The wiki?

2016-04-25 Thread Nicholas D Steeves
On 25 April 2016 at 07:12, David Sterba <dste...@suse.cz> wrote:
> On Fri, Apr 22, 2016 at 08:41:36PM -0400, Nicholas D Steeves wrote:
>> I'm just wondering where the primary location of the btrfs-progs
>> changelog is located
>
> At the moment it's the release announcement in this mailinglist, that
> gets copied to the wiki with some formatting adjustments. I'm willing to
> copy the announcement text to a file in git (and will do for the next
> release). But at the moment I won't add all the past changelogs so if
> anybody wants to do that I'l appreciate that.

I'd be happy to.  Are you looking for something like:

curl https://btrfs.wiki.kernel.org/index.php/Changelog | html2text |
sed '0,/(announcement)/d;/By version (linux kernel)/Q' | gzip -9 >
changelog

With some formatting adjustments?

Cheers,
Nicholas


primary location of btrfs-progs changelog: The wiki?

2016-04-22 Thread Nicholas D Steeves
Hi,

I'm just wondering where the primary location of the btrfs-progs
changelog is located, because I'd like to include upstream changes in
the Debian package.  Is it really the wiki?  If so, it would seem my
options are copying+pasting with every release, or writing a script to
download the page, convert it to text, and then do something like cut
everything before By version (btrfs-progs) and everything after By
version (linux kernel).
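
For the cutting step, a sed address range over the two heading strings quoted above might be enough.  This is a sketch on fabricated sample input, not the real wiki page (the changelog line is invented):

```shell
# Keep only the lines between the two section headings, excluding the
# headings themselves.  The printf input stands in for the converted page.
printf '%s\n' 'intro text' \
    'By version (btrfs-progs)' \
    'v4.5.1 (Apr 2016): bugfixes' \
    'By version (linux kernel)' \
    'trailing text' |
sed -n '/By version (btrfs-progs)/,/By version (linux kernel)/p' |
sed '1d;$d'
```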

Sincerely,
Nicholas


Re: dstat shows unexpected result for two disk RAID1

2016-04-22 Thread Nicholas D Steeves
Everyone, thank you very much for helping me to learn more.  Getting
up to speed takes forever!  I posted an idea relating to this thread,
though it's more read-latency than throughput related.  I'm not sure
what the right way to link overlapping threads is, so here is how to
find it:

Date: Fri, 22 Apr 2016 18:14:00 -0400
Message-ID: 

Re: [PATCH v8 00/27][For 4.7] Btrfs: Add inband (write time) de-duplication framework

2016-04-22 Thread Nicholas D Steeves
Hi Qu,

On 6 April 2016 at 01:22, Qu Wenruo <quwen...@cn.fujitsu.com> wrote:
>
>
> Nicholas D Steeves wrote on 2016/04/05 23:47 -0400:
>>
>> It is unlikely that I will use dedupe, but I imagine your work will
>> apply tot he following wishlist:
>>
>> 1. Allow disabling of memory-backend hash via a kernel argument,
>> sysctl, or mount option for those of us have ECC RAM.
>>  * page_cache never gets pushed to swap, so this should be safe, no?
>
> And why it's related to ECC RAM? To avoid memory corruption which will
> finally lead to file corruption?
> If so, it makes sense.

Yes, my assumption is that a system with ECC will either correct the
error, or that an uncorrectable event will trigger the same error
handling procedure as if the software checksum failed.

> Also I didn't get the point when you mention page_cache.
> For hash pool, we didn't use page cache. We just use kmalloc, which won't be
> swapped out.
> For file page cache, it's not affected at all.

My apologies, I'm still very new to this, and my "point" only
demonstrates my lack of understanding.  Thank you for directing me to
the kmalloc-related sections.

>> 2. Implementing an intelligent cache so that it's possible to offset
>> the cost of hashing the most actively read data.  I'm guessing there's
>> already some sort of weighed cache eviction algorithm in place, but I
>> don't yet know how to look into it, let alone enough to leverage it...
>
>
> I not quite a fan of such intelligent but complicated cache design.
> The main problem is we are putting police into kernel space.
>
> Currently, either use last-recent-use in-memory backend, or use all-in
> ondisk backend.
> For user want more precious control on which file/dir shouldn't go through
> dedupe, they have the btrfs prop to set per-file flag to avoid dedupe.

I'm looking into a project for some (hopefully) safe,
low-hanging-fruit read optimisations, and read that

Qu Wenruo wrote on 2016/04/05 11:08 +0800:
> In-memory backend is much like an experimental field for new ideas,
> as it won't affect on-disk format at all."

Do you think that last-recent-use in-memory backend could be used in
this way?  Specifically, I'm wondering if the even|odd PID method of
choosing which disk to read from could be replaced with the following
method for rotational disks:

The last-recent-use in-memory backend stores the value of last
allocation group (and/or transaction ID, or something else), with an
attached value of which disk did the IO.  I imagine it's possible to
minimize seeks by choosing the disk by getting the absolute value
difference between requested_location and last-recent-use_location of
each disk with a simple static_cast.

Would the addition of that value pair (recent-use_location, disk) keep
things simple and maybe prove to be useful, or is last-recent-use
in-memory the wrong place for it?
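
The nearest-last-position policy described above can be toyed with in a few lines of plain arithmetic.  This is not kernel code; the "disk state" is just two remembered offsets, and all numbers are invented:

```shell
# Pick the mirror whose last-served offset is closest to a new request.
# Usage: pick_disk REQUEST LAST0 LAST1  -> prints 0 or 1
pick_disk() {
    req=$1 last0=$2 last1=$3
    d0=$((req - last0)); [ "$d0" -lt 0 ] && d0=$((-d0))   # |req - last0|
    d1=$((req - last1)); [ "$d1" -lt 0 ] && d1=$((-d1))   # |req - last1|
    if [ "$d0" -le "$d1" ]; then echo 0; else echo 1; fi
}

pick_disk 1000 900 5000    # disk 0's last position is nearer
pick_disk 1000 5000 900    # disk 1's last position is nearer
```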

Thank you for taking the time to reply,
Nicholas


Re: btrfs forced readonly + errno=-28 No space left

2016-04-22 Thread Nicholas D Steeves
On 21 April 2016 at 18:44, Chris Murphy  wrote:
> On Thu, Apr 21, 2016 at 6:53 AM, Martin Svec  wrote:
>> Hello,
>>
>> we use btrfs subvolumes for rsync-based backups. During backups btrfs often 
>> fails with "No space
>> left" error and goes to readonly mode (dmesg output is below) while there's 
>> still plenty of
>> unallocated space:
>
> Are you snapshotting near the time of enospc? If so it's a known
> problem that's been around for a while. There are some suggestions in
> the archives but I think the main thing is to back off on the workload
> momentarily, take the snapshot, and then resume the workload. I don't
> think it has to come to a complete stop but it's a lot more
> reproducible with heavy writes.

Is this known problem specific to heavy writes + take a snapshot + -o
compress (either zlib or lzo), or does this enospc also affect the
more simple heavy writes + take a snapshot case?  Is there a greater
likelihood of running into it if using compression?

As for a workaround...is there a command like batch that can be used
to schedule things for periods of low IO?  Can a sync, btrfs fi sync
/mountpoint, or btrfs sub sync /sub_mountpoint before taking a
snapshot prevent it?  If the answer to all of these is no, which of
the following would be a good candidate for adding this support to:
http://gnqs.sourceforge.net/docs/starter_pack/alternatives/index.html

Best regards,
Nicholas


Re: Install to or Recover RAID Array Subvolume Root?

2016-04-19 Thread Nicholas D Steeves
On 19 April 2016 at 07:14, Austin S. Hemmelgarn  wrote:
> The closest I've ever seen for Debian to a Gentoo stage3 installation (the
> developers discourage usage of stage2 installs these days unless you're
> bootstrapping _everything_ yourself) is debbootstrap, and I could never get
> that to work reliably.

Fair, I remember Gentoo stage3+documentation was more straightforward
than debootstrap+documentation.  There are a couple of wrappers
around debootstrap and better documentation now, but I think there
are still rare times when bootstrapping from unstable or testing will
fail because a major transition is in progress (like the libc5 ->
libc6 transition, the uClibc -> glibc one, possibly the multiarch or
multilib transition, sometimes a GCC one, etc.)

> FWIW, the installer wasn't the only reason I switched to Gentoo, the two
> bigger ones for me were wanting newer software versions and needing
> different sets of features enabled on packages than the default builds, I
> ended up switching at the point that I was building more locally than I was
> installing from the repositories.

Ah yes, a convenient stream of fresh updates and the power of USE
flags :-)  For my needs, security-only updates + a backport whenever I
need a more up-to-date package of saves me time.  That's what the
choice of tool comes down to, right?  Does what you need it to, and
saves you time.

Best regards,
Nicholas


Re: Install to or Recover RAID Array Subvolume Root?

2016-04-19 Thread Nicholas D Steeves
On 18 April 2016 at 23:06, David Alcorn  wrote:
> Nicolas:
>
> My flash drive uses BTRFS and I am comfortable with your instructions
> with one exception.  What does "update /etc/default/grub" mean?
>
> Currently, I am waiting for a scrub to verify that all is in good
> order before fixing the problem.

I meant that more as a general precaution and good habit.  The most
common check/change would be to make sure the "resume=foo" option
matches the UUID or /dev/sdX of the swap partition; it's mostly
relevant to laptop users.

More to the point, as Austin and Chris mentioned the tricky bit is
going to get GRUB to boot from raid6 profile btrfs if your /boot is
part of your btrfs volume.  I honestly don't know if it will work...
Do you have a separate /boot partition?  What is your /dev/sda being
used for?  UEFI firmware loads GRUB's EFI payload, which loads the
different stages of grub that allow file system access, which is
necessary for grub to be able to find the kernel.  The EFI payload is
installed to your FAT-formatted ESP partition, which is usually
mounted at /boot/efi.  I also suspect that without a separate
/boot partition GRUB won't be able to find the kernel
(/boot/vmlinuz-4.4.0-1-amd64).  If I remember correctly GRUB's stage1
talks to your motherboard's firmware, stage2 enables filesystem access
(/boot/grub/x86_64-efi/btrfs.mod), and stage3 loads the kernel.  En
bref, if GRUB has insufficient support for btrfs' raid6 profile then
grub will either be unable to access btrfs.mod, or btrfs.mod will be
unable to enable access /boot/vmlinuz-4.4.0-1-amd64.

I suspect the following worst-case scenario if you don't have a
partition you can use for /boot, and didn't leave any unallocated
space on any of your drives, and if you can't shrink something like a
swap partition to make room for /boot:  No need to backup/restore if
you have a usb port to dedicate to /boot.  A more exotic solution
would be using a small SATADOM to hold it, but then you lose a SATA
port ;-)  After sending the rootfs of your USB flash installation to a
subvolume of your raid6, you can manually use the GRUB command line on
your existing USB stick to attempt to boot the rootfs subvolume of
your raid6.

Cheers,
Nicholas


Re: Install to or Recover RAID Array Subvolume Root?

2016-04-18 Thread Nicholas D Steeves
On 18 April 2016 at 01:22, David Alcorn  wrote:
> The goal is to install to a subvolume on the array
> without disturbing date on other array subvolumes.
>
> I erred and shutdown my NAS during a balance.  Grub lost track of my
> root.  Root was on RAID 6 array subvolid 257.  I can boot a different
> root from a USB flash drive but neither update-grub not install-grub
> sees my old root on array subvolid 257.  I am happy to either recover
> or lose array subvolid 257 but do not want to lose data on other array
> subvol's.  I prefer to have my root on the array rather than a flash
> drive.  The balance completed successfully after I booted from the
> flash drive.

Is your flash drive formatted btrfs?  If it is, you could always
snapshot it, send the snapshot to your array, set property of that
subvolume to RW, chroot, update fstab to mount / with the appropriate
subvol=option, update /etc/default/grub, reinstall grub and
update-grub, and reboot with your / as a subvolume on your array.  I'm
in the process of documenting how to do this on the Debian wiki.
Please let me know if I should put a rush on it.  It uses the subvol=
option rather than changing the volume's default subvol.

Cheers,
Nicholas


Re: Install to or Recover RAID Array Subvolume Root?

2016-04-18 Thread Nicholas D Steeves
On 18 April 2016 at 11:52, Austin S. Hemmelgarn  wrote:
> On 2016-04-18 11:39, Chris Murphy wrote:
>>
>> On Mon, Apr 18, 2016 at 9:15 AM, Austin S. Hemmelgarn
>>  wrote:
> Like I said in one of my earlier e-mails though, these kind of limitations
> are part of why I switched to Gentoo, there's no GUI installer, but you can
> put the system together however the hell you want (which is especially nice
> with BTRFS, because none of the installers out there will let you use BTRFS
> on top of LVM, which is useful for things like BTRFS raid1 on top of
> DM-RAID0).

Limitations?  ;-)  Debian has a variety of bootstrapping methods,
though I forget if they're more analogous to starting a Gentoo
installation from stage2 or stage3...I haven't used Gentoo since 2002.
Please consult the following doc for one of the methods:
https://www.debian.org/releases/stable/amd64/apds03.html.en

Best regards,
Nicholas


btrfrs send ... | ... receive ... stores files sparsely?

2016-04-15 Thread Nicholas D Steeves
Hi,

I happened to notice this when checking free space of my backup and
primary system.  I'll use an example of a file that won't have any
private or confidential information.  For du -hc
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache; ls -alh
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache; sha512sum
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache

On the sending system:
11M ./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache
11M total
-rw-r--r-- 1 kdm nogroup 11M Apr  2 18:00
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache
0ba53df610f35ef5170fe33fda4304456f4df2e997fa06467f8f6cfc89adc7da1698a1882929df56ce6be0e0846380cccfa411b4c7857f10a5c23d7797cb
 ./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache

On the receiving system:
64K ./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache
64K total
-rw-r--r-- 1 114 nogroup 11M Apr  2 18:00
./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache
0ba53df610f35ef5170fe33fda4304456f4df2e997fa06467f8f6cfc89adc7da1698a1882929df56ce6be0e0846380cccfa411b4c7857f10a5c23d7797cb
 ./var/tmp/kdecache-kdmtjNM8H/icon-cache.kcache

The only thing I can think of is that something in btrfs send ... |
... receive ... is converting to sparse storage.  Is this intentional?
 I suppose with a COW filesystem preallocating empty space to prevent
fragmentation doesn't work, because as soon as that
cache/database/whatever_file changes the filesystem COWs the changes
to a location that will almost certainly require a seek...  That said,
will the way btrfs-progs is doing it cause similar issues with
converting to sparse storage that I've observed with tar and rsync?
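
The du-versus-ls discrepancy above is the classic sparse-file signature, and it is reproducible without btrfs at all.  A sketch (exact du figures vary by filesystem):

```shell
# A file with a 10 MiB hole: large apparent size, almost no blocks used.
f=$(mktemp)
truncate -s 10M "$f"                 # writes no data, only extends the size
apparent=$(stat -c %s "$f")          # logical size in bytes (what ls -l shows)
used_kb=$(du -k "$f" | cut -f1)      # allocated size in KiB (what du shows)
echo "apparent=$apparent bytes, allocated=$used_kb KiB"
rm -f "$f"
```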

Best regards,
Nicholas


Re: [PATCH 2/2] btrfs: do not write corrupted metadata blocks to disk

2016-04-05 Thread Nicholas D Steeves
Hi Alex,

On 13 March 2016 at 05:51, Alex Lyakas <a...@zadarastorage.com> wrote:
> Nicholas,
>
> On Sat, Mar 12, 2016 at 12:19 AM, Nicholas D Steeves <nstee...@gmail.com> 
> wrote:
>> On 10 March 2016 at 06:10, Alex Lyakas <alex.bols...@gmail.com> wrote:
>> Does this mean there is a good chance that everyone has corrupted
>> metadata?
> No, this definitely does not.
>
> The code that I added prevents btrfs from writing a metadata block, if
> it somehow got corrupted before being sent to disk. If it happens, it
> indicates a bug somewhere in the kernel. For example, if some other
> kernel module erroneously uses a page-cache entry, which does not
> belong to it (and contains btrfs metadata block or part of it).

Oh wow, I didn't know that was possible.  If I understand correctly,
this patch makes using bcache a little bit safer?  (I don't use it
since I'm too short on free time to risk what is--I suspect--something
that radically increases the chances of having to restore from backup)

>> Is there any way to verify/rebuild it without wipefs+mkfs+restore from 
>> backups?
> To verify btrfs metadata: unmount the filesystem and run "btrfs check
> ...". Do not specify the "repair" parameter. Another way to verify is
> to run "btrfs-debug-tree" and redirect its standard output to
> /dev/null. It should not print anything to standard error. But "btrfs
> check" is faster.

Ah, that's exactly what I was looking for!  Thank you.  It took
forever, and brought me back to what it was like to fsck large ext2
volumes.  Is btrfs check conceptually identical to a read-only fsck of
an ext2 volume?  If not, how does it differ?

Are the following sort of errors still an issue?:
Extent back ref already exists for 2148837945344 parent 0 root 257
leaf parent key incorrect 504993210368
bad block 504993210368
( https://btrfs.wiki.kernel.org/index.php/Btrfsck )

Cheers,
Nicholas


Re: dstat shows unexpected result for two disk RAID1

2016-04-05 Thread Nicholas D Steeves
On 11 March 2016 at 20:20, Chris Murphy <li...@colorremedies.com> wrote:
> On Fri, Mar 11, 2016 at 5:10 PM, Nicholas D Steeves <nstee...@gmail.com> 
> wrote:
>> P.S. Rather than parity, I mean instead of distributing into stripes, do a 
>> copy!
>
> raid56 by definition are parity based, so I'd say no that's confusing
> to turn it into something it's not.

I just found the Multiple Device Support diagram.  I'm trying to
figure out how hard it will be for me to get up to speed, because I've
only ever read about filesystems casually and informally.  I worry
that because I didn't study filesystem design in school, and because
everything I worked on was in C++, the level of sophistication of the
design might be beyond what I can learn.  What do you think?  Can
you recommend any books on filesystem design that will provide what
is necessary to understand btrfs?

Cheers,
Nicholas


Re: btrfs-progs4.4 with linux-3.16.7 (with truncation of extends patch)

2016-04-05 Thread Nicholas D Steeves
Dear Duncan,

Gmail seems to have mangled the draft of my reply :-/  It's at the bottom.

On 06/02/16 12:57 AM, Duncan wrote:

Most of the rest of the userspace tools, in particular, btrfs scrub,
subvolume, balance, device, filesystem, send, receive, etc, work by
making kernel calls to do the actual work in any case, and they will
use the old calls if they need to. The compatibility discussion,
meanwhile, is on making mkfs.btrfs (and btrfs-convert) check the
running kernel and taking its defaults from what that kernel supports,
instead of choosing arbitrary defaults that may be better when
supported, but that older kernels don't actually support. Of course
there will still be options to set these as desired regardless of
defaults, just as there are now, so people using for instance booted
to an old recovery kernel for system maintenance can still choose
whatever options that version of mkfs.btrfs supports if they know
they'll actually be mounting with a newer kernel, but the idea is
simply to have mkfs.btrfs act more sanely /by/ /default/ when run on
old kernels, so those same old kernels can actually mount a filesystem
created with defaults. Along that line, as a distro maintainer of the
btrfs-progs package, you may wish to patch the mkfs.btrfs defaults to
what your kernel supports.

Btrfs-progs will probably ship with kernel-sensitive defaults some
time in the future (userspace 4.5 release, probably), but it doesn't
do so yet...
--

Thank you very much for taking the time to write such a thorough
reply.  I'm not the maintainer of Debian's btrfs-progs package, but I
am investigating the issues preventing the addition of btrfs-progs-4.4
to the backports repository.  [Edit: sorry it took me so long to
reply; I've been swamped with work.  In the meantime, v4.4 has made it
into backports without warnings or compatibility checks, so I want to
get my facts straight asap and patch the package with some kind of
notice/alert, if only through debian/NEWS, since there isn't currently
a way to depend on a particular kernel series, or even on a kernel
version <= 4.4.0.]
Cheers,
Nicholas


Re: [PATCH v8 00/27][For 4.7] Btrfs: Add inband (write time) de-duplication framework

2016-04-05 Thread Nicholas D Steeves
On 4 April 2016 at 12:55, David Sterba  wrote:
>> >> Not exactly. If we are using unsafe hash, e.g MD5, we will use MD5 only
>> >> for both in-memory and on-disk backend. No SHA256 again.
>> >
>> > I'm proposing unsafe but fast, which MD5 is not. Look for xxhash or
>> > murmur. As they're both order-of-magnitutes faster than sha1/md5, we can
>> > actually hash both to reduce the collisions.
>>
>> Don't quite like the idea to use 2 hash other than 1.
>> Yes, some program like rsync uses this method, but this also involves a
>> lot of details, like the order to restore them on disk.
>
> I'm considering fast-but-unsafe hashes for the in-memory backend, where
> the speed matters and we cannot hide the slow sha256 calculations behind
> the IO (ie. no point to save microseconds if the IO is going to take
> milliseconds).
>
>> >> In that case, for MD5 hit case, we will do a full byte-to-byte
>> >> comparison. It may be slow or fast, depending on the cache.
>> >
>> > If the probability of hash collision is low, so the number of needed
>> > byte-to-byte comparisions is also low.
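The strategy discussed above (a fast but possibly weak hash to index
blocks, with a byte-to-byte comparison confirming every hit) can be
sketched in a few lines.  This is a hypothetical toy illustration, not
the actual kernel patchset; MD5 here stands in for a genuinely fast
hash like xxhash or murmur:

```python
import hashlib


class DedupIndex:
    """Toy in-memory dedup index: a fast (possibly collision-prone)
    hash keys the table, and a full byte-for-byte comparison confirms
    each hit, so a hash collision can never merge two distinct blocks."""

    def __init__(self):
        self.blocks = {}  # digest -> list of stored blocks

    def insert(self, block: bytes) -> bool:
        """Return True if `block` duplicates an already-stored block."""
        digest = hashlib.md5(block).digest()  # stand-in for xxhash/murmur
        for stored in self.blocks.setdefault(digest, []):
            if stored == block:  # byte-to-byte confirmation on hash hit
                return True
        self.blocks[digest].append(block)
        return False


idx = DedupIndex()
print(idx.insert(b"a" * 4096))  # False: first copy, stored
print(idx.insert(b"a" * 4096))  # True: duplicate detected
```

If collisions are rare, the byte-to-byte comparisons are rare too, so
the common-case cost is just the fast hash.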

It is unlikely that I will use dedupe, but I imagine your work will
apply to the following wishlist:

1. Allow disabling of the in-memory backend's hash via a kernel
argument, sysctl, or mount option, for those of us who have ECC RAM.
* The page cache never gets pushed to swap, so this should be safe, no?
2. Implement an intelligent cache so that it's possible to offset
the cost of hashing the most actively read data.  I'm guessing there's
already some sort of weighted cache eviction algorithm in place, but I
don't yet know how to look into it, let alone enough to leverage it...
* On the topic of leaning on the cache, I've been thinking about
ways to optimize reads while minimizing seeks on multi-spindle raid1
btrfs volumes.  I'm guessing that someone will commit a solution
before I manage to teach myself enough about filesystems to contribute
something useful.

That's it, in terms of features I want ;-)

It's probably a well-known fact, but on my 1200-series Xeon v3
(Haswell), for 8192-byte blocks, sha512 is roughly 40 to 50% faster
than sha256, and 40 to 50% slower than sha1.
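A quick way to reproduce this kind of comparison (a sketch only:
Python's hashlib is merely a proxy for the kernel's crypto routines,
and absolute numbers depend heavily on the CPU):

```python
import hashlib
import time


def throughput_mb_s(algo: str, block: bytes, rounds: int = 2000) -> float:
    """Hash `rounds` copies of `block` and return throughput in MB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algo, block).digest()
    return len(block) * rounds / (time.perf_counter() - start) / 1e6


block = b"\xab" * 8192  # 8 KiB, matching the block size mentioned above
for algo in ("sha1", "sha256", "sha512"):
    print(f"{algo:7s} {throughput_mb_s(algo, block):8.0f} MB/s")
```

On 64-bit CPUs sha512's advantage over sha256 comes from operating on
64-bit words, so it does fewer rounds per byte of input.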

Wish I could do more right now!
Nicholas


Re: unable to mount btrfs partition, please help :(

2016-03-19 Thread Nicholas D Steeves
On 19 March 2016 at 21:34, Chris Murphy  wrote:
> On Sat, Mar 19, 2016 at 5:35 PM, Patrick Tschackert  
> wrote:
 $ uname -a
 Linux vmhost 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u4
 (2016-02-29) x86_64 GNU/Linux
>>>This is old. You should upgrade to something newer, ideally 4.5 but
>>>4.4.6 is good also, and then oldest I'd suggest is 4.1.20.
>>
>> Shouldn't I be able to get the newest kernel by executing "apt-get update && 
>> apt-get dist-upgrade"?
>> That's what I ran just now, and it doesn't install a newer kernel. Do I 
>> really have to manually upgrade to a newer one?
>
> I'm not sure. You might do a list search for debian, as I know debian
> users are using newer kernels that they didn't build themselves.
>
>
>> On top of the sticky situation i'm already in, i'm not sure if I trust 
>> myself manually building a new kernel. Should I?

If you have Debian backports enabled (which I assume you do, since
you're running the version of btrfs-progs that was backported without
a warning not to use it with old kernels), then you can try:

apt-get install -t jessie-backports linux-image-4.3.0-0.bpo.1-amd64

linux-4.3.x was a complete mess on my laptop (a Thinkpad X220, which
is quite well supported), and I'm not sure whether the problems were
driver-related or btrfs-related.  It was bad enough that I started
tracking linux-4.4 at rc1.

If you don't want to try building your own kernel, I'd file a bug
report against linux-image-amd64 asking for a backport of linux-4.4,
which is in Stretch/testing; I'm surprised it hasn't been backported
yet...  The only issue I remember is an error message when booting, I
think because the microcode interface changed between 4.3.x and 4.4.x.
Installing microcode-related packages from backports is how I think I
worked around this.

Alternatively, if you want to build your own kernel you might be able
to install linux-image from backports, download and untar linux-4.1.x
somewhere, and then copy the config from /boot/config-4.3* to
somedir/linux-4.1.x/.config.

I uploaded two scripts to github that I've been using for ages to
track the upstream LTS kernel branch that Debian didn't choose.  You
can find them here:

https://github.com/sten0/lts-convenience

All those sync and btrfs sub sync lines are there because I always
seem to run into strange issues when adding and removing snapshots.

Cheers,
Nicholas


Re: [PATCH v7 01/20] btrfs: dedup: Introduce dedup framework and its header

2016-03-15 Thread Nicholas D Steeves
On 13 March 2016 at 12:55, Duncan <1i5t5.dun...@cox.net> wrote:
> NeilBrown posted on Sun, 13 Mar 2016 22:33:22 +1100 as excerpted:
>
>> On Sun, Mar 13 2016, Qu Wenruo wrote:
>>
>>> BTW, I am always interested in, why de-duplication can be shorted as
>>> 'dedupe'.
>
>>> I didn't see any 'e' in the whole word "DUPlication".
>>> Or it's an abbreviation of "DUPlicatE" instead of "DUPlication"?
>>
>> The "u" in "duplicate" is pronounced as a long vowel sound, almost like
>> d-you-plicate.
>
>> To make a vowel long you can add an 'e' at the end of a word.
>
>> by analogy, "dupe" has a long "u" and so sounds like the first syllable
>> of "duplicate".
>
> As a native (USian but with some years growing up in the then recently
> independent former Crown colony of Kenya, influencing my personal
> preferences) English speaker, while what Neil says about short "u" vs.
> long "u" is correct, I agree with Qu that the "e" in dupe doesn't make so
> much sense, and would, other things being equal, vastly prefer dedup to
> dedupe, myself.
>
> However, there's some value in consistency, and given the previous dedupe
> precedent in-kernel, sticking to that for consistency reasons makes sense.
>
> But were this debate to have been about the original usage, I'd have
> definitely favored dedup all the way, as not withstanding Neil's argument
> above, adding the "e" makes little sense to me either.  So only because
> it's already in use in kernel code, but if this /were/ the original
> kernel code...
>
> So I definitely understand your confusion, Qu, and have the same personal
> preference even as a native English speaker. =:^)

I'm not sure to what degree the following is a relevant concern, and
I'm guessing it's not, other than for laughs, but to me "dedupe" reads
as "de-dupe" or "undupe".  While it functions as the inverse of the
verb "to dupe", I don't think one can "be unduped" or "be unfooled".
What is that old aphorism?  "Once duped twice shy"? ;-)

Honestly I'm surprised that a verb-form of "tuple" hasn't yet emerged,
because if it had we might be saying "detup" instead of "dedup".

Best regards,
Nicholas


Re: dstat shows unexpected result for two disk RAID1

2016-03-11 Thread Nicholas D Steeves
P.S. Rather than parity, I mean instead of distributing into stripes, do a copy!


Re: dstat shows unexpected result for two disk RAID1

2016-03-11 Thread Nicholas D Steeves
On 9 March 2016 at 23:06, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Meanwhile, while parity-raid (aka raid56) isn't as bad as it was when
> first nominally completed in 3.19, as of 4.4 (and I think 4.5 as I've not
> seen a full trace yet, let alone a fix), there's still at least one known
> bug remaining to be traced down and exterminated, that's causing at least
> some raid56 reshapes to different numbers of devices or recovery from a
> lost device to take at least 10 times as long as they logically should,
> we're talking times of weeks to months, during which time the array can
> be used, but if it's a bad device replacement and more devices go down in
> that time...
>
> Tho hopefully all the really tough problems they would have hit with N-
> way-mirroring were hit and resolved with raid56, and N-way-mirroring will
> thus be relatively simple, so hopefully it's less than the four years
> it's taking raid56.  But I don't expect to see it for another year or
> two, and don't expect to be actually use it as intended (as a more
> failure resistant raid1) for some time after that as the bugs get worked
> out, so realistically, 2-3 years.

Could the raid5 code be patched to copy/read instead of
building/checking parity?  In effect, I'm wondering if this could be
used as an alternative to the current raid1 profile, with the bonus
that it might accelerate shaking out the bugs in raid5.  Likewise,
would doing the same with the raid6 code in effect implement a 3-way
mirror distributed over n devices?
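For context on why this would be a structural change rather than a
small patch: raid5 derives its redundancy from an XOR parity stripe,
not from a copy.  A minimal sketch of parity generation and
single-stripe rebuild (illustrative only, not btrfs code):

```python
def xor_parity(stripes):
    """RAID5-style parity: byte-wise XOR across equal-length data stripes."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)


def rebuild(lost, stripes, parity):
    """Recover the stripe at index `lost` (given as None in `stripes`)
    by XOR-ing the parity with every surviving stripe."""
    acc = bytearray(parity)
    for j, stripe in enumerate(stripes):
        if j == lost:
            continue
        for i, b in enumerate(stripe):
            acc[i] ^= b
    return bytes(acc)


# A raid1-style copy, by contrast, is just the block itself on each
# device -- no computation at all, which is why "copy instead of
# parity" would really be a different allocator profile, not a tweak
# to the parity math.
data = [b"abc", b"def", b"xyz"]
parity = xor_parity(data)
print(rebuild(1, [data[0], None, data[2]], parity))  # b'def'
```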

Kind regards,
Nicholas


Re: [PATCH 2/2] btrfs: do not write corrupted metadata blocks to disk

2016-03-11 Thread Nicholas D Steeves
On 10 March 2016 at 06:10, Alex Lyakas  wrote:
> csum_dirty_buffer was issuing a warning in case the extent buffer
> did not look alright, but was still returning success.
> Let's return error in this case, and also add an additional sanity
> check on the extent buffer header.
> The caller up the chain may BUG_ON on this, for example flush_epd_write_bio 
> will,
> but it is better than to have a silent metadata corruption on disk.

Does this mean there is a good chance that everyone has corrupted
metadata?  Is there any way to verify/rebuild it without
wipefs+mkfs+restore from backups?

Best regards,
Nicholas


Re: dstat shows unexpected result for two disk RAID1

2016-03-09 Thread Nicholas D Steeves
On 9 March 2016 at 16:36, Roman Mamedov <r...@romanrm.net> wrote:
> On Wed, 9 Mar 2016 15:25:19 -0500
> Nicholas D Steeves <nstee...@gmail.com> wrote:
>
>> I understood that a btrfs RAID1 would at best grab one block from sdb
>> and then one block from sdd in round-robin fashion, or at worse grab
>> one chunk from sdb and then one chunk from sdd.  Alternatively I
>> thought that it might read from both simultaneously, to make sure that
>> all data matches, while at the same time providing single-disk
>> performance.  None of these was the case.  Running a single
>> IO-intensive process reads from a single drive.
>
> No RAID1 implementation reads from disks in a round-robin fashion, as that
> would give terrible performance giving disks a constant seek load instead of
> the normal linear read scenario.

On 9 March 2016 at 16:26, Chris Murphy <li...@colorremedies.com> wrote:
> It's normal and recognized to be sub-optimal. So it's an optimization
> opportunity. :-)
>
> I see parallelization of reads and writes to data single profile
> multiple devices as useful also, similar to XFS allocation group
> parallelization. Those AGs are spread across multiple devices in
> md/lvm linear layouts, so if you have processes that read/write to
> multiple AGs at a time, those I/Os happen at the same time when on
> separate devices.

Chris, yes, that's exactly how I thought it would work.  Roman, when I
said round-robin (please forgive my naïveté), I meant I hoped that
chunk A1 from disk0 would be read at the same time as chunk A2 from
disk1.  Can you use the btree associated with chunk A1 to put disk1 to
work reading ahead, while still searching the btree associated with
chunk A1?  Then, when disk0 finishes reading A1 into memory, A2 gets
concatenated.

If disk0 finishes reading chunk A1, change the primary read disk for
the PID to disk1 and let the reading of A2 continue, and put disk0 to
work on chunk A3 using the same method disk1 used previously.  Else,
if disk1 finishes reading A2 before disk0 finishes A1, then disk0
remains the primary read disk for the PID and disk1 begins reading A3.

That's how I thought it would work, with the scheduler able to
interrupt the readahead operation for the non-primary disk.  E.g.:
disk1 would become the primary read disk for PID2, while disk0 would
continue as primary for PID1.  And if there's a long queue of reads or
writes, then this simplest case would be limited in the following way:
disk0 and disk1 never actually get to read or write to the same chunk
<- Is this the explanation for why, for practical reasons, dstat shows
the behaviour it shows?
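For what it's worth, my understanding (not a quote of the actual
kernel code) is that btrfs of this era simply derives the raid1 mirror
from the reading task's PID, which would explain why a single
IO-intensive process sticks to one drive while two concurrent readers
can split across both:

```python
import os


def pick_mirror(pid: int, num_mirrors: int = 2) -> int:
    """Sketch of a PID-based raid1 read policy: every read issued by a
    given process lands on pid % num_mirrors, so one process always
    reads from the same device."""
    return pid % num_mirrors


# Two readers with adjacent PIDs would split across the two drives,
# but any single reader always hits the same one:
print(pick_mirror(os.getpid()))
```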

If this is the case, would it be possible for the non-primary read
disk for PID1 to tag the A[x] chunk it wrote to memory with a request
for the PID to use what it wrote to memory from A[x]?  And also for
the "primary" disk to resume from location y in A[x] instead of
beginning from scratch with A[x]?  Roman, in this case, the seeks
would be time-saving, no?

Unfortunately, I don't know how to implement this, but I had imagined
that the btree for a directory contained pointers (I'm using this term
loosely rather than programmatically) to all extents associated with
all files contained underneath it.  Or does it point to the chunk,
which then points to the extent?  At any rate, is this similar to the
dir_index of ext4, and is this the method btrfs uses?

Best regards,
Nicholas


Re: dstat shows unexpected result for two disk RAID1

2016-03-09 Thread Nicholas D Steeves
On 9 March 2016 at 16:43, Chris Murphy  wrote:
> On Wed, Mar 9, 2016 at 2:36 PM, Roman Mamedov  wrote:
>> On Wed, 9 Mar 2016 15:25:19 -0500
> This is a better qualification than my answer.
>
>>
>> Now if you want to do some more performance evaluation, check with your dstat
>> if both disks happen to *write* data in parallel, when you write to the 
>> array,
>> as ideally they should. Last I checked they mostly didn't, and this almost
>> halved write performance on a Btrfs RAID1 compared to a single disk.
>
> I've found it to be about the same or slightly less than single disk.
> But most of my writes to raid1 are btrfs receive.

Here are my results for
  pv /tmpfs_mem_disk/deleteme.tar -pabet > /scratch/deleteme.tar
after I cleared all caches.  pv states the average rate was 77MiB/s,
which seems low for a 4GB file.  Here is the dstat section showing the
peak write rates:

----system---- -dsk/total- --dsk/sdb-- --dsk/sdd--
     time     | read  writ: read  writ: read  writ
09-03 16:48:43|   48k  145M:   074M:  48k   72M
09-03 16:48:44|  0   120M:   074M:   0 46M
09-03 16:48:45| 840k  144M:   074M:   0 70M
09-03 16:48:46|  0   147M:   080M:   0 67M

and for reading many >200MB raw WAVs from one subvolume while writing
a ~20GB tar to another subvolume:

09-03 16:59:57|  56M  103M:   054M:  56M   50M
09-03 16:59:58|  48M  118M:  32k   56M:  48M   62M
09-03 16:59:59|  54M  113M:   057M:  54M   55M
09-03 17:00:00|  43M  116M:   054M:  43M   63M
09-03 17:00:01|  60M  118M:   064M:  60M   54M
09-03 17:00:02|  57M   97M:  32k   48M:  54M   49M


Re: dstat shows unexpected result for two disk RAID1

2016-03-09 Thread Nicholas D Steeves
grr.  Gmail is terrible :-/

I understood that a btrfs RAID1 would at best grab one block from sdb
and then one block from sdd in round-robin fashion, or at worst grab
one chunk from sdb and then one chunk from sdd.  Alternatively I
thought that it might read from both simultaneously, to make sure that
all data matches, while at the same time providing single-disk
performance.  None of these was the case.  Running a single
IO-intensive process reads from a single drive.

Did I misunderstand the documentation and is this normal, or is this a bug?
Nicholas

On 9 March 2016 at 15:21, Nicholas D Steeves <nstee...@gmail.com> wrote:
> Hello everyone,
>
> I've run into an unexpected behaviour with my two-disk RAID1.  I mount
> with UUIDs, because sometimes my USB disk gets /dev/sdc instead of
> /dev/sdd.  The two elements of my RAID1 are currently sdb and sdd.
>
> dstat -tdD total,sdb,sdc,sdd
>
> It seems that per process, reads come from either sdb or sdd.  This
> surprises me, because I understood that a btrfs RAID1


dstat shows unexpected result for two disk RAID1

2016-03-09 Thread Nicholas D Steeves
Hello everyone,

I've run into an unexpected behaviour with my two-disk RAID1.  I mount
with UUIDs, because sometimes my USB disk gets /dev/sdc instead of
/dev/sdd.  The two elements of my RAID1 are currently sdb and sdd.

dstat -tdD total,sdb,sdc,sdd

It seems that per process, reads come from either sdb or sdd.  This
surprises me, because I understood that a btrfs RAID1


Re: incomplete conversion to RAID1?

2016-03-09 Thread Nicholas D Steeves
On 4 March 2016 at 07:55, Duncan <1i5t5.dun...@cox.net> wrote:
> Nicholas D Steeves posted on Thu, 03 Mar 2016 16:21:53 -0500 as excerpted:
>
>>> Of course either way assumes you don't run into some bug that will
>>> prevent removal of that chunk, perhaps exactly the same one that kept
>>> it from being removed during the normal raid1 conversion.  If that
>>> happens,
>>> the devs may well be interested in tracking it down, as I'm not aware
>>> of anything similar being posted to the list.
>>
>> I've made up-to-date backups of this volume.  Is one of these two
>> methods more likely to trigger a potential bug?  Also, this potential
>> bug, if it's not just cosmetic wouldn't silently corrupt something in my
>> pool, right?  It's when things won't fail loudly and immediately that
>> concerns me, but if that's not an issue then I'd prefer to try to gather
>> potentially useful data.
>
> I don't actually expect a bug.


I used btrfs balance start -dprofiles=single, because you mentioned
you usually use btrfs balance start -dusage=0, in the hopes that I
might be able to find a useful bug.  Nope!  100% trouble free, and
very fast.

Thank you,
Nicholas


Re: incomplete conversion to RAID1?

2016-03-03 Thread Nicholas D Steeves
Hi Duncan,

> Of course either way assumes you don't run into some bug that will
> prevent removal of that chunk, perhaps exactly the same one that kept it
> from being removed during the normal raid1 conversion.  If that happens,
> the devs may well be interested in tracking it down, as I'm not aware of
> anything similar being posted to the list.

I've made up-to-date backups of this volume.  Is one of these two
methods more likely to trigger a potential bug?  Also, this potential
bug, if it's not just cosmetic wouldn't silently corrupt something in
my pool, right?  It's when things won't fail loudly and immediately
that concerns me, but if that's not an issue then I'd prefer to try to
gather potentially useful data.

Thanks again for such a great, and super informative reply.  I've been
swamped with work so haven't finished replying to your last one (Re:
btrfs-progs4.4 with linux-3.16.7 (with truncation of extends patch),
Fri, 05 Feb 2016 21:58:26 -0800).  To briefly reply: The last 3.5
years I've spent countless hours reading everything I could find on
btrfs and zfs, and I chose to start testing btrfs in the fall of 2015.
Currently I'm working on a major update of the Debian wiki btrfs page,
I plan to package kdave's btrfsmaintenance scripts, and additionally
publish some convenience scripts I use to make staying up-to-date with
one's preferred LTS kernel a two-command affair.

One thing I'd like to see on btrfs.wiki.kernel.org is an "at a glance"
table of ranked btrfs features, according to riskiness.  Say: 1)
Safest configuration; keep backups, as always, just in case.  2)
Features that might cause issues or that only occasionally trigger
issues.  3) Still very experimental; only people who intend to help
with development and debugging should use these.  4) Risk of corrupted
data, your backups are useless.  The benefit is then all
distributions' wikis could point to this table.  I've read OpenSuSE
has patches to disable features in at least 3), and 4), and maybe in
2), so maybe it wouldn't be useful for them...but for everyone else...
:-)

Also, I think that it would be neat to have a list of subtle bugs that
could benefit from more people trying to find them, and also a list of
stuff to test that will provide the data necessary to help fix the
"btrfs pools need to be babysit" issues I've read so often about.  I'm
not really able to understand anything more complex than a simple
utility program, so the most I can help out with is writing reports,
documentation, packaging, and some distribution integration stuff.

I'll send more questions in our other thread wrt to updating the
Debian wiki next week.  It will be a bunch of stuff like "Does btrfs
send > to a file count as a backup as of linux-4.4.x, or should you
still be using another method?"

Kind regards,
Nicholas

On 3 March 2016 at 00:53, Duncan <1i5t5.dun...@cox.net> wrote:
> Nicholas D Steeves posted on Wed, 02 Mar 2016 20:25:46 -0500 as excerpted:
>
>> btrfs fi show
>> Label: none  uuid: 2757c0b7-daf1-41a5-860b-9e4bc36417d3
>> Total devices 2 FS bytes used 882.28GiB
>> devid    1 size 926.66GiB used 886.03GiB path /dev/sdb1
>> devid    2 size 926.66GiB used 887.03GiB path /dev/sdc1
>>
>> But this is what's troubling:
>>
>> btrfs fi df /.btrfs-admin/
>> Data, RAID1: total=882.00GiB, used=880.87GiB
>> Data, single: total=1.00GiB, used=0.00B
>> System, RAID1: total=32.00MiB, used=160.00KiB
>> Metadata, RAID1: total=4.00GiB, used=1.41GiB
>> GlobalReserve, single: total=496.00MiB, used=0.00B
>>
>> Do I still have 1.00GiB that isn't in RAID1?
>
> You have a 1 GiB empty data chunk still in single mode, explaining both
> the extra line in btrfs fi df, and the 1 GiB discrepancy between the two
> device usage values in btrfs fi show.
>
> It's empty, so it contains no data or metadata, and is thus more a
> "cosmetic oddity" than a real problem, but wanting to be rid of it is
> entirely understandable, and I'd want it gone as well. =:^)
>
> Happily, it should be easy enough to get rid of using balance filters.
> There are at least a two such filters that should do it, so take your
> pick. =:^)
>
> btrfs balance start -dusage=0
>
> This is the one I normally use.  -d is of course for data chunks. usage=N
> says only balance chunks with less than or equal to N% usage, this
> normally being used as a quick way to combine several partially used
> chunks into fewer chunks, releasing the space from the reclaimed chunks
> back to unallocated.  Of course usage=0 means only deal with fully empty
> chunks, so they don't have to be rewritten at all and can be directly
> reclaimed.
>
> This used to be needed somewhat often, as until /relatively/ recent
> kernels (tho a couple years ago now, 3.17 IIR

incomplete conversion to RAID1?

2016-03-02 Thread Nicholas D Steeves
Hi,

I recently moved my main system to btrfs and had to shuffle everything
around in tricky ways while migrating from LVM while maintaining at
least two copies + backup of everything.  I'm using linux-4.4.3 with
btrfs-progs-4.4.1.  The end result, after adding sdc and then
converting both metadata and data to RAID1 is:

fdisk -l output:
/dev/sdb1          4096 1943359487 1943355392 926.7G 83 Linux
/dev/sdb2    1943359488 1953523711   10164224   4.9G 82 Linux swap / Solaris

and

/dev/sdc1          4096 1943359487 1943355392 926.7G Linux filesystem
/dev/sdc2    1943359488 1951748095    8388608     4G Linux swap

btrfs fi show
Label: none  uuid: 2757c0b7-daf1-41a5-860b-9e4bc36417d3
Total devices 2 FS bytes used 882.28GiB
devid    1 size 926.66GiB used 886.03GiB path /dev/sdb1
devid    2 size 926.66GiB used 887.03GiB path /dev/sdc1

But this is what's troubling:

btrfs fi df /.btrfs-admin/
Data, RAID1: total=882.00GiB, used=880.87GiB
Data, single: total=1.00GiB, used=0.00B
System, RAID1: total=32.00MiB, used=160.00KiB
Metadata, RAID1: total=4.00GiB, used=1.41GiB
GlobalReserve, single: total=496.00MiB, used=0.00B

Do I still have 1.00GiB that isn't in RAID1?

Best regards,
Nicholas


btrfs-progs4.4 with linux-3.16.7 (with truncation of extends patch)

2016-02-05 Thread Nicholas D Steeves

Hello,

Is it safe to use btrfs-progs-4.4 with linux-3.16.7 patched with the 
following:


linux-3.16.7
Btrfs: fix truncation of compressed and inlined extents
https://git.kernel.org/linus/0305cd5f7fca85dae392b9ba85b116896eb7c1c7

The specific case I'm looking into is when a Debian user sticks with the 
default kernel, but installs btrfs-progs-4.4 from backports.  I've also 
read that there will be some userspace<->kernel compatibility checks 
added to btrfs-progs at some point, but I wasn't able to find recent 
news on its progress.


Kind regards,
Nicholas