I have a cron job which frequently deletes a subvolume and I decided I
wanted to silence the output. I remembered there was a -q option and
thought I would just quickly glance at the documentation for it to check
there wasn't some reason I had not put that in the script when I first
wrote it some
On 10/03/2021 12:07, telsch wrote:
> Dear devs,
>
> after my root partition was full, i deleted the last monthly snapshots.
> however, no space was freed.
> so far rebalancing helped:
>
> btrfs balance start -v -musage=0 /
> btrfs balance start -v -dusage=0 /
>
> i have deleted all
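The usage-filtered rebalance quoted above can be scripted to escalate the threshold gradually instead of running a full balance. A minimal sketch, assuming MNT is the mountpoint; the privileged commands are echoed so the sketch is side-effect free:

```shell
# Escalate the balance usage filter step by step, as in the commands
# quoted above. MNT is an assumed mountpoint; echo prints the commands
# a maintenance job would actually run.
MNT=/
for pct in 0 10 25 50; do
    echo btrfs balance start -v "-dusage=$pct" "-musage=$pct" "$MNT"
done
```

Low thresholds (usage=0) reclaim empty chunks almost for free; higher thresholds rewrite more data, so stopping once enough space is freed keeps the run cheap.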
On 10/03/2021 08:09, Ulli Horlacher wrote:
> On Wed 2021-03-10 (07:59), Hugo Mills wrote:
>
>>> On tsmsrvj I have in /etc/exports:
>>>
>>> /data/fex tsmsrvi(rw,async,no_subtree_check,no_root_squash)
>>>
>>> This is a btrfs subvolume with snapshots:
>>>
>>> root@tsmsrvj:~# btrfs subvolume lis
On 19/02/2021 17:42, Joshua wrote:
> February 3, 2021 3:16 PM, "Graham Cobb" wrote:
>
>> On 03/02/2021 21:54, jos...@mailmag.net wrote:
>>
>>> Good Evening.
>>>
>>> I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been havin
On 03/02/2021 21:54, jos...@mailmag.net wrote:
> Good Evening.
>
> I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been having
> problems mounting on boot without timing out. This causes the system to drop
> to emergency mode. I am then able to mount the array in emergency mode an
On 23/01/2021 17:21, Zygo Blaxell wrote:
> On Sat, Jan 23, 2021 at 02:55:52PM +0000, Graham Cobb wrote:
>> On 22/01/2021 22:42, Zygo Blaxell wrote:
>> ...
>>>> So the point is: what happens if the root subvolume is not mounted ?
>>>
>>> It's n
On 22/01/2021 22:42, Zygo Blaxell wrote:
...
>> So the point is: what happens if the root subvolume is not mounted ?
>
> It's not an onerous requirement to mount the root subvol. You can do (*)
>
> tmp="$(mktemp -d)"
> mount -osubvolid=5 /dev/btrfs "$tmp"
> setfattr -n 'btrfs..
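Zygo's (truncated) example above mounts the top-level subvolume, id 5, at a throwaway mountpoint. A side-effect-free outline of the same idea; /dev/btrfs is a placeholder device and the privileged steps are echoed rather than executed:

```shell
# Mount the top-level subvolume (id 5) at a temporary directory, as in
# the example quoted above. /dev/btrfs is a placeholder; mount/umount
# are echoed so the sketch has no side effects.
tmp="$(mktemp -d)"
echo mount -o subvolid=5 /dev/btrfs "$tmp"
# ... operate on "$tmp" here ...
echo umount "$tmp"
rmdir "$tmp"
```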
I am about to deploy my first btrfs filesystems on NVME. Does anyone
have any hints or advice? Initially they will be root disks, but I am
thinking about also moving home disks and other frequently used data to
NVME, but probably not backups and other cold data.
I am mostly wondering about non-fun
On 12/01/2021 11:27, Filipe Manana wrote:
> ...
> In other words, what I think we should have is a check that forbids
> using two roots for an incremental send that are not snapshots of the
> same subvolume (have different parent uuids).
Are you suggesting that rule should also apply for clone sou
On 10/01/2021 07:41, cedric.dew...@eclipso.eu wrote:
> I've tested some more.
>
> Repeatedly sending the difference between two consecutive snapshots creates a
> structure on the target drive where all the snapshots share data. So 10
> snapshots of 10 files of 100MB takes up 1GB, as expected.
>
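The "difference between two consecutive snapshots" scheme described above is a full send for the first snapshot and `-p` incrementals thereafter. A sketch with illustrative snapshot names (snap.1, snap.2, ...); the send commands are echoed to keep it inert:

```shell
# Incremental send between consecutive read-only snapshots, as described
# above. Snapshot names are illustrative; echo keeps the sketch inert.
prev=""
for cur in snap.1 snap.2 snap.3; do
    if [ -z "$prev" ]; then
        echo btrfs send "/mnt/$cur"                   # full send for the first snapshot
    else
        echo btrfs send -p "/mnt/$prev" "/mnt/$cur"   # incremental against the previous one
    fi
    prev="$cur"
done
```

Because each incremental stream only carries the delta, the receiver ends up with snapshots that share unchanged extents, which is why 10 snapshots of the same 100MB files occupy about 1GB rather than 10GB.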
On 07/01/2021 03:09, Zygo Blaxell wrote:
...
> I would only attempt to put the archives into long-term storage after
> verifying that they produce correct output when fed to btrfs receive;
> otherwise, you could find out too late that a months-old archive was
> damaged, incomplete, or incorrect, an
On 05/01/2021 08:34, Forza wrote:
>
>
> On 2021-01-04 21:51, cedric.dew...@eclipso.eu wrote:
>> I have a master NAS that makes one read only snapshot of my data per
>> day. I want to transfer these snapshots to a slave NAS over a slow,
>> unreliable internet connection. (it's a cheap provider).
On 01/01/2021 14:42, cedric.dew...@eclipso.eu wrote:
...
> I'm looking for a program that can synchronize a btrfs snapshot via a
> network, and supports resuming of interrupted transfers.
Not an answer to your question... the way I would solve your problem is
to do the "btrfs send" to a local fil
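One way to make the send-to-a-local-file approach resumable over an unreliable link is to chunk the stream file, transfer the chunks with any resume-capable tool, reassemble, and verify before receiving. In this sketch the btrfs steps are comments and the file names are illustrative; the chunk/verify logic is live:

```shell
# "btrfs send" to a local file, chunked for resumable transfer.
# btrfs steps are shown as comments; file names are illustrative.
printf 'example send stream payload' > stream.bin   # stand-in for: btrfs send /snap > stream.bin
split -b 8 -d stream.bin chunk.                     # transfer chunk.* with any resumable tool
cat chunk.* > reassembled.bin                       # reassemble on the receiving side
cmp -s stream.bin reassembled.bin && echo OK        # verify before: btrfs receive < reassembled.bin
```

Verifying the reassembled stream against a checksum before feeding it to `btrfs receive` avoids applying a stream that was silently truncated in transit.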
On 21/12/2020 20:45, Claudius Ellsel wrote:
> I had a closer look at snapper now and have installed and set it up. This
> seems to be really the easiest way for me, I guess. My main confusion was
> probably that I was unsure whether I had to create a subvolume prior to this
> or not, which got s
On 17/10/2019 16:57, Chris Murphy wrote:
> On Wed, Oct 16, 2019 at 10:07 PM Jon Ander MB
> wrote:
>>
>> It would be interesting to know the pros and cons of this setup that
>> you are suggesting vs zfs.
>> +zfs detects and corrects bitrot (
>> http://www.zfsnas.com/2015/05/24/testing-bit-rot/ )
>
On 04/10/2019 09:11, Nikolay Borisov wrote:
>
>
> On 4.10.19 10:50, Anand Jain wrote:
>> btrfs_free_extra_devids() reorgs fs_devices::latest_bdev
>> to point to the bdev with greatest device::generation number.
>> For a typical-missing device the generation number is zero so
>> fs_devices::
Hi,
I seem to have another case where scrub gets confused when it is
cancelled and restarted many times (or, maybe, it is my error or
something). I will look into it further but, instead of just hacking
away at my script to work out what is going on, I thought I might try to
create a regression te
e scanning", ret);
>> ret = btrfs_register_all_devices();
>> error_on(ret,
Shouldn't "--verbose" be accepted as a long version of the option? That
would mean adding it to long_options.
The usage message cmd_device_scan_usage needs to be
On 29/09/2019 22:38, Robert Krig wrote:
> I'm running Debian Buster with Kernel 5.2.
> Btrfs-progs v4.20.1
I am running Debian testing (bullseye) and have chosen not to install
the 5.2 kernel yet because the version of it in bullseye
(linux-image-5.2.0-2-amd64) is based on 5.2.9 and (as far as I c
On 09/09/2019 13:18, Qu Wenruo wrote:
>
>
> On 2019/9/9 7:25 PM, zedlr...@server53.web-hosting.com wrote:
>> What I am complaining about is that at one point in time, after issuing
>> the command:
>> btrfs balance start -dconvert=single -mconvert=single
>> and before issuing the 'btrfs delete'
On 30/07/2019 23:44, Swâmi Petaramesh wrote:
> Still, losing a given FS with subvols, snapshots etc, may be very
> annoying and very time-consuming to rebuild.
I believe that in one of the earlier mails, Qu said that you can
probably mount the corrupted fs readonly and read everything.
If that is
On 12/07/2019 14:35, Patrik Lundquist wrote:
> On Fri, 12 Jul 2019 at 14:48, Anand Jain wrote:
>> I am unable to reproduce, I have tried with/without dm-crypt on both
>> oraclelinux and opensuse (I am yet to try debian).
>
> I'm using Debian testing 4.19.0-5-amd64 without problem. Raid1 with 5
>
On 12/07/2019 13:46, Anand Jain wrote:
> I am unable to reproduce, I have tried with/without dm-crypt on both
> oraclelinux and opensuse (I am yet to try debian).
I understand. I am going to be away for a week but I am happy to look
into trying to create a smaller reproducer (for example in a vm)
On 11/07/2019 03:46, Anand Jain wrote:
> Now the question I am trying to understand, why same device is being
> scanned every 2 mins, even though its already mount-ed. I am guessing
> its toggling the same device paths trying to mount the device-path
> which is not mounted. So autofs's check for th
Anand's Nov 2018 patch "btrfs: harden agaist duplicate fsid" has
recently percolated through to my Debian buster server system.
And it is spamming my log files.
Each of my btrfs filesystem devices logs 4 messages every 2 minutes.
Here is an example of the 4 messages related to one device:
Jul 10
On 05/07/2019 12:47, Remi Gauvin wrote:
> On 2019-07-05 7:06 a.m., Ulli Horlacher wrote:
>
>>
>> Ok, it seems my idea (replacing the original root subvolume with a
>> snapshot) is not possible.
>>
> ...
> It is common practice with installers now to mount your root and home on
> a subvolume for e
On 28/06/2019 18:40, David Sterba wrote:
> Hi,
>
> this is a pre-release of btrfs-progs, 5.2-rc1.
>
> The proper release is scheduled to next Friday, +7 days (2019-07-05), but can
> be postponed if needed.
>
> Scrub status has been reworked:
>
> UUID: bf8720e0-606b-4065-8320-b48df
On 18/06/2019 09:08, Graham R. Cobb wrote:
> When a scrub completes or is cancelled, statistics are updated for reporting
> in a later btrfs scrub status command and for resuming the scrub. Most
> statistics (such as bytes scrubbed) are additive so scrub adds the statistics
> from the current run t
On 08/06/2019 00:55, Graham R. Cobb wrote:
> When a scrub completes or is cancelled, statistics are updated for reporting
> in a later btrfs scrub status command. Most statistics (such as bytes
> scrubbed)
> are additive so scrub adds the statistics from the current run to the
> saved statistics.
On 06/06/2019 15:26, Graham Cobb wrote:
> However, after a few cancel/resume cycles, the scrub terminates. No
> errors are reported but one of the resumes will just immediately
> terminate claiming the scrub is done. It isn't. Nowhere near.
I believe I have found the problem. It i
I have a btrfs filesystem which I want to scrub. This is a multi-TB
filesystem and will take well over 24 hours to scrub.
Unfortunately, the scrub turns out to be quite intrusive into the system
(even when making sure it is very low priority for ionice and nice).
Operations on other disks run exce
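The post above wraps the scrub in ionice/nice; btrfs scrub can also set the I/O priority class itself via its `-c` option. A sketch, assuming MNT is the mountpoint; the commands are echoed to keep it side-effect free:

```shell
# Run a scrub at idle I/O priority, as attempted above. MNT is an
# assumed mountpoint; echo keeps the sketch inert.
MNT=/mnt/data
echo ionice -c3 btrfs scrub start -B "$MNT"   # wrap the whole scrub in idle I/O class
echo btrfs scrub start -c3 -B "$MNT"          # or use scrub's own -c (ioprio class) option
```

Note that idle I/O priority only helps when the contention is at the block-device scheduler; a scrub can still slow other filesystems through shared controllers or caches, which matches the behaviour described above.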
On 17/05/2019 17:39, Steven Davies wrote:
> On 17/05/2019 16:28, Graham Cobb wrote:
>
>> That is why I created my "extents-list" stuff. This is a horrible hack
>> (one day I will rewrite it using the python library) which lets me
>> answer questions like: "
On 17/05/2019 14:57, Axel Burri wrote:
> btrfs fi du shows me the information wanted, but only for the last
> received subvolume (as you said it changes over time, and any later
> child will share data with it). For all others, it merely shows "this
> is what gets freed if you delete this subvolume
On 18/02/2019 19:58, André Malm wrote:
> What causes the extent to be incomplete? And can I avoid it?
Does it matter? I presume the send is working OK, it is just that it
sends a little more data than it needs to. Or have you seen any data loss?
Graham
On 04/12/2018 12:38, Austin S. Hemmelgarn wrote:
> In short, USB is _crap_ for fixed storage, don't use it like that, even
> if you are using filesystems which don't appear to complain.
That's useful advice, thanks.
Do you (or anyone else) have any experience of using btrfs over iSCSI? I
was thin
On 29/08/18 14:31, Jorge Bastos wrote:
> Thanks, that makes sense, so it's only possible to see how much space
> a snapshot is using with quotas enable, I remember reading that
> somewhere before, though there was a new way after reading this latest
> post .
My extents lists scripts (https://githu
On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
> Ideally, I think it should be as generic as reasonably possible,
> possibly something along the lines of:
>
> A: While not strictly necessary, running regular filtered balances (for
> example `btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mli
On 05/12/17 18:01, Goffredo Baroncelli wrote:
> On 12/05/2017 04:42 PM, Graham Cobb wrote:
>> On 05/12/17 12:41, Austin S. Hemmelgarn wrote:
>>> On 2017-12-05 03:43, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2017-12-05 16:25, Misono, Tomohiro wrot
On 05/12/17 12:41, Austin S. Hemmelgarn wrote:
> On 2017-12-05 03:43, Qu Wenruo wrote:
>>
>>
>> On 2017-12-05 16:25, Misono, Tomohiro wrote:
>>> Hello all,
>>>
>>> I want to address some issues of subvolume usability for a normal user.
>>> i.e. a user can create subvolumes, but
>>> - Cannot dele
On 16/10/17 14:28, David Sterba wrote:
> On Sun, Oct 15, 2017 at 04:19:23AM +0300, Cerem Cem ASLAN wrote:
>> `btrfs send | btrfs receive` removes NOCOW attributes. Is it a bug or
>> a feature? If it's a feature, how can we keep these attributes if we
>> need to?
>
> This is a known deficiency of
On 30/09/17 19:17, Holger Hoffstätte wrote:
> On 09/30/17 19:56, Holger Hoffstätte wrote:
>> shell hackery as alternative. Anyway, I was sure that at the time the
>> other letters sounded even worse/were taken, but that may just have been
>> in my head. ;-)
>>
>> I just rechecked and -S is still av
On 30/09/17 14:08, Holger Hoffstätte wrote:
> A "root" subvolume is identified by a null parent UUID, so adding a new
> subvolume filter and flag -P ("Parent") does the trick.
I don't like the naming. The flag you are proposing is really nothing to
do with whether a subvolume is a parent or not:
On 19/09/17 01:41, Dave wrote:
> Would it be correct to say the following?
Like Duncan, I am just a user, and I haven't checked the code. I
recommend Duncan's explanation, but in case you are looking for
something simpler, how about thinking with the following analogy...
Think of -p as like doing
On 18/09/17 07:10, Dave wrote:
> For my understanding, what are the restrictions on deleting snapshots?
>
> What scenarios can lead to "ERROR: parent determination failed"?
The man page for btrfs-send is reasonably clear on the requirements
btrfs imposes. If you want to use incremental sends (i.e
On 14/08/17 16:53, Austin S. Hemmelgarn wrote:
> Quite a few applications actually _do_ have some degree of secondary
> verification or protection from a crash.
I am glad your applications do and you have no need of this feature.
You are welcome not to use it. I, on the other hand, definitely wa
On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
> Assume you have higher level verification.
But almost no applications do. In real life, the decision
making/correction process will be manual and labour-intensive (for
example, running fsck on a virtual disk or restoring a file from backup).
> Wo
On 21/07/17 07:06, Paul Jackson wrote:
> What in god's green earth can kernel file system code be
> doing that takes fifteen minutes (so far, in this case) or
> fifty minutes (in the case I first reported on this thread)?
I find that just doing a balance on a disk with lots of snapshots can
cause t
On 25/04/17 05:02, J. Hart wrote:
> I have a remote machine with a filesystem for which I periodically take
> incremental snapshots for historical reasons. These snapshots are
> stored in an archival filesystem tree on a file server. Older snapshots
> are removed and newer ones added on a rotatio
On 27/03/17 13:00, J. Hart wrote:
> That is a very interesting idea. I'll try some experiments with this.
You might want to look into two tools which I have found useful for
similar backups:
1) rsnapshot -- this uses rsync for backing up multiple systems and has
been stable for quite a long time
On 08/02/17 18:38, Libor Klepáč wrote:
> I'm interested in using:
...
> - send/receive for offisite backup
I don't particularly recommend that. I do use send/receive for onsite
backups (I actually use btrbk). But for offsite I use a traditional
backup tool (I use dar). For three main reasons:
1)
On 05/02/17 12:08, Kai Krakow wrote:
> Wrong. If you tend to not be in control of the permissions below a
> mountpoint, you prevent access to it by restricting permissions on a
> parent directory of the mountpoint. It's that easy and it always has
> been. That is standard practice. While your backu
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
> Ironically, I ended up having time sooner than I thought. The message
> doesn't appear to be in any of the archives yet, but the message ID is:
> <20170203134858.75210-1-ahferro...@gmail.com>
Ah. I didn't notice it until after I had sent my message
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
> I can look at making a patch for this, but it may be next week before I
> have time (I'm not great at multi-tasking when it comes to software
> development, and I'm in the middle of helping to fix a bug in Ansible
> right now).
That would be great,
On 02/02/17 00:02, Duncan wrote:
> If it's a workaround, then many of the Linux procedures we as admins and
> users use every day are equally workarounds. Setting 007 perms on a dir
> that doesn't have anything immediately security vulnerable in it, simply
> to keep other users from even potent
On 01/02/17 22:27, Duncan wrote:
> Graham Cobb posted on Wed, 01 Feb 2017 17:43:32 + as excerpted:
>
>> This first bug is more serious because it appears to allow a
>> non-privileged user to disrupt the correct operation of receive,
>> creating a form of denial-of-s
On 01/02/17 12:28, Austin S. Hemmelgarn wrote:
> On 2017-02-01 00:09, Duncan wrote:
>> Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as excerpted:
>>
>>> I have been testing btrfs send/receive. I like it.
>>>
>>> During those tests I discovered that it is possible to access and modify
On 30/01/17 22:37, Michael Born wrote:
> Also, I'm not interested in restoring the old Suse 13.2 system. I just
> want some configuration files from it.
If all you really want is to get some important information from some
specific config files, and it is so important it is worth an hour or so
of
On 28/11/16 02:56, Duncan wrote:
> It should still be worth turning on autodefrag on an existing somewhat
> fragmented filesystem. It just might take some time to defrag files you
> do modify, and won't touch those you don't, which in some cases might
> make it worth defragging those manually.
On 28/10/16 16:20, David Sterba wrote:
> I tend to agree with this approach. The usecase, with some random sample
> balance options:
>
> $ btrfs balance start --analyze -dusage=10 -musage=5 /path
Wouldn't a "balance analyze" command be better than "balance start
--analyze"? I would have guessed
On 13/10/16 00:47, Sean Greenslade wrote:
> I may just end up doing that. Hugo's response gave me some crazy ideas
> involving a custom build of split that waits for a command after each
> output file fills, which would of course require an equally weird build
> of cat that would stall the pipe ind
On 20/09/16 19:53, Alexandre Poux wrote:
> As for moving data to an another volume, since it's only data and
> nothing fancy (no subvolume or anything), a simple rsync would do the trick.
> My problem in this case is that I don't have enough available space
> elsewhere to move my data.
> That's why
On 07/09/16 16:06, Austin S. Hemmelgarn wrote:
> It hasn't, because there's not any way it can be completely fixed. This
> particular case is an excellent example of why it's so hard to fix. To
> close this particular hole, BTRFS itself would have to become aware of
> whether whoever is running a
On 07/09/16 16:20, Austin S. Hemmelgarn wrote:
> I should probably add to this that you shouldn't be accepting
> send/receive data streams from untrusted sources anyway. While it
> probably won't crash your system, it's not intended for use as something
> like a network service. If you're sending
Thanks to Austin and Duncan for their replies.
On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
> On 2016-09-05 05:59, Graham Cobb wrote:
>> Does the "path" argument of btrfs-receive mean that *all* operations are
>> confined to that path? For example, if a UUID or transi
Does anyone know of a security analysis of btrfs receive?
I assume that just using btrfs receive requires root (is that so?). But
I was thinking of setting up a backup server which would receive
snapshots from various client systems, each in their own path, and I
wondered how much the security of
On 03/08/16 22:55, Graham Cobb wrote:
> On 03/08/16 21:37, Adam Borowski wrote:
>> On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote:
>>> Are there any btrfs commands (or APIs) to allow a script to create a
>>> list of all the extents referred to w
On 03/08/16 21:37, Adam Borowski wrote:
> On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote:
>> Are there any btrfs commands (or APIs) to allow a script to create a
>> list of all the extents referred to within a particular (mounted)
>> subvolume? And is it a
Are there any btrfs commands (or APIs) to allow a script to create a
list of all the extents referred to within a particular (mounted)
subvolume? And is it a reasonably efficient process (i.e. doesn't
involve backrefs and, preferably, doesn't involve following directory
trees)?
I am not looking t
On 28/07/16 12:17, David Sterba wrote:
> diff --git a/cmds-filesystem.c b/cmds-filesystem.c
> index ef1f550b51c0..6b381c582ea7 100644
> --- a/cmds-filesystem.c
> +++ b/cmds-filesystem.c
> @@ -968,7 +968,7 @@ static const char * const cmd_filesystem_defrag_usage[] =
> {
> "-f flus
On 21/07/16 09:19, Qu Wenruo wrote:
> We don't usually get such large extent tree dump from a real world use
> case.
Let us know if you want some more :-)
I have a heavily used single disk BTRFS filesystem with about 3.7TB in
use and about 9 million extents. I am happy to provide an extent dump
On 21/06/16 12:51, Austin S. Hemmelgarn wrote:
> The scrub design works, but the whole state file thing has some rather
> irritating side effects and other implications, and developed out of
> requirements that aren't present for balance (it might be nice to check
> how many chunks actually got bal
On 19/05/16 02:33, Qu Wenruo wrote:
>
>
> Graham Cobb wrote on 2016/05/18 14:29 +0100:
>> A while ago I had a "no space" problem (despite fi df, fi show and fi
>> usage all agreeing I had over 1TB free). But this email isn't about
>> that.
>>
>
On 19/05/16 05:09, Duncan wrote:
> So to Graham, are these 1.5K snapshots all of the same subvolume, or
> split into snapshots of several subvolumes? If it's all of the same
> subvolume or of only 2-3 subvolumes, you still have some work to do in
> terms of getting down to recommended snapshot
Hi,
I have a 6TB btrfs filesystem I created last year (about 60% used). It
is my main data disk for my home server so it gets a lot of usage
(particularly mail). I do frequent snapshots (using btrbk) so I have a
lot of snapshots (about 1500 now, although it was about double that
until I cut back
74 matches