On 2016-07-22 12:06, Sanidhya Solanki wrote:
On Fri, 22 Jul 2016 10:58:59 -0400
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2016-07-22 09:42, Sanidhya Solanki wrote:
+*stripesize=*;;
+Specifies the new stripe size for a filesystem instance. Multiple BTRFS
+fil
On 2016-07-22 09:42, Sanidhya Solanki wrote:
Adds the user-space component of making the RAID stripesize user configurable.
Updates the btrfs-documentation to provide the information to users.
Adds parsing capabilities for the new options.
Adds the means of transferring the data to kernel space.
On 2016-07-21 09:34, Chris Murphy wrote:
On Thu, Jul 21, 2016 at 6:46 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-20 15:58, Chris Murphy wrote:
On Sun, Jul 17, 2016 at 3:08 AM, Hendrik Friedel <hend...@friedels.name>
wrote:
Well, btrfs does write data ve
On 2016-07-20 15:58, Chris Murphy wrote:
On Sun, Jul 17, 2016 at 3:08 AM, Hendrik Friedel wrote:
Well, btrfs writes data very differently from many other file systems. On
every write the file is copied to another place, even if just one bit is
changed. That's special
On 2016-07-18 15:05, Hendrik Friedel wrote:
Hello Austin,
thanks for your reply.
OK, thanks. So, TGMR does not say whether or not the device is SMR,
right?
I'm not 100% certain about that. Technically, the only non-firmware
difference is in the read head and the tracking. If it were
On 2016-07-18 14:31, Hendrik Friedel wrote:
Hello and thanks for your replies,
It's a Seagate Expansion Desktop 5TB (USB3). It is probably a
ST5000DM000.
this is a TGMR, not an SMR, disk:
TGMR is a derivative of giant magneto-resistance, and is what's been
used in hard disk drives for decades now.
On 2016-07-17 05:08, Hendrik Friedel wrote:
Hi Thomasz,
@Dave I have added you to the conversation, as I refer to your notes
(https://github.com/kdave/drafts/blob/master/btrfs/smr-mode.txt)
thanks for your reply!
It's a Seagate Expansion Desktop 5TB (USB3). It is probably a
ST5000DM000.
On 2016-07-15 14:45, Matt wrote:
On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear lvm fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed
On 2016-07-15 05:51, Matt wrote:
Hello
I glued together 6 disks in linear lvm fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
Thanks to RAID1 of the metadata I can still access the data residing on the
On 2016-07-13 12:38, David Sterba wrote:
On Tue, Jun 21, 2016 at 11:16:59AM -0400, Austin S. Hemmelgarn wrote:
Currently, balance operations are run synchronously in the foreground.
This is nice for interactive management, but is kind of crappy when you
start looking at automation and similar
On 2016-07-13 00:39, Andrei Borzenkov wrote:
12.07.2016 15:25, Austin S. Hemmelgarn wrote:
I'm not changing my init system just to add functionality that should
already exist in btrfs-progs. The fact that the balance ioctl is
synchronous was a poor design choice, and we need to provide
On 2016-07-12 11:22, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 12 Jul 2016 08:25:24 -0400 as
excerpted:
As far as daemonization, I have no man-page called daemon in section
seven, yet I have an up-to-date upstream copy of the Linux man pages. My
guess is that this is a systemd man page
On 2016-07-11 17:07, Chris Murphy wrote:
On Fri, Jul 8, 2016 at 6:24 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
To clarify, I'm not trying to argue against adding support, I'm arguing
against it being mandatory.
By "D-Bus support" I did not mean to indicate ma
On 2016-07-11 12:58, Tomasz Torcz wrote:
On Mon, Jul 11, 2016 at 07:17:28AM -0400, Austin S. Hemmelgarn wrote:
On 2016-07-11 03:26, Tomasz Torcz wrote:
On Tue, Jun 21, 2016 at 11:16:59AM -0400, Austin S. Hemmelgarn wrote:
Currently, balance operations are run synchronously in the foreground
On 2016-07-11 03:26, Tomasz Torcz wrote:
On Tue, Jun 21, 2016 at 11:16:59AM -0400, Austin S. Hemmelgarn wrote:
Currently, balance operations are run synchronously in the foreground.
This is nice for interactive management, but is kind of crappy when you
start looking at automation and similar
On 2016-07-08 12:10, Francesco Turco wrote:
On 2016-07-07 19:57, Chris Murphy wrote:
Use F3 to test flash:
http://oss.digirati.com.br/f3/
I tested my USB flash drive with F3 as you suggested, and there's no
indication it is a fake device.
---
# f3probe --destructive /dev/sdb
F3
On 2016-07-07 16:20, Chris Murphy wrote:
On Thu, Jul 7, 2016 at 1:59 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
D-Bus support needs to be optional, period. Not everybody uses D-Bus (I
have dozens of systems that get by just fine without it, and know hundreds
of other people
On 2016-07-08 07:14, Tomasz Kusmierz wrote:
Well, I was able to run memtest on the system last night; it passed with
flying colors, so I'm now leaning toward the problem being in the SAS card.
But I'll have to run some more tests.
Seriously, use the "stres.sh" script for a couple of days. When I was
On 2016-07-07 14:58, Chris Murphy wrote:
On Thu, Jul 7, 2016 at 12:23 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
Here's how I would picture the ideal situation:
* A device is processed by udev. It detects that it's part of a BTRFS
array, updates blkid and whateve
On 2016-07-07 12:52, Goffredo Baroncelli wrote:
On 2016-07-06 14:48, Austin S. Hemmelgarn wrote:
On 2016-07-06 08:39, Andrei Borzenkov wrote:
[]
To be entirely honest, if it were me, I'd want systemd to
fsck off. If the kernel mount(2) call succeeds, then the
filesystem was ready enough
On 2016-07-07 10:55, Francesco Turco wrote:
On 2016-07-07 16:27, Austin S. Hemmelgarn wrote:
This seems odd, are you trying to access anything over NFS or some other
network filesystem protocol here? If not, then I believe you've found a
bug, because I'm pretty certain we shouldn't
On 2016-07-07 09:49, Francesco Turco wrote:
I have a USB flash drive with an encrypted Btrfs filesystem where I
store daily backups. My problem is that this btrfs filesystem gets
corrupted very often, after a few days of usage. Usually I just reformat
it and move along, but this time I'd like to
On 2016-07-06 18:59, Tomasz Kusmierz wrote:
On 6 Jul 2016, at 23:14, Corey Coughlin wrote:
Hi all,
Hoping you all can help, have a strange problem, think I know what's going
on, but could use some verification. I set up a raid1 type btrfs filesystem on
an
On 2016-07-06 14:23, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 12:04 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov <arvidj...@gmail.com>
wrote:
3) can we query btrfs whether it
On 2016-07-06 14:45, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 11:18 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 12:43, Chris Murphy wrote:
So does it make sense to just set the default to 180? Or is there a
smarter way to do this? I don't know.
Just th
On 2016-07-06 13:19, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 3:51 AM, Andrei Borzenkov wrote:
3) can we query btrfs whether it is mountable in degraded mode?
according to documentation, "btrfs device ready" (which udev builtin
follows) checks "if it has ALL of its
On 2016-07-06 12:43, Chris Murphy wrote:
On Wed, Jul 6, 2016 at 5:51 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do
On 2016-07-06 12:05, Austin S. Hemmelgarn wrote:
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It should be obvious that a file that offers content also has
allocated blocks.
What you mean then is that POSIX _implies_ that
On 2016-07-06 11:22, Joerg Schilling wrote:
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It should be obvious that a file that offers content also has allocated blocks.
What you mean then is that POSIX _implies_ that this is the case, but
does not say whether or no
On 2016-07-06 10:53, Joerg Schilling wrote:
Antonio Diaz Diaz wrote:
Joerg Schilling wrote:
POSIX requires st_blocks to be != 0 in case that the file contains data.
Please, could you provide a reference? I can't find such requirement at
On 2016-07-06 08:39, Andrei Borzenkov wrote:
Sent from my iPhone
On 6 July 2016, at 15:14, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
O
On 2016-07-06 07:55, Andrei Borzenkov wrote:
On Wed, Jul 6, 2016 at 2:45 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy <li...@colorremedies.com>
wrote:
I started a systemd-devel@
On 2016-07-05 19:05, Chris Murphy wrote:
Related:
http://www.spinics.net/lists/raid/msg52880.html
Looks like there is some traction to figuring out what to do about
this, whether it's a udev rule or something that happens in the kernel
itself. Pretty much the only hardware setup unaffected by
On 2016-07-06 05:51, Andrei Borzenkov wrote:
On Tue, Jul 5, 2016 at 11:10 PM, Chris Murphy wrote:
I started a systemd-devel@ thread since that's where most udev stuff
gets talked about.
https://lists.freedesktop.org/archives/systemd-devel/2016-July/037031.html
On 2016-07-05 05:28, Joerg Schilling wrote:
Andreas Dilger wrote:
I think in addition to fixing btrfs (because it needs to work with existing
tar/rsync/etc. tools) it makes sense to *also* fix the heuristics of tar
to handle this situation more robustly. One option is if
On 2016-06-29 14:12, Saint Germain wrote:
On Wed, 29 Jun 2016 11:28:24 -0600, Chris Murphy
wrote :
Already got a backup. I just really want to try to repair it (in
order to test BTRFS).
I don't know that this is a good test because I think the file system
has
On 2016-06-28 08:14, Steven Haigh wrote:
On 28/06/16 22:05, Austin S. Hemmelgarn wrote:
On 2016-06-27 17:57, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 10:17:04AM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-06
On 2016-06-27 17:57, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 10:17:04AM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-06-25 12:44, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
&l
On 2016-06-27 23:17, Zygo Blaxell wrote:
On Mon, Jun 27, 2016 at 08:39:21PM -0600, Chris Murphy wrote:
On Mon, Jun 27, 2016 at 7:52 PM, Zygo Blaxell
wrote:
On Mon, Jun 27, 2016 at 04:30:23PM -0600, Chris Murphy wrote:
Btrfs does have something of a work around
On 2016-06-27 13:29, Chris Murphy wrote:
On Sun, Jun 26, 2016 at 10:02 PM, Nick Austin wrote:
On Sun, Jun 26, 2016 at 8:57 PM, Nick Austin wrote:
sudo btrfs fi show /mnt/newdata
Label: '/var/data' uuid: e4a2eb77-956e-447a-875e-4f6595a5d3ec
On 2016-06-25 12:44, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
Well, the obvious major advantage that comes to mind for me to checksumming
parity is that it would let us scrub the parity data itself and verify it.
OK bu
On 2016-06-24 13:52, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 11:21 AM, Andrei Borzenkov wrote:
24.06.2016 20:06, Chris Murphy wrote:
On Fri, Jun 24, 2016 at 3:52 AM, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 11:50 AM, Hugo Mills
On 2016-06-24 13:43, Steven Haigh wrote:
On 25/06/16 03:40, Austin S. Hemmelgarn wrote:
On 2016-06-24 13:05, Steven Haigh wrote:
On 25/06/16 02:59, ronnie sahlberg wrote:
What I have in mind here is that a file seems to get CREATED when I copy
the file that crashes the system in the target
On 2016-06-24 13:05, Steven Haigh wrote:
On 25/06/16 02:59, ronnie sahlberg wrote:
What I have in mind here is that a file seems to get CREATED when I copy
the file that crashes the system in the target directory. I'm thinking
if I 'cp -an source/ target/' that it will make this somewhat easier
On 2016-06-24 06:59, Hugo Mills wrote:
On Fri, Jun 24, 2016 at 01:19:30PM +0300, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 1:16 PM, Hugo Mills wrote:
On Fri, Jun 24, 2016 at 12:52:21PM +0300, Andrei Borzenkov wrote:
On Fri, Jun 24, 2016 at 11:50 AM, Hugo Mills
On 2016-06-24 01:20, Chris Murphy wrote:
On Thu, Jun 23, 2016 at 8:07 PM, Zygo Blaxell
wrote:
With simple files changing one character with vi and gedit,
I get completely different logical and physical numbers with each
change, so it's clearly cowing the entire
On 2016-06-23 13:44, Steven Haigh wrote:
Hi all,
Relative newbie to BTRFS, but a long-time Linux user. I pass the full
disks from a Xen Dom0 -> guest DomU and run BTRFS within the DomU.
I've migrated my existing mdadm RAID6 to a BTRFS raid6 layout. I have a
drive that threw a few UNC errors
get
progress information.
Because it simply daemonizes prior to calling the balance ioctl, this
doesn't actually need any kernel support.
Signed-off-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
---
This works as is, but there are two specific things I would love to
eventually fix but
On 2016-06-21 07:33, Hugo Mills wrote:
On Tue, Jun 21, 2016 at 07:24:24AM -0400, Austin S. Hemmelgarn wrote:
On 2016-06-21 04:55, Duncan wrote:
Dmitry Katsubo posted on Mon, 20 Jun 2016 18:33:54 +0200 as excerpted:
Dear btrfs community,
I have added a drive to existing raid1 btrfs volume
On 2016-06-21 04:55, Duncan wrote:
Dmitry Katsubo posted on Mon, 20 Jun 2016 18:33:54 +0200 as excerpted:
Dear btrfs community,
I have added a drive to existing raid1 btrfs volume and decided to
perform balancing so that data distributes "fairly" among drives. I have
started "btrfs balance
On 2016-06-10 18:39, Hans van Kranenburg wrote:
On 06/11/2016 12:10 AM, ojab // wrote:
On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
wrote:
You can work around it by either adding two disks (like Henk said),
or by
temporarily converting some chunks to
On 2016-06-12 06:35, boli wrote:
It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
These 90 hours seem like a rather long time, given that a rebalance/convert
from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
takes about 7 hours
On 2016-06-10 15:26, Henk Slager wrote:
On Thu, Jun 9, 2016 at 3:54 PM, Brendan Hide <bren...@swiftspirit.co.za> wrote:
On 06/09/2016 03:07 PM, Austin S. Hemmelgarn wrote:
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB s
On 2016-06-10 13:22, Adam Borowski wrote:
On Fri, Jun 10, 2016 at 01:12:42PM -0400, Austin S. Hemmelgarn wrote:
On 2016-06-10 12:50, Adam Borowski wrote:
And, as of coreutils 8.25, the default is no reflink, with "never" not being
recognized even as a way to avoid an alias. A
On 2016-06-10 12:50, Adam Borowski wrote:
On Fri, Jun 10, 2016 at 08:54:36AM -0700, Nikolaus Rath wrote:
On Jun 10 2016, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
JFYI, if you've using GNU cp, you can pass '--reflink=never' to avoid
it making reflinks.
I would
On 2016-06-09 23:40, Nikolaus Rath wrote:
On May 11 2016, Nikolaus Rath wrote:
Hello,
I recently ran btrfsck on one of my file systems, and got the following
messages:
checking extents
checking free space cache
checking fs roots
root 5 inode 3149867 errors 400, nbytes
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a LUKS-encrypted 200GB partition) - and am curious if
this behaviour I've noted below is expected or known. I figure it is a
bug. Depending on the situation,
On 2016-06-09 02:16, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 03 Jun 2016 10:21:12 -0400 as
excerpted:
As far as BTRFS raid10 mode in general, there are a few things that are
important to remember about it:
1. It stores exactly two copies of everything, any extra disks just add
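The "exactly two copies" point in the excerpt above can be turned into quick capacity arithmetic; the disk count and per-disk size below are illustrative, not figures from the thread:

```shell
# btrfs raid10 always keeps exactly two copies of everything, so usable
# capacity is roughly half the raw capacity regardless of disk count.
disks=6     # illustrative array size
size_tb=2   # illustrative per-disk capacity, in TB
total=$((disks * size_tb))
usable=$((total / 2))
echo "raw ${total} TB -> ~${usable} TB usable"
```

Adding more disks widens the stripes and raises raw capacity, but the usable fraction stays at one half.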
On 2016-06-07 09:52, Kai Hendry wrote:
On Tue, 7 Jun 2016, at 07:10 PM, Austin S. Hemmelgarn wrote:
Yes, although you would then need to be certain to run a balance with
-dconvert=raid1 -mconvert=raid1 to clean up anything that got allocated
before the new disk was added.
I don't quite
On 2016-06-07 00:02, Kai Hendry wrote:
Sorry I unsubscribed from linux-btrfs@vger.kernel.org since the traffic
was a bit too high for me.
Entirely understandable, although for what it's worth it's nowhere near
as busy as some other mailing lists (linux-ker...@vger.kernel.org for
example sees
On 2016-06-06 01:44, Kai Hendry wrote:
Hi there,
I planned to remove one of my disks, so that I can take it from
Singapore to the UK and then re-establish another remote RAID1 store.
delete is an alias of remove, so I added a new disk (devid 3) and
proceeded to run:
btrfs device delete
On 2016-06-05 22:40, James Johnston wrote:
On 06/06/2016 at 01:47, Chris Murphy wrote:
On Sun, Jun 5, 2016 at 4:45 AM, Mladen Milinkovic wrote:
On 06/03/2016 04:05 PM, Chris Murphy wrote:
Make certain the kernel command timer value is greater than the driver
error
On 2016-06-03 21:48, Chris Murphy wrote:
On Fri, Jun 3, 2016 at 6:48 PM, Nicholas D Steeves <nstee...@gmail.com> wrote:
On 3 June 2016 at 11:33, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is
On 2016-06-03 21:51, Christoph Anton Mitterer wrote:
On Fri, 2016-06-03 at 15:50 -0400, Austin S Hemmelgarn wrote:
There's no point in trying to do higher parity levels if we can't get
regular parity working correctly. Given the current state of things,
it might be better to break even
On 2016-06-05 16:31, Christoph Anton Mitterer wrote:
On Sun, 2016-06-05 at 09:36 -0600, Chris Murphy wrote:
That's ridiculous. It isn't incorrect to refer to only 2 copies as
raid1.
No, not if there are only two devices.
But obviously we're talking about how btrfs does RAID1, in which
On 2016-06-03 13:38, Christoph Anton Mitterer wrote:
> Hey..
>
> Hm... so the overall btrfs state seems to be still pretty worrying,
> doesn't it?
>
> - RAID5/6 seems far from being stable or even usable,... not to talk
> about higher parity levels, whose earlier posted patches (e.g.
>
On 2016-06-03 10:11, Martin wrote:
Make certain the kernel command timer value is greater than the driver
error recovery timeout. The former is found in sysfs, per block
device, the latter can be get and set with smartctl. Wrong
configuration is common (it's actually the default) when using
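The misconfiguration described above can be sketched as a numeric check; the values are illustrative stand-ins for what sysfs and smartctl would actually report:

```shell
# The kernel's per-device command timer (seconds, from
# /sys/block/<dev>/device/timeout) must exceed the drive's error recovery
# timeout (SCT ERC, reported by "smartctl -l scterc" in deciseconds);
# otherwise the kernel resets the link while the drive is still retrying.
kernel_timeout_s=30   # common kernel default
drive_erc_ds=700      # illustrative: drive retries for up to 70.0 seconds
drive_erc_s=$((drive_erc_ds / 10))
if [ "$kernel_timeout_s" -le "$drive_erc_s" ]; then
  echo "misconfigured: kernel gives up before the drive does"
else
  echo "ok"
fi
```

With the numbers shown (a 30 s kernel timer against a drive that retries far longer), the check reports exactly the mismatch the thread calls the common default.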
On 2016-06-03 09:31, Martin wrote:
In general, avoid Ubuntu LTS versions when dealing with BTRFS, as well as
most enterprise distros: they all tend to back-port patches instead of using
newer kernels, which means it's functionally impossible to provide good
support for them here (because we
On 2016-06-03 05:49, Martin wrote:
Hello,
We would like to use urBackup to make laptop backups, and they mention
btrfs as an option.
https://www.urbackup.org/administration_manual.html#x1-8400010.6
So if we go with btrfs and we need 100TB usable space in raid6, and to
have it replicated each
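As a rough aid to the 100 TB raid6 sizing question above (the per-disk size is an assumption, and this is generic parity-raid arithmetic; the raid5/6 stability caveats raised elsewhere in this listing still apply):

```shell
# With n equal disks of size s, raid6 leaves (n-2)*s usable.
disk_tb=10            # assumed per-disk capacity
usable_needed_tb=100  # from the question above
# smallest n satisfying (n-2)*disk_tb >= usable_needed_tb
disks=$(( (usable_needed_tb + disk_tb - 1) / disk_tb + 2 ))
echo "need ${disks} x ${disk_tb} TB disks for ${usable_needed_tb} TB usable"
```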
On 2016-06-02 18:45, Henk Slager wrote:
On Thu, Jun 2, 2016 at 3:55 PM, MegaBrutal wrote:
2016-06-02 0:22 GMT+02:00 Henk Slager :
What is the kernel version used?
Is the fs on a mechanical disk or SSD?
What are the mount options?
How old is the fs?
On 2016-06-01 14:30, MegaBrutal wrote:
Hi all,
I have a 20 GB file system and df says I have about 2,6 GB free space,
yet I can't do anything on the file system because I get "No space
left on device" errors. I read that balance may help to remedy the
situation, but it actually doesn't.
Some
On 2016-05-26 18:12, Graham Cobb wrote:
On 19/05/16 02:33, Qu Wenruo wrote:
Graham Cobb wrote on 2016/05/18 14:29 +0100:
A while ago I had a "no space" problem (despite fi df, fi show and fi
usage all agreeing I had over 1TB free). But this email isn't about
that.
As part of fixing that
On 2016-05-29 16:45, Ferry Toth wrote:
On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
wrote:
On 05/29/16 19:53, Chris Murphy wrote:
But I'm skeptical of bcache using a hidden area historically for
On 2016-05-27 15:47, Nicholas D Steeves wrote:
On 16 May 2016 at 08:39, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
On 2016-05-16 08:14, Richard W.M. Jones wrote:
It would be really helpful if the btrfs tools had a machine-readable
output.
With machine-readable output, t
On 2016-05-25 07:11, David Sterba wrote:
On Wed, May 25, 2016 at 08:33:45AM +0800, Qu Wenruo wrote:
David Sterba wrote on 2016/05/24 11:51 +0200:
On Tue, May 24, 2016 at 08:31:01AM +0800, Qu Wenruo wrote:
This could be made static (with thread local storage) so the state does
not get
On 2016-05-25 07:07, Hugo Mills wrote:
On Wed, May 25, 2016 at 04:00:00AM -0700, H. Peter Anvin wrote:
On 05/25/16 02:29, Hugo Mills wrote:
On Wed, May 25, 2016 at 01:58:15AM -0700, H. Peter Anvin wrote:
Hi,
I'm looking at using a btrfs with snapshots to implement a generational
backup
On 2016-05-25 04:58, H. Peter Anvin wrote:
Hi,
I'm looking at using a btrfs with snapshots to implement a generational
backup capability. However, doing it the naïve way would have the side
effect that for a file that has been partially modified, after
snapshotting the file would be written with
On 2016-05-20 18:26, Henk Slager wrote:
Yes, sorry, I took some shortcut in the discussion and jumped to a
method for avoiding this 0.5-2% slowdown that you mention. (Or a
kernel crashing in bcache code due to corrupt SB on a backing device
or corrupted caching device contents).
I am actually
On 2016-05-20 13:02, Ferry Toth wrote:
We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
partitions are in the same pool, which is in btrfs RAID10 format. /boot
is in subvolume @boot.
If you have GRUB
On 2016-05-19 19:23, Henk Slager wrote:
On Thu, May 19, 2016 at 8:51 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC), Ferry Toth <ft...@exalondelft.nl> wrote:
On Tue, 17 May 2016 20:33:35 +0
On 2016-05-19 17:01, Kai Krakow wrote:
On Thu, 19 May 2016 14:51:01 -0400, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
For a point of reference, I've
got a pair of 250GB Crucial MX100's (they cost less than 0.50 USD per
GB when I got them and provide essentially the
On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC), Ferry Toth <ft...@exalondelft.nl> wrote:
On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
On Tue, 17 May 2016 07:32:11 -0400, "Austin S. Hemmelgarn"
<ahferro...@gmail.com> wrote:
On 2016-05-18 07:24, Austin S. Hemmelgarn wrote:
On 2016-05-17 13:30, Josef Bacik wrote:
Our enospc flushing sucks. It is born from a time where we were early
enospc'ing constantly because multiple threads would race in for the same
reservation and randomly starve other ones out. So I came up
hours now, nothing is breaking, and a number of the
tests are actually completing marginally faster, so you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
On 2016-05-17 11:45, Peter Kese wrote:
I've been using btrfs on my main system for a few months. I know btrfs
is a little bit beta, but I thought not using any fancy features like
quotas, snapshotting, raid, etc. would keep me on the safe side.
Then I tried a software upgrade (Ubuntu 15.10 ->
On 2016-05-17 08:23, David Sterba wrote:
On Tue, May 17, 2016 at 07:14:12AM -0400, Austin S. Hemmelgarn wrote:
By this example I don't mean that JSON has to be the format -- in fact
it's a terrible format with all sorts of problems -- any format which
is parseable with C libraries would do
On 2016-05-17 07:24, Alex Lyakas wrote:
RFC: This patch not for merging, but only for review and discussion.
When mounting, we consider only the primary superblock on each device.
But when writing the superblocks, we might silently ignore errors
from the primary superblock, if we succeeded to
On 2016-05-17 02:27, Ferry Toth wrote:
On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
On Sun, 15 May 2016 21:11:11 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
Ferry Toth posted on Sun, 15 May 2016 12:12:09 +0000 as excerpted:
You can go there with only one additional HDD
On 2016-05-16 23:42, Chris Murphy wrote:
On Mon, May 16, 2016 at 5:44 PM, Richard A. Lochner wrote:
Chris,
It has actually happened to me three times that I know of in ~7mos.,
but your point about the "larger footprint" for data corruption is a
good one. No doubt I have
On 2016-05-17 05:33, David Sterba wrote:
On Mon, May 16, 2016 at 01:14:56PM +0100, Richard W.M. Jones wrote:
I don't have time to implement this right now, so I'm just posting
this as a suggestion/request ...
Neither do I, but I agree with the idea and the proposed way. Here
are my notes
On 2016-05-16 08:14, Richard W.M. Jones wrote:
I don't have time to implement this right now, so I'm just posting
this as a suggestion/request ...
It would be really helpful if the btrfs tools had a machine-readable
output.
Libguestfs parses btrfs tools output in a number of places, eg:
On 2016-05-16 07:34, Andrei Borzenkov wrote:
16.05.2016 14:17, Austin S. Hemmelgarn wrote:
On 2016-05-13 17:35, Chris Murphy wrote:
On Fri, May 13, 2016 at 9:28 AM, Nikolaus Rath <nikol...@rath.org> wrote:
On May 13 2016, Duncan <1i5t5.dun...@cox.net> wrote:
Because btrfs can be
On 2016-05-16 02:20, Qu Wenruo wrote:
Duncan wrote on 2016/05/16 05:59 +0000:
Qu Wenruo posted on Mon, 16 May 2016 10:24:23 +0800 as excerpted:
IIRC clear_cache option is fs level option.
So the first mount with clear_cache, then all subvolume will have
clear_cache.
Question: Does
On 2016-05-16 02:07, Chris Murphy wrote:
Current hypothesis
"I suspected, and I still suspect that the error occurred upon a
metadata update that corrupted the checksum for the file, probably due
to silent memory corruption. If the checksum was silently corrupted,
it would be simply written to
On 2016-05-15 08:12, Ferry Toth wrote:
Is there anything going on in this area?
We have btrfs in RAID10 using 4 HDD's for many years now with a rotating
scheme of snapshots for easy backup. <10% files (bytes) change between
oldest snapshot and the current state.
However, the filesystem seems
On 2016-05-13 17:35, Chris Murphy wrote:
On Fri, May 13, 2016 at 9:28 AM, Nikolaus Rath wrote:
On May 13 2016, Duncan <1i5t5.dun...@cox.net> wrote:
Because btrfs can be multi-device, it needs some way to track which
devices belong to each filesystem, and it uses filesystem
On 2016-05-13 12:28, Goffredo Baroncelli wrote:
On 2016-05-11 21:26, Austin S. Hemmelgarn wrote:
(although it can't tell the difference between a corrupted checksum and a
corrupted block of data).
I don't think so. The data checksums are stored in metadata blocks, and as
metadata block
On 2016-05-12 16:54, Mark Fasheh wrote:
On Wed, May 11, 2016 at 07:36:59PM +0200, David Sterba wrote:
On Tue, May 10, 2016 at 07:52:11PM -0700, Mark Fasheh wrote:
Taking your history with qgroups out of this btw, my opinion does not
change.
With respect to in-memory only dedupe, it is my
On 2016-05-13 07:07, Niccolò Belli wrote:
On Thursday, 12 May 2016 17:43:38 CEST, Austin S. Hemmelgarn wrote:
That's probably a good indication of the CPU and the MB being OK, but
not necessarily the RAM. There are two other possible options for
testing the RAM that haven't been mentioned yet
On 2016-05-12 13:49, Richard A. Lochner wrote:
Austin,
I rebooted the computer and reran the scrub to no avail. The error is
consistent.
The reason I brought this question to the mailing list is because it
seemed like a situation that might be of interest to the developers.
Perhaps, there