On 2014-02-13 12:33, Chris Murphy wrote:
On Feb 13, 2014, at 1:50 AM, Frank Kingswood
fr...@kingswood-consulting.co.uk wrote:
On 12/02/14 17:13, Saint Germain wrote:
Ok, based on your advice, here is what I have done so far to use UEFI
(remember that the objective is to have a clean and
On 02/10/2014 08:41 AM, Brendan Hide wrote:
On 2014/02/10 04:33 AM, Austin S Hemmelgarn wrote:
[snip]
Apparently, trying to use -mconvert=dup or -sconvert=dup on a
multi-device filesystem using one of the RAID profiles for metadata
fails with a statement to look at the kernel log, which
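The restriction described in this thread can be sketched as a standalone check. The function below is purely illustrative (it is not kernel code, and the name convert_allowed is mine): it models the historical rule that a balance -mconvert/-sconvert to dup was rejected on multi-device filesystems, while RAID profiles require a minimum device count.

```python
# Illustrative model (not actual btrfs code) of the balance-convert check
# discussed above: older kernels rejected converting metadata or system
# chunks to the "dup" profile on a filesystem with more than one device.

def convert_allowed(target_profile: str, num_devices: int) -> bool:
    """Model whether a balance convert to target_profile would be accepted."""
    min_devices = {"single": 1, "dup": 1, "raid0": 2, "raid1": 2,
                   "raid10": 4, "raid5": 2, "raid6": 3}
    if target_profile == "dup" and num_devices > 1:
        # The restriction in question: dup only on single-device filesystems.
        return False
    return num_devices >= min_devices[target_profile]

print(convert_allowed("dup", 2))     # rejected on a multi-device filesystem
print(convert_allowed("dup", 1))     # accepted on a single device
print(convert_allowed("raid10", 3))  # raid10 needs at least 4 devices
```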
On 02/14/2014 02:56 AM, Brendan Hide wrote:
On 14/02/14 05:42, Austin S. Hemmelgarn wrote:
On 2014/02/10 04:33 AM, Austin S Hemmelgarn wrote:
Do you happen to know which git repository and branch is
preferred to base patches on? I'm getting ready to write one to
fix this, and would like
This greatly reduces the chances of the operation causing data loss due
to a read error during the device delete.
Signed-off-by: Austin S. Hemmelgarn ahferro...@gmail.com
---
fs/btrfs/volumes.c | 21 +
1 file changed, 17 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/volumes.c b
On 2014-02-24 08:37, Ilya Dryomov wrote:
On Thu, Feb 20, 2014 at 6:57 PM, David Sterba dste...@suse.cz wrote:
On Wed, Feb 19, 2014 at 11:10:41AM -0500, Austin S Hemmelgarn wrote:
Currently, btrfs balance start fails when trying to convert metadata or
system chunks to dup profile on filesystems
On 2014-02-24 09:12, Ilya Dryomov wrote:
On Mon, Feb 24, 2014 at 3:44 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-02-24 08:37, Ilya Dryomov wrote:
On Thu, Feb 20, 2014 at 6:57 PM, David Sterba dste...@suse.cz wrote:
On Wed, Feb 19, 2014 at 11:10:41AM -0500, Austin S Hemmelgarn
On 03/09/2014 04:17 AM, Swâmi Petaramesh wrote:
On Sunday 9 March 2014 at 08:48:20, KC wrote:
I am experiencing massive performance degradation on my BTRFS
root partition on SSD.
BTW, is BTRFS still an SSD-killer? It had this reputation a while
ago, and I'm not sure if this still is the
On 2014-03-14 09:46, George Mitchell wrote:
Actually, an interesting concept would be to have the initial two drive
RAID 1 mirrored by 2 additional drives in 4-way configuration on a
second machine at a remote location on a private high speed network with
both machines up 24/7. In that case,
On 2014-04-04 04:02, Swâmi Petaramesh wrote:
Hi,
I'm going to receive a new small laptop with a 500 GB 5400 RPM mechanical
ole' rust HD, and I plan to install BTRFS on it.
It will have a kernel 3.13 for now, until 3.14 gets released.
However I'm still concerned with chronic BTRFS
On 2014-04-04 08:48, Swâmi Petaramesh wrote:
On Friday 4 April 2014 at 08:33:10, Austin S Hemmelgarn wrote:
However I'm still concerned with chronically dreadful BTRFS performance and
still find that BTRFS degrades much over time even with periodic defrag
and best practices etc.
I keep hearing
On 2014-04-05 07:10, Swâmi Petaramesh wrote:
On Saturday 5 April 2014 at 10:12:17, Duncan wrote [excellent performance
advice about disabling Akonadi in BTRFS etc]:
Thanks Duncan for all this excellent discussion.
However I'm still rather puzzled with a filesystem for which advice is if
you
On 2014-04-08 07:56, Clemens Eisserer wrote:
Hi,
This is because every other filesystem (except ZFS) doesn't use COW
semantics.
Nilfs2 also is COW based.
Regards, Clemens
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to
On 2014-04-23 21:19, Marc MERLIN wrote:
Oh while we're at it, are there companies that can say they are using btrfs
in production?
Marc
Ohio Gravure Technologies is currently preparing to use it on our next
generation of production systems.
On 2014-04-25 13:24, Chris Murphy wrote:
On Apr 25, 2014, at 8:57 AM, Steve Leung sjle...@shaw.ca wrote:
Hi list,
I've got a 3-device RAID1 btrfs filesystem that started out life as
single-device.
btrfs fi df:
Data, RAID1: total=1.31TiB, used=1.07TiB
System, RAID1: total=32.00MiB,
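As an aside, the logical figures that btrfs fi df prints can be translated into raw disk consumption by multiplying by the profile's replication factor; btrfs RAID1 keeps exactly two copies of each chunk even on a 3-device filesystem. A minimal sketch (the helper name and table are mine):

```python
# Convert the logical "used" figures from `btrfs fi df` into raw on-disk
# consumption. Under btrfs RAID1 every chunk exists on exactly two devices,
# regardless of how many devices are in the filesystem.

REPLICATION = {"single": 1, "dup": 2, "raid1": 2, "raid10": 2}

def raw_bytes(logical_bytes, profile):
    """Raw disk space consumed for a given logical usage and profile."""
    return logical_bytes * REPLICATION[profile]

TiB = 1024 ** 4
# Using the Data figure quoted above (used=1.07TiB, RAID1):
print(raw_bytes(1.07 * TiB, "raid1") / TiB)  # about 2.14 TiB of raw space
```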
On 2014-04-25 14:43, Steve Leung wrote:
On 04/25/2014 12:12 PM, Austin S Hemmelgarn wrote:
On 2014-04-25 13:24, Chris Murphy wrote:
On Apr 25, 2014, at 8:57 AM, Steve Leung sjle...@shaw.ca wrote:
I've got a 3-device RAID1 btrfs filesystem that started out life as
single-device.
btrfs fi
On 2014-04-30 14:16, Felix Homann wrote:
Hi,
a couple of months ago there has been some discussion about issues
when using btrfs on bcache:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/31018
From looking at the mailing list archives I cannot tell whether or not
this issue has
On 05/02/2014 03:21 PM, Chris Murphy wrote:
On May 2, 2014, at 2:23 AM, Duncan 1i5t5.dun...@cox.net wrote:
Something tells me btrfs replace (not device replace, simply
replace) should be moved to btrfs device replace…
The syntax for btrfs device is different though; replace is like
On 05/16/2014 04:41 PM, Tomasz Chmielewski wrote:
On Fri, 16 May 2014 14:06:24 -0400
Calvin Walton calvin.wal...@kepstin.ca wrote:
No comment on the performance issue, other than to say that I've seen
similar on RAID-10 before, I think.
Also, what happens when the system crashes, and one
On 2014-05-19 13:12, Konstantinos Skarlatos wrote:
On 19/5/2014 7:01 PM, Brendan Hide wrote:
On 19/05/14 15:00, Scott Middleton wrote:
On 19 May 2014 09:07, Marc MERLIN m...@merlins.org wrote:
On Wed, May 14, 2014 at 11:36:03PM +0800, Scott Middleton wrote:
I read so much about BtrFS that I
On 2014-05-19 22:07, Russell Coker wrote:
On Mon, 19 May 2014 23:47:37 Brendan Hide wrote:
This is extremely difficult to measure objectively. Subjectively ... see
below.
[snip]
*What other failure modes* should we guard against?
I know I'd sleep a /little/ better at night knowing that a
On 2014-05-21 19:05, Martin wrote:
Very good comment from Ashford.
Sorry, but I see no advantage in Russell's replies other than a
feel-good factor or a dangerous false sense of security. At best,
there is a weak justification that for metadata, again going from 2% to
4% isn't
On 05/24/2014 12:44 PM, john terragon wrote:
Hi.
I'm playing around with (software) raid0 on SSDs and since I remember
I read somewhere that intel recommends 128K stripe size for HDD arrays
but only 16K stripe size for SSD arrays, I wanted to see how a
small(er) stripe size would work on my
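The stripe-size trade-off mentioned here can be made concrete with a little arithmetic: a smaller stripe spreads one contiguous request across more member devices. A hedged sketch (the function name and the 4-device / 256 KiB figures are invented for illustration):

```python
# How many raid0 member devices a single contiguous read touches for a
# given stripe (chunk) size. Smaller stripes spread one request over more
# devices: more seeks on rotating disks, more parallelism on SSDs.

def devices_touched(offset_kib: int, length_kib: int,
                    stripe_kib: int, num_devices: int) -> int:
    first = offset_kib // stripe_kib
    last = (offset_kib + length_kib - 1) // stripe_kib
    return min(last - first + 1, num_devices)

for stripe_kib in (16, 128):
    # A 256 KiB read at offset 0: 4 devices with 16 KiB stripes,
    # only 2 devices with 128 KiB stripes.
    print(stripe_kib, devices_touched(0, 256, stripe_kib, 4))
```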
On 05/26/2014 05:04 PM, Michael Welsh Duggan wrote:
Michael Welsh Duggan m...@md5i.com writes:
I am now getting the following error when trying to do a btrfs send:
root@maru2:/usr/local/src/btrfs-progs# ./btrfs send
/usr/local/snapshots/2014-05-15 /backup/intermediate
At subvol
On 2014-06-16 03:54, Swâmi Petaramesh wrote:
Hi,
I created a BTRFS filesystem over LVM over LUKS encryption on an SSD [yes, I
know...], and I noticed that the FS got created with metadata in DUP mode,
contrary to what man mkfs.btrfs says for SSDs - it is supposed to be
SINGLE...
On 2014-06-16 06:35, Russell Coker wrote:
On Mon, 16 Jun 2014 12:14:49 Lennart Poettering wrote:
On Mon, 16.06.14 10:17, Russell Coker (russ...@coker.com.au) wrote:
I am not really following though why this trips up btrfs though. I am
not sure I understand why this breaks btrfs COW behaviour.
On 2014-06-16 07:18, Swâmi Petaramesh wrote:
Hi Austin, and thanks for your reply.
On Monday 16 June 2014 at 07:09:55, Austin S Hemmelgarn wrote:
What mkfs.btrfs looks at is
/sys/block/whatever-device/queue/rotational, if that is 1 it knows
that the device isn't an SSD. I believe that LVM
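The sysfs check being described is easy to reproduce from userspace. A minimal sketch (the helper name is mine, not part of mkfs.btrfs); note that stacked devices such as LVM over LUKS may report rotational=1 even when the underlying device is an SSD, which is the behavior under discussion:

```python
# Read the same sysfs flag mkfs.btrfs consults: "1" means the kernel
# believes the device is rotational (spinning rust), "0" means
# non-rotational (typically an SSD). Stacked devices (LVM, dm-crypt)
# may not propagate the flag from the underlying disk.

def is_rotational(sysfs_path: str) -> bool:
    """sysfs_path is e.g. /sys/block/sda/queue/rotational."""
    with open(sysfs_path) as f:
        return f.read().strip() == "1"
```

On a real system this would be called as is_rotational("/sys/block/sda/queue/rotational").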
On 06/16/2014 03:52 PM, Martin wrote:
On 16/06/14 17:05, Josef Bacik wrote:
On 06/16/2014 03:14 AM, Lennart Poettering wrote:
On Mon, 16.06.14 10:17, Russell Coker (russ...@coker.com.au) wrote:
I am not really following though why this trips up btrfs though. I am
not sure I understand why
On 2014-06-18 16:10, Chris Murphy wrote:
On Jun 18, 2014, at 1:29 PM, Daniel Cegiełka daniel.cegie...@gmail.com
wrote:
Hi,
I created btrfs directly to disk using such a scheme (no partitions):
dd if=/dev/zero of=/dev/sda bs=4096
mkfs.btrfs -L dev_sda /dev/sda
mount /dev/sda /mnt
cd
I have a few questions about the BTRFS_IOC_FILE_EXTENT_SAME ioctl, and
was hoping that I could get answers here without having to go source
diving or trying to test things myself:
1. What kind of overhead is there when it is called on a group of
extents that aren't actually the same (aside from
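On question 1: the kernel reads and compares both ranges before sharing anything, so calling the ioctl on ranges that differ still costs reading both ranges in full. A userspace model of that compare-then-share step (illustrative only; this is not the ioctl's implementation, and the return strings are mine):

```python
# Illustrative model of what BTRFS_IOC_FILE_EXTENT_SAME must do: the source
# and destination ranges are read and compared before any extents are
# shared, so a call on ranges that differ still pays for reading both.

def extent_same(src: bytes, dst: bytes) -> str:
    """Return 'deduped' if the ranges match byte-for-byte, else 'differs'."""
    if len(src) != len(dst):
        return "differs"
    # The real ioctl compares in-kernel, page by page; a memcmp-style
    # comparison models the cost: every byte is examined on a full match.
    return "deduped" if src == dst else "differs"

print(extent_same(b"A" * 4096, b"A" * 4096))  # deduped
print(extent_same(b"A" * 4096, b"B" * 4096))  # differs
```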
I somehow have doubts that a complex filesystem is the right project for
me to start learning C, so I'll have to pass :-) No huge corporation
with that itch behind me either, and I guess it will be more than a few
hours for a btrfs programmer so no way I could sponsor that on my own.
Whether
On 2014-06-27 12:34, Goffredo Baroncelli wrote:
Hi,
On 06/27/2014 05:44 PM, Zhe Zhang wrote:
Hi,
I set up 2 Linux servers to share the same device through iSCSI. Then I
created a btrfs on the device. Then I saw the problem that the 2 Linux
servers do not see a consistent file system image.
On Fri, 27 Jun 2014 13:15:16 Austin S Hemmelgarn wrote:
The reason it appears to work when using iSCSI and not with directly
connected parallel SCSI or SAS is that iSCSI doesn't provide low level
hardware access.
I've tried this with dual-attached FC and had no problems mounting
On 2014-07-07 09:54, Konstantinos Skarlatos wrote:
On 7/7/2014 4:38 PM, André-Sebastian Liebe wrote:
Hello List,
can anyone tell me how much time is acceptable and expected for a
multi-disk btrfs array with classical hard disk drives to mount?
I'm having a bit of trouble with my current
On 2014-07-09 22:10, Russell Coker wrote:
On Wed, 9 Jul 2014 16:48:05 Martin Steigerwald wrote:
- for someone using SAS or enterprise SATA drives with Linux, I
understand btrfs gives the extra benefit of checksums, are there any
other specific benefits over using mdadm or dmraid?
I think I
On 07/10/2014 07:32 PM, Tomasz Kusmierz wrote:
Hi all !
So it's been some time with btrfs, and so far I was very pleased, but
since I've upgraded ubuntu from 13.10 to 14.04 problems started to
occur (YES I know this might be unrelated).
So in the past I've had problems with btrfs which
On 07/20/2014 10:00 AM, Tomasz Torcz wrote:
On Sun, Jul 20, 2014 at 01:53:34PM +, Duncan wrote:
TM posted on Sun, 20 Jul 2014 08:45:51 + as excerpted:
One week for a raid10 rebuild of 4x3TB drives is a very long time.
Any thoughts?
Can you share any statistics from your RAID10 rebuilds?
On 07/24/2014 05:28 PM, Chris Mason wrote:
On 06/26/2014 11:53 PM, Qu Wenruo wrote:
Current btrfs will only use the first superblock, making the backup
superblocks only useful for 'btrfs rescue super' command.
The old problem is that if we use backup superblocks when the first
superblock
On 07/27/2014 08:29 PM, Qu Wenruo wrote:
Original Message
Subject: Re: [PATCH RFC] btrfs: Use backup superblocks if and only if
the first superblock is valid but corrupted.
From: Austin S Hemmelgarn ahferro...@gmail.com
To: Chris Mason c...@fb.com, Qu Wenruo quwen
On 07/27/2014 04:47 PM, Nick Krause wrote:
This may be a bad idea, but compression in btrfs seems to be only
using one core to compress.
Depending on the CPU used and the number of cores in the CPU we can
make this much faster
with multiple cores. This seems bad by my reading at least I
On 07/27/2014 11:21 PM, Nick Krause wrote:
On Sun, Jul 27, 2014 at 10:56 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 07/27/2014 04:47 PM, Nick Krause wrote:
This may be a bad idea, but compression in btrfs seems to be only
using one core to compress.
Depending on the CPU used
On 2014-07-28 11:57, Nick Krause wrote:
On Mon, Jul 28, 2014 at 11:13 AM, Nick Krause xerofo...@gmail.com
wrote:
On Mon, Jul 28, 2014 at 6:10 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 07/27/2014 11:21 PM, Nick Krause wrote:
On Sun, Jul 27, 2014 at 10:56 PM, Austin S Hemmelgarn
On 2014-07-29 13:08, Nick Krause wrote:
On Mon, Jul 28, 2014 at 2:36 PM, Nick Krause xerofo...@gmail.com wrote:
On Mon, Jul 28, 2014 at 12:19 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
On 2014-07-28 11:57, Nick Krause wrote:
On Mon, Jul 28, 2014 at 11:13 AM, Nick Krause xerofo
On 07/31/2014 07:54 PM, Timofey Titovets wrote:
Good time of day.
I have several questions about data deduplication on btrfs.
Sorry if I ask stupid questions or waste your time %)
What about implementation of offline data deduplication? I don't see
any activity in this area, maybe I need
On 08/01/2014 02:55 PM, Mark Fasheh wrote:
On Fri, Aug 01, 2014 at 10:16:08AM -0400, Austin S Hemmelgarn wrote:
On 2014-08-01 09:23, David Sterba wrote:
On Fri, Aug 01, 2014 at 06:17:44AM -0400, Austin S Hemmelgarn wrote:
I do think however that having the option of a background thread doing
On 2014-08-04 09:17, Peter Waller wrote:
For anyone else having this problem, this article is fairly useful for
understanding disk full problems and rebalance:
http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html
It actually covers the problem that
On 2014-08-04 10:11, Peter Waller wrote:
On 4 August 2014 15:02, Austin S Hemmelgarn ahferro...@gmail.com wrote:
I really disagree with the statement that adding more storage is
difficult or expensive, all you need to do is plug in a 2G USB flash
drive, or allocate a ramdisk, and add
On 2014-08-04 06:31, Peter Waller wrote:
Thanks Hugo, this is the most informative e-mail yet! (more inline)
On 4 August 2014 11:22, Hugo Mills h...@carfax.org.uk wrote:
* btrfs fi show
- look at the total and used values. If used < total, you're OK.
If used == total, then you
On 2014-08-05 04:20, Duncan wrote:
Austin S Hemmelgarn posted on Mon, 04 Aug 2014 13:09:23 -0400 as
excerpted:
Think of each chunk like a box, and each block as a block, and that you
have two different types of block (data and metadata) and two different
types of box (also data and metadata
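The box analogy above maps onto a tiny allocator model. The sketch below (all names and sizes invented) shows how a filesystem can run out of raw space for new data chunks while existing metadata chunks still have room inside them, which is the classic btrfs "disk full with free space" situation:

```python
# Toy model of btrfs chunk ("box") allocation: raw disk space is handed
# out in fixed-size chunks dedicated to either data or metadata. Once all
# raw space is boxed up, a write needing a new data chunk fails even
# though existing metadata boxes may be nearly empty inside.

CHUNK = 1  # GiB per chunk, invented for the example

class Disk:
    def __init__(self, size_gib: int):
        self.free_raw = size_gib
        self.chunks = {"data": 0, "metadata": 0}

    def allocate_chunk(self, kind: str) -> bool:
        if self.free_raw < CHUNK:
            return False  # ENOSPC: no raw space left for a new box
        self.free_raw -= CHUNK
        self.chunks[kind] += 1
        return True

disk = Disk(size_gib=4)
for _ in range(3):
    disk.allocate_chunk("data")
disk.allocate_chunk("metadata")
# All raw space is now boxed; the metadata box may be nearly empty, but a
# new data chunk still cannot be allocated:
print(disk.allocate_chunk("data"))  # False
```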
On 08/10/2014 03:21 PM, Vimal A R wrote:
Hello,
I came across the to-do list at
https://btrfs.wiki.kernel.org/index.php/Project_ideas and would like to know
if this list is updated and recent.
I am looking for a project idea for my undergraduate degree which can be
completed in
On 08/11/2014 04:27 PM, Chris Murphy wrote:
On Aug 10, 2014, at 8:53 PM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Another thing that isn't listed there, that I would personally
love to see is support for secure file deletion. To be truly
secure though, this would need to hook
On 2014-08-12 11:52, David Pottage wrote:
On 11/08/14 03:53, Austin S Hemmelgarn wrote:
Another thing that isn't listed there, that I would personally love to
see is support for secure file deletion. To be truly secure though,
this would need to hook into the COW logic so that files
On 2014-08-14 10:30, G. Richard Bellamy wrote:
On Wed, Aug 13, 2014 at 9:23 PM, Chris Murphy li...@colorremedies.com wrote:
lsattr /var/lib/libvirt/images/atlas.qcow2
Is the xattr actually in place on that file?
2014-08-14 07:07:36
$ filefrag /var/lib/libvirt/images/atlas.qcow2
On 2014-08-19 12:21, M G Berberich wrote:
Hello,
we are thinking about using BtrFS on standard hardware for a
fileserver with about 50T (100T raw) of storage (25×4TByte).
This is what I understood so far. Is this right?
· incremental send/receive works.
· There is no support for
On 08/19/2014 05:38 PM, Andrej Manduch wrote:
Hi,
On 08/19/2014 06:21 PM, M G Berberich wrote:
· Are there any reports/papers/web-pages about BtrFS-systems this size
in use? Praises, complaints, performance-reviews, whatever…
I don't know about papers or benchmarks but few weeks ago
On 2014-08-20 23:22, Shriramana Sharma wrote:
Hello. People on this list have been kind enough to reply to my
technical questions. However, seeing the high number of mails on this
list, esp with the title PATCH, I have a question about the
development itself:
Is this just an indication of a
On 2014-08-22 07:59, Shriramana Sharma wrote:
Hello. I've seen repeated advice to use the latest kernel. While
hearing of the recent compression bug affecting recent kernels does
somewhat warn one off the previous advice, I would like to know what
people who are running regular distros do to
On 2014-08-22 14:22, Rich Freeman wrote:
On Fri, Aug 22, 2014 at 8:04 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
I personally use Gentoo Unstable on all my systems, so I build all my
kernels locally anyway, and stay pretty much in-line with the current
stable Mainline kernel
On 2014-08-24 15:48, Chris Murphy wrote:
On Aug 24, 2014, at 10:59 AM, Flash ROM flashromg...@yandex.com wrote:
While it sounds dumb, this strange thing is done to put the partition table
in a separate erase block, so it never gets read-modify-written when FAT
entries are updated. Should something
I wholeheartedly agree. Of course, getting something other than CFQ as
the default I/O scheduler is going to be a difficult task. Enough
people upstream are convinced that we all NEED I/O priorities, when most
of what I see people doing with them is bandwidth provisioning, which
can be done much
On 2014-09-02 14:31, G. Richard Bellamy wrote:
I thought I'd follow-up and give everyone an update, in case anyone
had further interest.
I've rebuilt the RAID10 volume in question with a Samsung 840 Pro for
bcache front device.
It's 5x600GB SAS 15k RPM drives RAID10, with the 512MB SSD
On 2014-09-07 16:38, Or Tal wrote:
Hi,
I've created a new raid10 array from 4, 4TB drives in order to migrate
old data to it.
As I didn't have enough sata ports, I:
- disconnected one of the raid10 disks to free a sata port,
- connected an old disk I wanted to migrate,
- mounted the array
On 2014-09-10 08:27, Bob Williams wrote:
I have two 2TB disks formatted as a btrfs raid1 array, mirroring both
data and metadata. Last night I started
# btrfs filesystem balance path
In general, unless things are really bad, you don't ever want to use
balance on such a big filesystem
On 2014-09-10 09:48, Rich Freeman wrote:
On Wed, Sep 10, 2014 at 9:06 AM, Austin S Hemmelgarn
ahferro...@gmail.com wrote:
Normally, you shouldn't need to run balance at all on most BTRFS
filesystems, unless your usage patterns vary widely over time (I'm
actually a good example of this, most
On 2014-09-11 02:40, Russell Coker wrote:
On Mon, 8 Sep 2014, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Also, I've found out the hard way that system chunks really should be
RAID1, NOT RAID10, otherwise it's very likely that the filesystem
won't mount at all if you lose 2 disks.
Why
On 2014-09-11 07:38, Hugo Mills wrote:
On Thu, Sep 11, 2014 at 07:19:00AM -0400, Austin S Hemmelgarn wrote:
On 2014-09-11 02:40, Russell Coker wrote:
Also it would be nice if there was an N-way mirror option for system data.
As
such data is tiny (32MB on the 120G filesystem in my
So, I just recently had to hard reset a system running root on BTRFS,
and when it tried to come back up, it choked on the root filesystem.
Based on the kernel messages, the primary issue is log corruption, and
in theory btrfs-zero-log should fix it. The actual issue however, is
that the primary
On 2014-09-16 16:57, Chris Murphy wrote:
On Sep 16, 2014, at 8:40 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Based on the kernel messages, the primary issue is log corruption, and
in theory btrfs-zero-log should fix it.
Can you provide a complete dmesg somewhere for this initial
On 09/17/2014 02:57 PM, Chris Murphy wrote:
On Sep 17, 2014, at 5:23 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Thanks for all the help.
Well, it's not much help. It seems possible to corrupt a primary superblock
that points to a corrupt tree root, and use btrfs rescue super
On 09/17/2014 04:22 PM, Duncan wrote:
Austin S Hemmelgarn posted on Wed, 17 Sep 2014 07:23:46 -0400 as
excerpted:
I've also discovered, when trying to use btrfs restore to copy out the
data to a different system, that 3.14.1 restore apparently chokes on
filesystems that have lzo compression
On 2014-09-19 08:18, Rob Spanton wrote:
Hi,
I have a particularly uncomplicated setup (a desktop PC with a hard
disk) and I'm seeing particularly slow performance from btrfs. A `git
status` in the linux source tree takes about 46 seconds after dropping
caches, whereas on other machines using
On 2014-09-19 08:25, Swâmi Petaramesh wrote:
Le vendredi 19 septembre 2014, 13:18:34 Rob Spanton a écrit :
I have a particularly uncomplicated setup (a desktop PC with a hard
disk) and I'm seeing particularly slow performance from btrfs.
Weeelll I have the same over-complicated kind of setup,
On 2014-09-19 08:49, Austin S Hemmelgarn wrote:
On 2014-09-19 08:18, Rob Spanton wrote:
Hi,
I have a particularly uncomplicated setup (a desktop PC with a hard
disk) and I'm seeing particularly slow performance from btrfs. A `git
status` in the linux source tree takes about 46 seconds after
On 2014-09-19 09:51, Holger Hoffstätte wrote:
On Fri, 19 Sep 2014 13:18:34 +0100, Rob Spanton wrote:
I have a particularly uncomplicated setup (a desktop PC with a hard
disk) and I'm seeing particularly slow performance from btrfs. A `git
status` in the linux source tree takes about 46
On 2014-09-19 13:07, Chris Murphy wrote:
Possibly btrfs-select-super can do some of the things I was doing the hard way. It's
possible to select a super to overwrite other supers, even if they're good
ones. Whereas btrfs rescue super-recover won't do that, and neither will btrfsck, hence
why
On 2014-09-19 13:54, Chris Murphy wrote:
On Sep 17, 2014, at 5:23 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
[ 30.920536] BTRFS: bad tree block start 0 130402254848
[ 30.924018] BTRFS: bad tree block start 0 130402254848
[ 30.926234] BTRFS: failed to read log tree
[ 30.953055
On 2014-09-19 14:10, Jeb Thomson wrote:
With the advanced features of btrfs, it would be an additional simple task to
make different platters run in parallel.
In this case, say a disk has three platters, and so three seek heads as well.
If we can identify that much, and what offsets they are
On 2014-09-22 16:51, Stefan G. Weichinger wrote:
On 20.09.2014 at 11:32, Duncan wrote:
What I do as part of my regular backup regime, is every few kernel cycles
I wipe the (first level) backup and do a fresh mkfs.btrfs, activating new
optional features as I believe appropriate. Then I boot
On 2014-09-23 09:06, Stefan G. Weichinger wrote:
On 23.09.2014 at 14:08, Austin S Hemmelgarn wrote:
On 2014-09-22 16:51, Stefan G. Weichinger wrote:
Is re-creating btrfs-filesystems *recommended* in any way?
Does that actually make a difference in the fs-structure?
I would recommend
On 2014-09-23 10:23, Tobias Holst wrote:
If it is unknown, which of these options have been used at btrfs
creation time - is it possible to check the state of these options
afterwards on a mounted or unmounted filesystem?
2014-09-23 15:38 GMT+02:00 Austin S Hemmelgarn ahferro...@gmail.com
On 2014-10-08 15:11, Eric Sandeen wrote:
I was looking at Marc's post:
http://marc.merlins.org/perso/btrfs/post_2014-03-19_Btrfs-Tips_-Btrfs-Scrub-and-Btrfs-Filesystem-Repair.html
and it feels like there isn't exactly a cohesive, overarching vision for
repair of a corrupted btrfs filesystem.
On 2014-10-09 07:53, Duncan wrote:
Austin S Hemmelgarn posted on Thu, 09 Oct 2014 07:29:23 -0400 as
excerpted:
Also, you should be running btrfs scrub regularly to correct bit-rot
and force remapping of blocks with read errors. While BTRFS
technically handles both transparently on reads
On 2014-10-09 08:12, Hugo Mills wrote:
On Thu, Oct 09, 2014 at 08:07:51AM -0400, Austin S Hemmelgarn wrote:
On 2014-10-09 07:53, Duncan wrote:
Austin S Hemmelgarn posted on Thu, 09 Oct 2014 07:29:23 -0400 as
excerpted:
Also, you should be running btrfs scrub regularly to correct bit-rot
On 2014-10-09 08:34, Duncan wrote:
On Thu, 09 Oct 2014 08:07:51 -0400
Austin S Hemmelgarn ahferro...@gmail.com wrote:
On 2014-10-09 07:53, Duncan wrote:
Austin S Hemmelgarn posted on Thu, 09 Oct 2014 07:29:23 -0400 as
excerpted:
Also, you should be running btrfs scrub regularly to correct
On 2014-10-10 13:43, Bob Marley wrote:
On 10/10/2014 16:37, Chris Murphy wrote:
The fail safe behavior is to treat the known good tree root as the
default tree root, and bypass the bad tree root if it cannot be
repaired, so that the volume can be mounted with default mount options
(i.e. the
On 2014-10-10 18:05, Eric Sandeen wrote:
On 10/10/14 2:35 PM, Austin S Hemmelgarn wrote:
On 2014-10-10 13:43, Bob Marley wrote:
On 10/10/2014 16:37, Chris Murphy wrote:
The fail safe behavior is to treat the known good tree root as
the default tree root, and bypass the bad tree root
On 2014-10-12 06:14, Martin Steigerwald wrote:
On Friday, 10 October 2014 at 10:37:44, Chris Murphy wrote:
On Oct 10, 2014, at 6:53 AM, Bob Marley bobmar...@shiftmail.org wrote:
On 10/10/2014 03:58, Chris Murphy wrote:
* mount -o recovery
Enable autorecovery attempts if a bad tree
On 2014-10-14 18:25, Robert White wrote:
I've got no idea if this is possible given the current storage layout,
but it would be Really Nice™ if there were a way to have a single
subvolume exist in more than one place in the hierarchy. I know this can be
faked via mount tricks (bind or use of
On 2014-10-20 09:02, Zygo Blaxell wrote:
On Mon, Oct 20, 2014 at 04:38:28AM +, Duncan wrote:
Russell Coker posted on Sat, 18 Oct 2014 14:54:19 +1100 as excerpted:
# find . -name *546
./1412233213.M638209P10546
# ls -l ./1412233213.M638209P10546
ls: cannot access
On 2014-10-21 05:29, Duncan wrote:
David Sterba posted on Mon, 20 Oct 2014 18:34:03 +0200 as excerpted:
On Thu, Oct 16, 2014 at 01:33:37PM +0200, David Sterba wrote:
I'd like to make it default with the 3.17 release of btrfs-progs.
Please let me know if you have objections.
For the record,
On 2014-10-21 11:34, Cristian Falcas wrote:
I will start investigating how we can build our own rpms from the 3.16
sources. Until then we are stuck with the ones from the official repos
or elrepo. Which means 3.10 is the latest for el6. We used this until
now and it seems we were lucky enough to
On 2014-10-21 16:44, Arnaud Kapp wrote:
Hello,
I would like to ask if the balance time is related to the number of
snapshots or if it is related only to data (or both).
I currently have about 4TB of data and around 5k snapshots. I'm thinking
of going raid1 instead of single. From the numbers
On 2014-10-21 21:10, Robert White wrote:
I don't think balance will _ever_ move the contents of a read only
snapshot. I could be wrong. I think you just end up with an endlessly
fragmented storage space and balance has to take each chunk and search
for someplace else it might better fit. Which
On 2014-10-22 16:08, Robert White wrote:
So the documentation is clear that you can't mount a swap file through
BTRFS (unless you use a loop device).
Why is a NOCOW file that has been fully pre-allocated -- as with
fallocate(1) -- not suitable for swapping?
I found one reference to an
On 2014-10-23 05:19, Miao Xie wrote:
On Wed, 22 Oct 2014 14:40:47 +0200, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding a 6th device
to my FS, it took
On 2014-10-26 13:20, Larkin Lowrey wrote:
On 10/24/2014 10:28 PM, Duncan wrote:
Robert White posted on Fri, 24 Oct 2014 19:41:32 -0700 as excerpted:
On 10/24/2014 04:49 AM, Marc MERLIN wrote:
On Thu, Oct 23, 2014 at 06:04:43PM -0500, Larkin Lowrey wrote:
I have a 240GB VirtualBox vdi image
On 2014-10-30 05:26, lu...@plaintext.sk wrote:
Hi,
I want to ask whether deduplicated file content will be cached in the Linux
kernel just once for two deduplicated files.
To explain in depth:
- I use btrfs for the whole system, with a few subvolumes and compression on
some subvolumes.
- I have two
On 2014-11-18 02:29, Brendan Hide wrote:
Hey, guys
See further below extracted output from a daily scrub showing csum
errors on sdb, part of a raid1 btrfs. Looking back, it has been getting
errors like this for a few days now.
The disk is patently unreliable but smartctl's output implies there
On 2014-11-20 09:10, Duncan wrote:
Bardur Arantsson posted on Thu, 20 Nov 2014 14:17:52 +0100 as excerpted:
If you have no other backups, I would really recommend that you *don't*
use btrfs for your backup, or at least have a *third* backup which isn't
on btrfs -- there are *still* problems