On Mon, Jul 29, 2019 at 4:17 PM Swâmi Petaramesh wrote:
>
> On 7/29/19 3:29 PM, Lionel Bouton wrote:
> > For another reference point, my personal laptop reports 17 days of
> > uptime on 5.2.0-arch2-1-ARCH.
> > I use BTRFS both over LUKS over LVM and directly over LVM. The system
> > is suspended d
On Mon, Oct 29, 2018 at 7:20 AM Dave wrote:
>
> This is one I have not seen before.
>
> When running a simple, well-tested and well-used script that makes
> backups using btrfs send | receive, I got these two errors:
>
> At subvol snapshot
> ERROR: rename o131621-1091-0 ->
> usr/lib/node_modules/n
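For context, a minimal sketch of the kind of incremental pipeline such a backup script typically runs; the paths and snapshot names are hypothetical, not taken from the report:

    # create a read-only snapshot, then send only the delta against the previous one
    btrfs subvolume snapshot -r /home /snapshots/home.new
    btrfs send -p /snapshots/home.prev /snapshots/home.new | btrfs receive /backup/snapshots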
On Thu, Oct 18, 2018 at 6:04 AM Jean-Denis Girard wrote:
>
> Hi list,
>
> My goal is to duplicate some SD cards, to prepare 50 similar Raspberry Pi.
>
> First, I made a tar of my master SD card (unmounted). Then I made a
> script, which creates 2 partitions (50 MB for boot, 14 GB for /),
> creates
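A minimal sketch of such a duplication script, assuming the target card shows up as /dev/mmcblk0 and the root filesystem is btrfs; device name, sizes and archive name are illustrative:

    parted -s /dev/mmcblk0 mklabel msdos \
        mkpart primary fat32 1MiB 51MiB \
        mkpart primary 51MiB 14GiB
    mkfs.vfat /dev/mmcblk0p1            # ~50 MB boot partition
    mkfs.btrfs -f /dev/mmcblk0p2        # ~14 GB root partition
    mount /dev/mmcblk0p2 /mnt
    tar -xpf master-rootfs.tar -C /mnt  # unpack the master image
    umount /mnt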
On Wed, Oct 17, 2018 at 10:29 AM Libor Klepáč wrote:
>
> Hello,
> I have a new 32GB SSD in my Intel NUC, installed Debian 9 on it, using btrfs as
> a rootfs.
> Then I created subvolumes /system and /home and moved the system there.
>
> System was installed using kernel 4.9.x and filesystem created using
On Tue, Sep 5, 2017 at 1:45 PM, Austin S. Hemmelgarn
wrote:
>> - You end up duplicating more data than is strictly necessary. This
>> is, IIRC, something like 128 KiB for a write.
>
> FWIW, I'm pretty sure you can mitigate this first issue by running a regular
> defrag on a semi-regular bas
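A hedged example of the kind of periodic defrag suggested here; the path, schedule and target extent size are illustrative:

    # run from e.g. a weekly cron job; -t sets the target extent size
    btrfs filesystem defragment -r -t 32M /var/lib/libvirt/images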
On Mon, Sep 4, 2017 at 12:34 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> * Autodefrag works very well when these internal-rewrite-pattern files
> are relatively small, say a quarter GiB or less, but, again with near-
> capacity throughput, not necessarily so well with larger databases or VM
> image
On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> I'm trying to speed up big btrfs volumes.
>
> Some facts:
> - Kernel will be 4.13-rc7
> - needed volume size is 60TB
>
> Currently without any SSDs I get the best speed with:
> - 4x HW Raid 5 with 1GB controller memor
On Mon, Sep 4, 2017 at 11:31 AM, Marat Khalili wrote:
> Hello list,
> good time of the day,
>
> More than once I have seen it mentioned on this list that the autodefrag option solves
> problems with no apparent drawbacks, but it's not the default. Can you
> recommend just switching it on indiscriminately on al
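For reference, enabling the option under discussion is a one-line change; a hedged example with a hypothetical UUID and mount point:

    # /etc/fstab
    UUID=12345678-abcd-...  /data  btrfs  defaults,autodefrag  0  0
    # or switch it on at runtime:
    mount -o remount,autodefrag /data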
On Mon, Sep 4, 2017 at 7:19 AM, Russell Coker
wrote:
> I have a system with less than 50% disk space used. It just started rejecting
> writes due to lack of disk space. I ran "btrfs balance" and then it started
> working correctly again. It seems that a btrfs filesystem if left alone will
> eve
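A hedged sketch of the kind of filtered balance that usually reclaims such over-allocated space without rewriting everything; the filter values are illustrative:

    # rewrite only data/metadata chunks that are at most 50% full, compacting them
    btrfs balance start -dusage=50 -musage=50 /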
On Mon, Aug 7, 2017 at 7:12 AM, Thomas Wurfbaum wrote:
> Now I am running btrfs-find-root, but it has been running for 5 days without a result.
> How long should I wait? Or is it already too late to hope?
>
> mainframe:~ # btrfs-find-root.static /dev/sdb1
> parent transid verify failed on 29376512 wanted 132772
On Thu, Jul 27, 2017 at 9:24 PM, Hans van Kranenburg
wrote:
> Device ID numbers always start at 1, not at 0. The first IOC_DEV_INFO
> call does not make sense, since it will always return ENODEV.
When there is a btrfs-replace ongoing, there is a Device ID 0
> ioctl(3, BTRFS_IOC_DEV_INFO, {devid=
On Mon, Jun 19, 2017 at 1:26 PM, Henk Slager wrote:
> On 16-06-17 03:43, Qu Wenruo wrote:
>> Since the incompat feature NO_HOLES still allows us to have explicit hole
>> file extents, the current check is too strict and will cause false alerts
>> like:
>>
>> root 5 EXTE
On Mon, Jun 26, 2017 at 12:37 PM, Lu Fengqi wrote:
> The normal back reference counting doesn't care about the extent referred
> by the extent data in the shared leaf. The check_extent_data_backref
> function needs to skip the leaf whose owner mismatches the root_id.
>
> Reported-by: Marc MERLIN
>> I think I'll leave it as is for the time being, unless there is some news
>> how to fix things with low risk (or maybe via a temp overlay snapshot
>> with DM). But the lowmem check took 2 days, that's not really fun.
>> The goal for the 8TB fs is to have an up to 7 year snapshot history at
>> somet
On Thu, Jun 15, 2017 at 9:13 AM, Qu Wenruo wrote:
>
>
> At 06/14/2017 09:39 PM, Henk Slager wrote:
>>
>> On Tue, Jun 13, 2017 at 12:47 PM, Henk Slager wrote:
>>>
>>> On Tue, Jun 13, 2017 at 7:24 AM, Kai Krakow wrote:
>>>>
>>>&
ict hole file extent check.
>
> Reported-by: Henk Slager
> Signed-off-by: Qu Wenruo
> ---
> cmds-check.c | 6 +-
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/cmds-check.c b/cmds-check.c
> index c052f66e..7bd57677 100644
> --- a/cmds-check.c
On Tue, Jun 13, 2017 at 12:47 PM, Henk Slager wrote:
> On Tue, Jun 13, 2017 at 7:24 AM, Kai Krakow wrote:
>> On Mon, 12 Jun 2017 11:00:31 +0200, Henk Slager wrote:
>>
>>> Hi all,
>>>
>>> there is a 1-block corruption on an 8TB filesystem that sh
On Tue, Jun 13, 2017 at 7:24 AM, Kai Krakow wrote:
> On Mon, 12 Jun 2017 11:00:31 +0200, Henk Slager wrote:
>
>> Hi all,
>>
>> there is a 1-block corruption on an 8TB filesystem that showed up several
>> months ago. The fs is almost exclusively a btrfs recei
Hi all,
there is a 1-block corruption on an 8TB filesystem that showed up several
months ago. The fs is almost exclusively a btrfs receive target and
receives monthly sequential snapshots from two hosts, but with 1 received
uuid. I do not know exactly when the corruption happened, but it
must have been rou
On Wed, Apr 19, 2017 at 11:44 AM, Henk Slager wrote:
> I also have a WD40EZRX and the fs on it is also almost exclusively a
> btrfs receive target and it has now for the second time csum (just 5 )
> errors. Extended selftest at 16K hours shows no problem and I am not
> fully sure
> At 04/18/2017 08:41 PM, Werner Braun wrote:
>>
>> Hi,
>>
>> I have a WD WD40EZRX with strange behaviour of btrfs check vs. btrfs scrub
>>
>> running btrfs check --check-data-csum returns no errors on the disk
>>
>> running btrfs scrub on the disk finds tons of errors
>>
>> I could clear the disk
On Wed, Mar 29, 2017 at 12:01 AM, Jakob Schürz
wrote:
[...]
> There is Subvolume A on both the send and the receive side.
> There is also Subvolume AA on the send side, created from A.
> The parent ID of send-side AA is the ID of A.
> The received ID of A on the receive side is the ID of A.
>
> To send the A
On Sun, Dec 4, 2016 at 7:30 PM, Chris Murphy wrote:
> Hi,
>
> [chris@f25s ~]$ uname -r
> 4.8.11-300.fc25.x86_64
> [chris@f25s ~]$ rpm -q btrfs-progs
> btrfs-progs-4.8.5-1.fc26.x86_64
>
>
> I'm not finding any pattern to this so far, but it's definitely not
> always reliable. Here is today's exampl
Hi all,
I noticed that a monthly differential snapshot creation ended with an
error, although the created snapshot itself seemed OK. To test and be
more confident, I also transferred the diff between the 2016-12-01 and
2016-12-02 snapshots and that went without an error from send or receive. The
send runs on a
On Fri, Nov 11, 2016 at 4:38 PM, David Sterba wrote:
> Hi,
>
> btrfs-progs version 4.8.3 have been released. Handful of fixes and lots of
> cleanups.
>
> Changes:
> * check:
> * support for clearing space cache (v1)
> * size reduction of inode backref structure
> * send:
> * fix ha
> FWIW, I use BTRFS for /boot, but it's not for snapshotting or even the COW,
> it's for DUP mode and the error recovery it provides. Most people don't
> think about this if it hasn't happened to them, but if you get a bad read
> from /boot when loading the kernel or initrd, it can essentially nuk
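A hedged example of creating such a /boot with duplicated data and metadata, assuming a recent btrfs-progs and a hypothetical partition:

    mkfs.btrfs -L boot -d dup -m dup /dev/sda1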
On Mon, Aug 15, 2016 at 8:30 PM, Hugo Mills wrote:
> On Mon, Aug 15, 2016 at 10:32:25PM +0800, Anand Jain wrote:
>>
>>
>> On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote:
>> >On 2016-08-15 10:08, Anand Jain wrote:
>> >>
>> >>
>> IMHO it's better to warn user about 2 devices RAID5 or 3 devic
On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč wrote:
> Hello,
> we use backuppc to backup our hosting machines.
>
> I have recently migrated it to btrfs, so we can use send-receive for offsite
> backups of our backups.
>
> I have several btrfs volumes, each hosts nspawn container, which runs in
>>> It's a Seagate Expansion Desktop 5TB (USB3). It is probably a
>>> ST5000DM000.
>>
>>
>> this is a TGMR, not an SMR disk:
>>
>> http://www.seagate.com/www-content/product-content/desktop-hdd-fam/en-us/docs/100743772a.pdf
>> So it still conforms to a standard recording strategy ...
>
>
> I am not convinced.
On Sun, Jul 17, 2016 at 10:26 AM, Matthias Prager
wrote:
> from my experience btrfs does work as badly with SMR drives (I only had
> the opportunity to test on an 8TB Seagate device-managed drive) as ext4.
> The initial performance is fine (for a few gigabytes / minutes), but
> drops off a cliff as
On Fri, Jul 8, 2016 at 11:50 PM, Chris Murphy wrote:
> On Fri, Jul 8, 2016 at 3:39 PM, Chris Murphy wrote:
>> On Fri, Jul 8, 2016 at 2:08 PM, Kai Herlemann wrote:
>>
>>> If any developers read along here: I'd like to suggest that a subvolume
>>> "@" is automatically created by default, which i
>> Device is GOOD
>>
>> I also created a big file with dd using /dev/urandom with the same size
>> as my flash drive, copied it once and read it three times. The SHA-1
>> checksum is always the same and matches the original one on the hard disk.
>>
>> So after much testing I feel I can conclude tha
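A rough reconstruction of the verification procedure described above, with hypothetical sizes and paths:

    dd if=/dev/urandom of=/tmp/test.bin bs=1M count=15000   # roughly the size of the flash drive
    cp /tmp/test.bin /mnt/usb/ && sync
    sha1sum /tmp/test.bin /mnt/usb/test.bin                 # repeat the second read a few times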
On Fri, Jul 8, 2016 at 9:22 AM, Stanislaw Kaminski
wrote:
> Huh.
>
> I left defrag running overnight, and now I'm back to my >200 GiB free
> space. Also, I got no "out of space" messages in Transmission, and it
> successfully downloaded a few GBs.
>
> But in dmesg I have 209 traces, see attached.
>
On Thu, Jul 7, 2016 at 7:40 PM, Chris Murphy wrote:
> On Thu, Jul 7, 2016 at 10:01 AM, Henk Slager wrote:
>
>> What the latest Debian likes as a naming convention I don't know, but in
>> openSuSE @ is a directory in the toplevel volume (ID=5 or ID=0 as
>> alias) and
On Thu, Jul 7, 2016 at 5:17 PM, Stanislaw Kaminski
wrote:
> Hi Chris, Alex, Hugo,
>
> Running now: Linux archb3 4.6.2-1-ARCH #1 PREEMPT Mon Jun 13 02:11:34
> MDT 2016 armv5tel GNU/Linux
>
> Seems to be working fine. I started a defrag, and it seems I'm getting
> my space back:
> $ sudo btrfs fi us
On Thu, Jul 7, 2016 at 11:46 AM, M G Berberich wrote:
> Hello,
>
> On a filesystem with 40 G free space and 54 G used, ‘fstrim -v’ gave
> this result:
>
> # fstrim -v /
> /: 0 B (0 bytes) trimmed
>
> After running balance it gave a more sensible
>
> # fstrim -v /
> /: 37.3 GiB (400
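The step that apparently made the difference here, sketched with hedged filter values; a filtered balance compacts mostly-empty chunks and returns space to the unallocated pool:

    btrfs balance start -dusage=10 -musage=10 /
    fstrim -v /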
On Thu, Jul 7, 2016 at 2:17 PM, Kai Herlemann wrote:
> Hi,
>
> I want to roll back a snapshot and have done this by executing "btrfs sub
> set-default / 618".
maybe just a typo here; the command syntax is:
# sudo btrfs sub set-default
btrfs subvolume set-default: too few arguments
usage: btrfs subvolume
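For clarity, a hedged example of the argument order that the btrfs-progs of that era expect (subvolume id first, then the mounted path; the id 618 comes from the original report):

    btrfs subvolume set-default 618 /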
On Wed, Jul 6, 2016 at 2:20 PM, Tomasz Kusmierz wrote:
>
>> On 6 Jul 2016, at 02:25, Henk Slager wrote:
>>
>> On Wed, Jul 6, 2016 at 2:32 AM, Tomasz Kusmierz
>> wrote:
>>>
>>> On 6 Jul 2016, at 00:30, Henk Slager wrote:
>>>
>>&
On Wed, Jul 6, 2016 at 2:32 AM, Tomasz Kusmierz wrote:
>
> On 6 Jul 2016, at 00:30, Henk Slager wrote:
>
> On Mon, Jul 4, 2016 at 11:28 PM, Tomasz Kusmierz
> wrote:
>
> I did consider that, but:
> - some files were NOT accessed by anything with 100% certainty (well if
&g
On Tue, Jul 5, 2016 at 1:15 AM, Dmitry Katsubo wrote:
> On 2016-07-01 22:46, Henk Slager wrote:
>> (email ends up in gmail spamfolder)
>> On Fri, Jul 1, 2016 at 10:14 PM, Dmitry Katsubo wrote:
>>> Hello everyone,
>>>
>>> Question #1:
>>>
github.com/knorrie/python-btrfs
so that maybe you can see how block groups/chunks are filled etc.
> (ps. this email client on OS X is driving me up the wall … have to correct
> the corrections all the time :/)
>
>> On 4 Jul 2016, at 22:13, Henk Slager wrote:
>>
>> On Sun,
On Sun, Jul 3, 2016 at 1:36 AM, Tomasz Kusmierz wrote:
> Hi,
>
> My setup is that I use one file system for / and /home (on SSD) and a
> larger raid 10 for /mnt/share (6 x 2TB).
>
> Today I've discovered that 14 of the files that are supposed to be over
> 2GB are in fact just 4096 bytes. I've checked
On Sun, Jul 3, 2016 at 12:33 PM, Kai Krakow wrote:
> On Fri, 1 Jul 2016 22:14:00 +0200, Dmitry Katsubo wrote:
>
>> Hello everyone,
>>
>> Question #1:
>>
>> While doing defrag I got the following message:
>>
>> # btrfs fi defrag -r /home
>> ERROR: defrag failed on /home/user/.dropbox-dist/dropb
(email ends up in gmail spamfolder)
On Fri, Jul 1, 2016 at 10:14 PM, Dmitry Katsubo wrote:
> Hello everyone,
>
> Question #1:
>
> While doing defrag I got the following message:
>
> # btrfs fi defrag -r /home
> ERROR: defrag failed on /home/user/.dropbox-dist/dropbox: Success
> total 1 failures
>
I have filed:
https://bugzilla.kernel.org/show_bug.cgi?id=121321
On Sat, Jun 25, 2016 at 1:17 AM, Henk Slager wrote:
> Hi,
>
> the virtual machine image files I create and use are mostly sparse,
> so that not too much space on a filesystem with snapshots and on
> filesystems t
On Tue, Jun 28, 2016 at 3:46 PM, Francesco Turco wrote:
> On 2016-06-27 23:26, Henk Slager wrote:
>> btrfs-debug does not show metadata and system chunks; the balancing
>> problem might come from those.
>> This script does show all chunks:
>> https://github.com/knorri
On Tue, Jun 28, 2016 at 2:56 PM, M G Berberich wrote:
> Hello,
>
>> On Monday, 27 June, Henk Slager wrote:
>> On Mon, Jun 27, 2016 at 3:33 PM, M G Berberich
>> wrote:
>> > On Monday, 27 June, M G Berberich wrote:
>> >> after a balance ‘btrf
On Mon, Jun 27, 2016 at 9:24 PM, Chris Murphy wrote:
> On Mon, Jun 27, 2016 at 12:32 PM, Francesco Turco wrote:
>> On 2016-06-27 20:18, Chris Murphy wrote:
>>> If you can grab btrfs-debugfs from
>>> https://github.com/kdave/btrfs-progs/blob/master/btrfs-debugfs
>>>
>>> And then attach the output
On Mon, Jun 27, 2016 at 6:17 PM, Chris Murphy wrote:
> On Mon, Jun 27, 2016 at 5:21 AM, Austin S. Hemmelgarn
> wrote:
>> On 2016-06-25 12:44, Chris Murphy wrote:
>>>
>>> On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
>>> wrote:
>>>
Well, the obvious major advantage that comes to min
On Mon, Jun 27, 2016 at 3:33 PM, M G Berberich wrote:
> On Monday, 27 June, M G Berberich wrote:
>> after a balance ‘btrfs filesystem du’ probably shows false data about
>> shared data.
>
> Oh, I forgot: I have btrfs-progs v4.5.2 and kernel 4.6.2.
With btrfs-progs v4.6.1 and kernel 4.7-rc5
Hi,
the virtual machine image files I create and use are mostly sparse,
so that not too much space on a filesystem with snapshots and on
filesystems that are receive targets is used.
But I noticed that with just starting up and shutting down a virtual
machine, the difference between 2 nightly sna
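Two hedged ways to keep such image files sparse; the file name and size are illustrative:

    truncate -s 40G vm-disk.img          # create a sparse raw image
    fallocate --dig-holes vm-disk.img    # re-punch holes where blocks are all zero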
On Sun, Jun 12, 2016 at 11:22 PM, Maximilian Böhm wrote:
> Hi there, I did something terribly wrong, all blame on me. I wanted to
> write to a USB stick but /dev/sdc wasn't the stick in this case but
> an attached HDD with GPT and an 8 TB btrfs partition…
GPT has a secondary copy at the end of t
On Sun, Jun 12, 2016 at 7:03 PM, boli wrote:
>>> It's done now, and took close to 99 hours to rebalance 8.1 TB of data from
>>> a 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining
>>> 3x6TB raid1 (9 TB capacity).
>>
>> Indeed, it is not clear why it takes 4 days for such an actio
Bearcat Şándor writes:
> Is there a fix for the bad tree block error, which seems to be the
> root (pun intended) of all this?
I think the root cause is some memory corruption. It might be a known case;
maybe someone else recognizes something.
Anyhow, if you can't and won't reproduce
On Sun, Jun 12, 2016 at 12:35 PM, boli wrote:
>> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
>>
>> These 90 hours seem like a rather long time, given that a rebalance/convert
>> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a
>> scrub take
On Fri, Jun 10, 2016 at 8:04 PM, ojab // wrote:
> [Please CC me since I'm not subscribed to the list]
> Hi,
> I've tried to `/usr/bin/btrfs fi defragment -r` my btrfs partition,
> but it failed with "No space left on device" and now I can't get any
> free space on that partition (deleting some fil
On Sat, May 14, 2016 at 10:19 AM, Jukka Larja wrote:
> In short:
>
> I added two 8TB Seagate Archive SMR disks to a btrfs pool and tried to delete
> one of the old disks. After some errors I ended up with file system that can
> be mounted read-only, but crashes the kernel if mounted normally. Tried
>
On Thu, Jun 9, 2016 at 3:54 PM, Brendan Hide wrote:
>
>
> On 06/09/2016 03:07 PM, Austin S. Hemmelgarn wrote:
>>
>> On 2016-06-09 08:34, Brendan Hide wrote:
>>>
>>> Hey, all
>>>
>>> I noticed this odd behaviour while migrating from a 1TB spindle to SSD
>>> (in this case on a LUKS-encrypted 200GB p
On Fri, Jun 10, 2016 at 7:22 PM, Adam Borowski wrote:
> On Fri, Jun 10, 2016 at 01:12:42PM -0400, Austin S. Hemmelgarn wrote:
>> On 2016-06-10 12:50, Adam Borowski wrote:
>> >And, as of coreutils 8.25, the default is no reflink, with "never" not being
>> >recognized even as a way to avoid an alias
On Thu, Jun 9, 2016 at 5:41 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Hans van Kranenburg posted on Thu, 09 Jun 2016 01:10:46 +0200 as
> excerpted:
>
>> The next question is what files these extents belong to. To find out, I
>> need to open up the extent items I get back and follow a backreference
On Fri, Jun 10, 2016 at 10:17 AM, Qu Wenruo wrote:
>
>
> At 06/02/2016 10:56 PM, Nikolaus Rath wrote:
>>
>> On Jun 02 2016, Qu Wenruo wrote:
>>>
>>> At 06/02/2016 11:06 AM, Nikolaus Rath wrote:
Hello,
For one of my btrfs volumes, btrfsck reports a lot of the following
war
>> > - OTOH, defrag seems to be viable for important use cases (VM
>> > images,
>> > DBs,... everything where large files are internally re-written
>> > randomly).
>> > Sure there is nodatacow, but with that one effectively completely
>> > loses one of the core features/promises of btrfs (
On Thu, Jun 2, 2016 at 3:55 PM, MegaBrutal wrote:
> 2016-06-02 0:22 GMT+02:00 Henk Slager :
>> What is the kernel version used?
>> Is the fs on a mechanical disk or SSD?
>> What are the mount options?
>> How old is the fs?
>
> Linux 4.4.0-22-generic (Ubuntu
On Thu, Jun 2, 2016 at 9:26 AM, Benedikt Morbach
wrote:
> Hi all,
>
> I've encountered a bug in btrfs-receive. When receiving a certain
> incremental send, it will error with:
>
> ERROR: cannot open
> backup/detritus/root/root.20160524T1800/var/log/journal/9cbb44cf160f4c1089f77e32ed376a0b/user
On Wed, Jun 1, 2016 at 11:06 PM, MegaBrutal wrote:
> Hi Peter,
>
> I tried. I either get "Done, had to relocate 0 out of 33 chunks" or
> "ERROR: error during balancing '/': No space left on device", and
> nothing changes.
>
>
> 2016-06-01 22:29 GMT+02:00 Peter Becker :
>> try this:
>>
>> btrfs fi
There is a division by 2 missing in the code. With that added, the
RAID10 numbers make more sense. See also:
http://permalink.gmane.org/gmane.comp.file-systems.btrfs/53989
More detail in here:
https://www.spinics.net/lists/linux-btrfs/msg52882.html
And if you want to look at allocation in a diffe
On Wed, May 25, 2016 at 10:58 AM, H. Peter Anvin wrote:
> Hi,
>
> I'm looking at using a btrfs with snapshots to implement a generational
> backup capacity. However, doing it the naïve way would have the side
> effect that for a file that has been partially modified, after
> snapshotting the file
Adding bcache protective superblocks is a one-time procedure which can be done
online. The bcache devices act as normal HDD if not attached to a
caching SSD. It's really less pain than you may think. And it's a
solution available now. Converting back later is easy: Just detach the
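A minimal sketch of the detach step referred to above, assuming the backing device shows up as bcache0:

    # stop caching; bcache0 keeps working as a plain pass-through device
    echo 1 > /sys/block/bcache0/bcache/detach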
On Fri, May 20, 2016 at 7:59 PM, Austin S. Hemmelgarn
wrote:
> On 2016-05-20 13:02, Ferry Toth wrote:
>>
>> We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
>> then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
>> partitions are in the same pool, which is in bt
On Thu, May 19, 2016 at 8:51 PM, Austin S. Hemmelgarn
wrote:
> On 2016-05-19 14:09, Kai Krakow wrote:
>>
>> On Wed, 18 May 2016 22:44:55 +0000 (UTC), Ferry Toth wrote:
>>
>>> On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
>>>
On Tue, 17 May 2016 07:32:11 -0400, "Austin S.
On Wed, May 11, 2016 at 11:10 PM, Nikolaus Rath wrote:
> Hello,
>
> I recently ran btrfsck on one of my file systems, and got the following
> messages:
>
> checking extents
> checking free space cache
> checking fs roots
> root 5 inode 3149867 errors 400, nbytes wrong
> root 5 inode 3150237 errors
On Tue, May 10, 2016 at 9:35 PM, wrote:
> Hey guys!
>
>
> while testing/stressing (dd'ing 200GB of random data to the drive) a brand new
> 8TB Seagate drive, I ran into a kernel oops.
>
> I think it happened after I finished dd'ing and while removing the drive.
> I saw it a few minutes afterwards.
Strictly
On Thu, Apr 28, 2016 at 7:09 AM, Matthias Bodenbinder
wrote:
> On 26.04.2016 at 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
>> this drivebay. The command lsusb -v could show that. So your HW
>> setup is like JBOD,
On Thu, Apr 21, 2016 at 7:27 PM, Matthias Bodenbinder
wrote:
> On 21.04.2016 at 13:28, Henk Slager wrote:
>>> Can anyone explain this behavior?
>>
>> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
>> this test. What is on WD20 is unclear to me, b
On Sat, Apr 23, 2016 at 9:07 AM, Matthias Bodenbinder
wrote:
>
> Here is my newest test. The backports provide a 4.5 kernel:
>
>
> kernel: 4.5.0-0.bpo.1-amd64
> btrfs-tools: 4.4-1~bpo8+1
>
>
> This time the raid1 is automatically unmounted after I unplug the device and
> it can not be m
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi
wrote:
> On 2016/04/20 14:17, Matthias Bodenbinder wrote:
>>
>> On 18.04.2016 at 09:22, Qu Wenruo wrote:
>>>
>>> BTW, it would be better to post the dmesg for better debug.
>>
>>
>> So here we go. I did the same test again. Here is a full log of what
> I have /dev/sdb and /dev/sdc. Using wipefs -fa I cleared both devices and
> created btrfs on /dev/sdb. Mounted and written some files and unmounted it.
>
> Then I ran btrfs-image /dev/sdc /img1.img and got the dump.
It looks like you imaged the wrong device; that might explain the I/O
errors later
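In other words, with the layout described above the dump would be taken from the device that actually carries the filesystem; a hedged example:

    btrfs-image /dev/sdb /img1.img   # /dev/sdb holds the filesystem, /dev/sdc does not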
>>> Reproduction case after running into the same problem as Paride
>>> Legovini:
>>> http://article.gmane.org/gmane.comp.file-systems.btrfs/48706/match=send
Your case is not the same as in this thread from Paride IMO. The error
message is the same, but that doesn't mean the call tree leading to i
On Mon, Apr 18, 2016 at 4:26 PM, Roman Mamedov wrote:
> On Mon, 18 Apr 2016 16:13:28 +0200
> Henk Slager wrote:
>
>> (your email keeps ending up in gmail spam folder)
>>
>> On Mon, Apr 18, 2016 at 9:24 AM, sri wrote:
>> > I tried btrfs-image and created im
(your email keeps ending up in gmail spam folder)
On Mon, Apr 18, 2016 at 9:24 AM, sri wrote:
> I tried btrfs-image and created an image file and ran btrfs-image -r to a
> different disk. Once recovered and mounted, I am able to see that data is
> not zeroed out as mentioned in the btrfs-image man page.
"d
On Fri, Apr 15, 2016 at 9:49 PM, Yauhen Kharuzhy
wrote:
> Hi.
>
> I have discovered a case where replacement of missing devices causes
> metadata corruption. Does anybody know anything about this?
I can just confirm that there is corruption when doing a replacement for
both raid5 and raid6, and not on
On Fri, Apr 15, 2016 at 2:49 PM, Hugo Mills wrote:
> On Fri, Apr 15, 2016 at 12:41:36PM +, sri wrote:
>> Hi,
>>
>> I have a couple of queries related to btrfs-image, btrfs send, and a
>> combination of the two.
>> 1)
>> I would like to know if a btrfs source file system is spread across more
>> th
On Tue, Apr 12, 2016 at 5:52 PM, Julian Taylor
wrote:
> a smaller test case that shows the immediate ENOSPC after fallocate -> rm,
> though I don't know if it is really related to the full filesystem
> bugging out, as the balance does work if you wait a few seconds after the
> balance.
> But this sequ
r crashes with btrfs RAID5 on older disks). If it can't
correct, there is something else wrong and likely affecting more
devices than the RAID profile is able to correct.
> On Sun, Apr 10, 2016 at 1:25 PM, Henk Slager wrote:
>> It was not fully clear what the sequence of event
>> You might want this patch:
>> http://www.spinics.net/lists/linux-btrfs/msg53552.html
>>
>> As a workaround, you can reset the counters on the new/healthy device with:
>>
>> btrfs device stats [-z] <path>|<device>
>>
>
> I did reset the stats and launched another scrub, and still, since the
> logical blocks are the s
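The workaround quoted above, spelled out with a hypothetical mount point:

    btrfs device stats -z /mnt    # print and reset the per-device error counters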
On Fri, Apr 8, 2016 at 1:01 PM, Martin Steigerwald wrote:
> Hello!
>
> As far as I understood, for differential btrfs send/receive – I didn't use it
> yet – I need to keep a snapshot on the source device to then tell btrfs send
> to send the differences between the snapshot and the current state.
On Tue, Apr 5, 2016 at 4:37 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Gareth Pye posted on Tue, 05 Apr 2016 09:36:48 +1000 as excerpted:
>
>> I've got a btrfs file system set up on 6 drbd disks running on 2Tb
>> spinning disks. The server is moderately loaded with various regular
>> tasks that use
On Mon, Apr 4, 2016 at 9:50 AM, Jérôme Poulin wrote:
> Hi all,
>
> I have a BTRFS on disks running in RAID10 meta+data; one of the disks
> has been going bad and scrub was showing 18 uncorrectable errors
> (which is weird in RAID10). I tried using --repair-sector with hdparm
> even if it shouldn't
On Sat, Apr 2, 2016 at 11:00 AM, Kai Krakow wrote:
> On Fri, 1 Apr 2016 01:27:21 +0200, Henk Slager wrote:
>
>> It is not clear to me what 'Gentoo patch-set r1' is and does. So just
>> boot a vanilla v4.5 kernel from kernel.org and see if you get csum
>>
On Fri, Apr 1, 2016 at 10:40 PM, Marc Haber wrote:
> On Fri, Apr 01, 2016 at 09:20:52PM +0200, Henk Slager wrote:
>> On Fri, Apr 1, 2016 at 6:50 PM, Marc Haber
>> wrote:
>> > On Fri, Apr 01, 2016 at 06:30:20PM +0200, Marc Haber wrote:
>> >> On Fri, Apr 01, 2
On Fri, Apr 1, 2016 at 6:50 PM, Marc Haber wrote:
> On Fri, Apr 01, 2016 at 06:30:20PM +0200, Marc Haber wrote:
>> On Fri, Apr 01, 2016 at 05:44:30PM +0200, Henk Slager wrote:
>> > On Fri, Apr 1, 2016 at 3:40 PM, Marc Haber
>> > wrote:
>> > > btrfs balanc
On Fri, Apr 1, 2016 at 3:40 PM, Marc Haber wrote:
> Hi,
>
> just for a change, this is another btrfs on a different host. The host
> is also running Debian unstable with mainline kernels, the btrfs in
> question was created (not converted) in March 2015 with btrfs-tools
> 3.17. It is the root fs o
On Thu, Mar 31, 2016 at 10:44 PM, Kai Krakow wrote:
> Hello!
>
> I already reported this in another thread but it was a bit confusing by
> intermixing multiple volumes. So let's start a new thread:
>
> Since one of the last kernel upgrades, I'm experiencing one VDI file
> (containing a NTFS image
>> Would you please try the following patch based on v4.5 btrfs-progs?
>> https://patchwork.kernel.org/patch/8706891/
>>
>> According to your output, all the output is a false alert.
>> All the extent starting bytenr can be divided by 64K, and I think at
>> initial time, its 'max_size' may be set to
On Mon, Mar 28, 2016 at 4:37 PM, Marc Haber wrote:
> Hi,
>
> I have a btrfs which btrfs check --repair doesn't fix:
>
> # btrfs check --repair /dev/mapper/fanbtr
> bad metadata [4425377054720, 4425377071104) crossing stripe boundary
> bad metadata [4425380134912, 4425380151296) crossing stripe bou
On Thu, Mar 31, 2016 at 4:23 AM, Qu Wenruo wrote:
>
>
> Henk Slager wrote on 2016/03/30 16:03 +0200:
>>
>> On Wed, Mar 30, 2016 at 9:00 AM, Qu Wenruo
>> wrote:
>>>
>>> First of all.
>>>
>>> The "crossing stripe boundary"
On Thu, Mar 31, 2016 at 2:28 AM, Qu Wenruo wrote:
>
>
> Henk Slager wrote on 2016/03/30 16:03 +0200:
>>
>> On Wed, Mar 30, 2016 at 9:00 AM, Qu Wenruo
>> wrote:
>>>
>>> First of all.
>>>
>>> The "crossing stripe boundary"
On Thu, Mar 31, 2016 at 5:31 PM, Andreas Dilger wrote:
> On Mar 31, 2016, at 1:55 AM, Christoph Hellwig wrote:
>>
>> On Wed, Mar 30, 2016 at 05:32:42PM -0700, Liu Bo wrote:
>>> Well, btrfs fallocate doesn't allocate space if it's a shared one
>>> because it thinks the space is already allocated.
On Wed, Mar 30, 2016 at 9:00 AM, Qu Wenruo wrote:
> First of all.
>
> The "crossing stripe boundary" error message itself is *HARMLESS* for recent
> kernels.
>
> It only means that the metadata extent won't be checked by scrub on recent
> kernels,
> because scrub, by its code, has a limitation that,