On 2018-10-25 20:49, Chris Murphy wrote:
I would say the first step, no matter what, if you're using an older
kernel, is to boot current Fedora or Arch live or install media,
mount the Btrfs, and try to read the problem files to see if the
problem still happens. I can't even begin to estimate
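A minimal sketch of that first check from a live environment (the device name
/dev/sdb1 and the file path are placeholders):
# mount /dev/sdb1 /mnt
# cat /mnt/path/to/problem-file > /dev/null   # force a full read of the file
# dmesg | tail                                # look for csum or I/O errors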
Dear btrfs community,
My apologies for the dumps from a rather old kernel (4.9.25); nevertheless I
wonder about your opinion on the kernel crashes reported below.
As far as I understand the situation (correct me if I am wrong), it
happened that some data block became corrupted, which resulted
On 2018-10-24 20:05, Chris Murphy wrote:
I think the best we can expect in the short term is that Btrfs
goes read-only before the file system becomes corrupted in a way it
can't recover from with a normal mount. And I'm not certain it is in this
state of development right now for all cases. And
On 2018-10-17 00:14, Dmitry Katsubo wrote:
As a workaround I can monitor dmesg output but:
1. It would be nice if I could tell btrfs that I would like to mount
read-only after a certain error rate per minute is reached (a rough
scripted version is sketched below).
2. It would be nice if btrfs could detect that both drives
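A crude version of point 1 can be scripted already today; a minimal sketch,
assuming the volume is mounted at /var and triggering on the first matching
error rather than on an error rate:
# journalctl -kf | grep -m1 'BTRFS.*error' && mount -o remount,ro /var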
Dear btrfs team / community,
Sometimes it happens that the kernel resets the USB subsystem (it looks like a
hardware problem). Afterwards all USB devices are detached and attached back.
After a few hours of struggle btrfs finally reaches the point where a read-only
filesystem mount is necessary. During
Dear btrfs team,
I often observe kernel traces on linux-4.14.0 (most likely due to a background
"btrfs scrub") which contain the following characteristic line (for the rest
see attachments):
btrfs_remove_chunk+0x26a/0x7e0 [btrfs]
I wonder if somebody from the developer team knows anything about
On 2018-01-03 05:58, Qu Wenruo wrote:
> On 2018-01-03 09:12, Dmitry Katsubo wrote:
>> Dear btrfs team,
>>
>> I am sending a kernel crash report which I observed recently during a btrfs
>> scrub. It looks like the scrub itself completed without errors.
>
Dear btrfs team,
I am sending a kernel crash report which I observed recently during a btrfs scrub.
It looks like the scrub itself completed without errors.
# btrfs scrub status /home
scrub status for 83a3cb60-3334-4d11-9fdf-70b8e8703167
scrub started at Mon Jan 1 06:52:01 2018 and
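One quick cross-check after such a crash is whether the per-device error
counters moved; a minimal sketch using the same mount point:
# btrfs device stats /home   # read/write/flush/corruption/generation counters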
On 2016-07-01 22:46, Henk Slager wrote:
> (email ends up in gmail spamfolder)
> On Fri, Jul 1, 2016 at 10:14 PM, Dmitry Katsubo <dm...@mail.ru> wrote:
>> Hello everyone,
>>
>> Question #1:
>>
>> While doing defrag I got the following message:
>>
Hello everyone,
Question #1:
While doing defrag I got the following message:
# btrfs fi defrag -r /home
ERROR: defrag failed on /home/user/.dropbox-dist/dropbox: Success
total 1 failures
I feel that something went wrong, but the message is a bit misleading.
Provided that Dropbox is running in
Hi everyone,
I got the following message:
# btrfs fi defrag -r /home
ERROR: defrag failed on /home/user/.dropbox-dist/dropbox: Success
total 1 failures
I feel that something went wrong, but the message is a bit misleading.
Anyway: Provided that Dropbox is running in the system, does it mean
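To narrow this down, one could defragment just that file and check the exit
code directly; a minimal sketch:
# btrfs fi defrag -v /home/user/.dropbox-dist/dropbox
# echo $?   # non-zero would confirm the failure is tied to this file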
On 2016-06-21 15:17, Graham Cobb wrote:
> On 21/06/16 12:51, Austin S. Hemmelgarn wrote:
>> The scrub design works, but the whole state file thing has some rather
>> irritating side effects and other implications, and developed out of
>> requirements that aren't present for balance (it might be
Dear btrfs community,
I have added a drive to an existing raid1 btrfs volume and decided to
perform a balance so that data is distributed "fairly" among the drives. I
started "btrfs balance start", and it ran for about 5-10 minutes,
intensively doing the work. After that time it printed
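For what it's worth, a running balance can be watched from another terminal;
a minimal sketch (the mount point is assumed):
# btrfs balance status /home   # reports how many chunks are left to relocate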
On 2015-11-11 12:38, Dmitry Katsubo wrote:
> On 2015-11-09 14:25, Austin S Hemmelgarn wrote:
>> On 2015-11-07 07:22, Dmitry Katsubo wrote:
>>> Hi everyone,
>>>
>>> I have noticed the following in the log. The system continues to run,
>>> but I am not
On 2016-05-29 22:45, Ferry Toth wrote:
On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte wrote:
On 05/29/16 19:53, Chris Murphy wrote:
But I'm skeptical of bcache using a hidden area historically for
On 2016-05-25 21:03, Duncan wrote:
> Dmitry Katsubo posted on Wed, 25 May 2016 16:45:41 +0200 as excerpted:
>> * Would be nice if 'btrfs scrub status' shows estimated finishing time
>> (ETA) and throughput (in MB/s).
>
> That might not be so easy to implement. (Caveat
Dear btrfs community,
I hope the btrfs developers are open to suggestions.
btrfs-scrub:
* It would be nice if 'btrfs scrub status' showed the estimated finishing
time (ETA) and throughput (in MB/s); a manual approximation is sketched
below.
* It is not possible to start a scrub for all devices in the volume without
mounting it.
btrfs-restore:
* It
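Until scrub prints these itself, both numbers can be approximated from two
readings of the raw byte counter; a rough sketch, assuming the -R (raw
statistics) output of btrfs-progs and a volume mounted at /home:
# b1=$(btrfs scrub status -R /home | awk '/data_bytes_scrubbed/ {print $2}')
# sleep 60
# b2=$(btrfs scrub status -R /home | awk '/data_bytes_scrubbed/ {print $2}')
# echo "$(( (b2 - b1) / 60 / 1024 / 1024 )) MB/s"   # ETA = remaining bytes / rate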
On 2016-05-25 11:29, Hugo Mills wrote:
On Wed, May 25, 2016 at 01:58:15AM -0700, H. Peter Anvin wrote:
Hi,
I'm looking at using btrfs with snapshots to implement a generational
backup capability. However, doing it the naïve way would have the side
effect that for a file that has been
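The usual building blocks for a generational scheme are read-only snapshots
plus incremental send/receive; a minimal sketch (paths and dates are
placeholders):
# btrfs subvolume snapshot -r /data /data/.snap/2016-05-25
# btrfs send -p /data/.snap/2016-05-24 /data/.snap/2016-05-25 | btrfs receive /backup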
Dear btrfs community,
I am interested in the spare volume and hot auto-replacement feature [1]. I
have a couple of questions:
* In which kernel version will this feature be included?
* The description says that replacement happens automatically when any write
or flush fails. Is it
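For comparison, the manual equivalent available today once a device starts
failing is the replace operation; a minimal sketch (device names and mount
point are placeholders):
# btrfs replace start /dev/failing /dev/spare /mnt
# btrfs replace status /mnt   # progress of the rebuild onto the spare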
Hello,
If somebody is interested in digging into the problem, I would be happy to
provide more information and/or do the testing.
On 2016-04-27 04:44, Dmitry Katsubo wrote:
> # cat /mnt/tmp/file > /dev/null
> [ 11.432059] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 a
On 2016-04-25 09:12, Dmitry Katsubo wrote:
> I have run "btrfs check /dev/sda" two times. The first time it completed
> OK, actually showing only one error. The second time it showed many messages
>
> "parent transid verify failed on NNN wanted AAA found BBB"
>
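For reference, with recent btrfs-progs the explicitly non-destructive form of
that check (run with the filesystem unmounted) is:
# btrfs check --readonly /dev/sda   # report only; writes nothing to the disk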
On 2016-04-19 09:58, Duncan wrote:
> Dmitry Katsubo posted on Tue, 19 Apr 2016 07:45:40 +0200 as excerpted:
>
>> Actually btrfs restore recovered many files; however, I was not able
>> to run it in fully unattended mode as it complains about "looping a lot"
On 2016-04-18 02:19, Chris Murphy wrote:
> With two device failure on raid1 volume, the file system is actually
> broken. There's a big hole in the metadata, not just missing data,
> because there are only two copies of metadata, distributed across
> three drives.
Thanks, I understand that. Well,
On 2016-04-14 22:30, Dmitry Katsubo wrote:
> Dear btrfs community,
>
> I have the following setup:
>
> # btrfs fi show /home
> Label: none uuid: 865f8cf9-27be-41a0-85a4-6cb4d1658ce3
> Total devices 3 FS bytes used 55.68GiB
> devid 1 size 52.91GiB u
Dear btrfs community,
I have the following setup:
# btrfs fi show /home
Label: none uuid: 865f8cf9-27be-41a0-85a4-6cb4d1658ce3
Total devices 3 FS bytes used 55.68GiB
devid 1 size 52.91GiB used 0.00B path /dev/sdd2
devid 2 size 232.89GiB used 59.03GiB path /dev/sda
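A per-device view of how that space is allocated is available with reasonably
recent btrfs-progs:
# btrfs device usage /home   # allocation per device, broken down by chunk type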
Many thanks to Duncan for such a detailed clarification. I am thinking
about another parallel, similar to SimCity, and that is memory management
in virtual machines like Java. If the heap is full, it does not really mean
that there is no free memory. In this case the JVM forces a garbage
collection and if
If I may add:
Information for "System"
System, DUP: total=32.00MiB, used=16.00KiB
is also quite technical, as for the end user System = metadata (one could
call it "filesystem metadata" perhaps). For simplicity the numbers could be
added to "Metadata", thus eliminating that line as well.
For those
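For context, the quoted line comes from the per-type allocation report, e.g.
(the mount point is assumed):
# btrfs fi df /mnt   # prints the Data / System / Metadata totals discussed above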
On 2015-11-20 14:52, Austin S Hemmelgarn wrote:
> On 2015-11-20 08:27, Hugo Mills wrote:
>> On Fri, Nov 20, 2015 at 08:21:31AM -0500, Austin S Hemmelgarn wrote:
>>> On 2015-11-20 06:39, Dmitry Katsubo wrote:
>>>> For those power users who really want to see the
On 2015-11-12 13:47, Austin S Hemmelgarn wrote:
>> That's a pretty unusual setup, so I'm not surprised there's no quick and
>> easy answer. The best solution in my opinion would be to shuffle your
>> partitions around and combine sda3 and sda8 into a single partition.
>> There's generally no
On 2015-11-09 14:25, Austin S Hemmelgarn wrote:
> On 2015-11-07 07:22, Dmitry Katsubo wrote:
>> Hi everyone,
>>
>> I have noticed the following in the log. The system continues to run,
>> but I am not sure how long it will remain stable. Should I start
>
Hi everyone,
I have noticed the following in the log. The system continues to run,
but I am not sure how long it will remain stable. Should I start
worrying? Thanks in advance for your opinion.
# uname -a
Linux Debian 4.2.3-2~bpo8+1 (2015-10-20) i686 GNU/Linux
# mount | grep /var
/dev/sdd2 on
Hi everyone,
I have noticed the following in the log. The system continues to run,
but I am not sure how long it will remain stable.
# uname -a
Linux Debian 4.2.3-2~bpo8+1 (2015-10-20) i686 GNU/Linux
# mount | grep /var
/dev/sdd2 on /var type btrfs
On 2015-10-21 00:40, Henk Slager wrote:
> I had a similar issue some time ago, around the time kernel 4.1.6 was
> just there.
> In case you don't want to wait for a new disk, or decide to just run the
> filesystem with one disk less, or maybe later on replace one of the still
> healthy disks with a
On 16/10/2015 10:18, Duncan wrote:
> Dmitry Katsubo posted on Thu, 15 Oct 2015 16:10:13 +0200 as excerpted:
>
>> On 15 October 2015 at 02:48, Duncan <1i5t5.dun...@cox.net> wrote:
>>
>>> [snipped]
>>
>> Thanks for this information. As far as I can see,
On 15 October 2015 at 02:48, Duncan <1i5t5.dun...@cox.net> wrote:
> Dmitry Katsubo posted on Wed, 14 Oct 2015 22:27:29 +0200 as excerpted:
>
>> On 14/10/2015 16:40, Anand Jain wrote:
>>>> # mount -o degraded /var
>>>> Oct 11 18:20:15 kernel: BTRFS: too many
>>
Dear btrfs community,
I am facing several problems regarding btrfs, and I will be very
thankful if someone can help me with them. Also, while playing with btrfs I
have a few suggestions; it would be nice if someone could comment on those.
While starting the system, /var (which is a btrfs volume) failed to be
On 14/10/2015 16:40, Anand Jain wrote:
>> # mount -o degraded /var
>> Oct 11 18:20:15 kernel: BTRFS: too many missing devices, writeable
>> mount is not allowed
>>
>> # mount -o degraded,ro /var
>> # btrfs device add /dev/sdd1 /var
>> ERROR: error adding the device '/dev/sdd1' - Read-only file
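For reference, the order that works when enough devices remain is a writable
degraded mount first, then the add, then a balance to restore redundancy; a
minimal sketch (depending on the failure, 'btrfs device delete missing /var'
may also be needed):
# mount -o degraded /var    # must succeed read-write for the add to work
# btrfs device add /dev/sdd1 /var
# btrfs balance start /var  # re-mirror chunks across the new device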