Hey guys!
Again, now in a readable form:
I have an Ubuntu 16.04 machine (K4.4 and K4.11, btrfs-progs v4.5.2) running /
on btrfs RAID 1, and for some
months (after I had to replace a / disk) super_num_devices and num_devices have been
out of sync, preventing me from booting
kernels newer than around 4.7 (AFAIK), where the kernels began to be
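A device-count mismatch like this can be inspected from a running system. A minimal sketch, assuming placeholder device paths and a RAID 1 / mount (whether `device delete missing` applies depends on what exactly is stale):

```shell
# Compare the device count stored in the superblock...
btrfs inspect-internal dump-super /dev/sda2 | grep num_devices

# ...with the devices the filesystem actually lists
btrfs filesystem show /

# If a stale entry is left over from the replaced disk, removing it
# (while mounted, degraded if necessary) brings the counts back in sync
btrfs device delete missing /
```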
> -Original Message-
> From: Dmitrii Tcvetkov
> Sent: Tue, 22.08.2017 12:28
> To: g6094...@freenet.de
> Cc: linux-btrfs@vger.kernel.org
> Betreff: Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable
> to find block group for 0
>
> On Tue, 22 Aug 2017 11:31:23
Hey guys,
picking up this old topic because I'm running into a similar problem.
I'm running an Ubuntu 16.04 (HWE K4.8) server with 2 NVMe SSDs as RAID 1 as /.
Since one NVMe died I had to replace it, which is where the trouble began. I
replaced the NVMe, booted degraded, added the new disk to the RAID
(btrfs dev
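For reference, the usual degraded-replace sequence looks roughly like this (a sketch; device names are placeholders and the exact steps depend on the setup):

```shell
# Mount the surviving device degraded
mount -o degraded /dev/nvme0n1p2 /mnt

# Add the replacement device to the filesystem
btrfs device add /dev/nvme1n1p2 /mnt

# Drop the dead device's leftover entry
btrfs device delete missing /mnt

# Convert/balance so chunks are mirrored across both devices again
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```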
Hi Stefan,
First you should run a balance on the system data to move the single data to
RAID 1, IMHO.
Then do the scrub again.
BTW, are there any scrubbing errors in dmesg? Are the disks OK? Is any
compression involved? Did you change the free space cache to v2?
sash
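The advice above might be carried out roughly like this (a sketch; `-f` is needed on many btrfs-progs versions because converting the system profile requires force):

```shell
# Convert leftover single system/metadata chunks back to raid1
btrfs balance start -mconvert=raid1 -sconvert=raid1 -f /

# Then re-run the scrub and watch the kernel log for errors
btrfs scrub start /
btrfs scrub status /
dmesg | grep -i btrfs
```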
On 15.09.2016 at 17:48, Stefan Malte wrote:
On 10.08.2016 at 13:16, Matt McKinnon wrote:
> I performed a quick balance which gave me:
>
> [39020.030638] BTRFS info (device sda1): relocating block group
> 25428383236096 flags 1
> [39020.206097] BTRFS warning (device sda1): block group 23113395863552
> has wrong amount of free space
>
Hi,
From what I see, you have an unfinished balance ongoing, since you have
both DUP and single profiles for system and metadata on disk.
So you should (re)run a balance for this data.
sash
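To check whether mixed profiles (DUP plus single) are still present, the allocation can be inspected like this:

```shell
# Shows one line per block-group type and profile, e.g.
#   Metadata, DUP: ...
#   Metadata, single: ...
# Two profiles listed for the same type means the conversion
# has not finished
btrfs filesystem df /

# Shows whether a balance is still running or was interrupted
btrfs balance status /
```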
On 10.08.2016 at 02:17, Matt McKinnon wrote:
> -o usebackuproot worked well.
>
> after the file system
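For context, `usebackuproot` asks btrfs to fall back to the backup tree roots at mount time; a minimal sketch with a placeholder device:

```shell
# Kernels >= 4.6
mount -o usebackuproot /dev/sda1 /mnt

# Older kernels used the equivalent 'recovery' option:
# mount -o recovery /dev/sda1 /mnt
```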
On 01.07.2016 at 20:25, Chris Murphy wrote:
> On Fri, Jul 1, 2016 at 12:24 PM, Chris Murphy wrote:
>> This probably needs two more pieces of information:
>> 1. What's the workload going on at the time? There's stuff being
>> written and cleaned up.
>> 2. Issue sysrq+w
>
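Issuing sysrq+w from a shell (needs root) looks like:

```shell
# Dump stack traces of all blocked (uninterruptible) tasks
# into the kernel log, then read them back
echo w > /proc/sysrq-trigger
dmesg | tail -n 100
```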
Hey guys,
yes, I know it's *buntu Xenial and nobody knows what they have backported,
but maybe it's still interesting for someone.
It's a KVM guest:
uname -a
Linux Storage 4.4.0-29-generic #48-Ubuntu SMP Tue Jun 28 18:34:37 UTC
2016 x86_64 x86_64 x86_64 GNU/Linux
btrfs --version
btrfs-progs
On 09.06.2016 at 16:52, Duncan wrote:
> Fugou Nashi posted on Sun, 05 Jun 2016 10:12:31 +0900 as excerpted:
>
>> Hi,
>>
>> Do I need to worry about this?
>>
>> Thanks.
>>
>> Linux nakku 4.6.0-040600-generic #201605151930 SMP Sun May 15 23:32:59
>> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> There's
Hey guys!
I'm running Debian Sid, where I have found several kernel errors today,
and most of them are btrfs related.
uname -a
Linux NAS 4.6.0-trunk-amd64 #1 SMP Debian 4.6-1~exp1 (2016-05-17) x86_64
GNU/Linux
Inside are 2 disks as RAID 0, another disk as a single, and the system disk
also as
Hi Chris,
since you are using a recent LTS kernel on your CentOS/Rockstor, I guess
the kernel errors might help to find some bugs here.
Can you give the devs the errors from your logs?
Additionally, basic info on your RAID settings would be nice too, but
which specific details the devs should ask
Hey guys,
I'm doing some restore right now, in fact about 8 TB.
I saw Justin send the patch below in 2014, adding the "all" option when
hitting the looping prompt.
It would be nice to have this as a command switch too, because if you have to
recover the amount of data that I do at the moment, you probably get
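For reference, a typical invocation that runs into that prompt looks like this (a sketch; the device and target paths are placeholders):

```shell
# Offline-copy files from an unmountable filesystem;
# -i ignores errors, -v lists files as they are restored.
# On long runs this is where the "keep going on?" loop
# prompt appears for each looping extent.
btrfs restore -i -v /dev/sda1 /mnt/recovery
```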
Hi Henk!
Thanks for the clarification! It's indeed a Seagate Archive 8TB drive.
So it is vital info to let the drive settle down a bit when using it, at
least in a hot-plug or USB scenario.
sash
On 11.05.2016 at 03:02, Henk Slager wrote:
> On Tue, May 10, 2016 at 9:35 PM,
Hey guys!
While testing/stressing (dd'ing 200 GB of random data to the drive) a brand new
8TB Seagate drive, I ran into a kernel oops.
I think it happened after I finished dd'ing and while removing the drive;
I saw it a few minutes afterwards.
uname -a
Linux MacBookPro 4.4.0-22-generic #39~14.04.1-Ubuntu
On 08.05.2016 at 02:54, Martin wrote:
> On 07/05/16 10:39, g6094...@freenet.de wrote:
>> a brand new disk which has an upcounting raw error rate
> Note that is the "raw error rate".
>
> For a brand new disk being run for the first time at maximum data
> writes, the "raw error rate" may well be
Hey guys,
I'm running into a rare error which isn't covered by an implemented use case at
the moment. I have a 4-HDD RAID 5 array. I'd like to replace the 4 disks with
bigger ones to gain more usable space; the NAS has only 4 drive bays, so I need
to use a USB bay.
Since the replace code was unreliable at least