I have a laptop root hard drive (Samsung SSD 850 EVO 1TB), which is
within warranty.
I can't mount it read-write ("no rw mounting after error").
The data are not really critical (I will overcome the shock of losing
them within a couple of days).
Btrfs check --repair throws an error:
sudo btrfs
Hi everybody!
The btrfs filesystem could not be mounted because /dev/sdc1 had unreadable sectors.
It is/was a single filesystem (not raid1 or raid0) spanning /dev/sda1 and
/dev/sdc1.
I overwrote the unreadable sectors with hdparm, but the filesystem still
cannot be mounted (perhaps the sectors were too early in the filesystem).
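For reference, forcing a drive to reallocate a pending bad sector with hdparm usually looks like the sketch below. The sector number and device name are placeholders, and --write-sector destroys the 512 bytes at that LBA, so this is only appropriate once the data in that sector is already written off:

```shell
# Read the suspect LBA first; a failing read confirms the sector is bad.
# (Sector 1234567 and /dev/sdc are placeholders.)
sudo hdparm --read-sector 1234567 /dev/sdc

# Overwrite the sector with zeros, forcing the drive to remap it.
# hdparm requires the extra flag as a safety interlock.
sudo hdparm --yes-i-know-what-i-am-doing --write-sector 1234567 /dev/sdc

# Re-read to confirm the sector is now readable (i.e. reallocated).
sudo hdparm --read-sector 1234567 /dev/sdc
```

The pending-sector count in `smartctl -A` should drop once the drive has remapped the sector.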
What
Thank you very much for your response:
# file -s /dev/sda1 /dev/sdc1
/dev/sda1: BTRFS Filesystem label "partition", sectorsize 4096,
nodesize 4096, leafsize 4096,
UUID=c1eb1aaf-665a-4337-9d04-3c3921aa67e0, 1683870334976/3010310701056
bytes used, 2 devices
/dev/sdc1: data
Thank you very much for your advice. It worked!
I verified that the superblocks 1 and 2 had similar information with
btrfs-show-super -i 1 /dev/sdc1 (and -i 2) and then with crossed
fingers:
btrfs-select-super -s 2 /dev/sdc1
which restored my btrfs filesystem.
Then I ran scrub. For future ref
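For future reference, the recovery sequence described above can be collected as follows. The device name is the one from this thread; btrfs-show-super and btrfs-select-super are the btrfs-progs 3.x tool names (the dump side was later merged into `btrfs inspect-internal dump-super`):

```shell
# Compare backup superblock mirror copies 1 and 2; proceed only if
# they show a consistent generation and tree roots.
sudo btrfs-show-super -i 1 /dev/sdc1
sudo btrfs-show-super -i 2 /dev/sdc1

# Overwrite the damaged primary superblock with backup copy 2.
sudo btrfs-select-super -s 2 /dev/sdc1

# Mount and scrub to verify all checksums on the restored filesystem
# (-B stays in the foreground, -d prints per-device statistics).
sudo mount /dev/sdc1 /mnt
sudo btrfs scrub start -Bd /mnt
```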
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Question 2:
How do I properly correct them? (Again by deleting their files? :( )
Question 3:
How do I prevent this from happening?
Thanks a lot!
constantine
PS.
The disks can be considered old (some with > 15000 hrs online), but
SMART long tests complete without errors. I have this filesystem:
# bt
On Mon, Dec 22, 2014 at 12:24 AM, Chris Murphy wrote:
> smartctl -l scterc /dev/sdX
That's really good to know. My drives are desktop-class and this feature
is not supported; hence, I get "SCT Error Recovery Control command not
supported".
I'll definitely go for enterprise/raid-class drives that support it.
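For reference, checking and (on drives that support it) setting SCT ERC with smartctl looks like this. The value 70 means 7.0 seconds, a common choice so the drive gives up on a bad sector well before the kernel's default 30-second command timeout:

```shell
# Query the current SCT Error Recovery Control timeouts (read, write).
sudo smartctl -l scterc /dev/sdX

# Set both read and write recovery timeouts to 7.0 seconds
# (values are in tenths of a second; fails on drives without SCT ERC).
sudo smartctl -l scterc,70,70 /dev/sdX

# The setting is usually lost on power cycle, so reapply it at boot.
```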
Hello everybody,
I have a raid1 btrfs filesystem with 5 devices, and I was running
btrfs scrub once a week. Unfortunately, one disk (4TB) failed.
I added two new disks (6TB each) to the array and now I get:
# btrfs filesystem df /mnt/mountpoint
Data, RAID1: total=6.58TiB, used=6.57TiB
System, RA
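Assuming the failed 4TB disk is gone entirely, the usual sequence for a btrfs raid1 array is to mount degraded, add the replacement, then remove the missing member. This is a sketch; the mountpoint and device names are placeholders:

```shell
# Mount the array without the failed member.
sudo mount -o degraded /dev/sda1 /mnt/mountpoint

# Add the new disk to the filesystem.
sudo btrfs device add /dev/sde1 /mnt/mountpoint

# Drop the dead device; "missing" refers to the absent member.
# This re-replicates the raid1 chunks that lost a mirror.
sudo btrfs device delete missing /mnt/mountpoint

# Optionally rebalance so data is spread over the new capacity.
sudo btrfs balance start /mnt/mountpoint
```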
I have Arch Linux:
# uname -a
Linux hostname 3.19.0-1-mainline #1 SMP PREEMPT Wed Dec 24 00:27:17
WET 2014 x86_64 GNU/Linux
btrfs-progs 3.17.3-1
dmesg:
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
Thank you very much for your help. I do not have any recovery backup
and I need these data :(
Before my problems began I was running btrfs scrub on a weekly basis,
and I only got 17 uncorrectable errors for this array, concerning
files that I do not care about, so I ignored them. I clearly should not have.
By the way, /dev/sdc just completed the extended offline test without
any error... I feel so confused.
constantine
On Sun, Feb 8, 2015 at 11:04 PM, constantine wrote:
> Thank you very much for your help. I do not have any recovery backup
> and I need these data :(
>
> Before
to minimize my risk.
Should I now run
# btrfs device delete /dev/sdc1 ?
or
# btrfs check --repair --init-csum-tree ?
constantine
On Sun, Feb 8, 2015 at 11:34 PM, Chris Murphy wrote:
> On Sun, Feb 8, 2015 at 4:09 PM, constantine wrote:
>> By the way, /dev/sdc just completed the extende
> Second, SMART is only saying its internal test is good. The errors are
> related to data transfer, so that implicates the enclosure (bridge
> chipset or electronics), the cable, or the controller interface.
> Actually it could also be a flaky controller or RAM on the drive
> itself too which I do
Thank you everybody for your support, care, cheerful comments and
understandable criticism. I am in the process of backing up every
file.
Could you please answer two questions?
1. I am testing various files and all seem readable. Is there a way
to list every file that resides on a particular device?