Thomas Mohr posted on Thu, 06 Dec 2018 12:31:15 +0100 as excerpted:
> We wanted to convert a file system to a RAID0 with two partitions.
> Unfortunately we had to reboot the server during the balance operation
> before it could complete.
>
> Now following happens:
>
> A mount attempt of the [...]
ERROR: failed to repair root items: Operation not permitted
Any ideas what is going on or how to recover the file system? I would
greatly appreciate your help!
best,
Thomas
uname -a:
Linux server2 4.19.5-1-default #1 SMP PREEMPT Tue Nov 27 19:56:09 UTC
2018 (6210279) x86_64
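For anyone hitting an interrupted balance like this: a balance normally resumes automatically at the next mount, and a common first step (a sketch, not from this thread; device and mount point are examples) is to mount with skip_balance and then resume or cancel the operation deliberately:

mount -o skip_balance /dev/sdX /mnt
btrfs balance resume /mnt    # or: btrfs balance cancel /mnt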
On Tue, Dec 4, 2018 at 3:09 AM Patrick Dijkgraaf
wrote:
>
> Hi Chris,
>
> See the output below. Any suggestions based on it?
If they're SATA drives, they may not support SCT ERC; and if they're
SAS, depending on what controller they're behind, smartctl might need
a hint to properly ask the drive
Hi Chris,
See the output below. Any suggestions based on it?
Thanks!
--
Groet / Cheers,
Patrick Dijkgraaf
On Mon, 2018-12-03 at 20:16 -0700, Chris Murphy wrote:
> Also useful information for autopsy, perhaps not for fixing, is to
> know whether the SCT ERC value for every drive is less than
> > > > I have been a happy BTRFS user for quite some time. But now I'm
> > > > facing
> > > > a potential ~45TB dataloss... :-(
> > > > I hope someone can help!
> > > >
> > > > I have Server A and Server B. Both having a 20-devices BTRFS
> >
Also useful information for autopsy, perhaps not for fixing, is to
know whether the SCT ERC value for every drive is less than the
kernel's SCSI driver block device command timeout value. It's super
important that the drive reports an explicit read failure before the
read command is considered
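For reference, SCT ERC and the kernel timeout can be checked and set like this (device names are examples; some controllers need a device-type hint such as -d sat or -d megaraid,N):

smartctl -l scterc /dev/sda              # query current SCT ERC values
smartctl -l scterc,70,70 /dev/sda        # set 7.0s read/write recovery limits
cat /sys/block/sda/device/timeout        # kernel command timeout, 30s by default
echo 180 > /sys/block/sda/device/timeout # raise it if the drive lacks SCT ERC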
On 2018/12/3 4:30 AM, Andrei Borzenkov wrote:
> On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
>> I have some additional info.
>>
>> I found the reason the FS got corrupted. It was a single failing drive,
>> which caused the entire cabinet (containing 7 drives) to reset. So the
>> FS suddenly lost 7
>> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
>>> On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
>>>> Hi all,
>>>>
>>>> I have been a happy BTRFS user for quite some time. But now I'm
>>>> facing
>>>> a potential ~45TB dataloss... :-(
On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
>>> Hi all,
>>>
>>> I have been a happy BTRFS user for quite some time. But now I'm
>>> facing
>>> a potential ~45TB dataloss... :-(
>>> I hope someone can help!
>>>
>>> I have Server A and Server B.
On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
> I have some additional info.
>
> I found the reason the FS got corrupted. It was a single failing drive,
> which caused the entire cabinet (containing 7 drives) to reset. So the
> FS suddenly lost 7 drives.
>
This remains a mystery for me. btrfs is
On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
> > On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
> > > Hi all,
> > >
> > > I have been a happy BTRFS user for quite some time. But now I'm
> > > facing
> > > a potential ~45TB dataloss... :-(
> > > I hope someone can help!
> > I have been a happy BTRFS user for quite some time. But now I'm
> > facing
> > a potential ~45TB dataloss... :-(
> > I hope someone can help!
> >
> > I have Server A and Server B. Both having a 20-devices BTRFS RAID6
> > filesystem. Because of known RAID5/6 risks, Server B was a backup
> > of
> >
On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
> Hi all,
>
> I have been a happy BTRFS user for quite some time. But now I'm facing
> a potential ~45TB dataloss... :-(
> I hope someone can help!
>
> I have Server A and Server B. Both having a 20-devices BTRFS RAID6
Hi all,
I have been a happy BTRFS user for quite some time. But now I'm facing
a potential ~45TB dataloss... :-(
I hope someone can help!
I have Server A and Server B. Both having a 20-devices BTRFS RAID6
filesystem. Because of known RAID5/6 risks, Server B was a backup of
Server A.
After
Explicitly state that -d requires root privileges.
Also, update some option handling with regard to the -d option.
Signed-off-by: Misono Tomohiro
---
Documentation/btrfs-subvolume.asciidoc | 3 ++-
cmds-subvolume.c | 8
2 files changed, 10 insertions(+), 1 deletion(-)
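For context, the option in question lists deleted subvolumes that are awaiting cleanup, e.g. (mount point is an example):

sudo btrfs subvolume list -d /mnt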
Currently "sub list -o" lists only child subvolumes of the specified
path. So, update the help message and variable name accordingly.
Signed-off-by: Misono Tomohiro
---
Documentation/btrfs-subvolume.asciidoc | 2 +-
cmds-subvolume.c | 10 +-
2 files
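For context, -o restricts the listing to child subvolumes of the given path, e.g. (path is an example):

btrfs subvolume list -o /mnt/subvol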
From: Jeff Mahoney
The usage definitions for send and receive follow the command
definitions, which use them. This works because we declare them
in commands.h. When we move to using cmd_struct as the entry point,
these declarations will be removed, breaking the commands. Since
handle_special_globals(int shift, int argc, char **argv)
{
-	int has_help = 0;
-	int has_full = 0;
+	bool has_help = false;
+	bool has_full = false;
 	int i;
 	for (i = 0; i < shift; i++) {
 		if (strcmp(argv[i], "--help") == 0)
> -	int has_help = 0;
> -	int has_full = 0;
> +	bool has_help = false;
> +	bool has_full = false;
> 	int i;
>
> 	for (i = 0; i < shift; i++) {
> 		if (strcmp(argv[i], "--help") == 0)
> -			has_help = 1;
From: Jeff Mahoney
The usage definitions for send and receive follow the command
definitions, which use them. This works because we declare them
in commands.h. When we move to using cmd_struct as the entry point,
these declarations will be removed, breaking the commands. Since
handle_special_globals(int shift, int argc, char **argv)
{
-	int has_help = 0;
-	int has_full = 0;
+	bool has_help = false;
+	bool has_full = false;
 	int i;
 	for (i = 0; i < shift; i++) {
 		if (strcmp(argv[i], "--help") == 0)
-			has_help = 1;
+			has_help = true;
> -----Original Message-----
> From: Anand Jain [mailto:anand.j...@oracle.com]
> Sent: Monday, 26 February 2018 7:27 PM
> To: Paul Jones <p...@pauljones.id.au>; linux-btrfs@vger.kernel.org
> Subject: Re: Help with leaf parent key incorrect
>
>
>
> > Th
> There is one io error in the log below,
Apparently, that's not a real EIO. We need to fix it, but it can't be
the root cause we are looking for here.
> Feb 24 22:41:59 home kernel: BTRFS: error (device dm-6) in
btrfs_run_delayed_refs:3076: errno=-5 IO failure
> Feb 24 22:41:59 home
On 02/25/2018 06:16 PM, Paul Jones wrote:
Hi all,
I was running dedupe on my filesystem and something went wrong overnight; by
the time I noticed, the fs was read-only.
Thanks for the report. I have a few questions:
Which RAID profile was used here?
Which dedupe tool was used?
Was the fs
Hi all,
I was running dedupe on my filesystem and something went wrong overnight; by
the time I noticed, the fs was read-only.
When trying to check it this is what I get:
vm-server ~ # btrfs check /dev/mapper/a-backup--a
parent transid verify failed on 2371034071040 wanted 62977 found 62893
parent
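With parent transid failures like these, one low-risk option (a sketch, not from this thread; the destination directory is an example) is to see what btrfs restore can pull off the device read-only before attempting any repair:

btrfs restore -D /dev/mapper/a-backup--a /tmp/restore-test   # -D: dry run, only lists what would be recovered
btrfs restore /dev/mapper/a-backup--a /mnt/recovery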
On Fri, 17 Nov 2017 06:51:52 +0300, Andrei Borzenkov wrote:
> On 16.11.2017 19:13, Kai Krakow wrote:
> ...
> > > BTW: From user API perspective, btrfs snapshots do not guarantee
> > perfect granular consistent backups.
>
> Is it documented somewhere? I was relying on
On 16.11.2017 19:13, Kai Krakow wrote:
...
> > BTW: From user API perspective, btrfs snapshots do not guarantee
> perfect granular consistent backups.
Is it documented somewhere? I was relying on crash-consistent
write-order-preserving snapshots in NetApp for as long as I remember.
And I was sure
Link 2 slipped away, adding it below...
On Tue, 14 Nov 2017 15:51:57 -0500, Dave wrote:
> On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov wrote:
> >
> > On Mon, 13 Nov 2017 22:39:44 -0500
> > Dave wrote:
> >
> > > I have my
On Tue, 14 Nov 2017 15:51:57 -0500, Dave wrote:
> On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov wrote:
> >
> > On Mon, 13 Nov 2017 22:39:44 -0500
> > Dave wrote:
> >
> > > I have my live system on one block device and a
On Tue, Nov 14, 2017 at 3:50 AM, Roman Mamedov wrote:
>
> On Mon, 13 Nov 2017 22:39:44 -0500
> Dave wrote:
>
> > I have my live system on one block device and a backup snapshot of it
> > on another block device. I am keeping them in sync with hourly
On Mon, 13 Nov 2017 22:39:44 -0500
Dave wrote:
> I have my live system on one block device and a backup snapshot of it
> on another block device. I am keeping them in sync with hourly rsync
> transfers.
>
> Here's how this system works in a little more detail:
>
> 1. I
On Tue, 14 Nov 2017 10:14:55 +0300
Marat Khalili wrote:
> Don't keep snapshots under the rsync target; place them under ../snapshots
> (if snapper supports this), or specify them in --exclude and avoid using
> --delete-excluded.
Both are good suggestions, in my case each system does
On 14/11/17 06:39, Dave wrote:
My rsync command currently looks like this:
rsync -axAHv --inplace --delete-delay --exclude-from="/some/file"
"$source_snapshot/" "$backup_location"
As I learned from Kai Krakow in this maillist, you should also add
--no-whole-file if both sides are local.
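Folding that suggestion into the quoted command would give something like (paths as in the original):

rsync -axAHv --inplace --no-whole-file --delete-delay \
    --exclude-from="/some/file" "$source_snapshot/" "$backup_location"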
On Wed, Nov 1, 2017 at 1:15 AM, Roman Mamedov wrote:
> On Wed, 1 Nov 2017 01:00:08 -0400
> Dave wrote:
>
>> To reconcile those conflicting goals, the only idea I have come up
>> with so far is to use btrfs send-receive to perform incremental
>> backups
On Thu, 2 Nov 2017 23:24:29 -0400, Dave wrote:
> On Thu, Nov 2, 2017 at 4:46 PM, Kai Krakow
> wrote:
> > On Wed, 1 Nov 2017 02:51:58 -0400, Dave wrote:
> >
> [...]
> [...]
> [...]
> >>
> >> Thanks for
On Thu, Nov 2, 2017 at 4:46 PM, Kai Krakow wrote:
> On Wed, 1 Nov 2017 02:51:58 -0400, Dave wrote:
>
>> >
>> >> To reconcile those conflicting goals, the only idea I have come up
>> >> with so far is to use btrfs send-receive to perform
On Wed, 1 Nov 2017 02:51:58 -0400, Dave wrote:
> >
> >> To reconcile those conflicting goals, the only idea I have come up
> >> with so far is to use btrfs send-receive to perform incremental
> >> backups
> >
> > As already said by Roman Mamedov, rsync is a viable
[ ... ]
> The poor performance has existed from the beginning of using
> BTRFS + KDE + Firefox (almost 2 years ago), at a point when
> very few snapshots had yet been created. A comparison system
> running similar hardware as well as KDE + Firefox (and LVM +
> EXT4) did not have the performance
On Wed, Nov 1, 2017 at 4:34 AM, Marat Khalili wrote:
>> We do experience severe performance problems now, especially with
>> Firefox. Part of my experiment is to reduce the number of snapshots on
>> the live volumes, hence this question.
>
> Just for statistics, how many snapshots
On 01/11/17 09:51, Dave wrote:
As already said by Roman Mamedov, rsync is a viable alternative to
send-receive with much less hassle. According to some reports it can even be
faster.
Thanks for confirming. I must have missed those reports. I had never
considered this idea until now -- but I like
On Wed, Nov 1, 2017 at 2:19 AM, Marat Khalili wrote:
> You seem to have two tasks: (1) same-volume snapshots (I would not call them
> backups) and (2) updating some backup volume (preferably on a different
> box). By solving them separately you can avoid some complexity...
Yes, it
I'm an active user of backups using btrfs snapshots. Generally it works,
with some caveats.
You seem to have two tasks: (1) same-volume snapshots (I would not call
them backups) and (2) updating some backup volume (preferably on a
different box). By solving them separately you can avoid some
On Wed, 1 Nov 2017 01:00:08 -0400
Dave wrote:
> To reconcile those conflicting goals, the only idea I have come up
> with so far is to use btrfs send-receive to perform incremental
> backups as described here:
> https://btrfs.wiki.kernel.org/index.php/Incremental_Backup
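In outline, the incremental scheme from that page looks like this (subvolume and destination paths are examples):

btrfs subvolume snapshot -r /home /home/snap-old
btrfs send /home/snap-old | btrfs receive /mnt/backup
# each subsequent run sends only the delta against the previous snapshot
btrfs subvolume snapshot -r /home /home/snap-new
btrfs send -p /home/snap-old /home/snap-new | btrfs receive /mnt/backup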
Our use case requires snapshots. btrfs snapshots are the best solution we
have found for our requirements, and over the last year snapshots have
proven their value to us.
(For this discussion I am considering both the "root" volume and the
"home" volume on a typical desktop workstation. Also, all
State that 'delete' is an alias of 'remove', as the man page says.
Signed-off-by: Tomohiro Misono
Reviewed-by: Satoru Takeuchi
---
cmds-device.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-device.c
On 2017/10/11 6:22, Satoru Takeuchi wrote:
> At Tue, 3 Oct 2017 17:12:39 +0900,
> Misono, Tomohiro wrote:
>>
>> This patch updates help/document of "btrfs device remove" in two points:
>>
>> 1. Add explanation of 'missing' for 'device remove'. This
At Tue, 3 Oct 2017 17:12:39 +0900,
Misono, Tomohiro wrote:
>
> This patch updates help/document of "btrfs device remove" in two points:
>
> 1. Add explanation of 'missing' for 'device remove'. This is only
> written in wikipage currently.
> (https://btr
On Tue, Oct 03, 2017 at 03:49:25PM -0700, Stephen Nesbitt wrote:
>
> On 10/3/2017 2:11 PM, Hugo Mills wrote:
> >Hi, Stephen,
> >
> >On Tue, Oct 03, 2017 at 08:52:04PM +, Stephen Nesbitt wrote:
> >>Here it is. There are a couple of out-of-order entries beginning at 117. And
> >>yes I did
On 10/3/2017 2:11 PM, Hugo Mills wrote:
Hi, Stephen,
On Tue, Oct 03, 2017 at 08:52:04PM +, Stephen Nesbitt wrote:
Here it is. There are a couple of out-of-order entries beginning at 117. And
yes I did uncover a bad stick of RAM:
btrfs-progs v4.9.1
leaf 2589782867968 items 134 free
Hi, Stephen,
On Tue, Oct 03, 2017 at 08:52:04PM +, Stephen Nesbitt wrote:
> Here it is. There are a couple of out-of-order entries beginning at 117. And
> yes I did uncover a bad stick of RAM:
>
> btrfs-progs v4.9.1
> leaf 2589782867968 items 134 free space 6753 generation 3351574 owner 2
On Tue, Oct 03, 2017 at 01:06:50PM -0700, Stephen Nesbitt wrote:
> All:
>
> I came back to my computer yesterday to find my filesystem in read
> only mode. Running a btrfs scrub start -dB aborts as follows:
>
> btrfs scrub start -dB /mnt
> ERROR: scrubbing /mnt failed for device id 4: ret=-1,
All:
I came back to my computer yesterday to find my filesystem in read only
mode. Running a btrfs scrub start -dB aborts as follows:
btrfs scrub start -dB /mnt
ERROR: scrubbing /mnt failed for device id 4: ret=-1, errno=5
(Input/output error)
ERROR: scrubbing /mnt failed for device id 5:
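When a scrub aborts with EIO like this, a common way to narrow down the failing device (a sketch, not from this thread; mount point is an example) is:

btrfs scrub status -d /mnt    # per-device scrub statistics
btrfs device stats /mnt       # cumulative read/write/corruption error counters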
This patch updates help/document of "btrfs device remove" in two points:
1. Add explanation of 'missing' for 'device remove'. This is only
written in wikipage currently.
(https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices)
2. Add example of device removal
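For reference, the 'missing' keyword is used when the device is physically gone, along the lines of (mount point is an example; a degraded mount may be needed first):

mount -o degraded /dev/sdX /mnt
btrfs device remove missing /mnt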
details could help the development of BTRFS and maybe
avoid this happening or having a recovery option.
Marc
>
> Now I'm really panicked. Is the FS toast? Can any recovery be attempted?
First I'm a user and list regular, not a dev. With luck they can help
beyond the below suggestions...
However, there's no need to panic in any case, due to the sysadmin's
first rule of backups: The true valu
Hello,
I will try to provide all information pertinent to the situation I find myself
in.
Yesterday, while trying to write some data to a BTRFS filesystem on top of an
mdadm raid5 array encrypted with dmcrypt, comprising four 1 TB HDDs, my system
became unresponsive and I had no choice but to hard
On 2017-09-11 17:33, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 11 Sep 2017 11:11:01 -0400 as
excerpted:
On 2017-09-11 09:16, Marat Khalili wrote:
Patrik, Duncan, thank you for the help. The `btrfs replace start
/dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
try
Austin S. Hemmelgarn posted on Mon, 11 Sep 2017 11:11:01 -0400 as
excerpted:
> On 2017-09-11 09:16, Marat Khalili wrote:
>> Patrik, Duncan, thank you for the help. The `btrfs replace start
>> /dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
>> try to r
On 2017-09-11 09:16, Marat Khalili wrote:
Patrik, Duncan, thank you for the help. The `btrfs replace start
/dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
try to reboot yet, still have grub/efi/several mdadm partitions to copy).
It also worked much faster than mdadm
Patrik, Duncan, thank you for the help. The `btrfs replace start
/dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
try to reboot yet, still have grub/efi/several mdadm partitions to copy).
It also worked much faster than mdadm would take, apparently only moving
126GB used
On 2017-09-10 02:33, Marat Khalili wrote:
It doesn't need the replaced disk to be readable, right? Then what prevents
the same procedure from working without a spare bay?
In theory, nothing.
In practice, there are reliability issues with mounting a filesystem
degraded (and you should be avoiding running
Thanks everyone for the helpful and detailed responses.
Now that you have confirmed that everything is fine with my FS, I'm
relieved, because I can certainly live with the output of df.
On Mon, Sep 11, 2017 at 5:29 AM, Andrei Borzenkov wrote:
> 10.09.2017 23:17, Dmitrii
On 10.09.2017 23:17, Dmitrii Tcvetkov wrote:
>>> Drive1   Drive2   Drive3
>>> X        X
>>> X                 X
>>>          X        X
>>>
>>> Where X is a chunk of a raid1 block group.
>>
>> But this table clearly shows that adding a third drive increases free
>> space by 50%.
FLJ posted on Sun, 10 Sep 2017 15:45:42 +0200 as excerpted:
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The volume is a RAID1 from day
On Sun, 10 Sep 2017 20:15:52 +0200, Ferenc-Levente Juhos wrote:
> > The problem is that each raid1 block group contains two chunks on two
> > separate devices; it can't fully utilize three devices no matter
> > what. If that doesn't suit you then you need to add a 4th disk. After
>
> > Drive1   Drive2   Drive3
> > X        X
> > X                 X
> >          X        X
> >
> > Where X is a chunk of a raid1 block group.
>
> But this table clearly shows that adding a third drive increases free
> space by 50%. You need to reallocate data to actually
On 10.09.2017 19:11, Dmitrii Tcvetkov wrote:
>> Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>> would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>> but that only 1.5 TB is available is even stranger.
>>
>> Could anyone explain what I did wrong or why my
On 10.09.2017 18:47, Kai Krakow wrote:
> On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote:
>
>> Hello all,
>>
>> I have a BTRFS RAID1 volume running for the past year. I avoided all
>> pitfalls known to me that would mess up this volume. I never
>> experimented with quotas,
>The problem is that each raid1 block group contains two chunks on two
>separate devices; it can't fully utilize three devices no matter what.
>If that doesn't suit you then you need to add a 4th disk. After
>that the FS will be able to use all unallocated space on all disks in raid1
>profile. But even then
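To see how the allocator has actually spread chunks across devices, per-device allocation can be inspected with (mount point is an example):

btrfs filesystem usage -T /mnt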
> @Kai and Dmitrii
> thank you for your explanations. If I understand you correctly, you're
> saying that btrfs makes no attempt to "optimally" use the physical
> devices it has in the FS, once a new RAID1 block group needs to be
> allocated it will semi-randomly pick two devices with enough space
@Kai and Dmitrii
thank you for your explanations. If I understand you correctly, you're
saying that btrfs makes no attempt to "optimally" use the physical
devices it has in the FS, once a new RAID1 block group needs to be
allocated it will semi-randomly pick two devices with enough space and
>Actually based on http://carfax.org.uk/btrfs-usage/index.html I
>would've expected 6 TB of usable space. Here I get 6.4 which is odd,
>but that only 1.5 TB is available is even stranger.
>
>Could anyone explain what I did wrong or why my expectations are wrong?
>
>Thank you in advance
I'd say df
On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote:
> Hello all,
>
> I have a BTRFS RAID1 volume running for the past year. I avoided all
> pitfalls known to me that would mess up this volume. I never
> experimented with quotas, no-COW, snapshots, defrag, nothing really.
> The
Hello all,
I have a BTRFS RAID1 volume running for the past year. I avoided all
pitfalls known to me that would mess up this volume. I never
experimented with quotas, no-COW, snapshots, defrag, nothing really.
The volume has been a RAID1 from day 1 and has worked reliably until now.
Until yesterday it
On 10 September 2017 at 08:33, Marat Khalili wrote:
> It doesn't need replaced disk to be readable, right?
Only enough to be mountable, which it already is, so your read errors
on /dev/sdb aren't a problem.
> Then what prevents same procedure to work without a spare bay?
It is
It doesn't need the replaced disk to be readable, right? Then what prevents
the same procedure from working without a spare bay?
--
With Best Regards,
Marat Khalili
On September 9, 2017 1:29:08 PM GMT+03:00, Patrik Lundquist
wrote:
>On 9 September 2017 at 12:05, Marat Khalili
Patrik Lundquist posted on Sat, 09 Sep 2017 12:29:08 +0200 as excerpted:
> On 9 September 2017 at 12:05, Marat Khalili wrote:
>> Forgot to add, I've got a spare empty bay if it can be useful here.
>
> That makes it much easier since you don't have to mount it degraded,
> with the
On 9 September 2017 at 12:05, Marat Khalili wrote:
> Forgot to add, I've got a spare empty bay if it can be useful here.
That makes it much easier since you don't have to mount it degraded,
with the risks involved.
Add and partition the disk.
# btrfs replace start /dev/sdb7 /dev/sdd7 /mnt/data
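Progress can then be watched, and if the new partition is larger the filesystem grown afterwards (the devid 2 here is an assumption; check btrfs filesystem show first):

btrfs replace status /mnt/data
btrfs filesystem resize 2:max /mnt/data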
Forgot to add, I've got a spare empty bay if it can be useful here.
--
With Best Regards,
Marat Khalili
On September 9, 2017 10:46:10 AM GMT+03:00, Marat Khalili wrote:
>Dear list,
>
>I'm going to replace one hard drive (partition actually) of a btrfs
>raid1. Can you please spell
On 9 September 2017 at 09:46, Marat Khalili wrote:
>
> Dear list,
>
> I'm going to replace one hard drive (partition actually) of a btrfs raid1.
> Can you please spell out exactly what I need to do in order to get my filesystem
> working as RAID1 again after replacement, exactly as it
Dear list,
I'm going to replace one hard drive (partition actually) of a btrfs
raid1. Can you please spell out exactly what I need to do in order to get my
filesystem working as RAID1 again after replacement, exactly as it was
before? I saw some bad examples of drive replacement in this list so I
>
[Sun Jun 18 04:02:43 2017] BTRFS critical (device sdb2): corrupt node,
bad key order: block=5123372711936, root=1, slot=82
From the archives, most likely it's bad RAM. I see this system also
uses XFS v4 file system, if it were made as XFS v5 using metadata
csums you'd probably eventually run
> It's worth noting that vger lists have rules different to those in most
> Free Software communities: on vger, you're supposed to send copies to
> everyone -- pretty much everywhere else you are expected to send to the list
> only. This is done by "Reply List" (in Thunderbir
everywhere else you are expected to send to the list
only. This is done by "Reply List" (in Thunderbird, 'L' in mutt, ...).
Such lists do add a set of "List-*:" headers that help the client.
Meow!
--
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ A dumb species has no way to open a tuna can.
⢿⡄⠘⠷⠚⠋⠀ A smart spe
I just noticed a series of seemingly btrfs related call traces that
for the first time, did not lock up the system.
I have uploaded dmesg to https://paste.ee/p/An8Qy
Anyone able to help advise on these?
Thanks
Jesse
On 19 June 2017 at 17:19, Jesse <btrfs_mail_l...@mymail.isbest.biz>
2017-06-19 13:15 GMT+03:00 Jesse :
> Thanks again. So am I to understand that you go into your 'sent'
> folder, find a mail to the mail list (that is not CC to yourself),
> then you reply to this and add the mail list when you need to update
> your own post that
2017-06-19 13:03 GMT+03:00 Jesse :
> Thanks Ivan.
> What about when initiating a post, do I do the same eg:
> TO: myself
> CC: mailing list
>
> or do I
> TO: mailing list
> CC: myself
If your mail client doesn't have a "sent" folder, you can, of course,
follow one
2017-06-19 13:03 GMT+03:00 Jesse :
> Thanks Ivan.
> What about when initiating a post, do I do the same eg:
> TO: myself
> CC: mailing list
>
> or do I
> TO: mailing list
> CC: myself
When initiating a post you should specify "TO: mailing list" only,
without
Thanks Ivan.
What about when initiating a post, do I do the same eg:
TO: myself
CC: mailing list
or do I
TO: mailing list
CC: myself
TIA
On 19 June 2017 at 17:48, Ivan Sizov wrote:
> 2017-06-19 12:32 GMT+03:00 Jesse :
>> So I guess that
2017-06-19 12:32 GMT+03:00 Jesse :
> So I guess that means when I initiate a post, I also need to send it
> to myself as well as the mail list.
You need to do it in the reply only, not in the initial post.
> Does it make any difference where I put respective
Ok thanks Ivan.
So I guess that means when I initiate a post, I also need to send it
to myself as well as the mail list.
Does it make any difference where I put respective addresses, eg: TO: CC: BCC:
Regards
Jesse
On 19 June 2017 at 17:20, Ivan Sizov wrote:
> You should
You should reply both to linux-btrfs@vger.kernel.org and the person
whom you talk to.
2017-06-19 11:37 GMT+03:00 Jesse :
> I have subscribed successfully and am able to post successfully and
> eventually view the post on spinics.net when it becomes available:
>
is related to the crashing. AFAIK rsync
should be creating the temp file in the destination drive (xfs),
unless there is some part of rsync that I am not understanding that
would be writing to the file system drive (btrfs) which is also in
the case the source hdd (btrfs).
Can someone please help with these btrf
I have subscribed successfully and am able to post successfully and
eventually view the post on spinics.net when it becomes available:
eg: http://www.spinics.net/lists/linux-btrfs/msg66605.html
However I do not know how to reply to messages, especially my own to
add more information, such as a
My Linux Mint system is starting up and usable, however, I am unable
to complete any scrub; they abort before finishing. There are various
inode errors in dmesg. Badblocks (read-only) finds no errors. Checking
extents gives bad block 5123372711936 on both /dev/sda2 and /dev/sdb2.
A btrfs check