Re: Kernel lockup, might be helpful log.

2015-12-14 Thread Duncan
Hugo Mills posted on Mon, 14 Dec 2015 08:35:24 +0000 as excerpted:

> It's not just btrfs. Invalid opcode is the way that the kernel's BUG and
> BUG_ON macros are implemented.

Thanks.  I indicated that I suspected broader kernel use further down the 
reply, but it's very nice to have confirmation, both of invalid opcode 
use elsewhere, and of it being the kernel's general implementation for 
BUG and BUG_ON.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Kernel lockup, might be helpful log.

2015-12-14 Thread Filipe Manana
On Sun, Dec 13, 2015 at 10:55 PM, Birdsarenice  wrote:
> I've finally finished deleting all those nasty unreliable Seagate drives
> from my array. During the process I crashed my server - over, and over, and
> over. Completely gone - screen blank, controls unresponsive, no network
> activity (no, I don't have root on btrfs - data only). Most annoying, but I
> think btrfs survived it all somehow - it's scrubbing now.
>
> Meanwhile, I did get lucky: At one crash I happened to be logged in and was
> able to hit dmesg seconds before it went completely. So what I have here is
> information that looks like it'll help you track down a rarely-encountered
> and hard-to-reproduce bug which can cause the system to lock up completely
> in the event of certain types of hard drive failure. It might be nothing, but
> perhaps someone will find it of use - because it'd be a tricky one to both
> reproduce and get a good error report if it did occur.
>
> I see an 'invalid opcode' error in here, that's pretty unusual - and again
> it even gives a file name and line number to look at. The root cause of all
> my issues is the NCQ issue with Seagate 8TB archive drives, which is Someone
> Else's Problem - but I think some good can come of this, as these exotic
> forms of corruption and weird drive semi-failures have revealed ways in
> which btrfs's error handling could be made more graceful.
>
> Meanwhile I remain impressed that btrfs appears to have kept all my data
> intact even through all these issues.

Regarding the trace you got, from a BUG_ON, it's due to a regression
present in the 4.2 and 4.3 kernels that got fixed in 4.4-rc. The fixes
are scheduled for the next stable releases of 4.2.x and 4.3.x. A ton of
people have hit this (one example report:
http://www.spinics.net/lists/linux-btrfs/msg49766.html).



-- 
Filipe David Manana,

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."


Re: Kernel lockup, might be helpful log.

2015-12-14 Thread Birdsarenice
I've no need for a fix. I know exactly what the underlying cause is: 
Those Seagate 8TB Archive drives and their known compatibility issues 
with some kernel versions. I just shared the log because it's a 
situation that btrfs handles very, very poorly, and the error handling 
could be improved. If a drive is unresponsive, btrfs really should be 
able to just cease using it and treat it as failed, or even unmount the 
entire filesystem - either would be preferable to what actually happens 
(at least for me), a system hang that leaves nothing functional whatsoever.


I've 'solved' it by removing all drives of that model. It's been running 
without issue since I did that.


On 14/12/15 07:36, Chris Murphy wrote:

I can't help with the call traces. But several (not all) of the hard
resetting link messages are hallmark cases where the SCSI command
timer default of 30 seconds looks like it's being hit while the drive
itself is hung up doing a sector read recovery (multiple attempts).
It's worth seeing if 'smartctl -l scterc <dev>' will report back that
SCT ERC is supported and that it's just disabled, meaning you can change
this to something sane with 'smartctl -l scterc,70,70 <dev>', which will
make the drive time out before the Linux kernel command timer. That'll
let Btrfs do the right thing, rather than constantly getting poked in
both eyes by link resets.


Chris Murphy





Re: Kernel lockup, might be helpful log.

2015-12-14 Thread Hugo Mills
On Mon, Dec 14, 2015 at 06:51:41AM +0000, Duncan wrote:
> Birdsarenice posted on Sun, 13 Dec 2015 22:55:19 +0000 as excerpted:
> 
> > Meanwhile, I did get lucky: At one crash I happened to be logged in and
> > was able to hit dmesg seconds before it went completely. So what I have
> > here is information that looks like it'll help you track down a
> > rarely-encountered and hard-to-reproduce bug which can cause the system
> > to lock up completely in the event of certain types of hard drive failure.
> > It might be nothing, but perhaps someone will find it of use - because
> > it'd be a tricky one to both reproduce and get a good error report if it
> > did occur.
> > 
> > I see an 'invalid opcode' error in here, that's pretty unusual
> 
> Disclaimer:  I'm a list regular and (small-scale) sysadmin, not a dev, 
> and most certainly not a btrfs dev.  Take what I say with that in mind, 
> tho I've been active on-list for over a year and thus now have a 
> reasonable level of practical sysadmin-configuration and crisis-recovery 
> btrfs experience.
> 
> You could well be quite correct about the unusual crash log and its 
> value, I'll leave that up to the devs to decide, but that 
> "invalid opcode: 0000" bit is in fact not at all unusual on btrfs.  Tho 
> I can say it fooled me originally as well, because it certainly /looks/ 
> both suspicious and in general unusual.
> 
> Based on how a dev explained it to me, I believe btrfs actually 
> deliberately uses opcode 0000 to trigger a semi-controlled crash in 
> instances where code that "should never happen" actually gets executed 
> for some reason, leaving the kernel in an unknown state and thus not 
> trustworthy enough to reliably write to storage devices and do a 
> controlled shutdown.  That's of course why the tracebacks are there, to 
> help the devs figure out where it was and what triggered it, but the 
> 0000 opcode itself is actually quite frequently found in these 
> tracebacks, because it's the method chosen to deliberately trigger them.

   It's not just btrfs. Invalid opcode is the way that the kernel's
BUG and BUG_ON macros are implemented.

   Hugo.

-- 
Hugo Mills | Great oxymorons of the world, no. 10:
hugo@... carfax.org.uk | Business Ethics
http://carfax.org.uk/  |
PGP: E2AB1DE4  |




Kernel lockup, might be helpful log.

2015-12-13 Thread Birdsarenice
I've finally finished deleting all those nasty unreliable Seagate drives 
from my array. During the process I crashed my server - over, and over, 
and over. Completely gone - screen blank, controls unresponsive, no 
network activity (no, I don't have root on btrfs - data only). Most 
annoying, but I think btrfs survived it all somehow - it's scrubbing now.


Meanwhile, I did get lucky: At one crash I happened to be logged in and 
was able to hit dmesg seconds before it went completely. So what I have 
here is information that looks like it'll help you track down a 
rarely-encountered and hard-to-reproduce bug which can cause the system 
to lock up completely in the event of certain types of hard drive failure. 
It might be nothing, but perhaps someone will find it of use - because 
it'd be a tricky one to both reproduce and get a good error report if it 
did occur.


I see an 'invalid opcode' error in here, that's pretty unusual - and 
again it even gives a file name and line number to look at. The root 
cause of all my issues is the NCQ issue with Seagate 8TB archive drives, 
which is Someone Else's Problem - but I think some good can come of 
this, as these exotic forms of corruption and weird drive semi-failures 
have revealed ways in which btrfs's error handling could be made more 
graceful.


Meanwhile I remain impressed that btrfs appears to have kept all my data 
intact even through all these issues.
[11668.697976] BTRFS info (device sde1): relocating block group 5932520046592 
flags 17
[11676.977183] BTRFS info (device sde1): found 20 extents
[11686.138376] BTRFS info (device sde1): found 20 extents
[11686.567242] BTRFS info (device sde1): relocating block group 5935741272064 
flags 17
[11695.452025] BTRFS info (device sde1): found 17 extents
[11704.627191] BTRFS info (device sde1): found 17 extents
[11705.966792] BTRFS info (device sde1): relocating block group 5938962497536 
flags 17
[11715.343790] BTRFS info (device sde1): found 15 extents
[11724.219660] BTRFS info (device sde1): found 15 extents
[11724.910970] BTRFS info (device sde1): relocating block group 5940036239360 
flags 17
[11733.289804] BTRFS info (device sde1): found 22 extents
[11741.538676] BTRFS info (device sde1): found 22 extents
[11742.019752] BTRFS info (device sde1): relocating block group 5941109981184 
flags 17
[11751.676514] BTRFS info (device sde1): found 14 extents
[11759.404371] ------------[ cut here ]------------
[11759.404439] kernel BUG at ../fs/btrfs/extent-tree.c:1832!
[11759.404514] invalid opcode: 0000 [#1] PREEMPT SMP
[11759.404600] Modules linked in: xt_nat nf_conntrack_ipv6 nf_defrag_ipv6 
ip6table_filter ip6_tables xt_conntrack xt_tcpudp ipt_MASQUERADE 
nf_nat_masquerade_ipv4 iptable_filter iptable_nat nf_conntrack_ipv4 
nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables af_packet 
bridge stp llc iscsi_ibft iscsi_boot_sysfs btrfs xor x86_pkg_temp_thermal 
intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul 
crc32c_intel raid6_pq aesni_intel aes_x86_64 lrw gf128mul iTCO_wdt glue_helper 
ablk_helper iTCO_vendor_support cryptd pcspkr i2c_i801 ib_mthca lpc_ich tpm_tis 
8250_fintek ie31200_edac mfd_core shpchp battery edac_core thermal tpm video 
fan button processor hid_generic usbhid uas usb_storage amdkfd amd_iommu_v2 
radeon igb dca i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt
[11759.405914]  fb_sys_fops ttm drm xhci_pci xhci_hcd ehci_pci ehci_hcd usbcore 
usb_common e1000e ptp pps_core fjes vhost_net tun vhost macvtap macvlan sg 
rpcrdma sunrpc rdma_cm iw_cm ib_ipoib ib_cm ib_sa ib_umad ib_mad ib_core ib_addr
[11759.406328] CPU: 2 PID: 2060 Comm: btrfs Not tainted 4.3.0-2-default #1
[11759.406414] Hardware name: FUJITSU PRIMERGY TX100 S3P/D3009-B1, BIOS 
V4.6.5.3 R1.10.0 for D3009-B1x 12/18/2012
[11759.406555] task: 88042f832040 ti: 88041cae4000 task.ti: 
88041cae4000
[11759.406659] RIP: 0010:[]  [] 
insert_inline_extent_backref+0xc6/0xd0 [btrfs]
[11759.406815] RSP: 0018:88041cae7830  EFLAGS: 00010293
[11759.406889] RAX:  RBX:  RCX: 0001
[11759.406986] RDX: 8800 RSI: 0001 RDI: 
[11759.407085] RBP: 88041cae7890 R08: 4000 R09: 88041cae7748
[11759.407184] R10:  R11: 0003 R12: 880412615800
[11759.407283] R13:  R14:  R15: 8800c92aef50
[11759.407383] FS:  7f2e3b1678c0() GS:88042fd0() 
knlGS:
[11759.407497] CS:  0010 DS:  ES:  CR0: 80050033
[11759.407576] CR2: 55f473f59f28 CR3: 0004180be000 CR4: 001406e0
[11759.407675] Stack:
[11759.407706]   0102  

[11759.407831]  0001 88041170d800 32b6 
88041170d800
[11759.407949]  88030f0203b0 8800c92aef50 0102 
88040b22e000
[11759.408069] Call Trace:
[11759.408127]  

Re: Kernel lockup, might be helpful log.

2015-12-13 Thread Duncan
Birdsarenice posted on Sun, 13 Dec 2015 22:55:19 +0000 as excerpted:

> Meanwhile, I did get lucky: At one crash I happened to be logged in and
> was able to hit dmesg seconds before it went completely. So what I have
> here is information that looks like it'll help you track down a
> rarely-encountered and hard-to-reproduce bug which can cause the system
> to lock up completely in the event of certain types of hard drive failure.
> It might be nothing, but perhaps someone will find it of use - because
> it'd be a tricky one to both reproduce and get a good error report if it
> did occur.
> 
> I see an 'invalid opcode' error in here, that's pretty unusual

Disclaimer:  I'm a list regular and (small-scale) sysadmin, not a dev, 
and most certainly not a btrfs dev.  Take what I say with that in mind, 
tho I've been active on-list for over a year and thus now have a 
reasonable level of practical sysadmin-configuration and crisis-recovery 
btrfs experience.

You could well be quite correct about the unusual crash log and its 
value, I'll leave that up to the devs to decide, but that 
"invalid opcode: 0000" bit is in fact not at all unusual on btrfs.  Tho 
I can say it fooled me originally as well, because it certainly /looks/ 
both suspicious and in general unusual.

Based on how a dev explained it to me, I believe btrfs actually 
deliberately uses opcode 0000 to trigger a semi-controlled crash in 
instances where code that "should never happen" actually gets executed 
for some reason, leaving the kernel in an unknown state and thus not 
trustworthy enough to reliably write to storage devices and do a 
controlled shutdown.  That's of course why the tracebacks are there, to 
help the devs figure out where it was and what triggered it, but the 
0000 opcode itself is actually quite frequently found in these 
tracebacks, because it's the method chosen to deliberately trigger them.

I'd guess the same technique is actually used in various other (non-
btrfs) kernel code as well, but in fully stable code it is very rarely 
seen, precisely because it /does/ mean the kernel reached code it was 
never expected to reach, so something specific went wrong to get to that 
point, and in fully stable code it's rare that any code paths leading to 
that sort of execution point remain, as they've all been found over the 
years.

But of course btrfs, while no longer experimental, remains "still 
stabilizing and maturing, not yet fully stable or mature", so there are 
still code paths that occasionally reach these intended-to-be-unreachable 
points.  When that happens, triggering a crash and hopefully getting a 
traceback that helps the devs figure out which code path has the bug and 
why is a good thing to do, and this is apparently the way it's done.
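
For anyone who wants to see which check actually fired, the oops above 
already names the spot (kernel BUG at ../fs/btrfs/extent-tree.c:1832), so 
here is a minimal sketch, assuming a source tree matching the running 4.3 
kernel is checked out locally (the paths and line range are only 
illustrative):

  # show the code around the reported BUG_ON in the matching source tree
  sed -n '1828,1836p' fs/btrfs/extent-tree.c

  # or list all BUG_ON checks in that file, with line numbers
  grep -n 'BUG_ON' fs/btrfs/extent-tree.c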

(BTW, compliments on the nick and email address. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Kernel lockup, might be helpful log.

2015-12-13 Thread Chris Murphy
I can't help with the call traces. But several (not all) of the hard
resetting link messages are hallmark cases where the SCSI command
timer default of 30 seconds looks like it's being hit while the drive
itself is hung up doing a sector read recovery (multiple attempts).
It's worth seeing if 'smartctl -l scterc <dev>' will report back that
SCT ERC is supported and that it's just disabled, meaning you can change
this to something sane with 'smartctl -l scterc,70,70 <dev>', which will
make the drive time out before the Linux kernel command timer. That'll
let Btrfs do the right thing, rather than constantly getting poked in
both eyes by link resets.
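
As a minimal sketch of the above, assuming the drive shows up as /dev/sdX 
(a placeholder) and actually supports SCT ERC; the 180-second fallback is 
only an illustrative value for drives that don't:

  # report the current SCT Error Recovery Control settings
  smartctl -l scterc /dev/sdX

  # cap read/write error recovery at 7.0 seconds (units are 100 ms)
  smartctl -l scterc,70,70 /dev/sdX

  # if the drive can't cap its own recovery, raise the kernel's SCSI
  # command timer (in seconds, default 30) so it outlasts the drive
  cat /sys/block/sdX/device/timeout
  echo 180 > /sys/block/sdX/device/timeout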


Chris Murphy