On 12/22/2020 10:09 AM, mike tancsa wrote:
> On 12/22/2020 10:07 AM, Mark Johnston wrote:
>> Could you go to frame 11 and print zone->uz_name and
>> bucket->ub_bucket[18]? I'm wondering if the item pointer was mangled
>> somehow.
> Thank you for looking!
>
> (kgdb) frame 11
>
> #11
On 12/22/2020 10:07 AM, Mark Johnston wrote:
>
> Could you go to frame 11 and print zone->uz_name and
> bucket->ub_bucket[18]? I'm wondering if the item pointer was mangled
> somehow.
Thank you for looking!
(kgdb) frame 11
#11 0x80ca47d4 in bucket_drain (zone=0xf800037da000,
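For anyone following along, the inspection Mark asked for can be run from that same kgdb session; a sketch of the command sequence (frame number and field names taken from the backtrace above, output omitted):

```
(kgdb) frame 11
(kgdb) print zone->uz_name
(kgdb) print bucket->ub_bucket[18]
```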
On Tue, Dec 22, 2020 at 09:05:01AM -0500, mike tancsa wrote:
> Hmmm, another one. Not sure if this is hardware, as it seems different?
>
>
>
> Fatal trap 12: page fault while in kernel mode
> cpuid = 11; apic id = 0b
> fault virtual address = 0x0
> fault code = supervisor write
Hmmm, another one. Not sure if this is hardware, as it seems different?
Fatal trap 12: page fault while in kernel mode
cpuid = 11; apic id = 0b
fault virtual address = 0x0
fault code = supervisor write data, page not present
instruction pointer = 0x20:0x80ca0826
stack
Was doing a backup via zfs send | zfs recv when the box panicked. It's a
not-so-old RELENG_12 box from last week. Any ideas whether this is a hardware
issue or a bug? It's r368493 from last Wednesday. I don't see any ECC
errors logged, so I don't think it's hardware.
Reading symbols from
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=235683
Rodney W. Grimes changed:
           What |Removed                  |Added
----------------------------------------------------------------------------
             CC|sta...@freebsd.org       |rgri...@freebsd.org
        Summary|ZFS kernel panic when    |[zfs] Panic during data
               |access to data or scrub  |access or scrub on
               |                         |12.0-STABLE r343904 (blkptr
               |                         |at DVA 0 has invalid
Hi,
Anyone offer any suggestions about this?
kernel: panic: solaris assert: arc_decompress(buf) == 0 (0x5 == 0x0), file:
/usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c, line: 4923
kernel: cpuid = 1
kernel: KDB: stack backtrace:
kernel: #0 0x80aadac7 at kdb_backtrace+0x67
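A side note on reading these asserts: the `0x5` in `arc_decompress(buf) == 0 (0x5 == 0x0)` is the function's error return, and error number 5 is EIO, i.e. the buffer failed to decompress/read. A quick illustration of the errno mapping (Python used purely for illustration):

```python
import errno

# The solaris assert prints the failing return value in hex; 0x5 is EIO,
# meaning arc_decompress() hit an I/O-level failure on the buffer.
print(errno.errorcode[5])   # → 'EIO'
print(errno.EIO == 5)       # → True
```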
Got another, very similar panic again on recent 9-STABLE (r255602); I
assume the latest 9.2 release candidate is affected too. Anybody have any
idea of what could be causing this, and of a workaround other than turning
compression off?
Unlike the last panic I reported, this one did not occur
One last piece of information I just got: the problem is not specific to
LZJB compression. I switched to LZ4 and get the same sort of panic:
Fatal trap 12: page fault while in kernel mode
cpuid = 8; apic id = 28
fault virtual address = 0xff8581c48000
fault code = supervisor read data, page
19.07.2013 07:04, olivier wrote:
Hi,
Running 9.2-PRERELEASE #19 r253313 I got the following panic
Fatal trap 12: page fault while in kernel mode
cpuid = 22; apic id = 46
fault virtual address = 0xff827ebca30c
fault code = supervisor read data, page not present
instruction
Hi,
Running 9.2-PRERELEASE #19 r253313 I got the following panic
Fatal trap 12: page fault while in kernel mode
cpuid = 22; apic id = 46
fault virtual address = 0xff827ebca30c
fault code = supervisor read data, page not present
instruction pointer = 0x20:0x81983055
on 01/07/2013 21:50 Jeremy Chadwick said the following:
The issue is that ZFS on FreeBSD is still young compared to other
filesystems (specifically UFS).
That's a fact.
Nothing is perfect, but FFS/UFS tends
to have a significantly larger number of bugs worked out of it to the
point where
On Tue, Jul 02, 2013 at 08:59:56AM +0300, Andriy Gapon wrote:
on 01/07/2013 21:50 Jeremy Chadwick said the following:
The issue is that ZFS on FreeBSD is still young compared to other
filesystems (specifically UFS).
That's a fact.
Nothing is perfect, but FFS/UFS tends
to have a
On Tue, Jul 02, 2013 at 12:57:16AM -0700, Jeremy Chadwick wrote:
But in the OP's case, the situation sounds dire given the limitations --
limitations that someone (apparently not him) chose, which greatly
hinder debugging/troubleshooting. Had a heterogeneous setup been
chosen, the
Hello,
I have not had much time to research this problem yet, so please let me
know what further information I might be able to provide.
This weekend I attempted to upgrade a computer from 8.2-RELEASE-p3 to 8.4
using freebsd-update. After I rebooted to test the new kernel, I got a
panic. I had
*** Sorry for partial first message! (gmail sent after multiple returns
apparently?) ***
Hello,
I have not had much time to research this problem yet, so please let me
know what further information I might be able to provide.
This weekend I attempted to upgrade a computer from 8.2-RELEASE-p3 to
On Mon, Jul 01, 2013 at 11:35:30AM -0400, Scott Sipe wrote:
*** Sorry for partial first message! (gmail sent after multiple returns
apparently?) ***
Hello,
I have not had much time to research this problem yet, so please let me
know what further information I might be able to provide.
On Mon, Jul 01, 2013 at 08:49:25AM -0700, Jeremy Chadwick wrote:
- Is there a reason you do not have dumpdev defined in /etc/rc.conf (or
alternately, no swap device defined in /etc/fstab (which will get
used/honoured by the dumpdev=auto (the default)) ?
This should have read or
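For anyone hitting this thread later: making sure a panic leaves a dump to debug is a small /etc/rc.conf change. A sketch with the defaults mentioned above (adjust paths to taste):

```sh
# /etc/rc.conf -- let savecore(8) find a dump after the next panic
dumpdev="AUTO"        # use a suitable swap device (the default noted above)
dumpdir="/var/crash"  # where savecore(8) writes the vmcore files
```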
- Original Message -
From: Jeremy Chadwick j...@koitsu.org
To: Scott Sipe csco...@gmail.com
Cc: freebsd-stable List freebsd-stable@freebsd.org
Sent: Monday, July 01, 2013 4:49 PM
Subject: Re: ZFS Panic after freebsd-update
On Mon, Jul 01, 2013 at 11:35:30AM -0400, Scott Sipe wrote
On Jul 1, 2013, at 11:49 AM, Jeremy Chadwick j...@koitsu.org wrote:
On Mon, Jul 01, 2013 at 11:35:30AM -0400, Scott Sipe wrote:
*** Sorry for partial first message! (gmail sent after multiple returns
apparently?) ***
Hello,
I have not had much time to research this problem yet, so please
On Mon, Jul 01, 2013 at 12:23:45PM -0400, Paul Mather wrote:
On Jul 1, 2013, at 11:49 AM, Jeremy Chadwick j...@koitsu.org wrote:
On Mon, Jul 01, 2013 at 11:35:30AM -0400, Scott Sipe wrote:
*** Sorry for partial first message! (gmail sent after multiple returns
apparently?) ***
Hello,
On Mon, Jul 1, 2013 at 1:04 PM, Jeremy Chadwick j...@koitsu.org wrote:
On Mon, Jul 01, 2013 at 12:23:45PM -0400, Paul Mather wrote:
On Jul 1, 2013, at 11:49 AM, Jeremy Chadwick j...@koitsu.org wrote:
Of course when I see lines like this:
Trying to mount root from zfs:zroot
on 01/07/2013 20:04 Jeremy Chadwick said the following:
People are operating with the belief that ZFS just
works, when reality shows it works until it doesn't
That reality applies to everything that a man creates with a purpose to work.
I am not sure why you are so over-focused on ZFS.
Please
On Mon, Jul 01, 2013 at 02:04:24PM -0400, Scott Sipe wrote:
On Mon, Jul 1, 2013 at 1:04 PM, Jeremy Chadwick j...@koitsu.org wrote:
On Mon, Jul 01, 2013 at 12:23:45PM -0400, Paul Mather wrote:
On Jul 1, 2013, at 11:49 AM, Jeremy Chadwick j...@koitsu.org wrote:
Of course when I see
On Mon, Jul 01, 2013 at 09:10:45PM +0300, Andriy Gapon wrote:
on 01/07/2013 20:04 Jeremy Chadwick said the following:
People are operating with the belief that ZFS just
works, when reality shows it works until it doesn't
That reality applies to everything that a man creates with a purpose
- Original Message -
From: Scott Sipe csco...@gmail.com
So I realize that neither 8.2-RELEASE nor 8.4-RELEASE is stable, but I
ultimately wasn't sure where the right place to discuss 8.4 is.
Beyond the FS mailing list, was there a better place for my question? I'll
provide the
On 01.07.2013 at 20:56, Steven Hartland kill...@multiplay.co.uk wrote:
- Original Message - From: Scott Sipe csco...@gmail.com
So I realize that neither 8.2-RELEASE nor 8.4-RELEASE is stable, but I
ultimately wasn't sure where the right place to discuss 8.4 is.
Beyond the FS
On Jul 1, 2013, at 19:04, Jeremy Chadwick j...@koitsu.org wrote:
But even stable/X doesn't provide enough coverage at times (the recent
fxp(4)/dhclient issue is proof of that). It's just too bad so many
people have this broken mindset of what stability means on FreeBSD.
As one of the few
On 07/01/13 09:10, Steven Hartland wrote:
[...]
This says you're running an 8.2-RELEASE-p3 kernel, not an 8.4-RELEASE
kernel.
Did the upgrade fail or is that dmesg / uname from your old
kernel?
Looking at the context, he used freebsd-update to
Hi, I'm a bit at the end of my tether.
We had a ZFS panic last night on a machine that hosts all my mail and web; it
was rebooted and it now panics mounting the ZFS root filesystem.
The call stack info is:
solaris assert: ss == NULL, file:
/usr/src/sys/modules/zfs/../../cddl/contrib
14.06.2013 12:55, Dr Josef Karthauser:
Hi, I'm a bit at the end of my tether.
We had a ZFS panic last night on a machine that hosts all my mail and web; it
was rebooted and it now panics mounting the ZFS root filesystem.
The call stack info is:
solaris assert: ss == NULL, file:
/usr
On 14 Jun 2013, at 12:00, Volodymyr Kostyrko c.kw...@gmail.com wrote:
14.06.2013 12:55, Dr Josef Karthauser:
Hi, I'm a bit at the end of my tether.
p.s. the config, btw, is a ZFS mirror on two ad devices. It's got a ZFS root
file system.
If you are fairly sure about your devices you can:
14.06.2013 15:51, Dr Josef Karthauser:
On 14 Jun 2013, at 12:00, Volodymyr Kostyrko c.kw...@gmail.com wrote:
14.06.2013 12:55, Dr Josef Karthauser:
Hi, I'm a bit at the end of my tether.
p.s. the config, btw, is a ZFS mirror on two ad devices. It's got a ZFS root
file system.
If you are
Hiroki Sato h...@freebsd.org wrote
in 20110911.054601.1424617155148336027@allbsd.org:
hr Hiroki Sato h...@freebsd.org wrote
hr in 20110910.044841.232160047547388224@allbsd.org:
hr
hr hr Hiroki Sato h...@freebsd.org wrote
hr hr in 20110907.094717.2272609566853905102@allbsd.org:
Hiroki Sato h...@freebsd.org wrote
in 20110910.044841.232160047547388224@allbsd.org:
hr Hiroki Sato h...@freebsd.org wrote
hr in 20110907.094717.2272609566853905102@allbsd.org:
hr
hr hr During this investigation a disk had to be replaced and resilvering
hr hr it is now in progress.
Hiroki Sato h...@freebsd.org wrote
in 20110907.094717.2272609566853905102@allbsd.org:
hr During this investigation a disk had to be replaced and resilvering
hr it is now in progress. A deadlock and a forced reboot after that
hr made recovering the zfs datasets take a long time (for
On 11/16/2010 8:41 PM, Terry Kennedy wrote:
I would say it is definitely very odd that writes are a problem. Sounds
like it might be a hardware problem. Is it possible to export the pool,
remove the ZIL and re-import it? I myself would be pretty nervous trying
that, but it would help isolate
I would say it is definitely very odd that writes are a problem. Sounds
like it might be a hardware problem. Is it possible to export the pool,
remove the ZIL and re-import it? I myself would be pretty nervous trying
that, but it would help isolate the problem, if you can risk it.
I
I can give a developer remote console / root access to the box if that would
help. I have a couple days before I will need to nuke the pool and restore it
from backups.
I haven't heard from anyone that wants to look into this. I need to get the
pool back into service soon. If I don't get
I am no ZFS kernel-code dude or anything, but it is well known that losing
the ZIL can corrupt things pretty bad with ZFS.
First, thanks for writing back!
I agree that this could be the problem. As I mentioned in my original post,
I followed the steps recommended by zpool status - clearing
Hi Terry,
I am no ZFS kernel-code dude or anything, but it is well known that losing the
ZIL can corrupt things pretty bad with ZFS.
You may want to skim the archives at OpenSolaris ZFS discuss
zfs-disc...@opensolaris.org
All in all, if I was in your situation I would give a whirl at
Hi, sorry for not completely digesting your original post.
I would say it is definitely very odd that writes are a problem. Sounds like
it might be a hardware problem. Is it possible to export the pool, remove the
ZIL and re-import it? I myself would be pretty nervous trying that, but it
I'm posting this to the freebsd-stable and freebsd-fs mailing lists. Followups
should probably happen on freebsd-fs.
I have a ZFS pool configured as:
zpool create data raidz da1 da2 da3 da4 da5 raidz da6 da7 da8 da9 da10
raidz da11 da12 da13 da14 da15 spare da16 log da0
where da1-16 are
Quoting Dmitry Morozovsky ma...@rinet.ru (from Tue, 26 Jan 2010
01:16:28 +0300 (MSK)):
On Mon, 25 Jan 2010, Dmitry Morozovsky wrote:
DM PJD I had a crash during rsync to ZFS today:
DM PJD
DM PJD Do you have recent 7-STABLE? Not sure if it was the same before MFC,
DM
DM
On Tue, 26 Jan 2010, Alexander Leidinger wrote:
AL Well, after updating to the fresh system, scrub finished without errors,
AL and now rsync is running; it has copied 15G out of 150 so far.
AL
AL You may want to switch the checksum algorithm to fletcher4. It (fletcher4
AL the default instead of
will do, thank you. is fletcher4 faster?
Not necessarily. But it works much better as a checksum. See the
following link for the details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
--Artem
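For those curious why fletcher4 "works much better as a checksum": it chains four 64-bit accumulators over the buffer's 32-bit words, so reordered data and many error patterns that a plain sum misses change the higher-order sums (the bug linked above covers the specific fletcher2 weakness). A minimal sketch of the scheme, not the actual ZFS code, assuming the buffer length is a multiple of 4 and little-endian words as ZFS uses:

```python
import struct

MASK64 = (1 << 64) - 1

def fletcher4(data: bytes):
    """Toy fletcher4 in the ZFS style: four chained 64-bit sums
    over the buffer's little-endian 32-bit words."""
    a = b = c = d = 0
    for (w,) in struct.iter_unpack("<I", data):
        a = (a + w) & MASK64   # plain running sum
        b = (b + a) & MASK64   # each level folds in the one below,
        c = (c + b) & MASK64   # making the result position-sensitive
        d = (d + c) & MASK64
    return a, b, c, d

# Swapping two words leaves the plain sum (a) identical but changes
# the chained sums, so the transposition is detected:
x = fletcher4(b"\x01\x00\x00\x00" + b"\x02\x00\x00\x00")
y = fletcher4(b"\x02\x00\x00\x00" + b"\x01\x00\x00\x00")
print(x != y)   # → True
```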
On Tue, 26 Jan 2010, Artem Belevich wrote:
AB will do, thank you. is fletcher4 faster?
AB Not necessarily. But it does work as a checksum much better. See
AB following link for the details.
AB
AB http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
Yes, I already read some
Dear colleagues,
I had a crash during rsync to ZFS today:
(kgdb) bt
#0 doadump () at pcpu.h:196
#1 0xc050c688 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:418
#2 0xc050c965 in panic (fmt=Variable fmt is not available.
) at /usr/src/sys/kern/kern_shutdown.c:574
#3 0xc08e95ce in
On Mon, Jan 25, 2010 at 10:04:20PM +0300, Dmitry Morozovsky wrote:
Dear colleagues,
I had a crash during rsync to ZFS today:
Do you have recent 7-STABLE? Not sure if it was the same before MFC,
probably not, because what you see is impossible in the source I'm
looking at. At the beginning
On Mon, 25 Jan 2010, Pawel Jakub Dawidek wrote:
PJD On Mon, Jan 25, 2010 at 10:04:20PM +0300, Dmitry Morozovsky wrote:
PJD Dear colleagues,
PJD
PJD I had a crash during rsync to ZFS today:
PJD
PJD Do you have recent 7-STABLE? Not sure if it was the same before MFC,
r...@woozle:/var/crash#
On Mon, 25 Jan 2010, Dmitry Morozovsky wrote:
DM PJD I had a crash during rsync to ZFS today:
DM PJD
DM PJD Do you have recent 7-STABLE? Not sure if it was the same before MFC,
DM
DM r...@woozle:/var/crash# uname -a
DM FreeBSD woozle.rinet.ru 7.2-STABLE FreeBSD 7.2-STABLE #4: Mon Dec 14
Sometime on Sunday our main server panicked with the following error:
panic: solaris assert: sm-sm_space == space (0x5e45000 == 0x5e45600), file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c,
line: 361
I did some googling and found a couple of references
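For context on what that assert checks: a space map keeps both a set of segments and a running byte counter (`sm_space`), and the panic fires when the counter no longer equals the sum of the segments, i.e. the in-memory accounting and the recorded ranges have diverged. A toy model of the invariant, purely illustrative and not ZFS code, with names borrowed from the assert:

```python
class ToySpaceMap:
    """Illustrates the invariant behind 'sm->sm_space == space':
    an incrementally maintained counter must match the segment sum."""

    def __init__(self):
        self.segments = []    # (start, size) ranges tracked by the map
        self.sm_space = 0     # running total, updated on every change

    def add(self, start, size):
        self.segments.append((start, size))
        self.sm_space += size

    def remove(self, start, size):
        self.segments.remove((start, size))
        self.sm_space -= size

    def check(self):
        # Recompute the total from the segments; in the panic above the
        # two values differ by 0x600 bytes (0x5e45000 vs 0x5e45600).
        space = sum(size for _, size in self.segments)
        assert self.sm_space == space, f"0x{self.sm_space:x} == 0x{space:x}"
        return space

sm = ToySpaceMap()
sm.add(0x1000, 0x600)
sm.add(0x2000, 0x200)
print(hex(sm.check()))   # → '0x800'
```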
Hi,
Yesterday I had the opportunity to play around with my yet-to-become new
fileserver a bit more. Originally I had installed 7.2-R, which I upgraded
to 8.0-RC2 yesterday. After that I upgraded my zpool, consisting of 4 disks
in a raidz1 constellation, to v13.
Some time later I tried to use powerd
Hi,
I got the following panic when rebooting after a crash on 7.2-REL:
panic: solaris assert: dmu_read(os, smo-smo_object, offset, size,
entry_map) == 0 (0x5 == 0x0), file:
/usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/space_map.c,
line: 341
This seems to be the same
panic: mtx_lock() of destroyed mutex @ /usr/src/sys/kern/vfs_subr.c:2467
cpuid = 1
I was doing a zfs destroy -r of a dataset. The dataset has had many
snapshot receives done.
# uname -a
FreeBSD 8.0-RC1 FreeBSD 8.0-RC1 #1: Tue Oct 13 14:11:08 CEST 2009
root@:/usr/obj/usr/src/sys/DEBUG
km...@freebsd.org aka Kip Macy wrote
on Fri, 12 Jun 2009 13:54:40 -0700 in m2n.fbsd.stable:
|show sleepchain
|show thread 100263
|
|On Fri, Jun 12, 2009 at 6:56 AM, Andriy Gapon a...@icyb.net.ua wrote:
|
| I did zfs rollback x...@yyy
| And then did ls on a directory in the rolled-back
-Original Message-
From: owner-freebsd-sta...@freebsd.org [mailto:owner-freebsd-
sta...@freebsd.org] On Behalf Of Andriy Gapon
Sent: 28 May 2009 18:11
To: Lawrence Farr
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS panic in zfs_fuid_create
on 27/05/2009 19:25 Lawrence Farr
on 27/05/2009 19:25 Lawrence Farr said the following:
I updated my backup boxes to the latest and greatest ZFS code,
and started getting the following panic on them all (3 machines):
panic: zfs_fuid_create
cpuid = 1
Uptime: 1h28m48s
Cannot dump. No dump device defined.
Automatic reboot in
I updated my backup boxes to the latest and greatest ZFS code,
and started getting the following panic on them all (3 machines):
panic: zfs_fuid_create
cpuid = 1
Uptime: 1h28m48s
Cannot dump. No dump device defined.
Automatic reboot in 15 seconds - press a key on the console to abort
A quick
while rebooting:
(kgdb) bt
#0 doadump () at pcpu.h:196
#1 0x80514298 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:418
#2 0x80514575 in panic (fmt=Variable fmt is not available.
) at /usr/src/sys/kern/kern_shutdown.c:574
#3 0x806a74d4 in trap_fatal (frame=0xbf5b9a24, eva=12) at
On Tue, Feb 17, 2009 at 09:43:31PM -0800, Cy Schubert wrote:
I got this panic after issuing reboot(8).
FreeBSD 7.1-STABLE FreeBSD 7.1-STABLE #0: Tue Feb 17 19:29:23 PST 2009
c...@cwsys:/export/obj/export/home/cy/test/test-stable7/sys/DEBUG i386
FreeBSD/i386 (bob) (ttyd0)
login:
In message 20090218162126.gq41...@deviant.kiev.zoral.com.ua, Kostik
Belousov
writes:
On Tue, Feb 17, 2009 at 09:43:31PM -0800, Cy Schubert wrote:
I got
I got this panic after issuing reboot(8).
FreeBSD 7.1-STABLE FreeBSD 7.1-STABLE #0: Tue Feb 17 19:29:23 PST 2009
c...@cwsys:/export/obj/export/home/cy/test/test-stable7/sys/DEBUG i386
FreeBSD/i386 (bob) (ttyd0)
login: Feb 17 21:22:56 bob reboot: rebooted by root
Feb 17 21:22:56 bob
Cy Schubert wrote:
I got this panic after issuing reboot(8).
FreeBSD 7.1-STABLE FreeBSD 7.1-STABLE #0: Tue Feb 17 19:29:23 PST 2009
c...@cwsys:/export/obj/export/home/cy/test/test-stable7/sys/DEBUG i386
FreeBSD/i386 (bob) (ttyd0)
login: Feb 17 21:22:56 bob reboot: rebooted by root
LI Xin wrote:
Hi,
The following iozone test case on ZFS would reliably trigger panic:
/usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g
-i 0 -i 1 -i 2 -i 8 -+p 70 -C
It can also be (eventually) triggered by blogbench -c 100 -i 30 -r 50
-w 10 -W 10 and heavy IO load
On Sun, Mar 02, 2008 at 03:49:03AM -0800, LI Xin wrote:
Hi,
The following iozone test case on ZFS would reliably trigger panic:
/usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g
-i 0 -i 1 -i 2 -i 8 -+p 70 -C
Thanks, I'll try to reproduce it.
[...]
#19
Pawel Jakub Dawidek wrote:
On Sun, Mar 02, 2008 at 03:49:03AM -0800, LI Xin wrote:
Hi,
The following iozone test case on ZFS would reliably trigger panic:
/usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g
-i 0 -i 1 -i 2 -i 8 -+p 70 -C
Thanks, I'll try to reproduce
Tue, 04 Mar 2008 03:27:35 +0800, Xin LI [EMAIL PROTECTED]:
The kernel is
FreeBSD fs12.sina.com.cn 7.0-STABLE FreeBSD 7.0-STABLE #0: Sun Mar 2
18:50:05 CST 2008 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/ZFORK
amd64
The zfs get all output is below:
fs12# zfs get all
NAME PROPERTY VALUE
Hi,
The following iozone test case on ZFS would reliably trigger panic:
/usr/local/bin/iozone -M -e -+u -T -t 128 -S 4096 -L 64 -R -r 4k -s 30g
-i 0 -i 1 -i 2 -i 8 -+p 70 -C
Unfortunately kgdb cannot reveal a useful backtrace. I have tried
KDB_TRACE, but have not yet been able to further