Re: 10.2-Beta i386..what's wrong..?
> > ...more RAM? Always more RAM?
>
> Reality check please, this is an i386 machine with 2 GBytes.
> It has two of 3 sockets populated with RAM modules (1G each); there is
> not that much room to give it more RAM.
>
> i386 is a supported architecture as far as I know. OK, it would be nice
> to have an i386 with 8 gigs of RAM, but there are not many motherboards
> out there that would support that..
>
> This reminds me of Microsoft's attempts at writing software: always more
> RAM and more CPU to speed up the endless loops..
>
> Regards,
>
> Holm

Of course i386 is supported, but in the near future arm will be tier 1 as
well. Not every arch can do everything equally well.

ZFS was designed to allow huge storage sizes (files are limited to 16 EiB).
At some point the storage is just too big for fsck to make sense, so you
use copy-on-write, and copy-on-write kills performance unless you do a
significant amount of caching. Thus the tradeoff: for big storage you need
big memory.

For an excellent rundown of why ZFS does this, and some comparisons to UFS,
I highly recommend Dr. McKusick's "An Introduction to the Implementation of
ZFS" from BSDCan 2015:

Part 1, 18m: https://youtu.be/UP_JfUUmDZo
Part 2, 54m: https://youtu.be/l-RCLgLxuSc
Re: 10.2-Beta i386..what's wrong..?
On Fri, 24 Jul 2015 09:50:52 +0100 Matthew Seaman wrote

> On 07/24/15 07:58, Holm Tiffe wrote:
> > ..interestingly, people here seem to be focusing my problem on ZFS..
> > but my problem was to build a RAID over 4 disks on my old i386 machine,
> > and that failed with 2 different approaches.
> >
> > I accept that ZFS is too big a thing for the i386 architecture and
> > I should probably leave it alone on that machine.
> >
> > But my 2nd try with gvinum failed as well ...why?
>
> I've had success using a combination of gstripe and gmirror to create a
> RAID10 over 4 drives:
>
> % gstripe status
>        Name  Status  Components
>  stripe/st0      UP  mirror/gm2
>                      mirror/gm1
> % gmirror status
>        Name    Status  Components
>  mirror/gm2  COMPLETE  da0 (ACTIVE)
>                        da1 (ACTIVE)
>  mirror/gm1  COMPLETE  da2 (ACTIVE)
>                        da3 (ACTIVE)
>
> This is a separate data area though -- the system boots from different
> drives.  I can't remember if it is possible to boot from a gstripe.

While I haven't done so since around the beginning of 8, I can confirm
it is possible to boot from a gstripe(8).

--Chris

>
> Cheers,
>
> Matthew
Re: 10.2-Beta i386..what's wrong..?
On Fri, Jul 24, 2015 at 3:02 AM, Holm Tiffe wrote:

> ...more RAM? Always more RAM?

For ZFS, yes. Stick to UFS otherwise.

-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Re: 10.2-Beta i386..what's wrong..?
Glen Barber wrote:
> On Fri, Jul 24, 2015 at 02:54:00AM +0200, Michelle Sullivan wrote:
>
> > Actually I'm quite successfully running zfs on i386 (in a VM) ... here's
> > the trick (which leads me to suspect ARC handling as the problem) - when
> > I get to 512M of kernel space or less than 1G of RAM available system
> > wide, I export/import the zfs pool... Using this formula I have uptimes
> > of months... I haven't yet tried the 'ARC patch' that was proposed
> > recently...
> >
>
> Which FreeBSD version is this?  Things have changed enough between
> 10.1-RELEASE and what will be 10.2-RELEASE that I can't even get
> a single-disk ZFS system (in VirtualBox) to boot on i386.  During
> 10.1-RELEASE testing, I only saw problems with multi-disk setups
> (mirror, raidzN), but the FreeBSD kernel has grown since 10.1-RELEASE,
> so this is not unexpected.
>

9.2-i386 and 9.3-i386 - I don't run 10 on anything.

-- 
Michelle Sullivan
http://www.mhix.org/
Re: 10.2-Beta i386..what's wrong..?
Matthew Seaman wrote:
> On 07/24/15 07:58, Holm Tiffe wrote:
> > ..interestingly, people here seem to be focusing my problem on ZFS..
> > but my problem was to build a RAID over 4 disks on my old i386 machine,
> > and that failed with 2 different approaches.
> >
> > I accept that ZFS is too big a thing for the i386 architecture and
> > I should probably leave it alone on that machine.
> >
> > But my 2nd try with gvinum failed as well ...why?
>
> I've had success using a combination of gstripe and gmirror to create a
> RAID10 over 4 drives:
>
> % gstripe status
>        Name  Status  Components
>  stripe/st0      UP  mirror/gm2
>                      mirror/gm1
> % gmirror status
>        Name    Status  Components
>  mirror/gm2  COMPLETE  da0 (ACTIVE)
>                        da1 (ACTIVE)
>  mirror/gm1  COMPLETE  da2 (ACTIVE)
>                        da3 (ACTIVE)
>
> This is a separate data area though -- the system boots from different
> drives.  I can't remember if it is possible to boot from a gstripe.
>
> Cheers,
>
> Matthew
>

gmirror is working for me too (as I already described); I'm talking about
gvinum here.., that's different.

Regards,

Holm
-- 
Technik Service u. Handel Tiffe, www.tsht.de, Holm Tiffe,
Freiberger Straße 42, 09600 Oberschöna, USt-Id: DE253710583
www.tsht.de, i...@tsht.de, Fax +49 3731 74200, Mobil: 0172 8790 741
Re: 10.2-Beta i386..what's wrong..?
On 07/24/15 07:58, Holm Tiffe wrote:
> ..interestingly, people here seem to be focusing my problem on ZFS.. but
> my problem was to build a RAID over 4 disks on my old i386 machine, and
> that failed with 2 different approaches.
>
> I accept that ZFS is too big a thing for the i386 architecture and
> I should probably leave it alone on that machine.
>
> But my 2nd try with gvinum failed as well ...why?

I've had success using a combination of gstripe and gmirror to create a
RAID10 over 4 drives:

% gstripe status
       Name  Status  Components
 stripe/st0      UP  mirror/gm2
                     mirror/gm1
% gmirror status
       Name    Status  Components
 mirror/gm2  COMPLETE  da0 (ACTIVE)
                       da1 (ACTIVE)
 mirror/gm1  COMPLETE  da2 (ACTIVE)
                       da3 (ACTIVE)

This is a separate data area though -- the system boots from different
drives.  I can't remember if it is possible to boot from a gstripe.

Cheers,

Matthew
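[For anyone who wants to reproduce a layout like the one Matthew shows, a
minimal sketch of the commands involved could look like this.  The labels
and disk names are taken from his status output; the newfs options, mount
point and loader.conf lines are illustrative choices, not something from
his post:

  # load the GEOM classes if they are not already in the kernel
  kldload geom_mirror geom_stripe

  # two mirrored pairs...
  gmirror label -v gm2 da0 da1
  gmirror label -v gm1 da2 da3

  # ...striped together into one RAID10 provider
  gstripe label -v st0 mirror/gm2 mirror/gm1

  # put a filesystem on it and mount it
  newfs -U /dev/stripe/st0
  mount /dev/stripe/st0 /data

  # so the providers come back at boot, /boot/loader.conf needs:
  #   geom_mirror_load="YES"
  #   geom_stripe_load="YES"

The nice property of this arrangement (mirrors underneath, stripe on top)
is that any single disk can fail and the stripe still sees two healthy
providers.]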
Re: 10.2-Beta i386..what's wrong..?
Glen Barber wrote:
> On Thu, Jul 23, 2015 at 07:44:43PM -0500, Mark Linimon wrote:
> > On Fri, Jul 24, 2015 at 12:43:43AM +0000, Glen Barber wrote:
> > > Even on amd64, you need to tune the system with less than 4GB RAM.
> >
> > The only correct answer to "how much RAM do you need to run ZFS" is
> > "always more" AFAICT.
> >
>
> There's a bit more to it than that.  You *can* successfully run an amd64
> ZFS system with certain tunings (vfs.kmem_max IIRC), but you also need
> to adjust things like disabling prefetching with less than 4GB RAM
> (accessible to the OS).
>
> So yeah, "more RAM" is always a thing in this playing field.
>
> Glen
>

...more RAM? Always more RAM?

Reality check please, this is an i386 machine with 2 GBytes.
It has two of 3 sockets populated with RAM modules (1G each); there is not
that much room to give it more RAM.

i386 is a supported architecture as far as I know. OK, it would be nice to
have an i386 with 8 gigs of RAM, but there are not many motherboards out
there that would support that..

This reminds me of Microsoft's attempts at writing software: always more
RAM and more CPU to speed up the endless loops..

Regards,

Holm
-- 
Technik Service u. Handel Tiffe, www.tsht.de, Holm Tiffe,
Freiberger Straße 42, 09600 Oberschöna, USt-Id: DE253710583
www.tsht.de, i...@tsht.de, Fax +49 3731 74200, Mobil: 0172 8790 741
Re: 10.2-Beta i386..what's wrong..?
Glen Barber wrote:
> On Thu, Jul 23, 2015 at 08:42:44PM -0400, Brandon Allbery wrote:
> > On Thu, Jul 23, 2015 at 8:40 PM, Mark Linimon wrote:
> >
> > > zfs is a resource hog.  i386 is not able to handle the demand as well
> > > as amd64.
> > >
> >
> > Even amd64 is no guarantee. I installed one of the Illumos spinoffs on
> > a 2GB amd64 netbook (they mostly force zfs). I think it lasted 2 days
> > before the kernel panics started.
> >
>
> Even on amd64, you need to tune the system with less than 4GB RAM.
>
> Glen
>

..interestingly, people here seem to be focusing my problem on ZFS.. but my
problem was to build a RAID over 4 disks on my old i386 machine, and that
failed with 2 different approaches.

I accept that ZFS is too big a thing for the i386 architecture and
I should probably leave it alone on that machine.

But my 2nd try with gvinum failed as well ...why?

In the meantime I've set up the first two disks as a geom_mirror, installed
2x swap and a 66G UFS on the mirror, successfully installed 10.2-Beta,
pulled the sources with svn, and rebuilt the entire world and a kernel
yesterday.  That worked flawlessly.  So the hardware is out of the question
here.

It seems that geom_vinum is broken, and it is broken on 9.3, 10.1 and
10.2-Beta.  There isn't any other possibility anymore.  Can someone please
confirm this with a similar machine?

Probably I could attach an IDE disk and move the newly built system to it,
to give ZFS a 2nd try on the 4 SCSI disks and see whether something has
changed in the meantime, or whether a kernel with KSTACK_PAGES=4 would fix
the ZFS problem.

I had gvinum and gmirror running on that machine in the past, up to
8.4-stable.  It seems that we've lost this possibility now..

Regards,

Holm
-- 
Technik Service u. Handel Tiffe, www.tsht.de, Holm Tiffe,
Freiberger Straße 42, 09600 Oberschöna, USt-Id: DE253710583
www.tsht.de, i...@tsht.de, Fax +49 3731 74200, Mobil: 0172 8790 741
Re: 10.2-Beta i386..what's wrong..?
On Fri, 24 Jul 2015 01:00:03 + Glen Barber wrote ..

> the FreeBSD kernel has grown since 10.1-RELEASE, so this is not
> unexpected.

Not trying to hijack the thread, or anything.  But on that note: does
FreeBSD keep a graph, or anything else that tracks kernel size across
major versions?  I'm sure I'm not the only one that would find this
interesting. :)

--Chris

>
> Glen
Re: 10.2-Beta i386..what's wrong..?
On Fri, Jul 24, 2015 at 02:54:00AM +0200, Michelle Sullivan wrote:
> Glen Barber wrote:
> > On Thu, Jul 23, 2015 at 07:44:43PM -0500, Mark Linimon wrote:
> >
> > > On Fri, Jul 24, 2015 at 12:43:43AM +0000, Glen Barber wrote:
> > >
> > > > Even on amd64, you need to tune the system with less than 4GB RAM.
> > > >
> > > The only correct answer to "how much RAM do you need to run ZFS" is
> > > "always more" AFAICT.
> > >
> >
> > There's a bit more to it than that.  You *can* successfully run an
> > amd64 ZFS system with certain tunings (vfs.kmem_max IIRC), but you also
> > need to adjust things like disabling prefetching with less than 4GB RAM
> > (accessible to the OS).
> >
> > So yeah, "more RAM" is always a thing in this playing field.
> >
> > Glen
> >
>
> Actually I'm quite successfully running zfs on i386 (in a VM) ... here's
> the trick (which leads me to suspect ARC handling as the problem) - when
> I get to 512M of kernel space or less than 1G of RAM available system
> wide, I export/import the zfs pool... Using this formula I have uptimes
> of months... I haven't yet tried the 'ARC patch' that was proposed
> recently...
>

Which FreeBSD version is this?  Things have changed enough between
10.1-RELEASE and what will be 10.2-RELEASE that I can't even get
a single-disk ZFS system (in VirtualBox) to boot on i386.  During
10.1-RELEASE testing, I only saw problems with multi-disk setups (mirror,
raidzN), but the FreeBSD kernel has grown since 10.1-RELEASE, so this is
not unexpected.

Glen
Re: 10.2-Beta i386..what's wrong..?
Glen Barber wrote:
> On Thu, Jul 23, 2015 at 07:44:43PM -0500, Mark Linimon wrote:
>
> > On Fri, Jul 24, 2015 at 12:43:43AM +0000, Glen Barber wrote:
> >
> > > Even on amd64, you need to tune the system with less than 4GB RAM.
> > >
> > The only correct answer to "how much RAM do you need to run ZFS" is
> > "always more" AFAICT.
> >
>
> There's a bit more to it than that.  You *can* successfully run an amd64
> ZFS system with certain tunings (vfs.kmem_max IIRC), but you also need
> to adjust things like disabling prefetching with less than 4GB RAM
> (accessible to the OS).
>
> So yeah, "more RAM" is always a thing in this playing field.
>
> Glen
>

Actually I'm quite successfully running zfs on i386 (in a VM) ... here's
the trick (which leads me to suspect ARC handling as the problem) - when
I get to 512M of kernel space or less than 1G of RAM available system
wide, I export/import the zfs pool... Using this formula I have uptimes
of months... I haven't yet tried the 'ARC patch' that was proposed
recently...

-- 
Michelle Sullivan
http://www.mhix.org/
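[Purely as an illustration of the workaround Michelle describes -- the pool
name "tank", the 1 GB threshold and the idea of scripting it at all are
assumptions, not details from her mail -- a check-and-bounce sketch on
FreeBSD could look like this:

  #!/bin/sh
  # Sketch: watch free memory and ARC size, and export/import the pool
  # when free memory gets low.  Note that the export will fail if any
  # dataset in the pool is in use at that moment.
  POOL=tank
  PAGESIZE=$(sysctl -n hw.pagesize)
  FREE_PAGES=$(sysctl -n vm.stats.vm.v_free_count)
  FREE_BYTES=$((FREE_PAGES * PAGESIZE))
  ARC_BYTES=$(sysctl -n kstat.zfs.misc.arcstats.size)

  echo "ARC: ${ARC_BYTES} bytes, free memory: ${FREE_BYTES} bytes"

  if [ "${FREE_BYTES}" -lt $((1024 * 1024 * 1024)) ]; then
      zpool export "${POOL}" && zpool import "${POOL}"
  fi

Run from cron or by hand, this mimics the "bounce the pool before the
kernel address space fills up" trick rather than fixing the underlying
ARC behaviour.]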
Re: 10.2-Beta i386..what's wrong..?
On Thu, Jul 23, 2015 at 07:44:43PM -0500, Mark Linimon wrote:
> On Fri, Jul 24, 2015 at 12:43:43AM +0000, Glen Barber wrote:
> > Even on amd64, you need to tune the system with less than 4GB RAM.
>
> The only correct answer to "how much RAM do you need to run ZFS" is
> "always more" AFAICT.
>

There's a bit more to it than that.  You *can* successfully run an amd64
ZFS system with certain tunings (vfs.kmem_max IIRC), but you also need
to adjust things like disabling prefetching with less than 4GB RAM
(accessible to the OS).

So yeah, "more RAM" is always a thing in this playing field.

Glen
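[As a rough illustration only -- Glen is recalling the tunable name from
memory, and the values below are guesses for a machine in the 2 GB class
rather than settings taken from this thread -- the /boot/loader.conf knobs
usually mentioned for low-memory ZFS look something like:

  # illustrative low-memory ZFS tuning, adjust to the machine
  vm.kmem_size="512M"            # cap kernel memory (i386 KVA is small)
  vm.kmem_size_max="512M"
  vfs.zfs.arc_max="256M"         # keep the ARC modest
  vfs.zfs.prefetch_disable="1"   # the prefetch knob for <4GB systems]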
Re: 10.2-Beta i386..what's wrong..?
On Thu, Jul 23, 2015 at 8:43 PM, Glen Barber wrote:
> > Even amd64 is no guarantee. I installed one of the Illumos spinoffs on
> > a 2GB amd64 netbook (they mostly force zfs). I think it lasted 2 days
> > before the kernel panics started.
> >
>
> Even on amd64, you need to tune the system with less than 4GB RAM.

I knew it wasn't going to fly; in fact I looked for ways to get the
installer to do ufs instead, because I couldn't imagine zfs being able to
work in 2GB.  Somehow I don't think old netbooks are in Illumos's plans. :)

-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Re: 10.2-Beta i386..what's wrong..?
On Fri, Jul 24, 2015 at 12:43:43AM +0000, Glen Barber wrote:
> Even on amd64, you need to tune the system with less than 4GB RAM.

The only correct answer to "how much RAM do you need to run ZFS" is
"always more" AFAICT.

mcl
Re: 10.2-Beta i386..what's wrong..?
On Thu, Jul 23, 2015 at 08:42:44PM -0400, Brandon Allbery wrote:
> On Thu, Jul 23, 2015 at 8:40 PM, Mark Linimon wrote:
>
> > zfs is a resource hog.  i386 is not able to handle the demand as well
> > as amd64.
> >
>
> Even amd64 is no guarantee. I installed one of the Illumos spinoffs on a
> 2GB amd64 netbook (they mostly force zfs). I think it lasted 2 days
> before the kernel panics started.
>

Even on amd64, you need to tune the system with less than 4GB RAM.

Glen
Re: 10.2-Beta i386..what's wrong..?
On Thu, Jul 23, 2015 at 8:40 PM, Mark Linimon wrote:
> zfs is a resource hog.  i386 is not able to handle the demand as well
> as amd64.

Even amd64 is no guarantee. I installed one of the Illumos spinoffs on a
2GB amd64 netbook (they mostly force zfs). I think it lasted 2 days before
the kernel panics started.

-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Re: 10.2-Beta i386..what's wrong..?
On Fri, Jul 24, 2015 at 02:19:20AM +0200, Michelle Sullivan wrote:
> Why is zfs on i386 so hard?

zfs is a resource hog.  i386 is not able to handle the demand as well
as amd64.

I have never, ever, heard anyone who has a deep understanding of zfs on
FreeBSD recommend anything other than amd64.  (Note: I am not such
a person, so I am simply reporting my understanding.)

FWIW, I tried it once.  Once.  After spending a few days inspecting all
the bullet holes in my feet, I moved it off that i386 machine and all the
bullet holes healed up.

tl;dr: zfs/i386 Not Recommended.  But YMMV.

mcl
Re: 10.2-Beta i386..what's wrong..?
On Fri, Jul 24, 2015 at 02:19:20AM +0200, Michelle Sullivan wrote:
> Glen Barber wrote:
> >
> > ZFS on i386 requires KSTACK_PAGES=4 in the kernel configuration to work
> > properly, as noted in the 10.1-RELEASE errata (and release notes, if
> > I remember correctly).
> >
> > We cannot set KSTACK_PAGES=4 in GENERIC by default, as it is too
> > disruptive.
>
> Why?
>

Because it takes resources away from userland threads.

> > If you are using ZFS on i386, you *must* build your own
> > kernel for this.  It is otherwise unsupported by default.
> >
> Why is zfs on i386 so hard?  Why is it even in the GENERIC kernel if
> it's unsupported?
>

It is not in GENERIC by default.  You have to specifically kldload(8)
zfs.ko.

Glen
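[For reference, the usual ways to pull the module in -- generic FreeBSD
practice, nothing specific to this thread -- are roughly:

  # by hand, for the running system
  kldload zfs
  kldstat | grep zfs        # zfs.ko (and its opensolaris.ko dependency)

  # automatically at boot
  echo 'zfs_load="YES"'   >> /boot/loader.conf
  echo 'zfs_enable="YES"' >> /etc/rc.conf   # also mounts pools at startup]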
Re: 10.2-Beta i386..what's wrong..?
Glen Barber wrote:
>
> ZFS on i386 requires KSTACK_PAGES=4 in the kernel configuration to work
> properly, as noted in the 10.1-RELEASE errata (and release notes, if
> I remember correctly).
>
> We cannot set KSTACK_PAGES=4 in GENERIC by default, as it is too
> disruptive.

Why?

> If you are using ZFS on i386, you *must* build your own
> kernel for this.  It is otherwise unsupported by default.
>

Why is zfs on i386 so hard?  Why is it even in the GENERIC kernel if
it's unsupported?

-- 
Michelle Sullivan
http://www.mhix.org/
Re: 10.2-Beta i386..what's wrong..?
On Thu, 23 Jul 2015 23:48:06 + Glen Barber wrote

> On Thu, Jul 23, 2015 at 07:40:42PM -0400, Jason Unovitch wrote:
> > > > ..uh top quoting..
> > > >
> > > > Trying to mount root from zfs:zroot/ROOT/default [].
> > > >
> > > > Fatal double fault:
> > > > eip = 0xc0b416f5
> > > > esp = 0xe2673000
> > > > ebp = 0xe2673008
> > > > cpuid = 0; apic id = 00
> > > > panic: double fault
> > > > cpuid = 0
> > > > KDB stack backtrace:
> > > > #0 0xc0b72832 at kdb_backtrace+0x52
> > > > #1 0xc0b339cb at vpanic+0x11b
> > > > #2 0xc0b338ab at panic+0x1b
> > > > #3 0xc10556 at dblfault_handler+0xab
> > > > Uptime: 11s
> > > > ..
> > >
> > > Looks like the panic I received on my Soekris Net6501-70.
> > >
> > > Fixed in stable/10 in r285759:
> > > https://svnweb.freebsd.org/changeset/base/285759
> > >
> > > Fixed in stable/9 in r285760:
> > > https://svnweb.freebsd.org/changeset/base/285760
> > >
> >
> > Herbert, in https://bugs.FreeBSD.org/201642 I had tracked down the
> > commit that caused the issue on our Soekris 6501s.  Only between
> > r284297 -> r285662 in HEAD and between r284998 -> r285756 in stable/10
> > should be affected.
> >
> > > > > Ok, it is 10.2-BETA so I tried 10.1-RELEASE next... exactly the
> > > > > same; ok, tried 9.3-RELEASE .. the same!
> >
> > Holm, if you are seeing this on 9.3-RELEASE and 10.1-RELEASE I'm not
> > entirely convinced the cause is the same.
> >
>
> ZFS on i386 requires KSTACK_PAGES=4 in the kernel configuration to work
> properly, as noted in the 10.1-RELEASE errata (and release notes, if
> I remember correctly).
>
> We cannot set KSTACK_PAGES=4 in GENERIC by default, as it is too
> disruptive.  If you are using ZFS on i386, you *must* build your own
> kernel for this.  It is otherwise unsupported by default.

Shouldn't there be a GENERIC kernel with the KSTACK_PAGES=4 option defined
available, maybe with one of the bootonly memsticks or something?  I know
it's (mostly) crazy to attempt ZFS on an i386.  But it's pretty difficult
for someone on 8.x to build a 9.x or 10.x kernel if all they've got is
i386 hardware.

Just a thought.

--Chris

>
> Glen
Re: 10.2-Beta i386..what's wrong..?
On Thu, Jul 23, 2015 at 07:40:42PM -0400, Jason Unovitch wrote:
> > > ..uh top quoting..
> > >
> > > Trying to mount root from zfs:zroot/ROOT/default [].
> > >
> > > Fatal double fault:
> > > eip = 0xc0b416f5
> > > esp = 0xe2673000
> > > ebp = 0xe2673008
> > > cpuid = 0; apic id = 00
> > > panic: double fault
> > > cpuid = 0
> > > KDB stack backtrace:
> > > #0 0xc0b72832 at kdb_backtrace+0x52
> > > #1 0xc0b339cb at vpanic+0x11b
> > > #2 0xc0b338ab at panic+0x1b
> > > #3 0xc10556 at dblfault_handler+0xab
> > > Uptime: 11s
> > > ..
> >
> > Looks like the panic I received on my Soekris Net6501-70.
> >
> > Fixed in stable/10 in r285759:
> > https://svnweb.freebsd.org/changeset/base/285759
> >
> > Fixed in stable/9 in r285760:
> > https://svnweb.freebsd.org/changeset/base/285760
> >
>
> Herbert, in https://bugs.FreeBSD.org/201642 I had tracked down the commit
> that caused the issue on our Soekris 6501s.  Only between r284297 ->
> r285662 in HEAD and between r284998 -> r285756 in stable/10 should be
> affected.
>
> > > > Ok, it is 10.2-BETA so I tried 10.1-RELEASE next... exactly the
> > > > same; ok, tried 9.3-RELEASE .. the same!
>
> Holm, if you are seeing this on 9.3-RELEASE and 10.1-RELEASE I'm not
> entirely convinced the cause is the same.
>

ZFS on i386 requires KSTACK_PAGES=4 in the kernel configuration to work
properly, as noted in the 10.1-RELEASE errata (and release notes, if
I remember correctly).

We cannot set KSTACK_PAGES=4 in GENERIC by default, as it is too
disruptive.  If you are using ZFS on i386, you *must* build your own
kernel for this.  It is otherwise unsupported by default.

Glen
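[For anyone unsure what "build your own kernel" amounts to here, a minimal
sketch could look like the following; the config name ZFSI386 is made up
for the example, and the steps are just the ordinary custom-kernel
procedure:

  # /usr/src/sys/i386/conf/ZFSI386  (new file)
  include GENERIC
  ident   ZFSI386
  options KSTACK_PAGES=4

  # then build and install it
  cd /usr/src
  make buildkernel KERNCONF=ZFSI386
  make installkernel KERNCONF=ZFSI386

Including GENERIC keeps the config in sync with the stock kernel while
overriding only the stack-size option.]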
Re: 10.2-Beta i386..what's wrong..?
> > ..uh top quoting..
> >
> > Trying to mount root from zfs:zroot/ROOT/default [].
> >
> > Fatal double fault:
> > eip = 0xc0b416f5
> > esp = 0xe2673000
> > ebp = 0xe2673008
> > cpuid = 0; apic id = 00
> > panic: double fault
> > cpuid = 0
> > KDB stack backtrace:
> > #0 0xc0b72832 at kdb_backtrace+0x52
> > #1 0xc0b339cb at vpanic+0x11b
> > #2 0xc0b338ab at panic+0x1b
> > #3 0xc10556 at dblfault_handler+0xab
> > Uptime: 11s
> > ..
>
> Looks like the panic I received on my Soekris Net6501-70.
>
> Fixed in stable/10 in r285759:
> https://svnweb.freebsd.org/changeset/base/285759
>
> Fixed in stable/9 in r285760:
> https://svnweb.freebsd.org/changeset/base/285760
>
> --
> Herbert

Herbert, in https://bugs.FreeBSD.org/201642 I had tracked down the commit
that caused the issue on our Soekris 6501s.  Only between r284297 ->
r285662 in HEAD and between r284998 -> r285756 in stable/10 should be
affected.

> > > Ok, it is 10.2-BETA so I tried 10.1-RELEASE next... exactly the
> > > same; ok, tried 9.3-RELEASE .. the same!

Holm, if you are seeing this on 9.3-RELEASE and 10.1-RELEASE I'm not
entirely convinced the cause is the same.

--
Jason
Re: 10.2-Beta i386..what's wrong..?
On Wed, Jul 22, 2015 at 01:57:26PM +0200, Holm Tiffe wrote:
> ..uh top quoting..
>
> Trying to mount root from zfs:zroot/ROOT/default [].
>
> Fatal double fault:
> eip = 0xc0b416f5
> esp = 0xe2673000
> ebp = 0xe2673008
> cpuid = 0; apic id = 00
> panic: double fault
> cpuid = 0
> KDB stack backtrace:
> #0 0xc0b72832 at kdb_backtrace+0x52
> #1 0xc0b339cb at vpanic+0x11b
> #2 0xc0b338ab at panic+0x1b
> #3 0xc10556 at dblfault_handler+0xab
> Uptime: 11s
> ..

Looks like the panic I received on my Soekris Net6501-70.

Fixed in stable/10 in r285759:
https://svnweb.freebsd.org/changeset/base/285759

Fixed in stable/9 in r285760:
https://svnweb.freebsd.org/changeset/base/285760

--
Herbert
Re: 10.2-Beta i386..what's wrong..?
..uh top quoting..

Trying to mount root from zfs:zroot/ROOT/default [].

Fatal double fault:
eip = 0xc0b416f5
esp = 0xe2673000
ebp = 0xe2673008
cpuid = 0; apic id = 00
panic: double fault
cpuid = 0
KDB stack backtrace:
#0 0xc0b72832 at kdb_backtrace+0x52
#1 0xc0b339cb at vpanic+0x11b
#2 0xc0b338ab at panic+0x1b
#3 0xc10556 at dblfault_handler+0xab
Uptime: 11s
..

I don't know whether I'm blowing the stack, nor do I currently have the
capability to build a 10.2-Beta install disk with a custom kernel.  I've
simply tried to install FreeBSD on old (trusty) hardware to do some
things..

In the meantime I've tested installing on the single disks with a plain
UFS in the existing gpart partitions (da?p3) and using the swap space
(da?p2).  Those were 4 flawless installations; all 4 installed without any
stall and were bootable, so the hardware is ok.

Next try is gmirror on 2 gpart disks.  I think gvinum has been broken with
gpart partitions since at least 9.3 on i386.

Regards,

Holm

Steven Hartland wrote:
> What's the panic?
>
> As you're using ZFS I'd lay money on the fact you're blowing the stack,
> which would require a kernel built with:
> options KSTACK_PAGES=4
>
> Regards
> Steve
>
> On 22/07/2015 08:10, Holm Tiffe wrote:
> > Hi,
> >
> > yesterday I decided to put my old "workstation" in my shack and to
> > install a new FreeBSD on it; it is the computer I previously used for
> > my daily work, reading mail, programming controllers and so on..
> >
> > It is an AMD XP300+ with an Adaptec 29320 and four IBM 72GB SCSI3 disks
> > and only 2GB of memory.
> >
> > I replaced a bad disk, reformatted it to 512-byte sectors (they came
> > originally with 534 or so, for ECC), pulled the 10.2-Beta disk1 ISO
> > file from the German mirror and tried to install on a ZFS raidz1, which
> > went flawlessly until the point of booting the installed system: some
> > warnings about zfs and vm... double fault, panic.
> >
> > Later I read on the net that installing ZFS on a 32-bit machine isn't
> > really a good idea, so I tried to install the system on a gvinum RAID
> > on GPT partitions.
> > The layout was gpt-boot, 2G swap and the rest RAID on every disk, so
> > that I could build a striped 8G swap and ~190G RAID with gvinum.
> > Installing that worked flawlessly with the install option "shell and
> > doing partitioning by hand".  I did a newfs -U -L root
> > /dev/gvinum/raid, mounted the filesystem on /mnt, activated the swap
> > and put an fstab in /tmp/bsdsomething-etc, exited to install.
> >
> > The installer verified the install containers (base.txz, kernel.txz
> > and so on) and began to extract them.
> > So far so good, but while extracting, the system repeatedly hung at
> > the very same location.  On vt4 I could start a top that showed a hung
> > bsdtar process in the state wdrain and nothing else happened; the
> > system took a long time to react to keypresses..
> >
> > I tried to extract the distribution files by hand, same problem: tar
> > hung on extracting kernel.symbols for example, same behavior on other
> > files in base.txz.
> >
> > Ok, it is 10.2-BETA so I tried 10.1-RELEASE next... exactly the same;
> > ok, tried 9.3-RELEASE .. the same!
> >
> > What am I doing wrong here?
> >
> > Besides the bad disk that I changed (IBM-SSG S53D073 C61F) the hardware
> > is very trusty.  It is a Gigabyte board and I want to keep this machine
> > since it still has floppy capabilities that I need to communicate with
> > my old CP/M gear and PDP11s.  It ran for years without problems.
> > Capacitors are already changed and ok.
> >
> > Sorry for the wishy-washy error messages above; they are from my memory
> > and from yesterday...
> >
> > Next try was installing the system on an 8G ATA disk that was lying
> > around; that went flawlessly.  I booted it and tried to install the
> > files on the gvinum RAID from there... same problem.
> > Changed the 29320 for a 29160 .. same problem.
> > No messages about bad disks or anything on the console.
> >
> > What's going on here?  The machine ran on 8.4-stable before without
> > any problems.
> >
> > Regards,
> >
> > Holm

-- 
Technik Service u. Handel Tiffe, www.tsht.de, Holm Tiffe,
Freiberger Straße 42, 09600 Oberschöna, USt-Id: DE253710583
www.tsht.de, i...@tsht.de, Fax +49 3731 74200, Mobil: 0172 8790 741
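[To make the per-disk layout Holm describes a bit more concrete, a sketch
for one disk might look like this; the exact sizes and the bootcode step
are a reconstruction of his description, not commands from the original
mail, and the same steps would be repeated for da1..da3:

  gpart create -s gpt da0
  gpart add -t freebsd-boot  -s 512k da0
  gpart add -t freebsd-swap  -s 2g   da0
  gpart add -t freebsd-vinum         da0    # rest of the disk, for gvinum
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

The da?p3 freebsd-vinum partitions are then what gvinum would be
configured on, and the da?p2 partitions can be used as swap directly.]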
Re: 10.2-Beta i386..what's wrong..?
What's the panic?

As you're using ZFS I'd lay money on the fact you're blowing the stack,
which would require a kernel built with:

options KSTACK_PAGES=4

Regards
Steve

On 22/07/2015 08:10, Holm Tiffe wrote:

Hi,

yesterday I decided to put my old "workstation" in my shack and to install
a new FreeBSD on it; it is the computer I previously used for my daily
work, reading mail, programming controllers and so on..

It is an AMD XP300+ with an Adaptec 29320 and four IBM 72GB SCSI3 disks
and only 2GB of memory.

I replaced a bad disk, reformatted it to 512-byte sectors (they came
originally with 534 or so, for ECC), pulled the 10.2-Beta disk1 ISO file
from the German mirror and tried to install on a ZFS raidz1, which went
flawlessly until the point of booting the installed system: some warnings
about zfs and vm... double fault, panic.

Later I read on the net that installing ZFS on a 32-bit machine isn't
really a good idea, so I tried to install the system on a gvinum RAID on
GPT partitions.
The layout was gpt-boot, 2G swap and the rest RAID on every disk, so that
I could build a striped 8G swap and ~190G RAID with gvinum.
Installing that worked flawlessly with the install option "shell and doing
partitioning by hand".  I did a newfs -U -L root /dev/gvinum/raid, mounted
the filesystem on /mnt, activated the swap and put an fstab in
/tmp/bsdsomething-etc, exited to install.

The installer verified the install containers (base.txz, kernel.txz and so
on) and began to extract them.
So far so good, but while extracting, the system repeatedly hung at the
very same location.  On vt4 I could start a top that showed a hung bsdtar
process in the state wdrain and nothing else happened; the system took
a long time to react to keypresses..

I tried to extract the distribution files by hand, same problem: tar hung
on extracting kernel.symbols for example, same behavior on other files in
base.txz.

Ok, it is 10.2-BETA so I tried 10.1-RELEASE next... exactly the same; ok,
tried 9.3-RELEASE .. the same!

What am I doing wrong here?

Besides the bad disk that I changed (IBM-SSG S53D073 C61F) the hardware is
very trusty.  It is a Gigabyte board and I want to keep this machine since
it still has floppy capabilities that I need to communicate with my old
CP/M gear and PDP11s.  It ran for years without problems.  Capacitors are
already changed and ok.

Sorry for the wishy-washy error messages above; they are from my memory
and from yesterday...

Next try was installing the system on an 8G ATA disk that was lying
around; that went flawlessly.  I booted it and tried to install the files
on the gvinum RAID from there... same problem.
Changed the 29320 for a 29160 .. same problem.
No messages about bad disks or anything on the console.

What's going on here?  The machine ran on 8.4-stable before without any
problems.

Regards,

Holm