sndstat verbosity
The Handbook page on setting up sound says that to find out which driver bound to the system after doing a "kldload snd_driver", you should run "cat /dev/sndstat". In my versions of 8/9-STABLE, there is no way to determine what that driver is from sndstat without increasing the verbosity via "hw.snd.verbose". IMO, either a bug should be filed against docs to include a mention of increasing the verbosity, or sndstat should list the bound module in its default output. Does anyone have feedback on this, and on whether the bug should be a doc or a sndstat one?
-- 
Adam Vande More
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
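For readers hitting the same thing, the verbosity workaround described above looks roughly like this (a sketch; the sysctl is documented in sound(4), and the exact output depends on hardware):

```shell
# Load the metadriver, then raise sndstat verbosity
# (0 is the terse default; higher values add driver/module details)
kldload snd_driver
sysctl hw.snd.verbose=2

# With verbosity raised, sndstat should name the bound module
cat /dev/sndstat

# Restore the default afterwards
sysctl hw.snd.verbose=0
```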
Re: zfs arc and amount of wired memory
on 07/02/2012 18:03 Eugene M. Zheganin said the following:
> Hi.
>
> On 07.02.2012 21:51, Andriy Gapon wrote:
>> I am not sure that these conclusions are correct. Wired is wired, it's not free.
>> BTW, are you reluctant to share the full zfs-stats -a output? You don't have to
>> place it inline, you can upload it somewhere and provide a link.
>>
> Well... nothing secret in it (in case someone will be interested too and so it
> stays in the maillist archive): [output snipped]

Thank you. I don't see anything suspicious/unusual there. Just in case, do you have ZFS dedup enabled by chance? I think that examination of vmstat -m and vmstat -z outputs may provide some clues as to what got all that memory wired.
-- 
Andriy Gapon
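Andriy's vmstat suggestion can be run along these lines (the sort/head filtering is my own addition, and column layout varies a bit between releases, so treat this as a sketch):

```shell
# malloc(9) buckets: MemUse is the third column, in kilobytes
vmstat -m | sort -rn -k 3 | head -n 20

# uma(9) zones: multiply SIZE by USED to estimate per-zone memory;
# on a ZFS box the arc/zio/dnode zones are usually the interesting ones
vmstat -z | head -n 40
```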
Re: Swap on zvol - recommendable?
I can just tell you I had this problem still in 8.1 and it was a HUGE problem. The system stalled every two weeks or so. Now that swap has been moved off ZFS, it works fine.

On Feb 6, 2012, at 11:57 AM, Patrick M. Hausen wrote:
> Hi, all,
>
> is it possible to make a definite statement about swap on zvols?
>
> I found some older discussions about a resource starvation
> scenario where the ZFS ARC would be the cause of the system
> running out of memory, trying to swap, yet ZFS would
> not be accessible until some memory was freed - leading to
> a deadlock.
>
> Is this still the case with RELENG_8? The various Root on
> ZFS guides mention both choices (dedicated or gmirror
> partition vs. zvol), yet don't say anything about the respective
> merits or risks. I am aware of the fact that I cannot dump to
> a raidz2 zvol ...
>
> Thanks for any hints,
> Patrick
> --
> punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
> Tel. 0721 9109 0 * Fax 0721 9109 100
> i...@punkt.de http://www.punkt.de
> Gf: Jürgen Egeling AG Mannheim 108285
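For anyone wanting to make the same move, taking swap off ZFS amounts to pointing /etc/fstab at a plain swap partition; something like this (the device label is an example, not from the thread):

```shell
# /etc/fstab: swap on a dedicated (or gmirror'ed) partition instead of a zvol
/dev/gpt/swap0   none   swap   sw   0   0
```

After editing fstab, `swapoff`/`swapon` (or a reboot) activates the new device.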
Re: ld: kernel.debug: Not enough room for program headers
8-stable i386 is building again.
Re: FreeBSD 8.2-stable: devd fails to restart
On Tue, 7 Feb 2012, Torfinn Ingolfsen wrote:

> On Mon, 06 Feb 2012 00:08:55 -0700 (MST) Warren Block wrote:
>> On Sat, 4 Feb 2012, Torfinn Ingolfsen wrote:
>>> On Sat, 04 Feb 2012 10:34:19 -0700 (MST) Warren Block wrote:
>>>> Possibly relevant:
>>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=140462&cat=
>>>> (Using DHCP from /etc/rc.conf leaves a lock on devd.pid. SYNCDHCP does not.)
>>>> And the thread:
>>>> http://lists.freebsd.org/pipermail/freebsd-current/2009-October/012749.html
>>> Yes, it seems to be that problem. Tested on my other machine, which hasn't
>>> changed since the problem was discovered:
>>> root@kg-v7# service devd status
>>> devd is not running.
>>> root@kg-v7# ll /var/run/devd.pid
>>> -rw--- 1 root wheel 3 Jan 12 20:40 /var/run/devd.pid
>>> root@kg-v7# lsof /var/run/devd.pid
>>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
>>> dhclient 1075 root 5w VREG 0,703 918547 /var/run/devd.pid
>>> dhclient 1091 _dhcp 5w VREG 0,703 918547 /var/run/devd.pid
>>> root@kg-v7#
>>> So, if this was worked on back in 2009, why isn't it fixed yet?
>> I switched to using SYNCDHCP which avoids the problem, didn't enter a PR,
>> and quickly forgot about it. It would be nice to have it fixed.
> I'm all for getting it fixed, even if I don't know how yet. Should a PR be
> against devd, dhclient, or ... something else?

It's devd, IMO. Hey, come to think of it, I did enter a PR, the one above. If this is still a problem in 9 (which I can test in a bit), posting to -current might get some needed attention on it.
Re: kernel debugging and ULE
On Monday, February 06, 2012 12:52:58 am Julian Elischer wrote:
> so if I'm sitting still in the debugger for too long, a hardclock
> event happens that goes into ULE, which then hits the following KASSERT.
>
> KASSERT(pri >= PRI_MIN_BATCH && pri <= PRI_MAX_BATCH,
>     ("sched_priority: invalid priority %d: nice %d, "
>     "ticks %d ftick %d ltick %d tick pri %d",
>     pri, td->td_proc->p_nice, td->td_sched->ts_ticks,
>     td->td_sched->ts_ftick, td->td_sched->ts_ltick,
>     SCHED_PRI_TICKS(td->td_sched)));
>
> The reason seems to be that I've been sitting still for too long and
> things have become pear shaped.
>
> how is it that being in the debugger doesn't stop hardclock events?
> is there something I can do to make them not happen..
> It means I have to get my debugging done in less than about 60 seconds.
>
> suggestions welcome.

I committed a workaround to HEAD for this recently (r228960). Just make sure that is merged into whatever tree you are using.
-- 
John Baldwin
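For an svn-based stable tree, cherry-picking the revision John mentions would look roughly like this (the repository URL is the standard FreeBSD svn mirror; adjust the checkout path and branch to your setup):

```shell
# In a checkout of e.g. stable/9, merge the single revision from head
cd /usr/src
svn merge -c r228960 svn://svn.freebsd.org/base/head .
svn status   # review the merged change, then rebuild the kernel
```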
Re: FreeBSD 8.2-stable: devd fails to restart
On Mon, 06 Feb 2012 00:08:55 -0700 (MST) Warren Block wrote:
> On Sat, 4 Feb 2012, Torfinn Ingolfsen wrote:
>> On Sat, 04 Feb 2012 10:34:19 -0700 (MST) Warren Block wrote:
>>> Possibly relevant:
>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=140462&cat=
>>>
>>> (Using DHCP from /etc/rc.conf leaves a lock on devd.pid. SYNCDHCP does
>>> not.)
>>>
>>> And the thread:
>>> http://lists.freebsd.org/pipermail/freebsd-current/2009-October/012749.html
>>
>> Yes, it seems to be that problem. Tested on my other machine, which hasn't
>> changed since the problem was discovered:
>> root@kg-v7# service devd status
>> devd is not running.
>> root@kg-v7# ll /var/run/devd.pid
>> -rw--- 1 root wheel 3 Jan 12 20:40 /var/run/devd.pid
>> root@kg-v7# lsof /var/run/devd.pid
>> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
>> dhclient 1075 root 5w VREG 0,703 918547 /var/run/devd.pid
>> dhclient 1091 _dhcp 5w VREG 0,703 918547 /var/run/devd.pid
>> root@kg-v7#
>>
>> So, if this was worked on back in 2009, why isn't it fixed yet?
>
> I switched to using SYNCDHCP which avoids the problem, didn't enter a
> PR, and quickly forgot about it. It would be nice to have it fixed.

I'm all for getting it fixed, even if I don't know how yet. Should a PR be against devd, dhclient, or ... something else?
-- 
Torfinn
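The SYNCDHCP workaround Warren mentions is a one-line rc.conf change (the interface name here is an example; use your own):

```shell
# /etc/rc.conf: synchronous DHCP avoids the stale lock on /var/run/devd.pid
ifconfig_em0="SYNCDHCP"   # instead of ifconfig_em0="DHCP"
```

The trade-off, per rc.conf(5), is that boot blocks until a lease is acquired.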
Re: zfs arc and amount of wired memory
Hi.

On 07.02.2012 21:51, Andriy Gapon wrote:
> I am not sure that these conclusions are correct. Wired is wired, it's not free.
> BTW, are you reluctant to share the full zfs-stats -a output? You don't have to
> place it inline, you can upload it somewhere and provide a link.

Well... nothing secret in it (in case someone will be interested too and so it stays in the maillist archive):

===Cut===
[emz@taiga:~]> zfs-stats -a

ZFS Subsystem Report                    Tue Feb 7 22:01:09 2012

System Information:
        Kernel Version:                 900044 (osreldate)
        Hardware Platform:              amd64
        Processor Architecture:         amd64
        ZFS Storage pool Version:       28
        ZFS Filesystem Version:         5
        FreeBSD 9.0-RELEASE #1: Mon Jan 23 13:36:16 YEKT 2012 emz
        22:01 up 11 days, 2:44, 4 users, load averages: 0,30 0,26 0,32

System Memory:
        4.30%   169.09 MiB Active,      1.04%   40.95 MiB Inact
        90.80%  3.48 GiB Wired,         2.17%   85.25 MiB Cache
        0.69%   27.30 MiB Free,         0.99%   39.08 MiB Gap
        Real Installed:                 4.00 GiB
        Real Available:         99.61%  3.98 GiB
        Real Managed:           96.31%  3.84 GiB
        Logical Total:                  4.00 GiB
        Logical Used:           96.25%  3.85 GiB
        Logical Free:           3.75%   153.50 MiB

Kernel Memory:                          397.53 MiB
        Data:                   96.18%  382.33 MiB
        Text:                   3.82%   15.20 MiB

Kernel Memory Map:                      2.69 GiB
        Size:                   9.93%   273.20 MiB
        Free:                   90.07%  2.42 GiB

ARC Summary: (THROTTLED)
        Memory Throttle Count:          3.20k

ARC Misc:
        Deleted:                        10.83m
        Recycle Misses:                 1.55m
        Mutex Misses:                   7.80k
        Evict Skips:                    7.80k

ARC Size:                       12.50%  363.20 MiB
        Target Size: (Adaptive) 12.50%  363.18 MiB
        Min Size (Hard Limit):  12.50%  363.18 MiB
        Max Size (High Water):  8:1     2.84 GiB

ARC Size Breakdown:
        Recently Used Cache Size:       56.17%  204.02 MiB
        Frequently Used Cache Size:     43.83%  159.18 MiB

ARC Hash Breakdown:
        Elements Max:                   191.41k
        Elements Current:       32.79%  62.76k
        Collisions:                     28.06m
        Chain Max:                      17
        Chains:                         12.77k

ARC Efficiency:                         179.54m
        Cache Hit Ratio:        95.07%  170.68m
        Cache Miss Ratio:       4.93%   8.86m
        Actual Hit Ratio:       95.07%  170.68m
        Data Demand Efficiency: 94.72%  152.53m
        Data Prefetch Efficiency:       0.00%   20

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           30.50%  52.05m
          Most Frequently Used:         69.50%  118.63m
          Most Recently Used Ghost:     0.30%   517.41k
          Most Frequently Used Ghost:   1.02%   1.74m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  84.65%  144.48m
          Prefetch Data:                0.00%   0
          Demand Metadata:              15.35%  26.20m
          Prefetch Metadata:            0.00%   52

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  90.88%  8.05m
          Prefetch Data:                0.00%   20
          Demand Metadata:              9.12%   807.98k
          Prefetch Metadata:            0.00%   172

L2ARC is disabled
VDEV cache is disabled

ZFS Tunables (sysctl):
        kern.maxusers           384
        vm.kmem_size            4120326144
        vm.kmem_size_scale      1
        vm.kmem_size_min        0
        vm.kmem_size_max        329853485875
        vfs.zfs.l2c_only_size   0
        vfs.zfs.mfu
Re: zfs arc and amount of wired memory
on 07/02/2012 17:35 Eugene M. Zheganin said the following:
> Okay, thank you once again, it made it more clear to me what is happening to the
> memory:
>
> System Memory:
>         4.39%   172.47 MiB Active,    0.29%   11.53 MiB Inact
>         90.68%  3.48 GiB Wired,       3.54%   138.98 MiB Cache
>         0.10%   4.12 MiB Free,        0.99%   39.07 MiB Gap
>
>         Real Installed:               4.00 GiB
>         Real Available:       99.61%  3.98 GiB
>         Real Managed:         96.31%  3.84 GiB
>
>         Logical Total:                4.00 GiB
>         Logical Used:         96.22%  3.85 GiB
>         Logical Free:         3.78%   154.63 MiB
>
> Kernel Memory:                        393.82 MiB
>         Data:                 96.14%  378.63 MiB
>         Text:                 3.86%   15.20 MiB
>
> Kernel Memory Map:                    2.68 GiB
>         Size:                 9.82%   269.77 MiB
>         Free:                 90.18%  2.42 GiB
>
> Looks like most of the 'wired' space is marked in kernel as free, though still
> 'wired', and thus to me it doesn't look as free as 'free' or 'inactive', and,
> as the paged out amount continues to grow, I want to ask if there's a way to make
> this address space available to userland processes?

I am not sure that these conclusions are correct. Wired is wired, it's not free.
BTW, are you reluctant to share the full zfs-stats -a output? You don't have to place it inline, you can upload it somewhere and provide a link.
-- 
Andriy Gapon
Re: zfs arc and amount of wired memory
Hi.

On 07.02.2012 17:43, Andriy Gapon wrote:
> on 07/02/2012 13:34 Eugene M. Zheganin said the following:
>> Hi.
>>
>> On 07.02.2012 16:46, Andriy Gapon wrote:
>>> on 07/02/2012 10:36 Eugene M. Zheganin said the following:
>>>> If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:
>>>>
>>>> ===Cut===
>>>> ARC Size: 12.50% 363.14 MiB
>>>> Target Size: (Adaptive) 12.50% 363.18 MiB
>>>> Min Size (Hard Limit): 12.50% 363.18 MiB
>>>> Max Size (High Water): 8:1 2.84 GiB
>>>> ===Cut===
>>>>
>>>> At the same time I have 3500 megs in wired state:
>>>
>>> Please try sysutils/zfs-stats; zfs-stats -a output should provide a good
>>> overview of the state and configuration of the system.
>>>
>> Thanks. But it seems to be the same script. Anyway, it reports the same amount
>> of memory:
>>
>> [emz@taiga:~]# zfs-stats -A
> zfs-stats -A is not the same as zfs-stats -a

Okay, thank you once again, it made it more clear to me what is happening to the memory:

System Memory:
        4.39%   172.47 MiB Active,      0.29%   11.53 MiB Inact
        90.68%  3.48 GiB Wired,         3.54%   138.98 MiB Cache
        0.10%   4.12 MiB Free,          0.99%   39.07 MiB Gap

        Real Installed:                 4.00 GiB
        Real Available:         99.61%  3.98 GiB
        Real Managed:           96.31%  3.84 GiB

        Logical Total:                  4.00 GiB
        Logical Used:           96.22%  3.85 GiB
        Logical Free:           3.78%   154.63 MiB

Kernel Memory:                          393.82 MiB
        Data:                   96.14%  378.63 MiB
        Text:                   3.86%   15.20 MiB

Kernel Memory Map:                      2.68 GiB
        Size:                   9.82%   269.77 MiB
        Free:                   90.18%  2.42 GiB

Looks like most of the 'wired' space is marked in kernel as free, though still 'wired', and thus to me it doesn't look as free as 'free' or 'inactive', and, as the paged out amount continues to grow, I want to ask if there's a way to make this address space available to userland processes?

Thanks.
Eugene.
Re: zfs arc and amount of wired memory
on 07/02/2012 13:34 Eugene M. Zheganin said the following:
> Hi.
>
> On 07.02.2012 16:46, Andriy Gapon wrote:
>> on 07/02/2012 10:36 Eugene M. Zheganin said the following:
>>> If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:
>>>
>>> ===Cut===
>>> ARC Size: 12.50% 363.14 MiB
>>> Target Size: (Adaptive) 12.50% 363.18 MiB
>>> Min Size (Hard Limit): 12.50% 363.18 MiB
>>> Max Size (High Water): 8:1 2.84 GiB
>>> ===Cut===
>>>
>>> At the same time I have 3500 megs in wired state:
>>
>> Please try sysutils/zfs-stats; zfs-stats -a output should provide a good
>> overview of the state and configuration of the system.
>>
> Thanks. But it seems to be the same script. Anyway, it reports the same amount
> of memory:
>
> [emz@taiga:~]# zfs-stats -A

zfs-stats -A is not the same as zfs-stats -a

> ZFS Subsystem Report            Tue Feb 7 17:32:34 2012
>
> ARC Summary: (THROTTLED)
>         Memory Throttle Count:          2.62k
>
> ARC Misc:
>         Deleted:                        10.17m
>         Recycle Misses:                 1.47m
>         Mutex Misses:                   7.69k
>         Evict Skips:                    7.69k
>
> ARC Size:                       12.50%  363.24 MiB
>         Target Size: (Adaptive) 12.50%  363.18 MiB
>         Min Size (Hard Limit):  12.50%  363.18 MiB
>         Max Size (High Water):  8:1     2.84 GiB
>
> ARC Size Breakdown:
>         Recently Used Cache Size:       57.54%  209.01 MiB
>         Frequently Used Cache Size:     42.46%  154.22 MiB
>
> ARC Hash Breakdown:
>         Elements Max:                   191.41k
>         Elements Current:       33.76%  64.63k
>         Collisions:                     27.09m
>         Chain Max:                      17
>         Chains:                         13.56k
>
> I still don't get why I have 3.5 Gigs in wired state.
>
> Thanks.
>
> Eugene.
-- 
Andriy Gapon
Re: zfs arc and amount of wired memory
Hi.

On 07.02.2012 16:46, Andriy Gapon wrote:
> on 07/02/2012 10:36 Eugene M. Zheganin said the following:
>> If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:
>>
>> ===Cut===
>> ARC Size: 12.50% 363.14 MiB
>> Target Size: (Adaptive) 12.50% 363.18 MiB
>> Min Size (Hard Limit): 12.50% 363.18 MiB
>> Max Size (High Water): 8:1 2.84 GiB
>> ===Cut===
>>
>> At the same time I have 3500 megs in wired state:
>
> Please try sysutils/zfs-stats; zfs-stats -a output should provide a good
> overview of the state and configuration of the system.

Thanks. But it seems to be the same script. Anyway, it reports the same amount of memory:

[emz@taiga:~]# zfs-stats -A

ZFS Subsystem Report            Tue Feb 7 17:32:34 2012

ARC Summary: (THROTTLED)
        Memory Throttle Count:          2.62k

ARC Misc:
        Deleted:                        10.17m
        Recycle Misses:                 1.47m
        Mutex Misses:                   7.69k
        Evict Skips:                    7.69k

ARC Size:                       12.50%  363.24 MiB
        Target Size: (Adaptive) 12.50%  363.18 MiB
        Min Size (Hard Limit):  12.50%  363.18 MiB
        Max Size (High Water):  8:1     2.84 GiB

ARC Size Breakdown:
        Recently Used Cache Size:       57.54%  209.01 MiB
        Frequently Used Cache Size:     42.46%  154.22 MiB

ARC Hash Breakdown:
        Elements Max:                   191.41k
        Elements Current:       33.76%  64.63k
        Collisions:                     27.09m
        Chain Max:                      17
        Chains:                         13.56k

I still don't get why I have 3.5 Gigs in wired state.

Thanks.

Eugene.
Re: zfs arc and amount of wired memory
on 07/02/2012 10:36 Eugene M. Zheganin said the following:
> Hi.
>
> I have a server with 9.0/amd64 and 4 Gigs of RAM.
> Today's questions are about the amount of memory in 'wired' state and the ARC size.
>
> If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:
>
> ===Cut===
> ARC Size: 12.50% 363.14 MiB
> Target Size: (Adaptive) 12.50% 363.18 MiB
> Min Size (Hard Limit): 12.50% 363.18 MiB
> Max Size (High Water): 8:1 2.84 GiB
> ===Cut===
>
> At the same time I have 3500 megs in wired state:
>
> ===Cut===
> Mem: 237M Active, 36M Inact, 3502M Wired, 78M Cache, 432K Buf, 37M Free
> ===Cut===
>
> First question - what is the actual size of the ARC, and how can it be determined?
> The Solaris version of the script is more comprehensive (ran on Solaris):
>
> ===Cut===
> ARC Size:
> Current Size: 6457 MB (arcsize)
> Target Size (Adaptive): 6457 MB (c)
> Min Size (Hard Limit): 2941 MB (zfs_arc_min)
> Max Size (Hard Limit): 23534 MB (zfs_arc_max)
> ===Cut===

Please try sysutils/zfs-stats; zfs-stats -a output should provide a good overview of the state and configuration of the system.

> The arcstat script makes me think that the ARC size is about 380 megs indeed:
>
> ===Cut===
> Time read miss miss% dmis dm% pmis pm% mmis mm% size tsize
> 14:33:35 170M 7466K 4 7466K 4 192 78 793K 3 380M 380M
> ===Cut===
>
> Second question: if the size is 363 Megs, why do I have 3500 Megs in wired
> state? From my experience this is directly related to ZFS, but 380 megs is
> about ten times smaller than 3600 megs. At the same time I have like 700
> Megs in swap, so my guess is that ZFS isn't freeing memory for current needs
> that easily.
>
> Yeah, I can tune it down, but I just would like to know what is happening on an
> untuned machine.
-- 
Andriy Gapon
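As a cross-check on what the scripts report, the raw ARC counters can also be read straight from sysctl (values are in bytes; these kstat names are what the stats scripts themselves parse on FreeBSD 8/9 with ZFS loaded):

```shell
# Current ARC size, the adaptive target ("c"), and the hard limits
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c
sysctl kstat.zfs.misc.arcstats.c_min
sysctl kstat.zfs.misc.arcstats.c_max
```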
Re: kernel debugging and ULE
on 06/02/2012 07:52 Julian Elischer said the following:
> so if I'm sitting still in the debugger for too long, a hardclock
> event happens that goes into ULE, which then hits the following KASSERT.
>
> KASSERT(pri >= PRI_MIN_BATCH && pri <= PRI_MAX_BATCH,
>     ("sched_priority: invalid priority %d: nice %d, "
>     "ticks %d ftick %d ltick %d tick pri %d",
>     pri, td->td_proc->p_nice, td->td_sched->ts_ticks,
>     td->td_sched->ts_ftick, td->td_sched->ts_ltick,
>     SCHED_PRI_TICKS(td->td_sched)));
>
> The reason seems to be that I've been sitting still for too long and things have
> become pear shaped.
>
> how is it that being in the debugger doesn't stop hardclock events?
> is there something I can do to make them not happen..
> It means I have to get my debugging done in less than about 60 seconds.
>
> suggestions welcome.

Does this really happen when you just sit in the debugger? Or does it happen when you let the kernel run, like stepping through the code, etc?
-- 
Andriy Gapon
zfs arc and amount of wired memory
Hi.

I have a server with 9.0/amd64 and 4 Gigs of RAM. Today's questions are about the amount of memory in 'wired' state and the ARC size.

If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says:

===Cut===
ARC Size: 12.50% 363.14 MiB
Target Size: (Adaptive) 12.50% 363.18 MiB
Min Size (Hard Limit): 12.50% 363.18 MiB
Max Size (High Water): 8:1 2.84 GiB
===Cut===

At the same time I have 3500 megs in wired state:

===Cut===
Mem: 237M Active, 36M Inact, 3502M Wired, 78M Cache, 432K Buf, 37M Free
===Cut===

First question - what is the actual size of the ARC, and how can it be determined? The Solaris version of the script is more comprehensive (ran on Solaris):

===Cut===
ARC Size:
Current Size: 6457 MB (arcsize)
Target Size (Adaptive): 6457 MB (c)
Min Size (Hard Limit): 2941 MB (zfs_arc_min)
Max Size (Hard Limit): 23534 MB (zfs_arc_max)
===Cut===

The arcstat script makes me think that the ARC size is about 380 megs indeed:

===Cut===
Time read miss miss% dmis dm% pmis pm% mmis mm% size tsize
14:33:35 170M 7466K 4 7466K 4 192 78 793K 3 380M 380M
===Cut===

Second question: if the size is 363 Megs, why do I have 3500 Megs in wired state? From my experience this is directly related to ZFS, but 380 megs is about ten times smaller than 3600 megs. At the same time I have like 700 Megs in swap, so my guess is that ZFS isn't freeing memory for current needs that easily.

Yeah, I can tune it down, but I just would like to know what is happening on an untuned machine.

Thanks.
Eugene.
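For completeness, the "tune it down" Eugene alludes to is the arc_max loader tunable (the value below is only an example for a 4 GiB box, not a recommendation from the thread; it takes effect at the next boot):

```shell
# /boot/loader.conf: cap the ZFS ARC
vfs.zfs.arc_max="1G"
```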
Re: make buildworld fails with 8.2-RELEASE amd64 building STABLE
On 6 February 2012 21:23, Michael L. Squires wrote:
> "make buildworld" fails (no options) on 8.2-RELEASE with 8.2-STABLE sources
> as of 2/6/2012, updated at 9:15 AM from ftp.freebsd.org.
>
> The compile fails in the same spot every time; this is not a random failure.
> I have seen some evidence, however, that there may be hardware problems.
> The other hardware problem was a Broadcom GigE watchdog timeout and reset.
>
> Hardware is a Tyan S2885 (dual dual-core Opteron) which has been running
> Windows XP SP3 for some time without problems, for what that's worth.
>
> A Google search turns up nothing obviously related.
>
> Mike Squires
> mikes at siralan.org
> UN*X at home since 1986
>
> uname -a:
>
> FreeBSD s2885.familysquires.net 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb
> 17 02:41:51 UTC 2011
> r...@amason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>
> Error messages at failure:
>
> ===> gnu/usr.bin/gperf/doc (depend) c++ -O2 -pipe
> -I/usr/obj/usr/src/tmp/legacy/usr/include
> -I/usr/src/gnu/usr.bin/gperf/../../../contrib/gperf/lib
> -I/usr/src/gnu/usr.bin/gperf -c
> /usr/src/gnu/usr.bin/gperf/../../../contrib/gperf/src/bool-array.cc
> /usr/src/gnu/usr.bin/gperf/../../../contrib/gperf/src/bool-array.cc: In
> destructor 'Bool_Array::~Bool_Array()':
> /usr/src/gnu/usr.bin/gperf/../../../contrib/gperf/src/bool-array.cc:39:
> internal compiler error: Illegal instruction: 4
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See <http://gcc.gnu.org/bugs.html> for instructions.
> *** Error code 1
>
> Stop in /usr/src/gnu/usr.bin/gperf.
> *** Error code 1
>
> Stop in /usr/src.
> *** Error code 1
>
> Stop in /usr/src.
> *** Error code 1
>
> Stop in /usr/src.
> s2885#

You probably have bad RAM or an overheating problem.
-- 
wbr,
pluknet