Re: tuning a VERY heavily (30.0) loaded server

2001-03-24 Thread Anton Berezin
On Fri, Mar 23, 2001 at 08:11:03PM +0100, Adrian Chadd wrote: A while back I started running through the undocumented sysctls and documenting them. I didn't get through all of them, and the main reason I stopped was because there wasn't a nifty way to extract the sysctls short of writing a

Re: tuning a VERY heavily (30.0) loaded server

2001-03-24 Thread Matt Dillon
:Matt Dillon [EMAIL PROTECTED] writes: : So you would be able to create approximately four 17GB swap partitions. : If you reduce NSWAP to 2 you would be able to create approximately : two 34GB swap partitions. If you reduce NSWAP to 1 you would be able : to create approximately

Re: tuning a VERY heavily (30.0) loaded server

2001-03-23 Thread Robert Watson
On Fri, 23 Mar 2001, Alexey V. Neyman wrote: On Thu, 22 Mar 2001, Michael C. Wu wrote: (Why is vfs.vmiodirenable=1 not enabled by default?) By the way, is there any all-in-one-place description of sysctl tunables? Looking through all the man pages and collecting notices about MIB variables

Re: tuning a VERY heavily (30.0) loaded server

2001-03-23 Thread Adrian Chadd
On Fri, Mar 23, 2001, Robert Watson wrote: On Fri, 23 Mar 2001, Alexey V. Neyman wrote: On Thu, 22 Mar 2001, Michael C. Wu wrote: (Why is vfs.vmiodirenable=1 not enabled by default?) By the way, is there any all-in-one-place description of sysctl tunables? Looking through all the man

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread thinker
On Wed, Mar 21, 2001 at 04:14:32PM -0300, Rik van Riel wrote: The (maybe too lightweight) structure I have in my patch looks like this: struct pte_chain { struct pte_chain * next; pte_t * ptep; }; Each pte_chain hangs off a page of physical memory and the ptep is a pointer

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread Rik van Riel
On Thu, 22 Mar 2001, thinker wrote: On Wed, Mar 21, 2001 at 04:14:32PM -0300, Rik van Riel wrote: The (maybe too lightweight) structure I have in my patch looks like this: struct pte_chain { struct pte_chain * next; pte_t * ptep; }; Each pte_chain hangs off a page of

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread Michael C. Wu
Just an update on the lovely loaded BBS server. We set a record-breaking number of users last night. After implementing the changes suggested and kqueue'ifying the BBS daemon, we saw a dramatic increase in server power. Top number of users was 4704. Serving SSH, HTTP, SMTP, innd,

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread Alfred Perlstein
* Michael C. Wu [EMAIL PROTECTED] [010322 12:29] wrote: Just an update on the lovely loaded BBS server. We set a record-breaking number of users last night. After implementing the changes suggested and kqueue'ifying the BBS daemon, we saw a dramatic increase in server power. Top

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread Alexey V. Neyman
Hello there! On Thu, 22 Mar 2001, Michael C. Wu wrote: (Why is vfs.vmiodirenable=1 not enabled by default?) By the way, is there any all-in-one-place description of sysctl tunables? Looking through all the man pages and collecting notices about MIB variables seems rather tiresome and, I think,

Re: tuning a VERY heavily (30.0) loaded server

2001-03-22 Thread Matt Dillon
:(Why is vfs.vmiodirenable=1 not enabled by default?) : The only reason it isn't enabled by default is some unresolved filesystem corruption that occurs very rarely (with or without it) that Kirk and I are still trying to nail down. I want to get that figured out first. It
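For reference, the knob under discussion is a boot-time setting on FreeBSD 4.x; given the unresolved corruption Matt mentions, the fragment below is a sketch of how one would enable it, not a recommendation:

```conf
# /etc/sysctl.conf -- enable VM-backed I/O for directory vnodes.
# Off by default in 4.x while a rare filesystem corruption issue
# (mentioned in this thread) is still being tracked down.
vfs.vmiodirenable=1
```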

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Peter Wemm
Matt Dillon wrote: :If this is a result of the shared memory, then my sysctl should fix it. : :Be aware, that it doesn't fix it on the fly! You must drop and recreate :the shared memory segments. : :better to reboot actually and set the variable before any shm is :allocated. : :--
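Since the setting only affects shared memory segments created after it is set, it belongs in boot-time configuration rather than being flipped on a live system. A sketch, assuming the 4.x sysctl name from this thread:

```conf
# /etc/sysctl.conf -- back SysV shared memory with unpageable
# physical pages.  Only affects segments created *after* the knob
# is set, hence the advice in the thread to set it before any shm
# is allocated (or simply reboot).
kern.ipc.shm_use_phys=1
```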

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Matt Dillon
:Hey, talking about large amounts of swap, did you know that: : 4.2-STABLE FreeBSD 4.2-STABLE #1: Sat Feb 10 01:26:41 PST 2001 :has a max swap limit that's possibly 'low': : : b: 15912412 0 swap # (Cyl. 0 - 990*) : c: 17912412 0 unused 0

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Matt Dillon
:B) Added 3gb of swap on one drive, 1gb of swap on a raid volume : another 1gb swap on another raid volume :C) enabled vfs.vmiodirenable and kern.ipc.shm_use_phys

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Matt Dillon
:* Rik van Riel [EMAIL PROTECTED] [010321 09:51] wrote: : On Wed, 21 Mar 2001, Peter Wemm wrote: : : Also, 4MB = 1024 pages, at 28 bytes per mapping == 28k per process. : : 28 bytes/mapping is a LOT. I've implemented an (admittedly : not completely architecture-independent) reverse mapping :

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Alfred Perlstein
* Matt Dillon [EMAIL PROTECTED] [010321 10:20] wrote: :B) Added 3gb of swap on one drive, 1gb of swap on a raid volume : another 1gb swap on another raid volume :C) enabled vfs.vmiodirenable and kern.ipc.shm_use_phys

Re: tuning a VERY heavily (30.0) loaded server

2001-03-21 Thread Rik van Riel
On Wed, 21 Mar 2001, Matt Dillon wrote: We've looked at those structures quite a bit. DG and I talked about it a year or two ago but we came to the conclusion that the extra linkages in our pv_entry gave us significant performance benefits during rundowns. Since then Tor

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Michael C. Wu
MRTG Graph at http://zoonews.ee.ntu.edu.tw/mrtg/zoo.html | | FreeBSD zoo.ee.ntu.edu.tw 4.2-STABLE FreeBSD 4.2-STABLE | #0: Tue Mar 20 11:10:46 CST 2001 root@:/usr/src/sys/compile/SimFarm i386 | | | system stats at | | http://zoo.ee.ntu.edu.tw/~keichii/ | md0/MFS is used for caching

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Alfred Perlstein
* Michael C. Wu [EMAIL PROTECTED] [010320 10:01] wrote: MRTG Graph at http://zoonews.ee.ntu.edu.tw/mrtg/zoo.html | | FreeBSD zoo.ee.ntu.edu.tw 4.2-STABLE FreeBSD 4.2-STABLE | #0: Tue Mar 20 11:10:46 CST 2001 root@:/usr/src/sys/compile/SimFarm i386 | | | system stats at | |

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Matt Dillon
: :How much SHM? Like, what's the combined size of all segments in :the system? You can make SHM non-pageable which results in a lot :of saved memory for attached processes. : :You want to be after this date and have this file: : : :Revision 1.3.2.3 / (download) - annotate - [select for diffs],

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Michael C. Wu
On Tue, Mar 20, 2001 at 10:09:09AM -0800, Alfred Perlstein scribbled: | * Michael C. Wu [EMAIL PROTECTED] [010320 10:01] wrote: | MRTG Graph at | http://zoonews.ee.ntu.edu.tw/mrtg/zoo.html | | | | | FreeBSD zoo.ee.ntu.edu.tw 4.2-STABLE FreeBSD 4.2-STABLE | | #0: Tue Mar 20 11:10:46 CST

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Alfred Perlstein
* Matt Dillon [EMAIL PROTECTED] [010320 10:17] wrote: : :How much SHM? Like, what's the combined size of all segments in :the system? You can make SHM non-pageable which results in a lot :of saved memory for attached processes. : :You want to be after this date and have this file: : :

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Michael C. Wu
On Tue, Mar 20, 2001 at 10:15:27AM -0800, Matt Dillon scribbled: | | :Another problem is that we have around 4000+ processes accessing | :lots of SHM at the same time.. | | How big is 'lots'? If the shared memory segment is smallish, e.g. | less than 64MB, you should be ok. If it is

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Matt Dillon
:| How big is 'lots'? If the shared memory segment is smallish, e.g. :| less than 64MB, you should be ok. If it is larger than that you will :| have to do some kernel tuning to avoid running out of pmap entries. : :This is exactly what happens to us sometimes. We run out of pmap

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Matt Dillon
:Another problem is that we have around 4000+ processes accessing :lots of SHM at the same time.. How big is 'lots'? If the shared memory segment is smallish, e.g. less than 64MB, you should be ok. If it is larger than that you will have to do some kernel tuning to avoid running out of

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Michael C. Wu
On Tue, Mar 20, 2001 at 10:38:35AM -0800, Matt Dillon scribbled: | | :| How big is 'lots'? If the shared memory segment is smallish, e.g. | :| less than 64MB, you should be ok. If it is larger than that you will | :| have to do some kernel tuning to avoid running out of pmap entries. |

Re: tuning a VERY heavily (30.0) loaded server

2001-03-20 Thread Matt Dillon
:If this is a result of the shared memory, then my sysctl should fix it. : :Be aware, that it doesn't fix it on the fly! You must drop and recreate :the shared memory segments. : :better to reboot actually and set the variable before any shm is :allocated.