On Fri, Apr 20, 2012 at 02:43:05PM +0300, Kostas Zorbadelos wrote:

> Hello all, sorry for the long mail that follows.
> 
> These are my first attempts at fine-tuning and stress testing on OpenBSD,
> so excuse my ignorance.
> I am stress testing BIND as a resolver on Linux (CentOS 6) and OpenBSD
> (5.0 release). I will also evaluate unbound later, since it is going to
> be included in base. I use the BIND that comes with each operating system
> (9.4.2-P2 in the OpenBSD base, and the bind-9.7.3 rpm that comes with
> CentOS 6), but this is rather irrelevant to my questions. I am trying
> to fill BIND's cache as much as I can, and I use 2 VMs with identical
> configuration (2 CPUs, 8GB RAM) to perform the tests.
> The tests are simple shell scripts, run from a couple of clients, that
> generate random names and query (using dig(1)) for various RRs; most of
> them won't exist, so the nameserver should cache a negative response (a
> rough sketch follows the config below). I have increased the caching TTL
> settings on both systems as follows:
> 
>         # 7-day max TTLs for the stress tests
>         max-ncache-ttl 604800;
>         max-cache-ttl 604800;
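> 
> A rough sketch of one of these client scripts (the resolver address and
> the example.com suffix are placeholders):
> 
>         #!/bin/sh
>         # Hammer the resolver with random, mostly non-existent names
>         # so that negative answers accumulate in the cache.
>         RESOLVER=192.0.2.53    # placeholder address
>         while true; do
>                 # random label, e.g. x1f3a9c2b
>                 NAME="x$(openssl rand -hex 4).example.com"
>                 for RR in A AAAA MX TXT; do
>                         dig @$RESOLVER +tries=1 +time=2 "$NAME" $RR >/dev/null
>                 done
>         done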
>  
> I ran the tests for a couple of days with identical load on both
> systems. Here is the relevant top(1) and ps(1) output from each:
> 
> OpenBSD
> ---------
> load averages:  0.39,  0.41,  0.40                                         
> openbsd-dns 14:23:05
> 37 processes:  36 idle, 1 on processor
> CPU0 states:  4.2% user,  0.0% nice,  2.2% system,  1.8% interrupt, 91.8% idle
> CPU1 states:  3.4% user,  0.0% nice,  3.8% system,  0.2% interrupt, 92.6% idle
> Memory: Real: 609M/1231M act/tot Free: 6728M Cache: 551M Swap: 0K/502M
> 
>   PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
> 31077 named      2    0  592M  594M sleep/1   select  217:52 12.65% named
>  1970 _pflogd    4    0  728K  384K sleep/1   bpf       0:34  0.00% pflogd
>  1642 root       2    0 1240K 2124K sleep/0   select    0:29  0.00% sendmail
>  ...
> 
> kzorba@openbsd: ~ ->ps -ax -v | head
>   PID STAT       TIME  SL  RE PAGEIN   VSZ   RSS   LIM TSIZ %CPU %MEM COMMAND
> 31077 S     216:21.22   0 127      7 606228 608260 8145988 1292 13.2  7.3 /usr/sbin/named
> 10103 Is      0:00.21 127 127      0  3500  3120 8145988  284  0.0  0.0 sshd: kzorba [priv] (sshd)
> 32112 S       0:00.53   5 127      0  3500  2316 8145988  284  0.0  0.0 sshd: kzorba@ttyp0 (sshd)
> 23004 Is      0:00.02 127 127      0  3420  3136 8145988  284  0.0  0.0 sshd: kzorba [priv] (sshd)
>  1816 S       0:00.06   0 127      0  3420  2292 8145988  284  0.0  0.0 sshd: kzorba@ttyp3 (sshd)
> 15380 Is      0:00.02 127 127      0  3376  3160 8145988  284  0.0  0.0 sshd: kzorba [priv] (sshd)
> 21925 Is      0:00.16 127 127      0  3372  3144 8145988  284  0.0  0.0 sshd: kzorba [priv] (sshd)
>  3237 I       0:00.08 127 127      0  3344  2336 8145988  284  0.0  0.0 sshd: kzorba@ttyp2 (sshd)
> 12462 I       0:00.40  22 127      0  3340  2296 8145988  284  0.0  0.0 sshd: kzorba@ttyp1 (sshd)
> 
> CentOS 6 Linux
> ---------------
> top - 14:24:11 up 15 days, 23:29,  3 users,  load average: 0.00, 0.00, 0.00
> Tasks: 114 total,   1 running, 113 sleeping,   0 stopped,   0 zombie
> Cpu(s):  3.1%us,  2.4%sy,  0.0%ni, 92.8%id,  0.0%wa,  0.3%hi,  1.4%si,  0.0%st
> Mem:   8062104k total,  5052532k used,  3009572k free,   144560k buffers
> Swap:  5486188k total,        0k used,  5486188k free,   218556k cached
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 19542 named     20   0 4582m 4.3g 2564 S 11.6 55.3 193:28.88 named
>     1 root      20   0 19328 1528 1220 S  0.0  0.0   0:00.77 init
>     2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
>     3 root      RT   0     0    0    0 S  0.0  0.0   0:00.06 migration/0
>     4 root      20   0     0    0    0 S  0.0  0.0   0:16.42 ksoftirqd/0
> ...
> 
> kzorba@linux~->ps aux | head -1
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> kzorba@linux~->ps aux | grep named
> named    19542  6.7 55.4 4692936 4468528 ?     Ssl  Apr18 194:02 /usr/sbin/named -u named -t /var/named/chroot
> 
> I understand the kernel VM layers are completely different, but how come
> the named process on OpenBSD consumes so little resident memory for the
> same load? Also, why is VSZ < RSS on OpenBSD?

You neglected to tell us the platform details, so we cannot tell.

OpenBSD accounts a bit differently for what counts toward process size
and resident size: pages such as program text or library pages are
counted in one but not the other, which is presumably why VSZ can end
up smaller than RSS here. I always forget the details.
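
A quick way to compare the two figures on each system (assuming a
single named process):

        # VSZ and RSS in kilobytes, plus the command name
        ps -o vsz,rss,comm -p $(pgrep named)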

> The general question I am trying to answer is: can BIND utilize all of
> the available memory on the system (so that I can decide how much memory
> to order for the servers)?

Depends. i386 systems, for example, can use a maximum of 2G per process
and can address at most 4G of physical memory. Other platforms have
different limits.
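
On the BIND side, the cache can also be capped (or left uncapped) with
max-cache-size in named.conf; the figure below is only an example, and
the default differs between BIND versions:

        options {
                # cap the resolver cache at an illustrative 6G
                max-cache-size 6144M;
        };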

Also, per-process limits play a role; see the sketch below.
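
On OpenBSD, the datasize limit from login.conf(5) is usually what bites
first. Something along these lines raises it for the daemon class; the
values are illustrative, and which login class named actually runs
under depends on how it is started:

        # /etc/login.conf -- illustrative values, not the 5.0 defaults
        daemon:\
                :datasize-max=4096M:\
                :datasize-cur=4096M:\
                :tc=default:

Run cap_mkdb /etc/login.conf afterwards if /etc/login.conf.db exists,
then restart named so the new limits take effect.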

        -Otto

> 
> I understand I have some reading to do. Any pointers to documentation or
> hints are most welcome.
> 
> Regards,
> 
> Kostas
> 
> -- 
> Kostas Zorbadelos             
> twitter:@kzorbadelos          http://gr.linkedin.com/in/kzorba 
> ----------------------------------------------------------------------------
> ()  www.asciiribbon.org - against HTML e-mail & proprietary attachments
> /\  
