[ This is a long long message but worth reading .. I hope ]

James Dickens posted an interesting topic in his blog :

    http://uadmin.blogspot.com/2006/03/defusing-bombs.html

Thus I just had to run the famous bash fork bomb on my new build 35
based server.

Needless to say the machine became really insanely busy for a while
and then eventually recovered neatly.  No special tuning.  No crash.

The end user session ( me in this case ) saw the following :

$ bash
bash-3.00$
bash-3.00$ :(){ :|:& }; :
[1] 100730
bash-3.00$ bash: fork: Not enough space
bash: fork: Not enough space
bash: fork: Not enough space
.
.
. repeat ad nauseam
.
.
bash: fork: Resource temporarily unavailable

[1]+  Done                    : | :
bash-3.00$
bash-3.00$
bash-3.00$
bash-3.00$
bash-3.00$ date
Tue Mar  7 23:10:42 EST 2006
bash-3.00$ pwd
/export/home/dclarke
bash-3.00$ uname -a
SunOS core 5.11 blastware sun4u sparc SUNW,UltraSPARC-IIi-cEngine
bash-3.00$

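For anyone who has not unpacked that one-liner : it defines a function
named ":" whose body pipes a call to itself into another backgrounded
call to itself, and then invokes it.  Here is the same shape spelled
out, with a depth counter added ( my addition, not part of the real
bomb ) so that it terminates and can actually be run safely :

```shell
# Spelled-out version of ':(){ :|:& }; :'.  The real bomb uses a pipe
# ':|:' to get its two copies; this sketch uses two background jobs
# instead, plus a depth counter ( not in the original ) so it stops.
bomb() {
    depth=$1
    [ "$depth" -le 0 ] && return    # base case: stop after N levels
    bomb $((depth - 1)) &           # first copy of ourselves
    bomb $((depth - 1)) &           # second copy of ourselves
    wait                            # reap both children
}
bomb 3
echo "survived"
```

The real thing has no counter, so each generation doubles until fork()
fails -- which is exactly the "Not enough space" whining shown above.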

Meanwhile on the console I saw this sort of thing :

# vmstat 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m7 m1   in   sy   cs us sy id
 0 0 0 1349288 336672 0   0  0  0  0  0  0  0  0  0  0  488   82   30  0  5 95
 0 0 0 1349280 336664 0   0  0  0  0  0  0 17  0  0 17  900   73   39  1  6 93
 0 0 0 1349264 336648 0   0  0  0  0  0  0  0  0  0  0  480   73   30  1  5 94
 0 0 0 1349248 336632 0   0  0  0  0  0  0  0  0  0  0  477   71   25  0  6 94
 0 0 0 1349080 336464 0 765 15  0  0  0  0  2  0  0  2  488  608   54  3 18 79
 149 0 0 1255552 303840 0 4378 0 0 0  0  0  1  0  0  0  477 2760  266 12 88  0
 365 0 0 995336 223296 4 2340 25 0 0  0  0  5  0  0  3  430 1601  274  8 92  0

Mar  7 22:50:44 core genunix: WARNING: Sorry, no swap space to grow
stack for pid 102072 (bash)
Mar  7 22:50:44 core last message repeated 3 times
.
.
.
Mar  7 22:54:31 core last message repeated 2 times
Mar  7 22:54:31 core genunix: WARNING: Sorry, no swap space to grow
stack for pid 102672 (bash)
Mar  7 22:54:31 core last message repeated 2 times
Mar  7 22:54:31 core genunix: WARNING: Sorry, no swap space to grow
stack for pid 101936 (bash)
.
. that sort of whine and squeal went on for a while over and over
.
Mar  7 22:54:58 core last message repeated 2 times
Mar  7 22:54:58 core genunix: WARNING: Sorry, no swap space to grow
stack for pid 102663 (bash)
Mar  7 22:54:58 core last message repeated 2 times
Mar  7 23:00:08 core ufs: NOTICE: alloc: /export/home: file system full
Mar  7 23:00:28 core last message repeated 4 times

^C
# #


Then the system went back to being a good little server.


# uname -a
SunOS core 5.11 blastware sun4u sparc SUNW,UltraSPARC-IIi-cEngine
# date
Tue Mar  7 23:10:31 EST 2006
#


You really must look at this report from uptime :

# uptime
 11:12pm  up 28 min(s),  2 users,  load average: 0.99, 48.46, 104.14
#



That says it all right there : the one minute load average is already
back under 1.0 while the fifteen minute average still shows the
carnage.  I booted the system and then abused it very badly.

I then thought .. let's set a few hard limits to make life easier.

I then consulted "Solaris Tunable Parameters Reference Manual" at :

    http://docs.sun.com/app/docs/doc/817-0404/6mg74vs9c#hic

I then cooked up the following /etc/system along with liberal comments :

    http://www.blastwave.org/dclarke/opensolaris/etc_system.txt
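Roughly speaking, the interesting knobs in there look like this
( illustrative values only, not a verbatim copy of the file at that
URL ) :

```
* /etc/system fragment -- illustrative values, not a verbatim copy of
* the file linked above.
*
* Ceiling on the total number of processes system-wide :
set max_nprocs = 10000
*
* Ceiling on processes owned by any one non-root user, so a fork bomb
* exhausts its own quota rather than the whole machine :
set maxuprc = 200
*
* Largest process id the system will hand out :
set pidmax = 32767
```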

I rebooted the box and upon booting I got the following messages :

Executing last command: boot -rv
Boot device: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0:a  File and args: -rv
module /platform/sun4u/kernel/sparcv9/unix: text at [0x1000000,
0x10c2cdf] data at 0x1800000
module misc/sparcv9/krtld: text at [0x10c2ce0, 0x10de657] data at 0x186c8e0
module /platform/sun4u/kernel/sparcv9/genunix: text at [0x10de658,
0x12e01af] data at 0x1872700
module /platform/sun4u/kernel/misc/sparcv9/platmod: text at
[0x12e01b0, 0x12e01b7] data at 0x18cdcd0
module /platform/sun4u/kernel/cpu/sparcv9/SUNW,UltraSPARC-IIi: text at
[0x12e01c0, 0x12ece17] data at 0x18ce3c0
WARNING: jump_pid < 0 or >= pidmax; ignored
SunOS Release 5.11 Version blastware 64-bit
Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
DEBUG enabled
misc/forthdebug (501166 bytes) loaded
Ethernet address = 8:0:20:c2:46:48
mem = 524288K (0x20000000)
avail mem = 491462656
root nexus = Netra t1 (UltraSPARC-IIi 440MHz)


I am not too sure what to make of the WARNING message :

    WARNING: jump_pid < 0 or >= pidmax; ignored

but going by the text of the message the kernel ignored a jump_pid
value that did not fall below the new pidmax, so I can only assume
that setting pidmax = 32767 was a bad thing.

I then subjected the server to the same bash bomb user abuse and it
performed wonderfully.
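The knob doing the real work here is maxuprc, the per-user process
cap -- that is what produces the "out of per-user processes" notices.
From a shell you can see the equivalent per-user limit with the ulimit
builtin ( a generic example, not taken from this session ) :

```shell
# The shell's view of the per-user process cap ( bash builtin ).
ulimit -u     # current soft limit on processes for this user
ulimit -Hu    # hard limit -- a user may lower this, but not raise it
```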

This is what I saw at the console :


# vmstat 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m7 m1   in   sy   cs us sy id
 0 0 0 1261688 370448 67 220 295 0 0  0 91 19  0  0 15  505 2616 1059 12 14 73
 0 0 0 1364840 356176 0   0  0  0  0  0  0  1  0  0  1  435   10   24  0  1 99
 0 0 0 1364840 356176 0   0  0  0  0  0  0  0  0  0  0  421   11   21  0  1 99
.
.
.
 0 0 0 1362992 353096 42 132 257 0 0  0  0 30  0  0 23  536  260  107  6  6 88
 0 0 0 1361016 351736 18 135 29 0  0  0  0  4  0  1  3  453  269   77  1  5 93
 0 0 0 1360728 351040 5  40 141 0  0  0  0 18  0  0 18  490  205  113  0  4 95
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m7 m1   in   sy   cs us sy id
 0 0 0 1360520 350560 0   0  2  0  0  0  0  0  0  0  0  425   34   40  0  1 99
 0 0 0 1360448 350488 1  19  6  0  0  0  0  1  0  0  1  465   44   35  0  2 98
 0 0 0 1360448 350488 0   0  0  0  0  0  0 22  0  0 22  746   11   40  0  2 97
 0 0 0 1360464 350504 0  72  0  0  0  0  0  0  0  0  0  418   71   28  0  2 97
 143 0 0 1247984 312344 8 3173 24 0 0 0  0  3  0  0  1  438 2384  379 10 90  0
Mar  8 15:22:07 core genunix: NOTICE: out of per-user processes for uid 16411
 129 0 0 1228136 295192 1 4271 9 0 0  0  0  2  0  1  2  457 4365  388 14 86  0
 43 0 0 1191728 269144 0 3828 2 0  0  0  0  0  0  1  0  457 6435  561 26 74  0
 57 0 0 1176672 260648 0 4288 3 0  0  0  0  0  0  1  0  443 8117  450 20 80  0
 53 0 0 1165504 254672 0 4237 2 0  0  0  0  0  0  2  0  462 7463  428 28 72  0
 61 0 0 1165368 255584 0 4186 0 0  0  0  0  0  0  1  0  438 7867  393 19 81  0
 57 0 0 1154600 249424 0 4071 22 0 0  0  0  5  0  0  5  494 9537  440 21 79  0
 65 0 0 1155216 252752 0 4103 0 0  0  0  0  0  0  0  0  438 6967  389 18 82  0
 50 0 0 1154064 247992 1 3831 16 0 0  0  0  2  0  2  1  460 6955  419 18 82  0
Mar  8 15:22:50 core last message repeated 1229 times
Mar  8 15:22:51 core genunix: NOTICE: out of per-user processes for uid 16411
 63 0 0 1144400 248984 0 4540 3 0  0  0  0  1  0  1  1  493 8147  394 20 80  0
 59 0 0 1140864 245040 0 4309 0 0  0  0  0  0  0  0  0  436 11339 493 23 77  0
 50 0 0 1130848 236472 0 4040 0 0  0  0  0  3  0  2  3  509 8648  405 20 80  0
 61 0 0 1136528 240640 0 4165 0 0  0  0  0  0  0  1  0  432 7452  345 19 81  0
 51 0 0 1135672 240072 0 4033 2 0  0  0  0  0  0  1  0  456 8489  407 20 80  0
 49 0 0 1130336 236752 0 4046 10 0 0  0  0  0  0  1  0  446 10831 406 22 78  0
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m7 m1   in   sy   cs us sy id
 57 0 0 1117728 230744 0 4144 0 0  0  0  0  0  0  1  0  442 9234  361 21 79  0
 89 0 0 1142056 249168 0 3819 3 0  0  0  0  2  0  1  2  489 5456  364 17 83  0
^C#
# ps -ef | wc -l
     183
# vmstat 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m7 m1   in   sy   cs us sy id
 14 0 0 1258560 341232 41 997 178 0 0 0 53 12  0  0 10  486 3255  713 12 26 63
 163 0 0 1106496 245968 0 4378 5 1 1  0  0  0  0  0  0  462 5921  242 32 68  0
 39 0 0 1094536 221136 0 3856 4 0  0  0  0  1  0  1  1  462 14748 487 25 75  0
 71 0 0 1113584 233200 0 4358 0 0  0  0  0  1  0  0  1  437 10977 370 23 77  0
 77 0 0 1117064 234416 0 4351 0 0  0  0  0  0  0  0  0  428 9778  292 22 78  0
 53 0 0 1104400 224456 0 4213 0 0  0  0  0  6  0  0  6  525 9530  345 22 78  0
 35 0 0 1169824 253160 0 2139 0 2  2  0  0  0  0  1  0  444 7322  327 15 61 24
 0 0 0 1331848 329136 6  37 37  2  2  0  0  4  0  1  3  463   62   42  0  2 97
 0 0 0 1333296 330864 0   0  0  0  0  0  0  0  0  0  0  419   14   26  0  1 99
 0 0 0 1333584 330920 0   0  0  0  0  0  0  0  0  1  0  434   10   55  0  2 98
^C#
# ps -ef | wc -l
      35
#
# dmesg | tail
Mar  8 15:20:14 core genunix: [ID 936769 kern.info] devinfo0 is
/pseudo/[EMAIL PROTECTED]
Mar  8 15:20:14 core pseudo: [ID 129642 kern.info] pseudo-device: pool0
Mar  8 15:20:14 core genunix: [ID 936769 kern.info] pool0 is /pseudo/[EMAIL 
PROTECTED]
Mar  8 15:22:07 core genunix: [ID 748887 kern.notice] NOTICE: out of
per-user processes for uid 16411
Mar  8 15:22:50 core last message repeated 1229 times
Mar  8 15:22:51 core genunix: [ID 748887 kern.notice] NOTICE: out of
per-user processes for uid 16411
Mar  8 15:23:53 core last message repeated 1444 times
Mar  8 15:23:55 core pseudo: [ID 129642 kern.info] pseudo-device: devinfo0
Mar  8 15:23:55 core genunix: [ID 936769 kern.info] devinfo0 is
/pseudo/[EMAIL PROTECTED]
Mar  8 15:23:59 core genunix: [ID 748887 kern.notice] NOTICE: out of
per-user processes for uid 16411
#
# uptime
  3:25pm  up 10 min(s),  1 user,  load average: 23.94, 21.59, 9.60
#
# exit

core console login:
core console login:


So there you see that the server simply did what it was asked to do,
within the scope of reason and good judgement.  The box never became
unresponsive, or even sluggish for that matter.

But I did get that WARNING at boot.

As a comparison I tried the same thing on an unmodified Red Hat
Enterprise Linux 4 AS 64-bit server.  It simply packed up and went
away.  Totally.  Gone in less than one second and nothing worked
anymore.  Not the mouse and not even the NumLock light on the
keyboard.  I don't like playing the Linux versus Solaris game but I had
to perform the test.  I left the machine running ( warm brick ) until
this morning and then I had to pull the plug.
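To be fair, Linux has equivalent knobs -- a stock install just ships
without a per-user cap low enough to matter.  The rough analogues
( standard Linux interfaces, values vary by box ) are :

```shell
# System-wide pid ceiling, the rough analogue of Solaris pidmax.
cat /proc/sys/kernel/pid_max
# Per-user process cap, the rough analogue of maxuprc.  It can be set
# persistently with an 'nproc' line in /etc/security/limits.conf.
ulimit -u
```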

Dennis Clarke
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
