53% used + 47% fragmentation = 100% -- which would explain the "disk full" errors.
There is a mention of this problem in relation to web caches in
the Squid FAQ, http://squid.nlanr.net/Squid/FAQ/FAQ-14.html#ss14.1
(concrete commands for the checks it suggests follow the excerpt):
"
disk write error: (28) No space left on device
You might get this error even if your disk is not full, and is not out of
inodes. Check your syslog logs
(/var/adm/messages, normally) for messages like either of these:
NOTICE: realloccg /proxy/cache: file system full
NOTICE: alloc: /proxy/cache: file system full
In a nutshell, the UFS filesystem used by Solaris doesn't cope very well
with the workload squid presents to it. The filesystem will end up
becoming highly fragmented, until it reaches a point where there are
insufficient free blocks left to create files with, and only fragments
available. At this point, you'll get this error and squid will revise its
idea of how much space is actually available to it. You can do a "fsck -n
raw_device" (no need to unmount; this checks in read-only mode) to look at
the fragmentation level of the filesystem. It will probably be quite high
(>15%).
Sun suggest two solutions to this problem. One costs money, the other is
free but may result in a loss of performance (although Sun do claim it
shouldn't, given the already highly random nature of squid disk access).
The first is to buy a copy of VxFS, the Veritas Filesystem. This is an
extent-based filesystem and it's capable of having online defragmentation
performed on mounted filesystems. This costs money, however (VxFS is not
very cheap!).
The second is to change certain parameters of the UFS filesystem. Unmount
your cache filesystems and use tunefs to change optimization to "space"
and to reduce the "minfree" value to 3-5% (under Solaris 2.6 and higher,
very large filesystems will almost certainly have a minfree of 2% already
and you shouldn't increase this). You should be able to get fragmentation
down to around 3% by doing this, with an accompanied increase in the
amount of space available.
"
On Mon, 17 May 1999, Tina Stewart wrote:
> unmounted the disk and ran fsck - 47% fragmentation...
> mako{root}: fsck /var/qmail
> 526455 files, 950898 used, 872324 free (872324 frags, 0 blocks, 47.8% fragmentation)
>
> I restarted qmail and now it is taking up lots of CPU...
>
> last pid: 803; load averages: 1.16, 0.95, 0.58
>
> 48 processes: 46 sleeping, 1 running, 1 on cpu
> CPU states: 0.0% idle, 0.4% user, 99.6% kernel, 0.0% iowait, 0.0% swap
> Memory: 512M real, 65M free, 768M swap free
>
> PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
> 693 qmails 1 -25 0 1208K 936K run 7:56 94.90% qmail-send
> 803 root 1 33 0 2000K 1536K cpu 0:00 1.15% top
> 496 root 1 33 0 1792K 1488K sleep 0:07 0.05% sshd1
> 795 qmaild 1 34 0 1416K 864K sleep 0:00 0.03% qmail-smtpd
> 691 root 1 33 0 1416K 1016K sleep 0:00 0.02% splogger
> 155 root 6 8 0 3232K 1712K sleep 0:00 0.02% syslogd
> 801 qmaild 1 33 0 1416K 864K sleep 0:00 0.02% qmail-smtpd
>
> > -----Original Message-----
> > From: Adam D. McKenna [mailto:[EMAIL PROTECTED]]
> > Sent: Monday, May 17, 1999 2:47 PM
> > To: Tina Stewart
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: qmail: full disk?
> >
> >
> > maybe it's a mount problem. I've seen this with linux before -- I
> > don't have a lot of solaris experience. Try stopping qmail,
> > unmounting and mounting the filesystem and restarting.
> >
> > --Adam
> >
> > ----- Original Message -----
> > From: Tina Stewart <[EMAIL PROTECTED]>
> > To: 'Adam D. McKenna' <[EMAIL PROTECTED]>
> > Cc: <[EMAIL PROTECTED]>
> > Sent: Monday, May 17, 1999 5:38 PM
> > Subject: RE: qmail: full disk?
> >
> >
> > : We sent 247,000 messages and there are about 85,000 more to go in
> > : the todo queue. The body of the message is just under 2k.
> > :
> > : -tina
> > :
> > : > -----Original Message-----
> > : > From: Adam D. McKenna [mailto:[EMAIL PROTECTED]]
> > : > Sent: Monday, May 17, 1999 2:29 PM
> > : > To: Tina Stewart
> > : > Cc: [EMAIL PROTECTED]
> > : > Subject: Re: qmail: full disk?
> > : >
> > : >
> > : > From: Tina Stewart <[EMAIL PROTECTED]>
> > : > : > > Filesystem iused ifree %iused Mounted on
> > : > : > > /dev/dsk/c1t1d0s7 524498 1494062 26% /var/qmail
> > : > : > >
> > : > : > > Filesystem kbytes used avail capacity Mounted on
> > : > : > > /dev/dsk/c1t1d0s7 1823222 948752 856238 53% /var/qmail
> > : >
> > : > Could it be that someone mailed one (or more) of your users a
> > : > ludicrously large attachment, and qmail is trying to write it
> > : > to disk and running out of space?
> > : >
> > : > --Adam
> > : >
> > : >
> > :
> >
>
RjL
==================================================================
The problems of the world || Fax: +44 870 0521198
can't be solved by fixing || Email: [EMAIL PROTECTED]
the working -- C. Daniluk || Phone: +44 385 275 394