avily in a production system?
(Not that I expect any other answer except maybe for a resounding
"probably" :-)
--Malcolm
--
Malcolm Beattie <[EMAIL PROTECTED]>
Unix Systems Programmer
Oxford University Computing Services
'll get 480GB of disk without even needing to plug another
SCSI card in. With larger disks or another SCSI card or two you could
go larger/faster.
--Malcolm
"?
This problem is preventing the upgrade to 2.2 of a number of Linux
servers and has meant that I've had to bring a new large server into
service without the benefit of RAID (since it needs kernel 2.2 for
other reasons).
--Malcolm
return -EIO;
}
-- cut here --
Use it as
insmod scsiaccesscount.o host=0 channel=0 id=3 lun=0
to show the access count for ID 3 on bus 0 channel 0 and
insmod scsiaccesscount.o host=0 channel=0 id=3 lun=0 delta=-1
to subtract one
e wants me to post
figures, I'll do so.
--Malcolm
(2 million hits/week, 90+ main sites, thousands of small user sites).
No problems. I also use it on our mirror server (similar hardware).
No problems. I trust it.
--Malcolm
t I'll contact the quota tools maintainer and/or Red Hat.
--Malcolm
I have
which *do* have journalled filesystems still take significantly
longer to boot than my Linux boxes because the rest of the boot is
horribly slow. Solaris seems to take forever prodding its various bits
of hardware before it gets around to booting properly.
--Malcolm
Richard Jones writes:
> Malcolm Beattie wrote:
> >
> >
> > Mounting mine when clean takes 4 seconds. I wonder if you used a 1k
> > block size for your filesystem. That greatly increases the time to
> > check the bitmaps upon mounting (though you can turn this
> journalling
> in 2.3, I say!
That will be nice but you can use ext2 as-is on large systems if care
is taken at the design/tuning stage.
--Malcolm
ot ?
> In theory, if the array was shut down cleanly, the filesystem should be
> in a consistent state.
> Please correct me if I am wrong.
That's wrong. The consistency of the array and the consistency of the
filesystem on it are two independent issues.
--Malcolm
h/i386/boot/zImage /boot/vmlinuz-2.0.36-foo
cp System.map /boot/System.map-2.0.36-foo
cp /boot/module-info{,-2.0.36-foo}
then add an entry to /etc/lilo.conf and rerun lilo. If you stick with
the Red Hat way of doing kernels then the multiple versions of the
kernel will coexist more nicely for m
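For concreteness, the lilo.conf entry mentioned above might look like the following sketch (the label and root device are assumptions; adjust them to the actual setup before rerunning lilo):

```
image=/boot/vmlinuz-2.0.36-foo
        label=2.0.36-foo
        root=/dev/sda1
        read-only
```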
per char         8275 K/s @ 99.2% CPU     5058 K/s @ 96.1% CPU
block           21856 K/s @ 46.4% CPU    13080 K/s @ 15.2% CPU
Random Seeks    293.0 /s  @  8.7% CPU    282.3 /s  @  5.7% CPU
--Malcolm
stripe the two
12GB md devices into one big 24GB one. Is doing RAID0 over RAID5 a
possible/reasonable thing to do with the latest md/raidtools? Need I
choose any non-default chunk sizes or suchlike to tune things better?
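One way to express that RAID0-over-RAID5 layering with the old raidtools is a raidtab stanza along these lines (a sketch only: the md device names and the 64k chunk size are assumptions, not figures from the post):

```
raiddev /dev/md2
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            64
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1
```

Here /dev/md0 and /dev/md1 would be the two existing 12GB RAID5 arrays, striped together into /dev/md2.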
--Malcolm
figured out what context make_request runs in and how to synchronise
writing to the ring buffer with the ioctl code to shut it off.
Does make_request get called from interrupts or bottom halves?
What's the new-fangled SMP-safe way to do such locking in a way that
make_request doesn't have to get a slow lock every time it wants to
write data?
--Malcolm