Here you go:
[karen@localhost]$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 7168
virtual memory        (kbytes, -v) unlimited
So you know what I'm running:
Dell PowerEdge 2650
Dual Intel Xeon 1.8GHz w/512K Cache
2GB DDR 200MHZ RAM (going to add more - up to 6GB)
PERC3-DI, 128MB Battery Backed Cache, 2 Internal Ch- Embedded RAID
5 73GB, 10K RPM, Ultra 160 SCSI Hot Plug Hard Drives Raid 5
RedHat 7.3 Kernel version 2.4.18-10 SMP
Currently I can crawl about 1.5 million URLs a day without any problem on a dedicated point-to-point T-1, but I do get a lot of these "can't connect to host" errors when running index. Heck, if I could connect to those hosts I could index even more!
How I normally run index:
./index -N 80 -R 64
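Since index is started with -N 80 (80 crawler threads all sharing the 1024 open-files limit shown above), one possibility is that the "can't connect to host" errors come from the process running out of file descriptors rather than from the network. A minimal sketch of raising the per-process limit before launching index — 4096 is just an assumed value, and on older kernels root may first need to raise the hard limit (e.g. in /etc/security/limits.conf):

```shell
# Raise the soft open-files limit for this shell and its children.
# The request is refused (harmlessly) if it exceeds the hard limit.
ulimit -n 4096 2>/dev/null || echo "refused: hard limit is $(ulimit -Hn)"
ulimit -n                 # verify what the soft limit is now
# ./index -N 80 -R 64     # index, started afterwards, inherits the limit
```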
Thanks for the reply,
Karen
On Sat, 26 Oct 2002 at 20:23:27 -0700, Karen Barnes wrote:
> So what's so different about index. Why can't it connect?

What shell are you using? If bash/sh, then please post to the list the output of 'ulimit -a'; if tcsh/csh, then post the output from 'limit'.

Matt.
