I have indexed approximately 3 million URLs. The system has been working without any problems, at least as far as searching goes, for over two weeks now.
Today I decided to index another 3,000 documents. None of these URLs were already in the database; I checked this with mysql beforehand (the rough check I used is sketched at the end of this message). Anyway, I started the process with:

    ./index -i -f ./urls.txt

Everything went smoothly. Then I started the indexing run:

    ./index -n 50

Again, everything went smooth as silk. All URLs were fetched, deltas created, hrefs done, everything just as it has always been, and it returned to the prompt without any errors, exactly as I would expect. But after indexing, most of the previously existing data was gone. The sizes of the database, all of its tables, and /usr/local/aspseek/var/ were what you would expect. A search on "apple" used to return roughly 2,500 results; now it returns only 25.

So what did I do next? I checked and repaired all the databases by running

    /usr/local/mysql/bin/myisamchk -o *.MYI

on all the .MYI files (that sweep is also sketched at the end of this message). No unrecoverable errors were reported and everything went smoothly. I restarted mysql and aspseek's searchd, and the results are still bad.

Next I tried to see if I could recover anything using aspseek's ./index:

    ./index -X1
    ./index -X2

Those ran fine and no errors were reported, but when I ran

    ./index -H

it simply "Abort"s with no explanation whatsoever.

Mysql's configuration has not changed:

    [mysqld]
    user = mysql
    port = 3306
    socket = /tmp/mysql.sock
    skip-locking
    set-variable = max_connections=256
    set-variable = key_buffer=256M
    set-variable = max_allowed_packet=10M
    set-variable = table_cache=256
    set-variable = sort_buffer=1M
    set-variable = record_buffer=1M
    set-variable = myisam_sort_buffer_size=64M
    set-variable = thread_cache=8
    # Try number of CPU's*2 for thread_concurrency
    set-variable = thread_concurrency=8
    #log-bin
    server-id = 1

    [mysqldump]
    quick
    set-variable = max_allowed_packet=16M

    [mysql]
    no-auto-rehash
    # Remove the next comment character if you are not familiar with SQL
    #safe-updates

    [isamchk]
    set-variable = key_buffer=128M
    set-variable = sort_buffer=128M
    set-variable = read_buffer=2M
    set-variable = write_buffer=2M

    [myisamchk]
    set-variable = key_buffer=128M
    set-variable = sort_buffer=128M
    set-variable = read_buffer=2M
    set-variable = write_buffer=2M

    [mysqlhotcopy]
    interactive-timeout

I'm running a Red Hat Linux 2.4.18-10 kernel on dual 2.2 GHz Xeons with 2 GB of RAM and RAID 5 with 205 GB of free disk space.

What could be going on? I have a backup of this data, and when I restore it everything is back to normal, but I can't index more documents without running into this problem. Need some help!

Karen
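P.S. Here is roughly the duplicate check I mentioned above. The database name (aspseek) and the table/column names (urlword, url) are just what they are on my box, so treat them as an example rather than gospel:

    # Count how many of the new URLs already exist in the index.
    # "aspseek" = database name, "urlword"/"url" = URL table and column
    # in my installation -- adjust these if yours differ.
    total=0
    while read url; do
        n=$(mysql -N aspseek -e "SELECT COUNT(*) FROM urlword WHERE url='$url'")
        total=$((total + n))
    done < ./urls.txt
    echo "$total of the new URLs were already in the database"

For this batch it reported 0, which is why I'm confident none of the 3,000 were duplicates.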

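P.P.S. The check-and-repair pass, spelled out a bit more. The data directory below is just where the aspseek tables live on my machine, and I stopped mysqld first so myisamchk had the table files to itself:

    # mysqld stopped first, so myisamchk has exclusive access to the tables
    cd /usr/local/mysql/var/aspseek    # aspseek database directory on my box
    for t in *.MYI; do
        /usr/local/mysql/bin/myisamchk -o "$t"
    done
    # then start mysqld and aspseek's searchd again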