Jim, thanks for your comments.
This seems to be a recurring problem. It takes close to 5 1/2 hours
to run a full dig on my site, with over 13k documents parsed. This is
why I started saving the previous digs as backups.
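For reference, the backup step is nothing elaborate - just a copy of
the database directory before each rebuild. A rough sketch (the paths
are examples from my setup, not htdig defaults; substitute your own
database_dir from htdig.conf):
---------- start snip ----------
#!/bin/sh
# Example paths - substitute the database_dir from your htdig.conf.
DB_DIR=/var/lib/htdig/db
BACKUP_DIR=/var/lib/htdig/db.prev

# Keep the last known-good databases before rebuilding from scratch.
rm -rf "$BACKUP_DIR"
cp -a "$DB_DIR" "$BACKUP_DIR"

rundig
---------- end snip ----------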
The db.words.db sizes are very similar but not identical:
corrupt: 144742400 bytes (138M human-readable)
working: 144676864 bytes (138M human-readable)
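Beyond the raw sizes, a plain byte-level comparison shows where the
two copies first diverge (nothing htdig-specific here; the
.corrupt/.working suffixes are just how I keep the copies apart):
---------- start snip ----------
# Report the first byte and line at which the two databases differ.
cmp db.words.db.corrupt db.words.db.working
---------- end snip ----------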
The server htdig runs on is a dual 1.2GHz Xeon with 1GB RAM and a
512KB cache - that should be plenty of horsepower :).
Here is what top showed while running a search against each database:
load average: 0.68, 0.45, 0.41
corrupt:  SIZE 5560   RSS 5560  %CPU 8.9   %MEM 0.5  TIME 0:00  COMMAND htsearch
working:  SIZE 10800  RSS 10M   %CPU 12.3  %MEM 1.0  TIME 0:01  COMMAND htsearch
(There is currently no memory restriction that would affect this
process.)
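For what it's worth, this is roughly how I checked - both the shell
limits for the account htsearch runs under and Apache's own
per-process limits (the httpd.conf path is an example from my setup):
---------- start snip ----------
# Shell resource limits for the htsearch/Apache user:
ulimit -a

# Apache can also cap memory per request with its RLimitMEM directive:
grep -i 'RLimit' /etc/httpd/conf/httpd.conf
---------- end snip ----------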
I ran the htdump utility on both versions of db.words.db. The working
db dumped as expected. Against the corrupt db, htdump started and
created the db.dumped file but eventually quit. I cannot locate any
error messages from htdump, so it appears to have run cleanly yet
could not actually dump any content.
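In case anyone wants to reproduce this, the invocation was along
these lines - I'm assuming here that htdump takes the same -v/-c
options as the other ht://Dig tools (check the man page for the exact
arguments; the config path is from my setup):
---------- start snip ----------
# Redirect stderr to a file so any error output is not lost,
# and record the exit status.
htdump -v -c /etc/htdig/htdig.conf 2> htdump.err
echo "exit status: $?"
ls -l db.dumped htdump.err
---------- end snip ----------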
Thankfully we know how to recover from the issue, but at this point
it would be really nice to track down and fix the bug (or whatever is
causing this) so it stops occurring.
Any other suggestions you or others have are welcome.
Thanks!
-----Original Message-----
From: Jim [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 12, 2004 6:15 PM
To: Wendt, Trevor
Cc: [EMAIL PROTECTED]
Subject: Re: [htdig] Segmentation fault and db.words.db corruption
On Thu, 12 Aug 2004, Wendt, Trevor wrote:
> Segmentation fault
> ---------- end snip ----------
>
> This Segmentation fault produced the corresponding errors:
>
> ---------- start snip ----------
> Premature end of script headers: htsearch
> [Thu Aug 12 10:38:24 2004] [error] [client 170.231.200.136] WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 116
> [Thu Aug 12 10:38:24 2004] [error] [client 170.231.200.136] WordDB: PANIC: Input/output error
> [Thu Aug 12 10:38:24 2004] [error] [client 170.231.200.136] DB_RUNRECOVERY: Fatal error, run database recovery
> ---------- end snip ----------
Is this a recurring problem? Or something that has only happened once?
If the latter, I would just rebuild the databases and not worry about it
too much, unless the problem repeats.
> We rundig three times a week, always from scratch. Our db.words.db is
> 138MB. I have a working and non-working version of the db at this time.
Are both the working and non-working versions essentially the same size?
The only time that I have seen these particular messages with 3.2.0b6 is
when an ht://Dig related task has exhausted system memory. You might try
running something like top and watching memory use at the time the
segfault occurs. That is probably a long shot, but it wouldn't hurt to
rule out the possibility. You might also want to verify that your system
isn't configured to place an overly tight constraint on the amount of
memory available to a search request.
Have you tried running the htdump utility? It might be useful to check
whether a dump of the word database results in the same type of problem.
Jim