I have a table of 100 million records; when I do a select, I get back 600k to 1 million records. The table structure is as follows:

                stmt.executeUpdate("CREATE TABLE POSTINGLIST ("
                                   +"WORDID INTEGER NOT NULL,"
                                   +"DOCID INTEGER NOT NULL,"
                                   +"ANCHORID INTEGER NOT NULL,"
                                   +"DOCPOSITION SMALLINT NOT NULL,"
                                   +"FLAG SMALLINT NOT NULL)");
                
                stmt.executeUpdate("CREATE INDEX WORDID ON 
POSTINGLIST(WORDID)");
                stmt.executeUpdate("CREATE INDEX DOCID ON POSTINGLIST(DOCID)");
                stmt.executeUpdate("CREATE INDEX ANCHORID ON 
POSTINGLIST(ANCHORID)");

99 percent of the time I select based on WORDID. I have implemented all the tuning tips in the manual, but the query still takes a long time, and disk I/O seems to be the bottleneck (there is no swapping, the CPU is idle during the select, and Derby does use the index). What I want to do is keep the table sorted by WORDID so that I can avoid random reads and do a sequential read instead. Right now insert performance is faster than I expected, so I can trade some write speed for read speed. Is this possible?
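Something like the following is what I have in mind. This is just a sketch using the table and column names from above; conn is assumed to be an open java.sql.Connection, and I have not verified whether Derby actually keeps the physical row order produced by an INSERT ... SELECT ... ORDER BY.

                // the lookup I run 99 percent of the time
                PreparedStatement lookup = conn.prepareStatement(
                    "SELECT DOCID, ANCHORID, DOCPOSITION, FLAG "
                    + "FROM POSTINGLIST WHERE WORDID = ?");
                lookup.setInt(1, wordId);
                ResultSet rs = lookup.executeQuery();

                // offline rebuild: copy the rows into a fresh table in WORDID
                // order, then index the new table and switch over to it
                Statement stmt = conn.createStatement();
                stmt.executeUpdate("CREATE TABLE POSTINGLIST_SORTED ("
                                   + "WORDID INTEGER NOT NULL,"
                                   + "DOCID INTEGER NOT NULL,"
                                   + "ANCHORID INTEGER NOT NULL,"
                                   + "DOCPOSITION SMALLINT NOT NULL,"
                                   + "FLAG SMALLINT NOT NULL)");
                stmt.executeUpdate("INSERT INTO POSTINGLIST_SORTED "
                                   + "SELECT * FROM POSTINGLIST ORDER BY WORDID");
                stmt.executeUpdate("CREATE INDEX WORDID_SORTED ON POSTINGLIST_SORTED(WORDID)");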

Derby is running in embedded mode.


Nurullah Akkaya
[EMAIL PROTECTED]

Blooby.com
Tel: +1 (256) 270 4091


