Benchmark (WAS: Indexing Speed: Documents vs. Sentences)
Hello,

Here's a benchmark. I am not sure if that is proper etiquette, but I will just paste it into this mail and hope that it gets funneled into the right channels.

Cheers!
Jochen

Hardware Environment
- Dedicated machine for indexing: no, some other work is performed on it; this shouldn't influence the results much since it is a multi-processor machine
- CPU: 2x Intel Xeon 3.05GHz
- RAM: 4GB
- Drive configuration: SCSI

Software environment
- Java Version: 1.4.2-b28
- Java VM: Java HotSpot Client VM 1.4.2
- OS Version: Redhat 8
- Location of index: local

Lucene indexing variables
- Number of source documents: 5,000,000
- Total filesize of source documents: 40GB
- Average filesize of source documents: 8kB
- Source documents storage location: DB on remote server
- File type of source documents: pre-parsed HTML
- Parser(s) used, if any: n/a
- Analyzer(s) used: StandardAnalyzer
- Number of fields per document: 5
- Type of fields: actual text is indexed but not stored in the Lucene index
- Index persistence: where the index is stored, e.g. FSDirectory, SqlDirectory, etc.

Figures
- Time taken (average of at least 3 indexing runs): 332 minutes
- Time taken / 1000 docs indexed: 4 sec
- Memory consumption: about 100MB

Notes
- With the above configuration we pretty consistently achieve an indexing rate of 250 docs/sec. The actual text cannot be retrieved from the index, which keeps the index size down (6.1GB) and increases indexing speed. When the actual documents are stored in the index as well, the rate drops by about a third, to 160 docs/sec.

---
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
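For what it's worth, the headline figures in the benchmark cross-check against each other; a quick sketch of the arithmetic (Python, purely illustrative):

```python
docs = 5_000_000
minutes = 332

seconds = minutes * 60                  # 19,920 s total indexing time
docs_per_sec = docs / seconds           # ~251, matching the quoted ~250 docs/sec
sec_per_1000 = seconds / (docs / 1000)  # ~3.98 s, i.e. the "4 sec / 1000 docs" figure
```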
FW: Indexing Speed: Documents vs. Sentences
Stephane,

The indexing is actually less glamorous than it sounds. When you index 1TB across 10 machines you end up with 100GB on each machine. We do not merge the indexes either, since we get better speed on indexing as well as querying when we keep the indexes smaller and distributed across different machines. (But somehow I think that I'll sit down and merge all of them together and play with it when I get a chance ... 'cause it's cool :-) I'll keep you posted when it happens.)

The test set that I am playing with is 40GB, and I just posted a benchmark.

Best,
Jochen

-----Original Message-----
From: Stephane Vaucher [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 18, 2003 9:01 AM
To: Lucene Users List; [EMAIL PROTECTED]
Subject: RE: Indexing Speed: Documents vs. Sentences

Jochen,

If you have a bit of time, could you post some metrics (as an example, you can look at http://jakarta.apache.org/lucene/docs/benchmarks.html)? I haven't heard of anyone indexing 1TB yet. I'm sure everyone is interested in any problems you might be facing, and we could probably give you some ideas. I know (oddly enough) I sometimes wish I had a dataset greater than a few million docs to experiment with.

cheers,
sv
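The scheme described above — many smaller per-machine indexes queried separately, with only the results merged — can be sketched roughly as follows. This is an illustrative Python sketch, not Lucene code; `shard_for` and `merge_results` are made-up names, and in practice each shard's search would be a Lucene query running on that machine:

```python
import heapq

def shard_for(doc_id, num_shards=10):
    # Assign each document to one of the smaller per-machine indexes,
    # e.g. 1TB across 10 machines -> ~100GB per index.
    return hash(doc_id) % num_shards

def merge_results(per_shard_hits, k=10):
    # Scatter-gather at query time: each shard returns its own top hits
    # as (score, doc_id) pairs; merge them and keep the global top k,
    # instead of ever merging the indexes themselves.
    return heapq.nlargest(k, (h for hits in per_shard_hits for h in hits))
```

One caveat with this style of merging is that scores from independently built indexes are not strictly comparable (term statistics differ per shard), which is usually tolerable but worth knowing.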
Re: Indexing Speed: Documents vs. Sentences
> Hi, I am using Lucene to index a large number of web pages (a few 100GB)
> and the indexing speed is great.
> Jochen

.. a few 100 GB? Is this correct?

/victor
RE: Indexing Speed: Documents vs. Sentences
Hi,

Yes, this is correct, I am dealing with a few 100GB (close to 1TB). I am, however, distributing the data across several machines and then merging the results from all the machines together (until I find a better, faster solution).

Cheers!
Jochen
Re: Indexing Speed: Documents vs. Sentences
Interesting. What hardware are you using for searching 1TB of data, and how fast is the response time?

On Thu, Dec 18, 2003 at 08:23:42AM -0800, Jochen Frey wrote:
> Yes, this is correct, I am dealing with a few 100GB (close to 1TB).

--
Dror Matalon
Zapatec Inc
1700 MLK Way
Berkeley, CA 94709
http://www.fastbuzz.com
http://www.zapatec.com
Re: Indexing Speed: Documents vs. Sentences
> Interesting. What hardware are you using for searching 1TB of data, and
> how fast is the response time?

Me too :) I'm interested in how many documents you indexed. Do you have lots and lots of documents, or fewer but bigger documents?

/victor
Indexing Speed: Documents vs. Sentences
Hi,

I am using Lucene to index a large number of web pages (a few 100GB) and the indexing speed is great. Lately I have been trying to index at the sentence level, not the document level. My problem is that the indexing speed has gone down dramatically, and I am wondering if there is any way for me to improve on that.

When indexing at the sentence level, the overall amount of data stays the same while the number of records increases substantially (since there are usually many sentences to one web page). It seems to me that the indexing speed (everything else being the same) depends largely on the number of Documents inserted into the index, and not so much on the size of the data within the documents (correct?).

I have played with the merge factor, using a RAMDirectory, etc., and I am quite comfortable with our overall configuration, so my guess is that that is not the issue (and I am QUITE happy with the indexing speed as long as I use complete pages and not sentences). Maybe there is a different way of attacking this?

My goal is to be able to execute a query and get the sentences that match the query in the most efficient way while maintaining good/great indexing speed. I would prefer not having to search the complete document for the sentence in question.

My current solution is to have one Lucene Document for each page (containing the URL and other information I require) that does NOT contain the text of the page. Then I have one Lucene Document for each sentence within that page, which contains the text of that particular sentence in addition to some identifying information that references the entry for the page itself.

Any and all suggestions are welcome. Thanks!

Jochen
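A rough sketch of the page/sentence layout described in the last paragraph (Python for illustration; the field names are invented for the example, and a real sentence splitter would be smarter than this regex):

```python
import re

def split_sentences(text):
    # Naive splitter: break after ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def make_records(url, page_id, text):
    # One page record WITHOUT the text, plus one record per sentence
    # that points back at its page (field names are illustrative).
    page = {"type": "page", "id": page_id, "url": url}
    sents = [
        {"type": "sentence", "page_id": page_id, "pos": i, "text": s}
        for i, s in enumerate(split_sentences(text))
    ]
    return page, sents
```

A sentence hit then carries `page_id`, so the page-level record can be looked up without the index ever storing the full page text.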
RE: Indexing Speed: Documents vs. Sentences
I'm confused about something - what's the point of creating a document for every sentence?
RE: Indexing Speed: Documents vs. Sentences
Hi!

In essence:

1) I don't care about the whole page.
2) I only care about the actual sentence that matches the query.
3) I want the matching to happen only within one sentence, not across sentence boundaries (even when I do a PhraseQuery with some slop). The query

   "i like the beach"~20

should not match

   "And we go to the restaurant and i really like it. the beach was wonderful as well."

4) I would much prefer not to parse the actual page to find the sentence that matches the query (though I obviously will, if I have to).

Does that answer your question?

Thanks!
Jochen
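Point 3 above can be made concrete with a toy matcher (plain Python, naive tokenization — nothing Lucene-specific): a sloppy all-terms-within-a-window match over the whole page accepts Jochen's example, while requiring the terms to fall inside a single sentence rejects it:

```python
import re

def sentences(text):
    # Naive splitter: break after ., ! or ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def sloppy_match(query, text, slop):
    # Crude stand-in for a sloppy phrase match: do all query terms
    # occur somewhere within a window of (num terms + slop) tokens?
    q = set(tokens(query))
    toks = tokens(text)
    win = len(q) + slop
    return any(q <= set(toks[i:i + win]) for i in range(max(1, len(toks))))

def sentence_match(query, text, slop):
    # Same check applied per sentence, so a match can never
    # cross a sentence boundary.
    return any(sloppy_match(query, s, slop) for s in sentences(text))
```

This ignores term order and exact proximity scoring, but it shows why one-document-per-sentence gives the boundary behavior for free: the unit of matching simply *is* the sentence.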
RE: Indexing Speed: Documents vs. Sentences
When you parse the page you can prevent sentence-boundary hits from matching your criteria.

-----Original Message-----
From: Jochen Frey [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 17, 2003 4:34 PM
To: 'Lucene Users List'
Subject: RE: Indexing Speed: Documents vs. Sentences

Right. However, even if I do that, my problem #3 remains unsolved: I do not wish to match phrases across sentence boundaries. Anyone have a neat solution (or pointers to one)?

Thanks again!
Jochen

-----Original Message-----
From: Dan Quaroni [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 17, 2003 1:29 PM
To: 'Lucene Users List'
Subject: RE: Indexing Speed: Documents vs. Sentences

Yeah. I'd suggest parsing the page, unfortunately. :)
RE: Indexing Speed: Documents vs. Sentences
Dan,

I will send you a separate e-mail directly to your address. In the meantime, I hope to get input from other people; maybe someone else knows how to solve my original problem.

Thanks!
Jochen