AW: Lucene and Samba

2002-09-26 Thread Christian Schrader
I am still struggling with sharing the index via Samba. The problem seems to come down to Lucene not reading the new index files, or at least not all of them. It reads the following files at startup: commit.lock (numopen=0) segments read=Yes write=Yes (numopen=1) _0.fnm read=Yes write=Yes (numopen

AW: Lucene and Samba

2002-09-25 Thread Christian Schrader
As long as the main server is the only one updating the data, everything should be fine, right? But I will take a look at the read-only index issue discussed earlier on this list. Chris -Original Message- From: Clemens Marschner [mailto:[EMAIL PROTECTED]] Sent: September 24, 20

Lucene and Samba

2002-09-24 Thread Christian Schrader
Hi everybody, we have been using Lucene for a while now and everything works fine. We recently started load balancing our Tomcat 3.3 over two servers using Apache and mod_jk, sharing the Lucene index via Samba. Now the problems started: we get ArrayIndexOutOfBoundsException on the slave (the one t

document boost factor

2002-06-06 Thread Christian Schrader
Is it possible to set a document boost factor in the current CVS? And if not, is anybody working on it? I am VERY interested and would gladly test performance issues :-) Christian > -Original Message- > From: Halácsy Péter [mailto:[EMAIL PROTECTED]] > Sent: 13 April 2002 18:03

JavaCC Tokenizer

2002-05-29 Thread Christian Schrader
I need to construct a Tokenizer that tokenizes at word/number boundaries, so that "IBM Deskstar IC35L060AVER07" would result in the following tokens: IBM Deskstar IC 35 L 060 AVER 07 Has anybody solved this with the StandardTokenizer? Christian -- To unsubscribe, e-mail:
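A plain-Java way to get the requested letter/digit boundary split, without touching the JavaCC grammar of StandardTokenizer, is to emit maximal runs of letters and maximal runs of digits via a regex. This is a standalone sketch, not a Lucene Tokenizer subclass; wiring it into an Analyzer is left out:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BoundaryTokenizerSketch {
    // A token is either a run of letters or a run of digits,
    // so a letter/digit boundary always starts a new token.
    private static final Pattern RUN = Pattern.compile("\\p{Alpha}+|\\p{Digit}+");

    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        Matcher m = RUN.matcher(text);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // "IBM Deskstar IC35L060AVER07" -> [IBM, Deskstar, IC, 35, L, 060, AVER, 07]
        System.out.println(tokenize("IBM Deskstar IC35L060AVER07"));
    }
}
```

The same splitting rule could be applied inside a custom TokenStream at index and query time so both sides agree on the boundaries.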

AW: WildcardQuery

2002-05-24 Thread Christian Schrader
I thought Lucene didn't support left wildcards like the following: > > > > *ucene > > > > - Original Message - > > From: "Christian Schrader" <[EMAIL PROTECTED]> > > To: "Lucene Users List" <[EMAIL PROTECTED]>

WildcardQuery

2002-05-06 Thread Christian Schrader
I am pretty happy with the results of WildcardQueries like "*ucen*", which matches lucene, but "*lucene*" doesn't match lucene. Is there a reason for this? And what would the patch be? It should be in WildcardTermEnum. I am wondering if somebody has already patched it? Thanks, Chris
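The behavior the poster expects can be shown with a minimal, self-contained wildcard matcher (this is a sketch of the intended semantics, not Lucene's actual WildcardTermEnum code): if '*' is allowed to match the empty string, then "*lucene*" should match "lucene" just as "*ucen*" does.

```java
public class WildcardSketch {
    // Minimal wildcard matcher: '*' matches any run of characters
    // (including the empty run), '?' matches exactly one character.
    static boolean matches(String pattern, String text) {
        return match(pattern, 0, text, 0);
    }

    private static boolean match(String p, int pi, String t, int ti) {
        if (pi == p.length()) return ti == t.length();
        char c = p.charAt(pi);
        if (c == '*') {
            // Try every possible span for '*', including the empty one.
            for (int k = ti; k <= t.length(); k++) {
                if (match(p, pi + 1, t, k)) return true;
            }
            return false;
        }
        if (ti == t.length()) return false;
        if (c == '?' || c == t.charAt(ti)) return match(p, pi + 1, t, ti + 1);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("*ucen*", "lucene"));   // true
        System.out.println(matches("*lucene*", "lucene")); // true: each '*' matches the empty string
    }
}
```

Under these semantics, the reported non-match of "*lucene*" against "lucene" would indeed be a bug in the term enumeration rather than intended behavior.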

Combining FuzzyQueries

2002-03-07 Thread Christian Schrader
I have the following problem. When I create a FuzzyQuery: FuzzyQuery fuzzy = new FuzzyQuery(new Term("categoryName", "test")); and add it to a new BooleanQuery: finalQuery = new BooleanQuery(); finalQuery.add(fuzzy, false, false); fuzzy.toString("contents") gives me categoryName:test~ which

AW: AW: Lexical Error

2001-12-07 Thread Christian Schrader
001 3:06 PM > To: Lucene Users List > Subject: Re: AW: Lexical Error > > > Christian Schrader wrote: > > >Is there a good reason, why the QueryParser should throw an > Error instead of > >an Exception? > > > Unfortunately this is the f

AW: Lexical Error

2001-12-07 Thread Christian Schrader
Is there a good reason why the QueryParser should throw an Error instead of an Exception? > -Original Message- > From: Christian Schrader [mailto:[EMAIL PROTECTED]] > Sent: December 7, 2001 12:57 PM > To: Lucene Users List > Subject: Lexical Error
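The practical problem behind this question is that JavaCC-generated lexers throw TokenMgrError, which extends java.lang.Error, so it sails past an ordinary catch (Exception) block. The sketch below uses a hypothetical stand-in class (FakeTokenMgrError is not a real Lucene class) to show why callers must catch Error or Throwable explicitly:

```java
public class ErrorVsExceptionSketch {
    // Hypothetical stand-in for JavaCC's TokenMgrError,
    // which extends java.lang.Error rather than Exception.
    static class FakeTokenMgrError extends Error {
        FakeTokenMgrError(String msg) { super(msg); }
    }

    static String parse(String query) {
        if (query.startsWith("$")) {
            throw new FakeTokenMgrError("Lexical error at line 1, column 1");
        }
        return "parsed: " + query;
    }

    static String safeParse(String query) {
        try {
            return parse(query);
        } catch (Exception e) {
            // Never reached for the throw above: an Error is not an Exception.
            return "caught Exception";
        } catch (Error e) {
            // Only an explicit Error (or Throwable) handler sees it.
            return "caught Error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(safeParse("test"));     // parsed: test
        System.out.println(safeParse("${test}"));  // caught Error: Lexical error at line 1, column 1
    }
}
```

This is why the question matters: until the parser wraps lexer failures in a checked exception, every caller needs the extra catch (Error) clause.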

Lexical Error

2001-12-07 Thread Christian Schrader
I just encountered an Error searching for a term like ${test}. org.apache.lucene.queryParser.TokenMgrError: Lexical error at line 1, column 1. Encountered: "$" (36), after : "" Is this error known? Chris
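One common workaround for lexer errors on user-supplied input is to escape every non-alphanumeric character with a backslash before handing the string to the QueryParser. The helper below is a hand-rolled sketch (it is not a Lucene API, and whether the QueryParser of that era accepts a \$ escape is untested here), but it illustrates the pre-processing idea:

```java
public class QueryEscapeSketch {
    // Backslash-escape every character that is neither a letter/digit
    // nor whitespace, so query syntax characters reach the parser
    // as literals. Deliberately conservative: it escapes more than
    // the parser's documented special-character set.
    static String escape(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (!Character.isLetterOrDigit(c) && !Character.isWhitespace(c)) {
                out.append('\\');
            }
            out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("${test}")); // \$\{test\}
    }
}
```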