Malcolm Clark
Hi,
Sent you a private email with some code attached ;-)
Malcolm
yeohwm [EMAIL PROTECTED] wrote:
Hi,
Thanks for the help. Please let me know which jar files I
need and where I can find them.
Regards,
Wooi Meng
Is this the W3 Ent collection you are indexing?
MC
Hi,
I'm going to attempt to output several thousand documents from a 3+ million
document collection into a CSV file.
What is the most efficient way to retrieve all the text from the fields of
each document, one by one? Please help!
Thanks,
Malcolm
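For the CSV side of this, here is a minimal sketch of RFC 4180-style field quoting, assuming the field values have already been read back from the index as strings (the class and method names are hypothetical; the Lucene retrieval loop itself is not shown):

```java
import java.util.List;

public class CsvEscape {
    // Quote a field per RFC 4180: wrap in quotes if it contains a comma,
    // a quote, or a newline, and double any embedded quotes.
    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    // Join one document's field values into a single CSV row.
    static String toCsvRow(List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append(escape(fields.get(i)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: doc1,"title, with comma",plain
        System.out.println(toCsvRow(List.of("doc1", "title, with comma", "plain")));
    }
}
```

Writing one row per document as it is retrieved (rather than accumulating everything in memory) should keep the memory footprint flat even over a 3+ million document collection.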
Hi,
Would you please send me your parser too?
Thanks!
Malcolm
----- Original Message -----
From: Liao Xuefeng [EMAIL PROTECTED]
To: java-user@lucene.apache.org
Sent: Friday, June 23, 2006 12:54:29 AM
Subject: RE: HTML text extraction
hi, all,
I wrote my own html parser because it just meets
Hi everyone,
I am about to index the INEX collection (22 files, with roughly 3 files in each)
using Java Lucene. I am undecided on an approach to indexing and have left
my LIA book at uni :-/
Would you recommend:
1. indexing all files into one big index? (would this be inefficient to
Hi all,
I didn't know whether to add this to the thread asking about TREC indexing or
start a new one.
Anyway, has anyone attempted to index/search the Reuters collection which
consists of SGML?
Mine seems to run through the process okay but alas I'm left with nothing in
the index when I check
Okay converting to XML sounds like a great option.
Thanks,
Malcolm
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
URL for all the source code:
http://www.lucenebook.com/LuceneInAction.zip
Hi all,
I came across an old mailing-list item from 2003 exploring the possibilities of a
more probabilistic approach to using Lucene. Do the experts online know whether
anyone has achieved this since?
Thanks for any advice,
Malc
Hi all,
Are any of you planning on using Lucene in any way for the NLP in INEX this
year or the Enterprise track in TREC?
Thanks,
MC
Hi all,
I am planning on participating in the INEX and hopefully passively on a
couple of TREC tracks mainly using the Lucene API.
Is anyone else on this list planning on using Lucene during participation?
I am particularly interested in the SPAM, Blog and ADHOC tracks.
Malcolm Clark
Hi,
Maybe post some of the code which is giving you problems, and people can view
it and try to see what's wrong.
Cheers,
MC
Hi Oren,
In the grand scheme of things, and in comparison to some of the participants'
knowledge on here, I am fairly new to Java and Lucene.
I thought my way might be the most effective method of implementing the commit. I
am using many methods of searching/reading the index for
Hi, thanks for your reply.
So when I delete a document, does writer.close() actually commit the
deletion to the index, making it irreversible?
I have a facility which deletes but leaves the deletion 'undoable' until the
change is committed by closing the reader. I cannot access the doCommit
Okay. Thanks to you both.
Malcolm
class is this:

public abstract class commitDelete extends IndexReader {
    protected final void commitIndex() {
        try {
            super.commit();  // flush pending changes (e.g. deletions) to the index
        } catch (IOException e) {
            // swallowed here; at minimum this should be logged
        }
    }
}
Incidentally if I close the index does this commit anyway?
Please help as I'm stumped.
thanks in advance,
Malcolm Clark
Hi,
Could you send me the URL for HighFreqTerms.java in CVS?
Thanks,
Malcolm
cheers
Grant,
Thanks for your help with the problem I was experiencing. I broke it all down
and realised the problem was the location of the index writing (it was not in the
correct place within the SAX processing), and also some poor error
handling on my part.
kind thanks,
Malcolm
Grant,
Thanks for your tips. I have considered DOM processing, but it seemed to take a
hell of a long time to process all of the documents (12,125).
Hi again,
I am desperately asking for aid!
I have used the sandbox demo to parse the INEX collection. The problem is that
it points to a volume file which references 50 other XML articles, and Lucene
only treats this as one document. Is there any method I'm
overlooking that halts after each
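On the volume-file problem, one way to split a single file into several logical documents is to close off a document at each sub-article boundary in a SAX handler. A minimal stdlib sketch, where the `article` element name is an assumption (the real INEX tag may differ), and each collected string stands in for a separate Lucene Document:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class VolumeSplitter extends DefaultHandler {
    // Text of each article; in a real indexer each entry would become
    // its own Lucene Document instead of a String.
    final List<String> articles = new ArrayList<>();
    private StringBuilder current;

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        if ("article".equals(qName)) current = new StringBuilder();
    }

    @Override
    public void characters(char[] ch, int start, int len) {
        if (current != null) current.append(ch, start, len);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if ("article".equals(qName)) {
            articles.add(current.toString().trim());
            current = null;  // document boundary: close this one here
        }
    }

    public static List<String> split(String xml) throws Exception {
        VolumeSplitter h = new VolumeSplitter();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), h);
        return h.articles;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<volume><article>one</article><article>two</article></volume>";
        System.out.println(split(xml));  // prints: [one, two]
    }
}
```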
Hi all,
I am relatively new to (and scared by) Lucene, so please don't flame me. I have
abandoned Digester and am now just using other SAX tools.
I have used the sandbox code to parse an XML file with SAX, which then puts it
into a document in a Lucene index. The bit I'm stuck on is how is a
Hi, I have tried as suggested and isolated Digester from Lucene. Digester
doesn't trigger an element-matching pattern for each element, only the last one
of each repeating tag. My XML (trimmed a bit) looks like this:
<books>
  <journal>
    <title>IEEE Annals of the History of Computing</title>
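On the "only the last element" symptom: plain SAX does fire endElement for every repetition, so appending each value to a list (rather than assigning it to a single variable, which keeps only the last) retains them all. A stdlib sketch, assuming flat field elements like the sample above; each map entry stands in for a repeated doc.add(...) call:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class FieldCollector extends DefaultHandler {
    // element name -> all text values seen for that element; a repeated
    // tag contributes one entry per occurrence, not just the last.
    final Map<String, List<String>> fields = new HashMap<>();
    private StringBuilder buf;

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        buf = new StringBuilder();  // start collecting text for this element
    }

    @Override
    public void characters(char[] ch, int start, int len) {
        if (buf != null) buf.append(ch, start, len);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if (buf != null && buf.length() > 0) {
            fields.computeIfAbsent(qName, k -> new ArrayList<>())
                  .add(buf.toString().trim());
        }
        buf = null;
    }

    public static Map<String, List<String>> collect(String xml) throws Exception {
        FieldCollector h = new FieldCollector();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), h);
        return h.fields;
    }
}
```

Note this sketch only keeps text from leaf elements (an enclosing element's own text is discarded when a child starts), which matches the flat title/journal structure shown but would need extending for mixed content.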
Hi
I used Luke to check the content of the index and they are not there.
cheers,
MC
Hi,
Could somebody please help me with Lucene and Digester? I discovered this
problem while indexing the INEX collection of XML for my MSc project.
During the parsing of the XML files, all named Volume.xml, the parser will only
index the last XML element in any repetitive list. For
Hi all,
I'm using Lucene/Digester etc. for my MSc, and I'm quite new to these APIs. I'm
trying to obtain advice, but it's hard to say whether the problem is Lucene or
Digester.
Firstly:
I am trying to index the INEX collection, but when I index repetitive
elements, only the last one is indexed.