I don't know if the use of a DATALINK data type would be relevant in your
case.
Here are some references.
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/start/c0005450.htm
http://www.oracle.com/technology/sample_code/tech/java/codesnippet/jdbc/datalink/read
Upon re-reading the document, I see that
"All other data types are defined as sequences of bytes, so file formats
are byte-order independent."
I think I should be fine.
Sorry for posting before reading more carefully.
On 7/13/06, Beady Geraghty <[EMAIL PROTECTED]>
As I understand from earlier answers to my question, one can create
an index on machine A and use it (search and merge with other indices)
on machine B.
I was reading the file format today.
http://lucene.apache.org/java/docs/fileformats.html
The index uses Byte, UInt32, and UInt64 in most places
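Those fixed-width types are written high-order byte first, so a reader decodes them the same way on every platform; that is what makes the index files portable between the Windows and Linux machines discussed here. A minimal sketch of decoding a UInt32 the way the format defines it (this is an illustration, not Lucene's own code; the class and method names are made up):

```java
// Decode a UInt32 as the Lucene file format defines it: four bytes,
// high-order byte first, regardless of the machine's native byte order.
public class UInt32Demo {
    static long readUInt32(byte[] b, int off) {
        return ((long) (b[off] & 0xFF) << 24)
             | ((b[off + 1] & 0xFF) << 16)
             | ((b[off + 2] & 0xFF) << 8)
             |  (b[off + 3] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] bytes = {0x00, 0x01, 0x02, 0x03};
        // 0x00010203 decodes to 66051 on every platform
        System.out.println(readUInt32(bytes, 0));
    }
}
```

Because the decode is defined on the byte sequence rather than on native integers, an index written on one machine reads back identically on another.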
On Jun 27, 2006, at 2:02 PM, Daniel Naber wrote:
> On Dienstag 27 Juni 2006 17:23, Beady Geraghty wrote:
>
>> I tried to look at the segments file, thinking that it points to the
>> various other
>> files in the index directory,
>
> Use Inde
Hi,
I am trying to merge in index from a different node and probably different
platform.
I tried some simple cases by copying an index created on a Windows
machine and bringing it to a Linux server. I seem to be able to search
this index that was copied over. I would therefore assume that I c
I find it very useful. I hope you will too.
On 6/6/06, digby <[EMAIL PROTECTED]> wrote:
Does everyone recommend getting this book? I'm just starting out with
Lucene and I like to have a book beside me, as well as the web / this
mailing list, but the book looks quite old now and has a 1-2 month del
I finally got back to doing my project. HitCollector solved my problem.
Thank you for all the help.
On 5/14/06, Beady Geraghty <[EMAIL PROTECTED]> wrote:
Thank you for the links. I will go through them, and hopefully solve my
problem.
On 5/14/06, Chris Hostetter <[EMAIL
eliminating-scoring-for-the-sake-of-efficiency-t1603827.html#a4351614
http://www.nabble.com/Exact-date-search-doesn%27t-work-with-1.9.1--t1418643.html#a3833741
: Date: Sun, 14 May 2006 15:34:08 -0400
: From: Beady Geraghty <[EMAIL PROTECTED]>
: Reply-To: java-user@lucene.apache.org
ument? Are you storing all
of those names as you iterate?
Have you profiled your application to see exactly where the memory is
going? It is surely being eaten by your own code and not Lucene.
Erik
On May 14, 2006, at 12:07 PM, Beady Geraghty wrote:
> I have an out-of-memory er
I have an out-of-memory error when returning many hits.
I am still on Lucene 1.4.3
I have a simple term query. It returned 899810 documents.
I tried to retrieve the name of each document and nothing else,
and I ran out of memory.
Instead of getting the names all at once, I tried to query again a
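The resolution mentioned earlier in the thread ("HitCollector solved my problem") works because Lucene's HitCollector is a callback: the searcher hands it one document number at a time, so nothing forces all 899,810 hits into memory at once. Since the real Lucene classes aren't reproduced here, this sketch uses a hypothetical stand-in interface just to show the streaming pattern:

```java
// Stand-in for Lucene 1.4's HitCollector callback style. The searcher
// calls collect(doc, score) once per matching document; the caller keeps
// only what it needs (here, a running count), not every hit.
interface Collector {                       // hypothetical stand-in, not Lucene's class
    void collect(int doc, float score);
}

public class StreamingHitsDemo {
    // Simulated search: pretend every doc id in [0, maxDoc) matches.
    static void search(int maxDoc, Collector c) {
        for (int doc = 0; doc < maxDoc; doc++) {
            c.collect(doc, 1.0f);
        }
    }

    public static void main(String[] args) {
        final int[] count = {0};
        // Only a counter survives each callback; no list of 899,810 names is built.
        search(899810, (doc, score) -> count[0]++);
        System.out.println(count[0]); // 899810
    }
}
```

The same shape applies to collecting document names: process (or write out) each name inside the callback instead of accumulating them all first.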
Thanks.
On 1/6/06, Paul Elschot <[EMAIL PROTECTED]> wrote:
>
> On Friday 06 January 2006 18:04, Beady Geraghty wrote:
> > I would like to do queries that are negative. I mean a query with
> > only negative terms and phrases. For example, retrieve all
> > document
half don't.
It appears that everyone suggests that I take MatchAllDocsQuery
from the trunk. Is this the choice regardless of the number of
documents I have?
Thanks,
On 1/6/06, Beady Geraghty <[EMAIL PROTECTED]> wrote:
>
> Thank you all for your answer.
>
> On 1/6/0
> > give you the BitSet back that you could easily complement, but that
> > might be a bit overkill for what you need given the option above.
> >
> > Erik
> >
> >
> > On Jan 6, 2006, at 12:04 PM, Beady Geraghty wrote:
> >
> > > I wo
I would like to do queries that are negative. I mean a query with
only negative terms and phrases. For example, retrieve all
documents that do not contain the term "apple".
For now, I have a limited set of documents (say, 1) to index.
I can create a bitset that represents the search result of
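The complement idea suggested in the reply above ("give you the BitSet back that you could easily complement") can be shown with plain java.util.BitSet: set a bit per document matching the positive query, then flip the bits over [0, maxDoc) to get the documents that do *not* match. A minimal illustration (the document numbers here are made up):

```java
import java.util.BitSet;

public class NegativeQueryDemo {
    public static void main(String[] args) {
        int maxDoc = 10;                 // total documents in the index
        BitSet matches = new BitSet(maxDoc);
        matches.set(2);                  // pretend docs 2 and 5 contain "apple"
        matches.set(5);

        // Complement over the document range: everything NOT matching "apple".
        BitSet nonMatches = (BitSet) matches.clone();
        nonMatches.flip(0, maxDoc);

        System.out.println(nonMatches.cardinality()); // 8
    }
}
```

In a real index, deleted document slots would also come back set after the flip, so they would need to be masked out before treating the result as hits.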
'll have to sort it out with them :-(
Thanks again.
On 12/8/05, Erik Hatcher <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 8, 2005, at 10:15 AM, Beady Geraghty wrote:
> > Since someone suggested hyphen, the next request
> > is underscore. I can see more and more of these requ
That way, I can think about some of the potential issues and decide
whether I should just abandon using javaCC ?
Thanks,
On 12/7/05, Erik Hatcher <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 7, 2005, at 9:08 PM, Beady Geraghty wrote:
> > In general, do the rules in javaCC wor
Hi Erik,
Thank you so much for pointing out the error :-)
It should have been
| <...: (<LETTER>)+ "-" (<LETTER>)+ ("-" (<LETTER>)+)*>
I missed a pair of brackets for the 3rd LETTER (and a +)
I wonder how my indexer and query parser worked before but the
token stream did not. Anyhow, it seems to work with both
indexing/query parsin
I am back to doing something with Lucene after a short break from it.
I am trying to index/search hyphenated words,
and retrieve them from a token stream.
1. I modified the StandardTokenizer.jj file.
Essentially, I added the following to StandardTokenizer.jj
| <...: (<LETTER>)+ "-" (<LETTER>)+ ("-" <LETTER>)*>
2. I used Java
Thank you for your response.
That was my original goal.
On 9/21/05, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
>
> : Since I used the StandardAnalyzer when I originally created the index,
> : I therefore use the StandardTokenizer to tokenize the input stream.
> : Is there a better way to do what I
Thank you for the response.
I was trying to do something really simple - I want to extract the
context for terms and phrases from files that satisfy some (many) queries.
I *know* that file test.txt is a hit (because I queried the index, and
it tells me that test.txt satisfies the query). Then, I o
please excuse these
simple questions.
Thanks
On 9/21/05, Beady Geraghty <[EMAIL PROTECTED]> wrote:
>
> Could someone tell me how to use the StandardTokenizer properly?
> I thought that if the tokenizer.getNextToken() returns null, then it is
> the end of stream. I have a loop
Could someone tell me how to use the StandardTokenizer properly?
I thought that if the tokenizer.getNextToken() returns null, then it is
the end of the stream. I have a loop that tries to get the next token until
it is null, but the loop doesn't terminate.
I tried to terminate the loop by checking t.kind == 0
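A JavaCC-generated tokenizer such as StandardTokenizer never returns null from getNextToken(); at end of input it returns a token whose kind is the EOF constant (0), which is why testing t.kind == 0 is the condition that terminates the loop. A stand-in sketch of the loop shape (Token and the toy tokenizer below are simplified placeholders, not Lucene's classes):

```java
import java.util.Arrays;
import java.util.Iterator;

// Simplified stand-in for a JavaCC-generated token: kind 0 means EOF.
class Token {
    static final int EOF = 0;
    final int kind;
    final String image;
    Token(int kind, String image) { this.kind = kind; this.image = image; }
}

public class TokenLoopDemo {
    // Toy tokenizer over pre-split words: like JavaCC's getNextToken(),
    // it returns an EOF token at end of input, never null.
    static Token next(Iterator<String> words) {
        return words.hasNext() ? new Token(1, words.next()) : new Token(Token.EOF, "");
    }

    static int countTokens(Iterator<String> words) {
        int count = 0;
        // Terminate on kind == 0 (EOF), not on null.
        for (Token t = next(words); t.kind != Token.EOF; t = next(words)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countTokens(Arrays.asList("full-text", "search").iterator())); // 2
    }
}
```

A loop that waits for null never sees one, because the generated tokenizer keeps handing back the EOF token instead.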
ple are free to write Analyzers that don't close the
> Readers they get.
>
> I can't think of any clean way to change this without causing lots of
> backwards compatibility problems for lots of people.
>
>
> : -----Original Message-----
>
I am new to Lucene. I don't seem to be able to find the answer to my question
from the archive. I hope to get some help with my problem.
I have:
Document doc = new Document();
doc.add( Field.Text( "contents", rdr ) );
myIndexWriter.addDocument( doc );
After this point, it appears that rdr is closed
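The consequence of the Reader being consumed and closed during indexing (as the reply above about Analyzers closing their Readers explains) is that rdr cannot be read again afterwards; a fresh Reader is needed per addDocument call. A self-contained illustration with a plain StringReader, showing what reading from a closed Reader does:

```java
import java.io.IOException;
import java.io.StringReader;

public class ClosedReaderDemo {
    public static void main(String[] args) {
        StringReader rdr = new StringReader("some contents");
        rdr.close();                     // what indexing effectively does to the Reader
        try {
            rdr.read();
            System.out.println("readable");
        } catch (IOException e) {
            // StringReader throws "Stream closed" once closed, so the same
            // Reader instance cannot feed a second document.
            System.out.println("closed"); // prints "closed"
        }
    }
}
```

So to index the same content into two fields or two documents, construct a new Reader (or keep the content as a String) for each use.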