We ran the org.apache.lucene.index.IndexUpgrader as part of upgrading from
6.1 to 7.2.0
After the upgrade, one of our collections threw a NullPointerException on a
query of *:*
We didn't observe errors in the logs. All of our other collections appear
to be fine.
Re-indexing the collection seems
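For reference, invoking the upgrader from the command line looks roughly like this (jar names and the index path below are placeholders for your install; IndexUpgrader rewrites the index in place, so back it up first):

```shell
# Sketch only: upgrade one core's index directory in place.
# Classpath jars and the data path are examples, not your actual layout.
java -cp lucene-core-7.2.0.jar:lucene-backward-codecs-7.2.0.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits \
  /var/solr/data/mycollection_shard1_replica1/data/index
```

Note that upgrading the on-disk format does not fix logical problems in the index; if a segment is damaged, re-indexing is still the safe path.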
>> If you need more shards use SPLITSHARD. If you need more replicas use
>> ADDREPLICA etc.
>>
>> Best,
>> Erick
>>
>> On Tue, Dec 12, 2017 at 2:37 AM, Amin Raeiszadeh
>> wrote:
>> > i have a lucene index that some fields of docs are indexed
i have a lucene index where some fields of the docs are indexed with custom
incremental gaps, and all fields are stored too (not only indexed).
i need to import these docs to solr cloud.
is there any way to automatically rebuild these docs for importing into solr
with custom gaps by something like
Another sanity check: with deletion, the only option would be to reindex those
documents. Could someone please let me know if I am missing anything or if I
am on track here. Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Lucene-index-corruption-and-recovery
While trying to upgrade a 100G index from Solr 4 to 5, CheckIndex (actually
the updater) indicated that the index was corrupted. Hence, I ran CheckIndex
to fix the index, which showed a broken-segment warning and then deleted those
documents. I then ran the index updater on the fixed index, which upgraded fine
On Tue, 2015-06-16 at 09:54 -0700, Shenghua(Daniel) Wan wrote:
> Hi, Toke,
> Did you try MapReduce with solr? I think it should be a good fit for your
> use case.
Thanks for the suggestion. Improved logistics, such as starting build of
a new shard while the previous shard is optimizing, would work
Hi, Toke,
Did you try MapReduce with solr? I think it should be a good fit for your
use case.
On Tue, Jun 16, 2015 at 5:02 AM, Toke Eskildsen
wrote:
> Shenghua(Daniel) Wan wrote:
> > Actually, I am currently interested in how to boost merging/optimizing
> > performance of single solr instance.
Shenghua(Daniel) Wan wrote:
> Actually, I am currently interested in how to boost merging/optimizing
> performance of single solr instance.
We have the same challenge (we build static 900GB shards one at a time and the
final optimization takes 8 hours with only 1 CPU core at 100%). I know that
I think your advice on future incremental updates is very useful. I will
keep an eye on that.
Actually, I am currently interested in how to boost merging/optimizing
performance of single solr instance.
Parallelism at MapReduce level does not help merging/optimizing much,
unless Solr/Lucene internally
Ah, OK. For very slowly changing indexes optimize can make sense.
Do note, though, that if you incrementally index after the full build, and
especially if you update documents, you're laying a trap for the future. Let's
say you optimize down to a single segment. The default TieredMergePolicy
trie
Hi, Erick,
First, thanks for sharing the ideas. I am giving some more context here
accordingly.
1. why optimize? I have done some experiments to compare the query response
time, and there is some difference. In addition, the searcher will be
customer-facing. I think any performance boost will be
The first question is why you're optimizing at all. It's not recommended
unless you can demonstrate that an optimized index is giving you enough
of a performance boost to be worth the effort.
And why are you using embedded solr server? That's kind of unusual
so I wonder if you've gone down a wrong
Hi,
Do you have any suggestions to improve the performance for merging and
optimizing index?
I have been using the embedded solr server to merge and optimize the index. I
am looking for the right parameters to tune. My use case has about 300
fields plus 250 copyfields, and moderate doc size (about 65K
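For what it's worth, the usual knobs live in the indexConfig section of solrconfig.xml. The fragment below is illustrative only, not a recommendation; values and element names vary somewhat across Solr versions:

```xml
<indexConfig>
  <!-- Larger RAM buffer: fewer, larger flushes before merging kicks in. -->
  <ramBufferSizeMB>512</ramBufferSizeMB>
  <!-- Higher mergeFactor: fewer merges during indexing, more segments at search time. -->
  <mergeFactor>10</mergeFactor>
  <!-- Allow concurrent merges to use background threads on a multi-core box. -->
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
</indexConfig>
```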
SpatialContext.GEO.readShape(shapeString));
> solr.add(solrInputDocument);
>
> or I'll have to stick to the WKT format.
>
> Any help will be highly appreciated.
>
> Thanks,
> Shahbaz
-
Author: http://www.packtpub.com/apa
Yeap, AFAIK you can only send the field in WKT format POINT (X Y), here
is my definition for lat lons using polygons in the map:
*JTS field definition:*
class="solr.SpatialRecursivePrefixTreeFieldType"
spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFact
Thanks Guido for the reply,
Just to clarify; this means that we cannot index JTS POINT in format
like Pt(x=55.76056,y=24.19167).
Is that so?
Thanks again.
On Mon, Oct 14, 2013 at 4:20 PM, Guido Medina wrote:
> WKT format should work, like explained in the wiki:
>
> http://en.wikipedia.org/wiki
WKT format should work, like explained in the wiki:
http://en.wikipedia.org/wiki/Well-known_text
Guido.
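In other words, the Pt(x=...,y=...) string has to be rewritten as WKT before the document is indexed. A hypothetical helper in plain Java (no Solr or Spatial4j dependency; the Pt(...) parsing and the axis order assumption are mine, so check which of x/y is longitude for your field type):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical converter from "Pt(x=...,y=...)" strings to the
// WKT "POINT (x y)" form that the spatial field type accepts.
public class PtToWkt {
    private static final Pattern PT =
        Pattern.compile("Pt\\(x=([-0-9.]+),y=([-0-9.]+)\\)");

    public static String toWkt(String pt) {
        Matcher m = PT.matcher(pt);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not a Pt(x=..,y=..) string: " + pt);
        }
        return "POINT (" + m.group(1) + " " + m.group(2) + ")";
    }

    public static void main(String[] args) {
        // The converted value is what would go into the SolrInputDocument field.
        System.out.println(toWkt("Pt(x=55.76056,y=24.19167)")); // POINT (55.76056 24.19167)
    }
}
```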
On 14/10/13 11:50, Shahbaz lodhi wrote:
Hi,
*Story:*
I am trying to index *JTS point* in following format; not successful though:
Pt(x=55.76056,y=24.19167)
It is the format that i get by ctx.readShape( shapeString ).
I don't get any error at reading shape or adding shape
to solrInputDocument but prompts "*error reading WKT*" on adding
Hi,
Thanks Chris. For every document that matches the query i want to be able
to compute the following set of features for a query-document pair:
LuceneScore (the vector-space score that lucene gives to each doc),
LinkScore (computed from nutch),
OpicScore (computed from
: used to call the lucene IndexSearcher . As the documents are collected in
: TopDocs in Lucene , before that is passed back to Nutch , i used to look
: into the top K matching documents , consult some external repository
: and further score the Top K documents and reorder them in the TopDocs array
Hi,
Timothy, thanks for pointing that out. But i have a specific requirement.
Any query passes through the search handler, and solr finally
directs it to the lucene IndexSearcher. As results are matched and collected
as TopDocs in lucene, i want to inspect the top K docs, reorder them by
org.apache.solr.search.SolrIndexSearcher
Hi,
Can anyone please point out where a solr search originates,
how it passes to the lucene index searcher, and back to solr? I
actually want to know which class in solr directly calls the lucene Index
Searcher.
Thanks.
Pom
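The short answer given elsewhere in the thread is org.apache.solr.search.SolrIndexSearcher. As a rough sketch of the request path (class and method names are approximate and version-dependent):

```
(client HTTP request)
  -> SolrDispatchFilter
  -> SearchHandler.handleRequestBody()
  -> QueryComponent.process()
  -> SolrIndexSearcher.search(...)   // Solr's wrapper around Lucene searching
       -> Lucene matches and collects TopDocs
  <- doc list assembled into the response and returned to the client
```

So a custom reordering of the top K hits is usually done in a component around QueryComponent rather than inside Lucene itself.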
Please, could someone tell me where I can find a version of Luke (Lucene
Index Toolbox <http://code.google.com/p/luke/>) compatible with the lucene/solr
4.0 index format? The version-4.0.0-lukeall ALPHA.jar, currently available
at http://code.google.com/p/luke/, does not work. I tried to re-buil
store the result.
Now I have a representation of the indexed data of the original field stored
in the final field content.
solr-user@lucene.apache.org
Subject: Access and copy lucene index data
Dear all,
Similar subjects about index data have already been posted, but I would like
your advice.
I use solr analysers to process fields, like synonyms, stopwords, ... and I
cannot see the result without using a special
es to perform this ...
Thanks in advance for your interest.
Bill_78
of a Postgres-database).
There are some articles which have more than one value, so I decided to take
numeric fields into account and used them in my application as:

var valueField = new NumericField(internalname, Field.Store.YES, true);
valueField.SetDoubleValue(value);
doc.Add(valueField);

I can open my Lucene index in Luke and see all those nice fields I
made, so there should be no problem with the index; plus: my application
se
try something like Katta. In my opinion this is a
fundamental design decision where solr REALLY dropped the ball for very
large scale indexing. At least I know now...
I use for my field types in my schema and how will this affect my
> use and/or limits of using solr's query and index configurations?
Solr places constraints upon what you can do with your lucene index
(e.g. You must conform to a schema). If your Lucene index cannot be
mapped to a schema, then it cannot be used within Solr.
Upayavira
On Tue, Jul 24, 2012, at 11:05 PM, spredd1208 wrote:
> Is there a best practice to cop
Is there a best practice to copy a lucene index which is built using the core
API of Lucene 3.6 into a
solr server (also 3.6) and then have it work?
I cannot find a mapping anywhere of lucene fields to solr fields and what
the corresponding schema.xml would look like.
This seems like something
Yep, this can be done. After all, Solr just uses Lucene
under the covers, a Solr index *is* a lucene index.
That said, you must take some care that the
definitions you specify in schema.xml are close enough
to how you indexed your Lucene documents to work. Indexing
something in Lucene as a
tible.
Once I did that, my lucene index just started working.
Sean
On Sun, Jan 29, 2012 at 4:38 AM, T Vinod Gupta wrote:
> hi,
> i am really new to solr/lucene and doing some experiments.. i have a
> question - if i create an index using lucene, can i use solr to query
> against that inde
hi,
i am really new to solr/lucene and doing some experiments.. i have a
question - if i create an index using lucene, can i use solr to query
against that index? if yes, how do i setup solr?
i already have a lucene index. i just copied over the index dir as /examples/solr/data. But that is not
Thanks Robert. I'll watch them all. Any others that are good to keep track of?
On Thu, Dec 8, 2011 at 1:25 PM, Robert Muir wrote:
> On Thu, Dec 8, 2011 at 12:55 PM, Jamie Johnson wrote:
>> Thanks Andrzej. I'll continue to follow the portable format JIRA
>> along with 3622, are there any other
On Thu, Dec 8, 2011 at 12:55 PM, Jamie Johnson wrote:
> Thanks Andrzej. I'll continue to follow the portable format JIRA
> along with 3622, are there any others that you're aware of that are
> blockers that would be useful to watch?
>
There is a lot to be done, particularly norms and deleted doc
Thanks Andrzej. I'll continue to follow the portable format JIRA
along with 3622, are there any others that you're aware of that are
blockers that would be useful to watch?
On Thu, Dec 8, 2011 at 10:49 AM, Andrzej Bialecki wrote:
> On 08/12/2011 14:50, Jamie Johnson wrote:
>>
>> Mark,
>>
>> Agre
Thanks Robert. I'll continue to watch the Jira and try not to bother
folks about this. Again greatly appreciate the insight.
On Thu, Dec 8, 2011 at 11:31 AM, Robert Muir wrote:
> On Thu, Dec 8, 2011 at 10:46 AM, Mark Miller wrote:
>>
>> On Dec 8, 2011, at 8:50 AM, Jamie Johnson wrote:
>>
>>> I
On Thu, Dec 8, 2011 at 10:46 AM, Mark Miller wrote:
>
> On Dec 8, 2011, at 8:50 AM, Jamie Johnson wrote:
>
>> Isn't the codec stuff merged with trunk now?
>
> Robert merged this recently AFAIK.
>
true but that issue only moved the majority of the rest of the index
(stored fields, term vectors, fi
On Dec 8, 2011, at 8:50 AM, Jamie Johnson wrote:
> Isn't the codec stuff merged with trunk now?
Robert merged this recently AFAIK.
- Mark Miller
lucidimagination.com
Mark,
Agreed that Replication wouldn't help, I was dreaming that there was
some intermediate format used in replication.
Ideally you are right, I could just reindex the data and go on with
life, but my case is not so simple. Currently we have some set of
processes which is run against the raw ar
Replication just copies the index, so I'm not sure how this would help offhand?
With SolrCloud this is a breeze - just fire up another replica for a shard and
the current index will replicate to it.
If you were willing to export the data to some portable format and then pull
it back in, why no
Yeah I was actually hoping that some how I could use the replication
handler to do this, fire up 1 shard, set another as a slave and see if
it would replicate the index to it but obviously I'm not sure that
would work either.
Something like this would be great too
https://issues.apache.org/jira/br
Unfortunately, I think the only silver bullet here, for pure Solr, is to
build a system that makes it possible to reindex somehow.
On Dec 7, 2011, at 1:38 PM, Erik Hatcher wrote:
>
> On Dec 7, 2011, at 13:20 , Shawn Heisey wrote:
>
>> On 12/6/2011 2:06 PM, Erik Hatcher wrote:
>>> I think t
hen the 4.0 becomes final there is no migration utility from
>>>> this pre 4.0 version to 4.0, right?
>>>>
>>>>
>>>> On Tue, Dec 6, 2011 at 4:36 PM, Erik Hatcher
>>>> wrote:
>>>>> Oh geez... no... I didn't mean 3.x
Oh geez... no... I didn't mean 3.x JARs... I meant the trunk/4.0 ones that are
there now.
Erik
On Dec 6, 2011, at 16:22 , Jamie Johnson wrote:
> So if I wanted to used lucene index 3.5 with SolrCloud I "should" be
> able to just move the 3.5 jars in and remove any
So if I wanted to use lucene index 3.5 with SolrCloud I "should" be
able to just move the 3.5 jars in and remove any of the snapshot jars
that are present when I build locally?
On Tue, Dec 6, 2011 at 4:06 PM, Erik Hatcher wrote:
> Jamie -
>
> I think the best thing that you
Jamie -
I think the best thing that you could do here would be to lock in a version of
Lucene (all the Lucene libraries) that you use with SolrCloud. Certainly not
out of the realm of possibilities of some upcoming SolrCloud capability that
requires some upgrading of Lucene though, but you may
Thanks, but I don't believe that will do it. From my understanding
that does not control the index version written, it's used to control
the behavior of some analyzers (taken from some googling). I'd love
if someone told me otherwise though.
On Tue, Dec 6, 2011 at 3:48 PM, Alireza Salimi wrote:
Hi, I'm not sure if it would help.
in solrconfig.xml:
<luceneMatchVersion>LUCENE_34</luceneMatchVersion>
On Tue, Dec 6, 2011 at 3:14 PM, Jamie Johnson wrote:
> Is there a way to specify the index version solr uses? We're
> currently using SolrCloud but with the index format changing I'd be
> preferable to be able to specify a p
Is there a way to specify the index version solr uses? We're
currently using SolrCloud, but with the index format changing it would be
preferable to be able to specify a particular index format to avoid
having to do a complete reindex. Is this possible?
On Mon, Nov 28, 2011 at 10:49 AM, Roberto Iannone
wrote:
> Hi Michael,
>
> thx for your help :)
You're welcome!
> 2011/11/28 Michael McCandless
>
>> Which version of Solr/Lucene were you using when you hit power loss?
>>
> I'm using Lucene 3.4.
Hmm, which OS/filesystem? Unexpected power loss
Hi Michael,
thx for your help :)

2011/11/28 Michael McCandless
> Which version of Solr/Lucene were you using when you hit power loss?

I'm using Lucene 3.4.

> There was a known bug that could allow power loss to cause corruption,
> but this was fixed in Lucene 3.4.0.
>
> Unfortunately, th
too much work but
nobody has created such a tool yet, that I know of.
Mike McCandless
http://blog.mikemccandless.com
On Mon, Nov 28, 2011 at 5:54 AM, Roberto Iannone
wrote:
> Hi all,
>
> after a power supply inperruption my lucene index (about 28 GB) looks like
> this:
>
>
Hi all,
after a power supply interruption my lucene index (about 28 GB) looks like
this:
18/11/2011 20:29  2.016.961.997  _3d.fdt
18/11/2011 20:29      1.816.004  _3d.fdx
18/11/2011 20:29             89  _3d.fnm
18/11/2011 20:30    197.323.436  _3d.frq
18/11/2011 20:30
Thank you Erik for the information you gave me.
I will test the version of the index in order to know when I need to refresh
the component.
Best Regards,
gquaire
-
Jouve ITS France
Thanks Eric for your reply.
To answer your question, I'm currently developing a kind of
TermsComponent which is able to merge the terms of several fields and has
the ability to reach a position in the list with random access. To do
that, I construct a merged Lucene index for these
fields. I need to rebuild this list each time the index has been modified.
If an optimize changes the Lucene index data, I have to detect it as I do
for classical updates. Can I use the version number of the index to detect
such modifications?
Best regards,
gquaire
incremented after an optimize operation. Can
you tell me if it is the case?
If it is, how can I detect that the data have changed in the index ?
Thanks for your help!
gquaire
-
Jouve ITS France
Thanks Erick.
Sadly, in my use-case that wouldn't work. I'll go back to storing them
at the story level and hitting a DB to get related stories, I think.
--I
On May 31, 2011, at 12:27 PM, Erick Erickson wrote:
> Hmmm, I may have mis-lead you. Re-reading my text it
> wasn't very well writ
Hmmm, I may have misled you. Re-reading my text, it
wasn't very well written.
TF/IDF calculations are, indeed, per-field. I was trying
to say that there was no difference between storing all
the data for an individual field as a single long string of text
in a single-valued field or as several
On 5/31/2011 12:16 PM, Ian Holsman wrote:
we have a collection of related stories. when a user searches for
something, we might not want to display the story that is
most-relevant (according to SOLR), but according to other home-grown
rules. by combining all the possibilities in one SolrDocument,
On May 31, 2011, at 12:11 PM, Erick Erickson wrote:
> Can you explain the use-case a bit more here? Especially the post-query
> processing and how you expect the multiple documents to help here.
>
we have a collection of related stories. when a user searches for something, we
might not want to
Can you explain the use-case a bit more here? Especially the post-query
processing and how you expect the multiple documents to help here.
But TF/IDF is calculated over all the values in the field. There's really no
difference between a multi-valued field and storing all the data in a
single field
Hi.
I want to store a list of documents (say each being 30-60k of text) in a
single SolrDocument (to speed up post-retrieval querying).
In order to do this, I need to know whether lucene calculates the TF/IDF score
over the entire field or treats each value in the list as a unique field.
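The per-field behavior described in the replies can be illustrated with a toy term-frequency count. This is plain Java with no Lucene involved, and purely conceptual: for TF purposes a multi-valued field acts like one bag of tokens, the same as concatenating the values (position increment gaps affect phrase matching, not TF):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Conceptual sketch: term frequency is counted over the whole field,
// not per value of a multi-valued field.
public class FieldTf {
    static long tf(List<String> tokens, String term) {
        return tokens.stream().filter(term::equals).count();
    }

    public static void main(String[] args) {
        List<String> value1 = Arrays.asList("solr", "index");
        List<String> value2 = Arrays.asList("solr", "query");

        // The field as Lucene scores it: all values together.
        List<String> wholeField = new ArrayList<>(value1);
        wholeField.addAll(value2);

        System.out.println(tf(wholeField, "solr")); // prints 2, not 1 per value
    }
}
```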
> To: solr-user@lucene.apache.org
> Sent: Wed, February 23, 2011 1:34:39 PM
> Subject: Getting lucene index differences
>
>
>
> When you are working with full-imports in Solr you have to send over the net
> all your index to your slaves, with the correspondent loss
Having found some code that searches a Lucene index, the only analyzers
referenced are Lucene.Net.Analysis.Standard.StandardAnalyzer.
How can I map this in Solr? The example schema doesn't seem to mention
this, and specifying 'text' or 'string' for every field doesn't
I don't really think this is possible/reasonable. There's nothing fixed
about a Lucene index; you could index a field in different documents with any
number of analysis chains. The tricky part here will be, as you've discovered,
finding a way to match the Solr schema "closely enough"
I have to use some Lucene indexes, and Solr looks like the perfect
solution.
However, all I know about the Lucene indexes are what Luke tells me, and
simply setting the schema to represent all fields as text does not seem
to be working -- though as this is my first Solr, I am not sure if that
Hey all - apologize for the quick cross post - just to let you know,
Andrzej is giving a free webinar this wed. His presentations are always
fantastic, so check it out:
Lucid Imagination Presents a free technical webinar: Mastering the
Lucene Index
Wednesday, August 11, 2010 11:00 AM PST / 2:00
examine the design of the
Lucene index and create a matching Solr schema in solr/conf/schema.xml.
On 9/10/09, busbus wrote:
> Thanks for your reply
to make solr read lucene index files.
There is a tag in Solrconfig.xml (its value is false).
Enabling it to true does not seem to be working.
What else needs to be done?
Should i change the config file or add a new tag?
Also, how can I check the compatibility of Lucene and solr?
Thanks in advance
On Sep 10, 2009, at 6:41 AM, busbus wrote:
Hello All,
I have a set of files indexed by Lucene. Now i want to use the
indexed files
in SOLR. The files .cfx and .cfs are not readable by Solr, as it
supports only
.fds and .fdx.
Solr defers to Lucene on reading the index. You just need to te
newFile.XML - loads the XML and updates the index.
Now i want to convert all the cfx files to XML so that i can use them in
SOLR.
Advice needed.
Any other suggestions are most welcome.
- Balaji
2009/8/18 Licinio Fernández Maurelo
> Nobody knows how i can get exactly this info: index format: -9 (UNKNOWN)
>
I think Luke may be using an older version of Lucene which is not able to
read the index created by Solr.
>
> Despite knowing that 2.9-dev 794238 -
> 2009-07-15 18:05:08 helps, i as