1. No, if IndexReader is on I get the same error message from CheckIndex.
2. It doesn't do anything but give the error message I posted before, then quit. The full error trace is:
Opening index @ E:\...\zookeeper\solr\collection1\data\index
ERROR: could not read any segments file
Thank you.
I tried Luke with IndexReader disabled; however, it seems the index is completely broken, as it complains "ERROR: java.lang.Exception: there is no valid Lucene index in this directory."
Sounds like I am out of luck, is that so?
Hi
Thanks.
But I am already using CheckIndex, and the error is given by the CheckIndex utility itself: it could not even continue after reporting "could not read any segments file in directory".
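For reference, CheckIndex can also be driven programmatically; a minimal sketch, assuming Lucene 4.1-era APIs (fixIndex permanently drops any unreadable segments, so always work on a copy):

    import java.io.File;
    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class RepairAttempt {
      public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File(args[0]));
        CheckIndex checker = new CheckIndex(dir);
        CheckIndex.Status status = checker.checkIndex();
        if (!status.clean) {
          // drops unreadable segments; the documents in them are lost
          checker.fixIndex(status);
        }
        dir.close();
      }
    }

If no segments_N file is readable at all, as in the error above, there is nothing for fixIndex to salvage.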
My Lucene index - built with Solr using Lucene 4.1 - is corrupted. Upon trying to read the index using the following code, I get an
org.apache.solr.common.SolrException: No such core: collection1 exception:
>>
File configFile = new File(cacheFolder + File.separator + "solr.xml");
CoreContainer containe
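For comparison, the embedded-access pattern I understand works in the Solr 4.x era looks roughly like the sketch below; this is an assumption, not your exact code, and the core name must match a core entry in solr.xml or you get exactly this "No such core" exception:

    import java.io.File;
    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.core.CoreContainer;

    File home = new File(cacheFolder);              // cacheFolder: the Solr home directory
    File configFile = new File(home, "solr.xml");
    CoreContainer container = new CoreContainer();
    container.load(cacheFolder, configFile);        // Solr 4.x-era API
    EmbeddedSolrServer server = new EmbeddedSolrServer(container, "collection1");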
Hi
I need to store and retrieve some custom Java objects using Solr, and I have used ByteField and Java serialisation for this. Using the embedded Jetty server I can see the byte data, but when I use the SolrJ API to retrieve the data, it is not available. Details are below:
My schema:
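Whatever the exact schema, one workaround sketch, since SolrJ does not seem to round-trip binary ByteField values reliably: Base64-encode the serialised object into a plain stored string field instead. Here payload_b64 is a hypothetical field name:

    import java.io.*;
    import javax.xml.bind.DatatypeConverter;
    import org.apache.solr.common.SolrInputDocument;

    // indexing: serialise the object, then store it as a Base64 string
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    new ObjectOutputStream(bos).writeObject(myObject);   // myObject: your custom object
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("payload_b64", DatatypeConverter.printBase64Binary(bos.toByteArray()));

    // retrieval: decode and deserialise (result: a SolrDocument from the response)
    byte[] bytes = DatatypeConverter.parseBase64Binary(
        (String) result.getFieldValue("payload_b64"));
    Object restored = new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();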
Hi
Sorry, I couldn't do this directly... the way I do this is by subscribing to a cluster of computers in our organisation and sending the job with the required memory. It gets randomly allocated to a node (one single server in the cluster) once executed, and it is not possible to connect to that specific
no
Hi, the full stack trace is below.
-
SEVERE: Unable to create core: collection1
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:794)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:607)
at
Thanks again for your kind input!
I followed Tim's advice and tried to use MMapDirectory. Then I get an OutOfMemoryError on Solr startup (tried giving only 8G, then 4G to the JVM).
I guess this truly indicates that there isn't sufficient memory for such a huge index.
On another thread I posted days before, rega
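For what it's worth, my understanding is that MMapDirectory maps the index through the OS page cache rather than the Java heap, so a very large -Xmx can actually starve the OS of memory for the mapping. The directory implementation is selected in solrconfig.xml; a sketch, assuming Solr 4.x:

    <!-- solrconfig.xml: use memory-mapped index files -->
    <directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>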
Hi, thanks for your advice!
I have deliberately allocated 32G to the JVM, with the command "java -Xmx32000m -jar start.jar" etc. I am using our server, which I think has a total of 48G. However, it still crashes with that error when I specify any keywords in my query. The only query that worked, a
Hi
I am really frustrated by this problem.
I have built an index of 1.5 billion data records, with a size of about
170GB. It's been optimised and has 12 separate files in the index directory,
looking like below:
_2.fdt --- 58G
_2.fdx --- 80M
_2.fnm --- 900 bytes
_2.si --- 380 bytes
_2.lucene41_0.
Hi
I have built a 300GB index using Lucene 4.1, and now it is too big to query efficiently. I wonder if it is possible to split it into shards and then use a SolrCloud configuration?
I have looked around the forum but was unable to find any tips on this. Any
help please?
Many thanks!
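One possibility, if your version matches, is the MultiPassIndexSplitter shipped in the lucene-misc jar, which rewrites one index into N parts that could then serve as the data directories of separate shards. A hedged command-line sketch (jar names assumed to match your Lucene 4.1 build):

    java -cp lucene-core-4.1.0.jar:lucene-misc-4.1.0.jar \
        org.apache.lucene.index.MultiPassIndexSplitter \
        -out /path/to/split -num 4 /path/to/index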
Hi all
I am learning to use the MoreLikeThis handler, which seems very straightforward, but I ran into some problems when testing it and I wonder if you could help me.
In my schema I have
With this schema when I use the query parameter
mlt.fl=page_content
The returned XML results in the "moreLikeThis" se
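For concreteness, the kind of request meant here has roughly this shape (id:123 is a placeholder; mlt.mintf and mlt.mindf are lowered because the defaults often filter everything out on small test sets):

    http://localhost:8983/solr/select?q=id:123&mlt=true&mlt.fl=page_content&mlt.mintf=1&mlt.mindf=1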
In terms of doing ranged queries on multivalued fields, I think it should be OK, because I have another two fields that use sfloat and are multivalued, and ranged queries on those work fine.
Any hints are appreciated! Thanks!
zqzuk wrote:
>
> Hi all,
>
> in my schema I have two m
Hi all,
in my schema I have two multivalued fields as
and I issued a query as start_year:[400 TO *]; the result seems to be incorrect, because I got some records with start year = -3000 and also start year = -2147483647 (Integer.MIN_VALUE). Also when I combine start_year with end_year, it a
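A likely cause, as far as I can tell: the plain integer field type in older Solr indexes the raw string, so range queries compare lexicographically and negative values slip inside numeric ranges, while the sortable types (sint, sfloat) encode numbers so that string order matches numeric order. That would also explain why the sfloat fields behave. A schema sketch, reusing the field names from the post, assuming a Solr 1.x-era schema:

    <fieldType name="sint" class="solr.SortableIntField" sortMissingLast="true" omitNorms="true"/>
    <field name="start_year" type="sint" indexed="true" stored="true" multiValued="true"/>
    <field name="end_year"   type="sint" indexed="true" stored="true" multiValued="true"/>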
tone's configuration ?
> I would like to know how did you simulate several word search ... ??
> Did you create a lot of different workers with a lot of different word
> search ?
>
> Thanks,
>
>
> zqzuk wrote:
>>
>> Hi,
>>
>> try to firstly
wrote:
>
> Hi,
>
> I'm trying as well to stress test solr. I would love some advice to manage
> it properly.
> I'm using solr 1.3 and tomcat55.
> Thanks a lot,
>
>
> zqzuk wrote:
>>
>> Hi, I am doing a stress testing of my solr application to s
Hi all, in my application I need to index some seminar data. The basic assumption is that each seminar can be allocated to multiple time slots, each with a start time and an end time. For example, on 1st March it is allocated from 14:00 to 16:00; then on 1st April it is reallocated to 10:00 - 11:30.
The
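One modelling approach worth sketching (an assumption, not the only option): parallel multivalued start/end fields lose the pairing between values, so index one Solr document per (seminar, slot) pair instead. Ids and dates below are hypothetical:

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "seminar42_slot1");               // one doc per time slot
    doc.addField("seminar_id", "seminar42");             // groups slots of one seminar
    doc.addField("start_time", "2008-03-01T14:00:00Z");  // Solr date format
    doc.addField("end_time",   "2008-03-01T16:00:00Z");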
Hi, Solr has reserved some special chars for building its queries, such as +, *, :, and so on, so any query must escape these chars or exceptions will occur. I wonder where I can find a complete list of the chars I need to escape in a query, and what the encoding/decoding method is (URL?)
In
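For reference, the Lucene query syntax documentation lists the special characters as + - && || ! ( ) { } [ ] ^ " ~ * ? : \ (with / added in later versions), and newer SolrJ releases ship a helper that backslash-escapes them all; a sketch:

    import org.apache.solr.client.solrj.util.ClientUtils;

    String raw = "AT&T: 3+1";
    String escaped = ClientUtils.escapeQueryChars(raw);
    // URL-encoding is a separate, second step when sending over HTTP,
    // e.g. URLEncoder.encode(queryString, "UTF-8")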
quest will be
served first, and in the worst case, the last request may have to wait for a
long time until all preceding requests have been answered?
Thanks
zqzuk wrote:
>
> Hi, I am doing a stress testing of my solr application to see how many
> concurrent requests it can handle a
Hi, I'd like to do a stress test of my Solr application to see how many concurrent requests it can handle and how long it takes. But I'm not sure if I have done it the proper way (likely not)... responses seem to be very slow.
My configuration:
1 Solr instance, using the default settings distrib
Thanks for the quick advice!
pbinkley wrote:
>
> You should encode those three characters, and it doesn't hurt to encode
> the ampersand and double-quote characters too:
> http://en.wikipedia.org/wiki/XML#Entity_references
>
> Peter
>
Hi, I am using the SimplePostTool to post files to Solr. I have encountered some problems with the content of XML files. I noticed that if my XML file has fields whose values contain the character "&", "<" or ">", the post fails and I get the exception:
"javax.xml.stream.XMLStreamException: Pars
Hi, is it possible to have "append"-like updates, where if two records with the same id are posted to Solr, the contents of the two are merged into a single record with that id? I am asking because my program works in a multi-threaded manner, where several threads produce several parts of a final
re
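As far as I know, Solr simply overwrites a document posted with an existing id, so one client-side workaround is read-merge-rewrite. A sketch, assuming SolrJ and a hypothetical multivalued field named part:

    SolrQuery q = new SolrQuery("id:" + id);
    SolrDocumentList hits = server.query(q).getResults();
    SolrInputDocument merged = new SolrInputDocument();
    merged.addField("id", id);
    if (!hits.isEmpty() && hits.get(0).getFieldValues("part") != null) {
      for (Object v : hits.get(0).getFieldValues("part")) {
        merged.addField("part", v);               // keep the existing values
      }
    }
    merged.addField("part", newPart);             // append the new part
    server.add(merged);                           // replaces the old document
    server.commit();

Note this is racy under concurrent writers, so the threads would need to coordinate on the client side.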
Hi, I am using the post.jar tool to post files to Solr. I'd like to post everything in a folder, e.g., "myfolder". I typed the command:
java -jar post.jar c:/myfolder/*.xml
This works perfectly when I test on a sample of 100k XML files. But when I work on the real dataset, there are over 1m file
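A workaround sketch for the too-many-files case: walk the folder in code and post the files yourself via SolrJ, rather than expanding a million names onto one command line. This assumes a SolrServer instance named server; the addFile signature varies across SolrJ versions:

    import java.io.File;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

    File[] files = new File("c:/myfolder").listFiles();
    for (File f : files) {
      if (!f.getName().endsWith(".xml")) continue;
      ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update");
      req.addFile(f);            // some SolrJ versions take (File, contentType)
      server.request(req);       // server: a SolrServer pointed at your core
    }
    server.commit();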
Hi, I have been seeing tutorials and messages discussing SolrJ, the magic client package that eases the task of building Solr-powered applications... but I have been searching around without success; could you please give me some directions?
Many thanks!
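For anyone else hunting through the archive: SolrJ ships in the dist/ directory of the Solr download, and minimal usage looks roughly like the sketch below (the client class of that era is CommonsHttpSolrServer; later releases renamed it HttpSolrServer):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    QueryResponse rsp = server.query(new SolrQuery("title:solr"));
    System.out.println(rsp.getResults().getNumFound() + " hits");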
Thanks for your tips Chris, I really appreciate it!
hossman wrote:
>
>
> : Hi, I have played with the solr example web app, it works well. I wonder
> how
> : do I do the same searching, or faceted searching without relying on the
> web
> : application, i.e., sending request by urls etc. In other
and the Wiki
>
> As for configuration of Analyzers, have a look at the schema.xml file
> for defining your fields.
>
>
> On Nov 18, 2007, at 11:03 AM, zqzuk wrote:
>
>>
>> Hi, I understand that in solr we index documents by issuing a
>> command to post
>
Hi, I have played with the Solr example web app, and it works well. I wonder how I can do the same searching, or faceted searching, without relying on the web application, i.e., by sending requests via URLs etc. In other words, essentially how does the search and faceting work? Could you please point me to
s
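The short answer, as I understand it: the example web app is just issuing HTTP GETs against /solr/select, so any client can do the same. For example, a faceted query against the example schema's cat field:

    http://localhost:8983/solr/select?q=video&facet=true&facet.field=cat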