On Thu, Nov 10, 2011 at 12:39 PM, 刘浪 wrote:
>
> Hi,
> The total size of all files can reach PB or TB?
> If I use only one Solr core to index PB-level files, what will the
> search time be? Can it be less than 1 second?
> If I use multiple Solr cores to index PB-level f
> Hi all
We have very complex queries, including wildcards.
That causes memory overhead.
Sometimes memory fills up and the server doesn't respond.
What I wonder is: when query processing time on the server exceeds the
time limit, can I abort the query?
If so, how should I do it?
QueryCo
All,
Can anyone advise how to stop the "deleteAll" event during a full import?
I'm still unable to determine why repeated full imports seem to delete old
indexes. After investigation, the logs confirm this - see "REMOVING ALL
DOCUMENTS FROM INDEX" below.
..but the request I'm making is..
/solr/myfee
Hi all
We have very complex queries, including wildcards.
That causes memory overhead.
Sometimes memory fills up and the server doesn't respond.
What I wonder is: when query processing time on the server exceeds the time
limit, can I abort the query?
If so, how should I do it?
Thanks in advance
Jason
Hi,
One way to add new shards is to list them in the shards parameter in the
solrconfig.xml file. But this requires restarting the Solr server
every time you add a new shard.
I wanted to know if it is possible to dynamically add shards without having
to restart the Solr server. If yes, how?
Thanks
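For reference, a static shards list is usually set as a default on a search
handler in solrconfig.xml; a minimal sketch (host names are placeholders):

    <requestHandler name="/select" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="shards">host1:8983/solr,host2:8983/solr</str>
      </lst>
    </requestHandler>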
This is correct.
And there is no way I can think of that optimize could just start on its own -
somebody or something called it.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
>
>From: Walter U
A restart during an optimize should not cause index corruption. The optimize
only reads existing indexes, and the only writes are to indexes not yet in use.
If it does not finish, those half-written indexes are junk to be cleaned up
later.
wunder
On Nov 9, 2011, at 8:16 PM, Brendan Grainger wr
I think in the past I've tried that, and it has restarted, although I will have
to try it out (this time we were loath to stop it as we didn't want any index
corruption issues).
A related question is, why did the optimize start? I thought it had to be
explicitly started, but somehow it started
If you restart the server, the optimize should stop and not restart, right?
wunder
On Nov 9, 2011, at 7:43 PM, Otis Gospodnetic wrote:
> Don't think so, at least not gracefully. You can always do a partial optimize
> and do a few of them if you want to optimize in smaller steps.
>
> Otis
>
>
Don't think so, at least not gracefully. You can always do a partial optimize
and do a few of them if you want to optimize in smaller steps.
Otis
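A partial optimize here presumably means passing maxSegments to the optimize
command, so each call only merges part of the way down; a sketch:

    curl http://localhost:8983/solr/update -H 'Content-type:text/xml' \
         --data-binary '<optimize maxSegments="10"/>'

Repeating with a smaller maxSegments each time optimizes in steps.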
>
>From: Brendan Grainger
>To: solr-user@lucene.apache.org
>Sent: Wednesday, November 9, 2011 4:35 PM
>Subject: Anywa
Hello all.
I'm having an issue with matching a quoted phrase in Solr, and I'm not
certain what the problem is.
I have tried this on both Solr 1.3 (Our production system) and 3.3 (Our
development system).
The field is a text field, and has the following fieldType definition:
htt
Thanks Otis:
It looks like SolrJ is exactly what I was looking for. It is also nice to know
that the CSV implementation is fast as a fallback.
-----Original Message-----
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Wednesday, November 09, 2011 12:48 PM
To: solr-user@lucene.
Hi,
Does anyone know if an optimize can be stopped once started?
Thanks
From: Otis Gospodnetic
>To: "solr-user@lucene.apache.org"
>Sent: Wednesday, November 9, 2011 2:51 PM
>Subject: Re: Out of memory, not during import or updates of the index
>
>Hi,
>
>Some options:
>* Yes, on the slave/search side you can reduce your cache sizes and lower the
>memory footprint.
>*
The CodecUtil.writeHeader signature has changed from

    public static DataOutput writeHeader(DataOutput out, String codec, int version)

in Lucene 3.4 (which is the method not found) to

    public static void writeHeader(DataOutput out, String codec, int version)

in Lucene 4.0.
It means that while you'
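One way to confirm a jar mismatch like this is to print where the class was
actually loaded from at runtime; a quick sketch:

    import org.apache.lucene.util.CodecUtil;

    public class WhichJar {
        public static void main(String[] args) {
            // Shows which jar CodecUtil came from; if it is not the jar you
            // compiled against, that explains the NoSuchMethodError.
            System.out.println(
                CodecUtil.class.getProtectionDomain().getCodeSource().getLocation());
        }
    }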
Hi,
Some options:
* Yes, on the slave/search side you can reduce your cache sizes and lower the
memory footprint.
* You can also turn off norms in various fields if you don't need them and save
memory there.
* You can increase your Xmx
I don't know what version of Solr you have, but look throug
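For the cache-size option, the knobs live in solrconfig.xml; a sketch with
deliberately small values (tune for your own data and query mix):

    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512"/>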
Carey,
Some options:
* Just read your BDB and use SolrJ to index to Solr in batches and in parallel
* Dump your BDB into csv format and use Solr's ability to import csv files fast
* Use Hadoop MapReduce to index to Lucene or Solr in parallel
Yes, you can index using Lucene APIs directly, but you
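A minimal sketch of the first option, assuming SolrJ 3.x; the Record type and
bdbRecords() iterator are invented stand-ins for however you read your
Berkeley DB:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class BdbToSolr {
        // Stand-in for a Berkeley DB record; replace with your own accessor.
        static class Record { String key; String value; }

        static Iterable<Record> bdbRecords() {
            return new ArrayList<Record>(); // plug in your BDB cursor here
        }

        public static void main(String[] args) throws Exception {
            SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
            List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
            for (Record r : bdbRecords()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", r.key);
                doc.addField("body_t", r.value);
                batch.add(doc);
                if (batch.size() == 1000) { // send in batches, not one by one
                    server.add(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) server.add(batch);
            server.commit();
        }
    }

Running several of these in parallel over disjoint key ranges covers the "in
parallel" part.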
We occasionally get out-of-memory errors during the day. I know one reason for
this is data imports, but none are going on. I see in the wiki that document
adds have some quirks; we're not doing that. I don't know what to expect for
memory use, though. We had Solr running under Tomcat set to 2G RAM. I presume c
Hi:
I have a massive data repository (hundreds of millions of records) stored in
Berkeley DB with Java code to access it, and I need an efficient method to
import it into Solr for indexing. I cannot find a straightforward Java data
import API that I can load the data with.
There is no JDBC for
Oh, one more thing. I wasn't suggesting that you *remove*
WordDelimiterFilterFactory from the query chain, just
that you should be more selective about the options. Look
at the differences in the options in the example schema for
a place to start
Best
Erick
On Wed, Nov 9, 2011 at 12:33 PM, Er
Length normalization is an attempt to factor in how long the field is. The idea
is that a token in a field with 10,000 tokens should count less than a token
in a field of 10 tokens. But since the length of the field is encoded
in a byte, the distinction between 4 and 20 characters is pretty much l
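The quantization is visible with Lucene's SmallFloat, which is what the
default Similarity uses to encode norms; a minimal sketch, assuming Lucene
3.x on the classpath:

    import org.apache.lucene.util.SmallFloat;

    public class NormPrecision {
        public static void main(String[] args) {
            // The default lengthNorm is roughly 1/sqrt(numTokens); squeezing
            // it into one byte (3-bit mantissa) is coarse enough that nearby
            // field lengths can collapse to the same stored value.
            for (int len : new int[] {4, 5, 10, 20, 10000}) {
                float norm = (float) (1.0 / Math.sqrt(len));
                byte b = SmallFloat.floatToByte315(norm);
                System.out.println(len + " tokens -> norm " + norm
                        + " -> byte " + b + " -> decoded "
                        + SmallFloat.byte315ToFloat(b));
            }
        }
    }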
Hello!
I was looking for a way to implement distributed indexing in Solr.
From looking at https://issues.apache.org/jira/browse/SOLR-2358,
there was some work done to enable Solr to distribute documents to
shards without the need for 3rd-party software in front of Solr. What I
would like to know
Regarding <1>. Take a look at admin/analysis and see the tokenization just
to check.
Oh, and one more thing...
putting LowerCaseFilterFactory in front of WordDelimiterFilterFactory
kind of defeats the purpose of WordDelimiterFilterFactory. One of the
things WDDF does is split on case change and you're removing the case
changes before WDDF gets
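In schema.xml terms the point is the order of the analyzer chain: keep
WordDelimiterFilterFactory ahead of LowerCaseFilterFactory so it still sees
the case changes. A sketch (options illustrative, not a recommendation):

    <fieldType name="text_wdf" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- WDDF first, so case changes are still visible to it -->
        <filter class="solr.WordDelimiterFilterFactory"
                generateWordParts="1" generateNumberParts="1"
                splitOnCaseChange="1" preserveOriginal="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>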
Hi *,
I am using DataImportHandler to do imports on an INDEX_QUEUE table (UKEY |
ACTION)
using a custom Transformer which adds fields from various sources depending on
the UKEY.
Indexing works fine this way.
But now I want to delete the rows from INDEX_QUEUE which were successfully
updated.
-
You're right, James! That was the solution, and I can get suggestions now after
increasing spellcheck.count to 20.
I also made a change to the URL:
http://localhost:8080/solr/spell/?q=pr_name:sonadr&spellcheck=true&spellcheck.build=true
instead of:
http://localhost:8080/solr/select/?q=pr_name:sona
Dali,
You might want to try increasing spellcheck.count to something higher, maybe
10 or 20. The default spell checker pre-filters suggestions in such a way that
you often need to ask for more results than you actually want to get the right
ones. The other thing you might want to see is to g
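In URL terms that just means adding spellcheck.count, e.g. (sketch, using the
field from this thread):

    http://localhost:8080/solr/spell/?q=pr_name:sonadr&spellcheck=true&spellcheck.count=20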
Something like :
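(a hypothetical flattened customer_note document; field names and values
invented for illustration)

    <add>
      <doc>
        <field name="customer_id">42</field>
        <field name="note">Called about overdue invoice</field>
      </doc>
      <doc>
        <field name="customer_id">42</field>
        <field name="note">Requested address change</field>
      </doc>
    </add>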
David T. Webb wrote:
Can you point me to the docs on how to create the additional flat index of note? Thx for the quick reply. Dave.
Sent from my iPhone
On Nov 9, 2011, at 6:03 AM, "Andre Bois-Crettez" wrote:
I do not think this is possible directly out of the box
Hi,
I've a problem with the ExtractingRequestHandler of Solr. I want to
send a really big base64 encoded string to Solr with the
CommonsHttpSolrServer. The base64 encoded string is the content of the
indexed file. The CommonsHttpSolrServer sends the parameters as an HTTP
GET request. Because of that
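Assuming the goal is to get the content into the POST body rather than the
URL, a ContentStreamUpdateRequest against /update/extract is the usual SolrJ
route; a sketch (file name and id invented):

    import java.io.File;

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

    public class ExtractPost {
        public static void main(String[] args) throws Exception {
            SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
            ContentStreamUpdateRequest req =
                new ContentStreamUpdateRequest("/update/extract");
            // The file goes out as the request body, not as a URL parameter.
            req.addFile(new File("big-document.pdf"));
            req.setParam("literal.id", "doc1");
            server.request(req);
            server.commit();
        }
    }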
Hello,
I've just installed Solr 4.0, and I am getting an error when indexing.
GRAVE: java.lang.NoSuchMethodError:
org.apache.lucene.util.CodecUtil.writeHeader(Lorg/apache/lucene/store/DataOutput;Ljava/lang/String;I)Lorg/apache/lucene/store/DataOutput;
at org.apache.lucene.util.fst.FST.save(F
Can you point me to the docs on how to create the additional flat index of
note? Thx for the quick reply. Dave.
Sent from my iPhone
On Nov 9, 2011, at 6:03 AM, "Andre Bois-Crettez" wrote:
> I do not think this is possible directly out of the box in Solr.
>
> A quick workaround would be to f
How much memory do you actually allocate to the JVM?
http://wiki.apache.org/solr/SolrPerformanceFactors#Memory_allocated_to_the_Java_VM
You need to increase the -Xmx value, otherwise your large RAM buffers
won't fit in the Java heap.
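For the example Jetty setup that would be something like (heap size
illustrative):

    java -Xmx2g -jar start.jar

Under Tomcat, the equivalent goes into JAVA_OPTS.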
sivaprasad wrote:
Hi,
I am getting the following error durin
I do not think this is possible directly out of the box in Solr.
A quick workaround would be to fully denormalize the data, i.e. instead of
multivalued notes for a customer, have a completely flat index of
customer_note.
Or maybe a custom request handler plugin could actually check that
matches
Thanks for the details, but what do you mean by normalization? Can you
briefly describe the concepts behind it?
On 08.11.2011 at 23:38, Cam Bazz wrote:
How can I store a 2d point and index it to a field type that is
latlontype, if I am using solrj?
Simply use a String field. The format is "$latitude,$longitude".
-Kuli
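In SolrJ that would be something like this (sketch; "store" is the LatLonType
field from the example schema):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class AddLatLon {
        public static void main(String[] args) throws Exception {
            SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "loc1");
            // LatLonType fields take a plain "lat,lon" string
            doc.addField("store", "40.7143,-74.0060");
            server.add(doc);
            server.commit();
        }
    }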
Well, fixed for now... just ignore this thread.
-
Smart, but it doesn't work... If it worked, it would get it done...