Where is the error message? On the database side?
If it is repeatable, I would just put the two on separate machines and
capture the HTTP conversation with Wireshark. The problem might then become
apparent from visual inspection.
Regards,
Alex
On Jul 28, 2012 1:24 PM, "Xue-Feng Yang" wrote:
> Hi all,
>
>
this should be :
Hi!
I am using Solr as the main search system for my site. Currently, I am using
Google to turn a place name (such as a postcode or city) into a lat/long
coordinate. Then I supply this lat/long to Solr so it can perform a
spatial search.
I am really new to this, but I don't like my relia
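For the spatial part, a minimal sketch of what the query might look like, using Solr's geofilt filter query (the base URL, the field name `store`, and the `buildQuery` helper are assumptions for illustration, not the poster's actual setup):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SpatialQueryExample {
    // Build a Solr select URL with a geofilt filter query.
    // sfield names the LatLonType field, pt is "lat,lon", d is the radius in km.
    static String buildQuery(String baseUrl, double lat, double lon, double radiusKm) {
        String fq = "{!geofilt sfield=store pt=" + lat + "," + lon + " d=" + radiusKm + "}";
        return baseUrl + "/select?q=*:*&fq=" + URLEncoder.encode(fq, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // e.g. everything within 10 km of a geocoded point in London
        System.out.println(buildQuery("http://localhost:8983/solr", 51.5, -0.12, 10.0));
    }
}
```

The lat/long returned by the geocoder goes into `pt`, and `d` bounds the radius; the query itself can stay `*:*` or carry keywords.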
Hi,
I am trying to configure the suggester for Solr 3.6 as described at
http://wiki.apache.org/solr/Suggester, but the configuration does not work.
I cannot figure out what I am doing wrong...
After starting the Solr server I am getting an exception:
"org.apache.solr.common.SolrException: no fie
We have auto-commit on, and we basically send documents in a loop: after
validating each record, we send it to the search service, and keep doing that
in a loop. Mikhail / Lan, are you suggesting that instead of sending them
one at a time, we should collect them in an array and do a single commit at
the end? Is this better
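The suggestion, as I read it, is to buffer documents and flush them in batches rather than sending and committing one record at a time. A minimal sketch of that buffering pattern in plain Java (the `send` method is a stand-in for the real SolrJ `server.add(docs)` call, and the batch size is an arbitrary assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIndexer {
    static final int BATCH_SIZE = 100;
    static int batchesSent = 0;

    // Stand-in for the SolrJ call, e.g. server.add(docs).
    static void send(List<String> docs) {
        batchesSent++;
    }

    public static void main(String[] args) {
        List<String> buffer = new ArrayList<>();
        for (int i = 0; i < 250; i++) {         // 250 validated records
            buffer.add("doc-" + i);
            if (buffer.size() == BATCH_SIZE) {  // flush a full batch
                send(buffer);
                buffer.clear();
            }
        }
        if (!buffer.isEmpty()) {
            send(buffer);                       // flush the remainder
        }
        // a single commit would go here, instead of per-document commits
        System.out.println("batches sent: " + batchesSent);
    }
}
```

With 250 records and a batch size of 100, three add requests go out instead of 250, and only one commit at the end.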
Hi all,
When running DIH to index data from the database, I ran into the following
error. Does anyone know what the problem is?
Thanks,
Xufeng
///
SEVERE: GRIZZLY0040: Request header is too large.
java.nio.BufferOverflowException
at
com.sun.grizzly.tc
OK, I definitely need a response parser.
Thank you!
2012/7/28 Erik Hatcher
> And by parser, what is meant is a ResponseParser. There is an example in
> one of the Solr 4 test cases that goes like this:
>
> public void testGetRawFile() throws SolrServerException, IOException {
> SolrServer
Hi,
One approach is to use facet.prefix results for prefix-based suggestions.
For suggesting names from the middle of a document, you can index the name
field with a whitespace tokenizer and an edge n-gram filter, then search on
that field with the prefix keyword and fl=title only. Then concatenate both
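A sketch of the field type that approach describes, for schema.xml (the type name and gram sizes are assumptions; `solr.WhitespaceTokenizerFactory` and `solr.EdgeNGramFilterFactory` are the standard factories):

```xml
<fieldType name="text_suggest_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index edge n-grams so any word inside the name matches a typed prefix -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Querying that field with the typed prefix and fl=title then returns matching titles without scoring the full text.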
And by parser, what is meant is a ResponseParser. There is an example in one
of the Solr 4 test cases that goes like this:
public void testGetRawFile() throws SolrServerException, IOException {
SolrServer server = getSolrServer();
//assertQ(req("qt", "/admin/file")); TODO file bug that
Solrj supports only the XML writer and the binary writer. It is not possible
to get the response in JSON. If your requirement is to get the response in
JSON, then you have to write a parser.
Syed Abdul kather
Sent from Samsung S3
On Jul 28, 2012 1:29 AM, "Federico Valeri [via Lucene]" <
ml-node+s472066n3997784..
Hi,
thanks for this hint. Will check this out. Sounds promising.
Daniel
On Sat, Jul 28, 2012 at 3:18 AM, Chris Hostetter
wrote:
>
> : the list of IDs is constant for a longer time. I will take a look at
> : these join thematic.
> : Maybe another solution would be to really create a whole new
>
Lan,
I assume that a particular server can freeze on such a bulk load, but the
overall message does not seem entirely correct to me. Solr has a lot of
mechanisms to survive such cases.
Bulk indexing is absolutely right (if you submit a single request with a long
iterator of SolrInputDocs). This indexing thre