Hi all,
I know we can now update a field in Solr 4.0, but I am confused about
how to do it from the SolrJ client. I have found some examples, but they were
just simple cURL commands in shell scripts...
So I want to ask: has anyone been able to update fields via SolrJ? If yes,
where can I find an example?
Thanks a lot
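(For reference, a minimal SolrJ sketch of an atomic update - assuming Solr 4.0
with a uniqueKey and the update log enabled, SolrJ 4.x, and made-up field names;
throws declarations and error handling omitted:)

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  SolrServer server = new HttpSolrServer("http://localhost:8983/solr");

  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "book1");
  // The map key names the atomic operation: "set", "add" or "inc".
  Map<String, Object> setOp = new HashMap<String, Object>();
  setOp.put("set", "the new field value");
  doc.addField("my_field", setOp);

  server.add(doc);
  server.commit();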
Sure. Lucene is a kind of column-oriented DB: if the same text occurs in two
different fields, there is no relation between the terms, i.e. BRAND:RED
vs COLOR:RED. The only thing I can suggest is to build a separate index (in a
solr core) with docs like token:RED; fields:{COLOR, BRAND, ...} or giving yo
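(A rough SolrJ sketch of indexing such lookup docs - the core name
"term-lookup" and the field names here are made up:)

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  SolrServer lookup = new HttpSolrServer("http://localhost:8983/solr/term-lookup");
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("token", "RED");
  doc.addField("fields", "COLOR");  // "fields" is a multivalued field
  doc.addField("fields", "BRAND");
  lookup.add(doc);
  lookup.commit();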
: >> Is it possible to connect to SOLR over a socket file, as is possible
: >> with mysql? I've looked around and I get the feeling that I may be
: >> misunderstanding part of SOLR's architecture.
Why are you specifically interested in trying to talk to Solr over a socket
file?
https://people.apac
Ah, my bad. I was incorrect - it was not actually indexing.
@Jon - is there a possibility that your url_type is NULL, but not empty? Your
if check only tests whether it is empty, which is not the same as checking
whether it is null. If it is null, that's why you'd be having those errors -
I am getting a similar issue while using a TemplateTransformer. My fields
*always* have a value as well - it is getting indexed correctly.
Furthermore, the number of warnings I get seems arbitrary. I imported one
document (debug mode) and I got roughly 400 of those warning messages for that
single document.
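(To illustrate the null-vs-empty point in plain Java - a sketch only; the real
test lives in whatever transformer or script does the check:)

  import java.util.HashMap;
  import java.util.Map;

  Map<String, Object> row = new HashMap<String, Object>();  // stands in for a DIH row
  Object raw = row.get("url_type");
  String urlType = (raw == null) ? null : raw.toString();
  // An "is empty" test alone misses the null case; test both:
  if (urlType != null && !urlType.isEmpty()) {
      // safe to use urlType here
  }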
Hi Michael,
Thanks for the information. Unfortunately I'm having a hard time
finding any servlet containers that can serve over a unix domain
socket. Also it looks like EmbeddedSolr won't work since I am not
writing the application in Java (it's in Ruby on Rails and I'm using
it through Sunspot).
Not sure, honestly. I would not have thought of passing in an lb server.
I'll look at those docs tomorrow, though. What is the recommended approach
for initializing the cloud solr server in an environment where a web service is
being stood up and is expected to handle a large number of simultaneous
requests?
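(A common pattern - a sketch assuming SolrJ 4.x; the ZooKeeper hosts and
collection name are made up - is to create one CloudSolrServer per JVM at
startup and share it across request threads, since it is thread-safe:)

  import org.apache.solr.client.solrj.impl.CloudSolrServer;

  // Created once at service startup, shared by all request threads.
  CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
  server.setDefaultCollection("collection1");
  server.connect();  // optional: fail fast at startup if ZK is unreachable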
In the solr admin stats page I see multiple index searchers open. My
understanding is there will be two searchers open during replication and
only one otherwise. We have a multicore setup with 8 cores in the server
(each core with its own index). And we have master-slave replication setup.
Our repl
On Thu, Aug 9, 2012 at 5:39 PM, Bing Hua wrote:
> I'm a bit confused with the purpose of Transaction Logs (Update Logs) in
> Solr.
>
> My understanding is: an update request comes in, and the new item is first
> put into the RAM buffer as well as the T-Log. After a soft commit happens,
> the new item becomes searchable but is not hard committed to stable storage.
Hello,
I'm a bit confused with the purpose of Transaction Logs (Update Logs) in
Solr.
My understanding is: an update request comes in, and the new item is first put
into the RAM buffer as well as the T-Log. After a soft commit happens, the new
item becomes searchable but is not hard committed to stable storage. Conf
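(In SolrJ terms the two kinds of commit look roughly like this - a sketch,
assuming SolrJ 4.x:)

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;

  SolrServer server = new HttpSolrServer("http://localhost:8983/solr");

  // Soft commit: new items become searchable, but until the next hard
  // commit the T-Log is the only durable record of them.
  server.commit(true, true, true);  // waitFlush, waitSearcher, softCommit=true

  // Hard commit: flushes index segments to stable storage.
  server.commit();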
On Thu, Aug 9, 2012 at 4:24 PM, tech.vronk wrote:
> Is there any 3.6 equivalent for this, before I install and run 4.0?
> I can't seem to find a corresponding class (org.apache.lucene.index.Terms)
> in 3.6.
>
Unfortunately 3.6 does not carry this statistic; there is really no
clear delineation o
Any thoughts, guys?
Your insights will really help if you have already worked on a scenario like
this.
Thanks in advance
Nitin
On 09.08.2012 18:02, Robert Muir wrote:
On Thu, Aug 9, 2012 at 10:20 AM, tech.vronk wrote:
Hello,
I wonder how to figure out the total token count in a collection (per
index), i.e. the size of a corpus/collection measured in tokens.
You want to use this statistic, which tells you the number of tokens for an
indexed field:
On Aug 9, 2012, at 4:16 PM, solr-user [via Lucene] wrote:
I didn't know how the cache got triggered and the "needScore=false" now allows
some of my problem queries to finally work, and well within 2gb of mem.
needScore is an unfortunate hack in the Solr adapter to the Lucene spatial
module to w
We'll have to see if anybody else has a better idea.
-- Jack Krupansky
-Original Message-
From: caddmngr
Sent: Thursday, August 09, 2012 3:49 PM
To: solr-user@lucene.apache.org
Subject: Re: exclusions by query and many values
Thanks for the response, Jack...but as I mentioned, we are currently doing
pretty much what you suggest.
Thanks David. You are a life saver.
I didn't know how the cache got triggered and the "needScore=false" now
allows some of my problem queries to finally work, and well within 2gb of
mem.
I will look at your other suggestion when I can.
MANY thanks again.
Thanks for the response, Jack...but as I mentioned, we are currently doing
pretty much what you suggest. When customers log in, we pull their list of
exceptions and create the filter query to use on all queries within their
session.
This works well, but as I also mentioned, it's getting hard to manage.
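(A sketch of building such a per-session exclusion filter with SolrJ - the
field name "sku", the query string and the id values are made up:)

  import java.util.Arrays;
  import java.util.List;
  import org.apache.solr.client.solrj.SolrQuery;

  List<String> exceptions = Arrays.asList("SKU123", "SKU456");  // pulled at login
  StringBuilder fq = new StringBuilder("-sku:(");
  for (int i = 0; i < exceptions.size(); i++) {
      if (i > 0) fq.append(" OR ");
      fq.append(exceptions.get(i));
  }
  fq.append(")");

  SolrQuery q = new SolrQuery("camera");
  q.addFilterQuery(fq.toString());  // applied to every query in the session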
solr-user wrote
>
> Thanks David. No worries about the delay; am always happy and
> appreciative when someone responds.
>
> I don't understand what you mean by "All center points get cached into
> memory upon first use in a score" in question 2 about the Java OOM errors
> I am seeing.
>
The u
No, your client has to re-issue the query.
I have looked into doing this automatically but it would be complicated to
implement. The SpellCheckComponent would have to somehow get the entire component
stack (faceting, highlighting, etc.) to re-start from the beginning and return
the new request to t
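(For what it's worth, the client-side re-issue is small with SolrJ - a sketch,
assuming spellcheck with collation is enabled on the request handler:)

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.client.solrj.response.SpellCheckResponse;

  SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
  SolrQuery q = new SolrQuery("misspeled termz");
  q.set("spellcheck", true);
  q.set("spellcheck.collate", true);

  QueryResponse rsp = server.query(q);
  SpellCheckResponse spell = rsp.getSpellCheckResponse();
  if (rsp.getResults().getNumFound() == 0
      && spell != null && spell.getCollatedResult() != null) {
      q.setQuery(spell.getCollatedResult());
      rsp = server.query(q);  // second round trip with the collated query
  }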
Hello,
From the spell check component I'm able to get the collation query and its # of
hits. Is it possible to have Solr execute the collated query automatically
and return doc search results without resending it on the client side?
Thanks,
Bing
Hello,
Background is that I want to use both Suggest and SpellCheck features in a
single query to have alternatives returned at one time. Right now I can only
specify one of them using spellcheck.dictionary at query time.
(config snippet elided - spellcheck.dictionary is "default" in one case and
"suggest" in the other)
I agree. We chose embedded to minimize the maintenance cost of http solr
servers.
One more concern: even if I have only one node doing indexing, the other nodes
need to reopen the index reader periodically to catch up with new changes,
right? Is there a Solr request that does this?
Thanks,
Bing
Thanks David. No worries about the delay; I am always happy and appreciative
when someone responds.
I don't understand what you mean by "All center points get cached into
memory upon first use in a score" in question 2 about the Java OOM errors I
am seeing.
The Solr instance I have setup for testi
Thanks Kuli and Mikhail,
Using either the TermsComponent or the Suggester I could get some suggested
terms, but it's still confusing me how to get the respective field names. In
order to get that with the TermsComponent, I'll need to do a terms query against
every possible field. Similar thing when using SpellCheckComponent
On Wed, Aug 8, 2012 at 3:03 PM, Chris Hostetter wrote:
> I can't reproduce with the example configs -- it looks like you've
> tweaked the logging to use the XML file format; any way to get the
> stacktrace of the "Caused by" exception so we can see what is null and
> where?
>
Here is the caused by
Ahh thanks! I'll probably just generate them myself instead of forcing jQuery
to do it the old way.
Thanks again!
-Original Message-
From: subscription-bounces+s472066u482...@n3.nabble.com on behalf of Chris
Hostetter-3 [via Lucene]
Sent: Thu 8/9/2012 1:13 PM
To: Cirelli, Stephen J.
Sub
OK, this explanation is much clearer. Have you tried invoking the
TermsComponent (http://wiki.apache.org/solr/TermsComponent/) against all the
fields which you need?
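(A SolrJ sketch of such a terms query - assuming the default /terms handler;
the field names and prefix are made up:)

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;

  SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
  SolrQuery q = new SolrQuery();
  q.setRequestHandler("/terms");
  q.setTerms(true);
  q.addTermsField("brand");   // repeat for each candidate field
  q.addTermsField("color");
  q.setTermsPrefix("re");     // e.g. what the user has typed so far
  QueryResponse rsp = server.query(q);
  // rsp.getTermsResponse().getTermMap() then groups the terms per field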
On Wed, Aug 8, 2012 at 10:56 PM, Bing Hua wrote:
> I don't quite understand, but I'll explain the problem I had. The response would
> contain only fields and a list of field values that match the query.
: Well you get the same problem if you use jQuery's get(). It accepts an object
for the url params.
: I'll try to pass an array of k/v pairs and see what happens. Thanks!
From what I'm told, jQuery can definitely send multivalued request params,
although starting with 1.4 they changed the default
jQuery does allow a string to be passed, so I can build the string myself. I
might be able to rig something so ext does the same.
-Original Message-
From: subscription-bounces+s472066u482...@n3.nabble.com on behalf of Chris
Hostetter-3 [via Lucene]
Sent: Thu 8/9/2012 12:10 PM
To: Cirelli, Stephen J.
Well you get the same problem if you use jQuery's get(). It accepts an object
for the url params.
I'll try to pass an array of k/v pairs and see what happens. Thanks!
-Original Message-
From: subscription-bounces+s472066u482...@n3.nabble.com on behalf of Chris
Hostetter-3 [via Lucene]
Thanks Jack.
Our schema version is 1.3.
We are using the official Solr 3.4 release; actually, we use Maven to
download the Solr war and artifacts:
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr</artifactId>
  <version>3.4.0</version>
  <type>war</type>
</dependency>
Thanks. No immediate, obvious problem stands out, but I need to study it
more closely (which I am doing now).
For the "good" query I see idf(doc: ca=10 067=10), which looks exactly
correct.
But for the "bad" query I see idf(text: ca=16 067=9), which doesn't look
right. I can believe that th
: I'm using extjs 4 store.load and it accepts new parameters for a GET request
: as a json object.
: I cannot put more than one property with the same name on a json object.
	...
: How can I pass multiple facet.field values to solr without having to append
: a new facet.field param to the GET url.
On Thu, Aug 9, 2012 at 10:20 AM, tech.vronk wrote:
> Hello,
>
> I wonder how to figure out the total token count in a collection (per
> index), i.e. the size of a corpus/collection measured in tokens.
>
You want to use this statistic, which tells you the number of tokens for
an indexed field:
http://
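(In Lucene 4.x API terms the statistic can be read roughly like this - a
sketch; the index path and field name are made up, and getSumTotalTermFreq()
returns -1 if the codec does not record it:)

  import java.io.File;
  import org.apache.lucene.index.DirectoryReader;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.MultiFields;
  import org.apache.lucene.index.Terms;
  import org.apache.lucene.store.FSDirectory;

  IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("/path/to/index")));
  Terms terms = MultiFields.getTerms(reader, "text");
  // sumTotalTermFreq == total number of tokens indexed for this field
  long totalTokens = (terms == null) ? 0 : terms.getSumTotalTermFreq();
  reader.close();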
: My question is: Does it make sense to round these coordinates (a) while
: indexing and/or (b) while querying to optimize cache hits? Our maximum
: required resolution for geo queries is 1km and we can tolerate minor errors
: so I could round to two decimal points for most of our queries.
: fq=_
I'm using extjs 4 store.load and it accepts new parameters for a GET request
as a json object.
I cannot put more than one property with the same name on a json object.
How can I pass multiple facet.field values to solr without having to append
a new facet.field param to the GET url. Instead of
For a rough estimate, square the number of unique terms to get the total
number of tokens. Vocabulary usually grows as the square root of the corpus
size in words.
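For example (my numbers, just to illustrate the rule of thumb): an index
reporting 100,000 distinct terms would hold on the order of 100,000^2 = 10^10
tokens. Treat it as an order-of-magnitude estimate only.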
wunder
On Aug 9, 2012, at 7:20 AM, tech.vronk wrote:
> Hello,
>
> I wonder how to figure out the total token count in a collection (per
> index), i.e. the size of a corpus/collection measured in tokens.
Hi Tomas,
I really agree with your opinion, and your answer is detailed and useful
to me. As a newbie to Solr, I think I still have much to learn before I can
use it in a project. The book you mentioned is really useful, and to be honest
I have read some of it, but I'm not so clear about some of the u
Yeah, that is not really supported. One write lock and one IndexWriter per
index.
If you use embedded you want to share your CoreContainer, else don't use
embedded.
On Aug 6, 2012, at 1:56 PM, Bing Hua wrote:
> Hi,
>
> I'm trying to use two embedded solr servers pointing to the same solrhome /
> index.
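(For reference, a sketch of the shared-CoreContainer approach - assuming
Solr 3.x/4.0-style APIs; the path and core names are made up, and the checked
exceptions from initialize() are omitted:)

  import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
  import org.apache.solr.core.CoreContainer;

  System.setProperty("solr.solr.home", "/path/to/solrhome");
  CoreContainer.Initializer initializer = new CoreContainer.Initializer();
  CoreContainer container = initializer.initialize();
  // Both embedded servers share one CoreContainer, so each core keeps a
  // single IndexWriter and a single write lock.
  EmbeddedSolrServer serverA = new EmbeddedSolrServer(container, "core1");
  EmbeddedSolrServer serverB = new EmbeddedSolrServer(container, "core2");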
Makes sense. Thank you.
Hello,
I wonder how to figure out the total token count in a collection (per
index), i.e. the size of a corpus/collection measured in tokens.
The statistics in /admin tell the number of distinct terms,
and the frequency list per index reveals the number of documents with a
given term. So even i
On Thu, Aug 9, 2012 at 10:11 AM, Markus Jelsma
wrote:
> I've increased the connection time out on all 10 Tomcats from 1000ms to
> 5000ms. Indexing a larger amount of batches seems to run fine now. This,
> however, does not really answer the issue. What exactly is timing out here
> and why?
It
I've increased the connection time out on all 10 Tomcats from 1000ms to 5000ms.
Indexing a larger amount of batches seems to run fine now. This, however, does
not really answer the issue. What exactly is timing out here, and why? I assume
it's the forwarding of documents from the `indexing node` t
Thank you,
this is very interesting, I will try with solr cloud + autosoftcommit.
On 09/08/12 14:45, Tomás Fernández Löbbe wrote:
Master-Slave architectures don't get along very well with NRT. One minute
may be achieved if your index is small and you don't have many updates per
minute, but otherwise I would go with Solr Cloud and distributed indexing
Could you share the logs as well?
On Aug 8, 2011, at 1:31 AM, Shinichiro Abe wrote:
> Hi.
> I use EmbeddedSolrServer. The SolrJ indexing code (attached) worked well
> on Solr 1.4 but didn't work on Solr 3.3 (since 3.1). Do I need to do anything
> else?
>
> Exception:
> Exception in thread "main" or
Jack, Thanks for your reply.
We are using solr 3.4.
We use the standard lucene query parser.
I added debugQuery=true; this is the result when searching for ca067 and
getting 5 documents:
rawquerystring: ca067
querystring: ca067
parsedquery: PhraseQuery(text:"ca 067")
parsedquery_toString: text:"ca 067"
0.1108914 = (MATCH) weight(text:"ca 067" in 75), product of:
Master-Slave architectures don't get along very well with NRT. One minute
may be achieved if your index is small and you don't have many updates per
minute, but otherwise I would go with Solr Cloud and distributed
indexing (you can run DIH in one of the nodes and every document will be
indexed
Not sure what options DIH has in terms of controlling params - but at the least
you could add an update processor that adds a commitWithin param. commitWithin
is a soft commit on Solr 4.
You could also use autoSoftCommit and set it to n seconds.
Sent from my iPhone
On Aug 9, 2012, at 6:02 AM, "g
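(If the documents came in through SolrJ instead of DIH, commitWithin is just
an extra argument on add() - a sketch, assuming SolrJ 4.x and a made-up doc:)

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "doc1");
  // Ask Solr to make the doc visible within ~5s (a soft commit on Solr 4):
  server.add(doc, 5000);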
Hi, you can index content from any database that has a JDBC driver with the
Data Import Handler; see http://wiki.apache.org/solr/DataImportHandler
As for crawling your company's website: Solr doesn't crawl. It can be used
to search across the crawled content, but you'll have to crawl yourself or
with so
Hi,
I have the same problem. What should I do?
Thanks,
Bhavesh Jogi
Hi all,
It seems possible to deploy Solr 1.4 with SQL Server, but I am not sure about
the latest versions, 3.6 or even 4.
Of course, it would be perfect to use Oracle for the project I am going to
start, but I am not sure about the
difficulties in development; you know, there are many issues like developing se
I would like to understand if near realtime search is applicable to my
configuration, or if I should change the way I load data.
Currently my application uses data import handler to load new documents
every 15 minutes. This is acceptable, but it would be interesting to
bring online some chan
Thanks for the reply, Eric. But I am not very clear here, because we have just
one part of the app which adds to the index. And if the code is sending wrong
headers, then it should do so for all records? Some parts of the code are below;
we use the SolrJ API as I mentioned earlier:
.
SolrInputDocument d
On 08.08.2012 20:56, Bing Hua wrote:
I don't quite understand, but I'll explain the problem I had. The response would
contain only fields and a list of field values that match the query.
Essentially it's querying for field values rather than documents. The
underlying use case would be: when typing in a