Hi Lance,
Thank you so much. It worked with pre-emptive authentication.
On Thu, Jul 1, 2010 at 2:15 AM, Lance Norskog goks...@gmail.com wrote:
Other problems with this error have been solved by doing pre-emptive
authentication.
On Wed, Jun 30, 2010 at 4:26 AM, Rakhi Khatwani
Hi,
I think I would look at a hybrid approach, where you keep adding new synonyms
to a query-side synonym dictionary for immediate effect. Then every now and
then, or every Nth night, you move those synonyms over to the index-side
dictionary and trigger a full reindex.
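As a rough sketch of that hybrid setup (file names here are only illustrative, not from the original mail), the field type could carry two SynonymFilterFactory entries: a query-side file that grows continuously and takes effect on a core reload, and an index-side file that is only touched right before the scheduled full reindex:

<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- stable dictionary; changes here only take effect after a full reindex -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms-index.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- fast-moving dictionary; picked up on core reload, no reindex needed -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms-query.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>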
A nice side effect of
it is not that complicated to write your own GUI.
We are working on an integration with our intranet server...
-Original Message-
From: Peter Spam [mailto:ps...@mac.com]
Sent: Thursday, July 1, 2010 03:21
To: solr-user@lucene.apache.org
Subject: Re: Very basic questions:
Hi,
I know this topic has been treated many times in the (distant) past, but I
wonder whether there are newer, better practices/tendencies.
In my application, I'm dealing with documents in different languages. Each
document is monolingual; it has some fields containing free text and a set of
--- On Thu, 7/1/10, Ravi Kiran ravi.bhas...@gmail.com wrote:
From: Ravi Kiran ravi.bhas...@gmail.com
Subject: Dilemma - Very Frequent Synonym updates for Huge Index
To: solr-user@lucene.apache.org
Date: Thursday, July 1, 2010, 7:57 AM
Hello,
Hoping some Solr guru can help me out.
Have you had a look at www.twigkit.com ? Could be worth the bucks...
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Training in Europe - www.solrtraining.com
On 1. juli 2010, at 00.59, Peter Spam wrote:
Wow, thanks Lance - it's really fast now!
The last piece of
Hi,
I have chosen the same approach as you, indexing content into text_language
fields with custom analysis, and it works great. Solr does not have any
overhead with this even if there are hundreds of languages, due to the
schema-less nature of Lucene.
And if you know which language is being
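For reference, a minimal version of that per-language schema might look like the following; the field and type names are assumptions for illustration, not taken from the original setup:

<!-- one field type per language, each with its own analysis chain -->
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="English"/>
  </analyzer>
</fieldType>
<fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="French"/>
  </analyzer>
</fieldType>

<!-- each monolingual document only populates the field matching its language -->
<field name="text_en" type="text_en" indexed="true" stored="true"/>
<field name="text_fr" type="text_fr" indexed="true" stored="true"/>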
Solr trunk now has a built-in UI, and it also works
with Solr 1.4 (with some effort). Here's how to get it
working with Solr 1.4:
http://www.lucidimagination.com/blog/2009/11/04/solritas-solr-1-4s-hidden-gem/
In Solr trunk, all you have to do is navigate to
Hi,
I want to use SolrCloud. I downloaded the code from the trunk and
successfully executed the examples as shown in the wiki, but when I try the same
with multicore, I cannot access:
http://localhost:8983/solr/collection1/admin/zookeeper.jsp
It says page not found.
Following is my
Hi,
I had the impression that the StreamingUpdateSolrServer in SolrJ would
automatically use the /update/javabin UpdateRequestHandler. Is this not true?
Do we need to call
server.setRequestWriter(new BinaryRequestWriter()) for it to transmit content
with the binary protocol?
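For context, /update/javabin is just a request handler registered in solrconfig.xml; a sketch of the server-side piece (present in the stock Solr 1.4 example config, as far as I recall) looks like this, while the client side would be the setRequestWriter call quoted above:

<!-- solrconfig.xml: handler that accepts updates in the javabin (binary) format -->
<requestHandler name="/update/javabin" class="solr.BinaryUpdateRequestHandler"/>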
--
Jan Høydahl,
The streaming server won't use the RequestWriter you 'set'. It uses a custom XML
request writer embedded in the StreamingUpdateSolrServer.
I was also hoping it would use a BinaryRequestWriter, but after digging in,
it turned out not to.
On 1-7-2010 15:25, Jan Høydahl / Cominvent wrote:
Hi,
I had the
There's an issue open for this:
https://issues.apache.org/jira/browse/SOLR-1565
I'm not sure off the top of my head how much is involved in making it
happen though.
-Yonik
http://www.lucidimagination.com
On Thu, Jul 1, 2010 at 9:25 AM, Jan Høydahl / Cominvent
jan@cominvent.com wrote:
Hi,
Very nice indeed! That definitely needs to be shouted about in the
docs.
Any way to make it work with facet queries or can dismax requests not
do that? I tried adding a few facet.query parameters but it came back
with nothing in the facet list.
Mark
On 1 Jul 2010, at 12:36 pm, Erik
On Jul 1, 2010, at 10:33 AM, Mark Allan wrote:
Very nice indeed! That definitely needs to be shouted about in the
docs.
Why, thanks! And yeah, marketing isn't my strong point, but it is
indeed a way cool feature of Solr that deserves more attention than I
can give it.
Any way to
Hello,
I have one problem with querying Solr. I indexed a person with 2 fields:
* firstname - Hans
* lastname - Mustermann
and I have a copy field 'text' into which these fields are copied. The 'text' field is
used during querying.
Now, when I search:
han*
I do get Hans Mustermann in the query
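For reference, the copyField setup described above usually looks roughly like this in schema.xml (the field definitions and types are assumptions; only the names firstname, lastname and text come from the mail):

<field name="firstname" type="text" indexed="true" stored="true"/>
<field name="lastname" type="text" indexed="true" stored="true"/>
<!-- catch-all field used as the default search field -->
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="firstname" dest="text"/>
<copyField source="lastname" dest="text"/>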
Thank you very much for your help and the fast answer!
I always add a wildcard because I use Solr for autocomplete, so as you type
your query you can see intermediate results. I also found that adding a wildcard
returns better intermediate results. At least it was the easiest solution in some
cases. I'm not sure
I'm not sure whether it can be solved in the Solr configuration itself
(for example with a query analyzer for the text field, or with an index
analyzer).
Do you have StemFilterFactory in your field type? Remove it from query analyzer
for the text field. Re-start core + re-index.
I have standard configuration for the text field type:
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory"
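A sketch of what the suggested fix might look like: the same field type with the stemming filter left out of the analysis chain (attribute values below are assumptions based on the stock Solr 1.4 example schema); after changing it, restart the core and reindex:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- no SnowballPorterFilterFactory here, so "Hans" is no longer indexed as "han" -->
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- no stemming at query time either -->
  </analyzer>
</fieldType>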
Hi Jan,
I totally agree with what you said.
In a), you talked about boosting. I guess you meant to boost at the client
side, right?
I still have a question:
does Solr choose the appropriate analysis for the query? I.e., if a query is
compared to a document having English free text
Can someone explain what the createWeight methods should do?
And would someone mind explaining what the hashCode method is doing in this
use case?
public int hashCode() {
  int h = a.hashCode();
  h ^= (h << 13) | (h >>> 20);
  h += b.hashCode();
  h ^= (h << 23) | (h >>> 10);
  h +=
On Thu, Jul 1, 2010 at 1:02 PM, Blargy zman...@hotmail.com wrote:
Can someone explain what the createWeight methods should do?
Its primary function is to add Searcher context to anything that needs
it (such as weighting a query).
If you're not dealing with relevancy-type queries, value sources
In my application, I have documents like:
DOCUMENT 1:
part_num: ABC123 Spark Plug
application: 2008 Toyota Corolla
application: 2007 Honda Civic
DOCUMENT 2:
part_num: FGH234 Spark Plug
application: 2007 Toyota Corolla
application: 2008 Honda Civic
The application field is set up to be a
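For context, the fields above are presumably declared something like this (the types are an assumption); a positionIncrementGap on the underlying field type keeps phrase and bigram matches from spanning two different application values, which is why a phrase-style query avoids the 2008-Toyota/2008-Honda cross-match:

<field name="part_num" type="text" indexed="true" stored="true"/>
<!-- multiValued: one entry per vehicle the part fits -->
<field name="application" type="text" indexed="true" stored="true" multiValued="true"/>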
Hello Mr. Arslan,
Thank you for promptly responding. This solution is
for searching topics, which would provide an aggregation of all content
related to that topic (like articles/photos/videos etc.). So at any point in
time the user will be searching for one topic only, for example
Hello Mr. Høydahl,
I thought of doing it exactly as you have said;
I shall try it out and see where I land. However, I am still skeptical about that
approach from a performance point of view, as we are a round-the-clock news
organization and huge reindexing might affect the
I will try to remove SnowballPorterFilterFactory (is that
right?) and then restart Solr + reindex
Exactly. This will solve your problem.
However, remember that wildcard and prefix searches (*) are not analyzed. For
example, HAN* won't return anything.
Hello Mr. Arslan,
In your previous email you said "Additionally you
need to use the raw or field query parser, because query text is split at
white-spaces before it reaches KeywordTokenizer".
But from the analysis page I don't see the splitting happening on white space;
see my
I've got a version 2.3 index that appears to be valid - I can open it
with Luke 1.0.1, and CheckIndex reports no problem.
Just for grins, I crafted a matching schema, and tried to use the
index with Solr 1.4 (and also Solr-trunk).
In either case, I get this exception during startup:
On Jul 1, 2010, at 1:03pm, Ken Krugler wrote:
I've got a version 2.3 index that appears to be valid - I can open
it with Luke 1.0.1, and CheckIndex reports no problem.
[snip]
and Luke overview says:
This time as text:
Index version: 12984d2211c
Index format: -4 (Lucene 2.3)
Index
Cool, I must have configured something wrong then, because it wasn't
working for me.
Thanks!
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Wednesday, June 30, 2010 7:51 PM
To: solr-user@lucene.apache.org
Subject: Re: REST calls
Solr has 304 support with
Hi,
Check out the new eDisMax handler (src) and the new pf2 parameter. Also
available as patch SOLR-1553.
Another option to avoid the match for doc2 is to add application-specific logic in
your frontend which detects car brands and years and rewrites the query into a
phrase or a filter.
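If it helps, here is a rough sketch of wiring that up in solrconfig.xml on a Solr version that includes eDisMax (the handler name and boost values are made up for illustration):

<requestHandler name="/parts" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">part_num application</str>
    <!-- pf2 boosts docs where adjacent query words occur together in one value,
         e.g. "2008 Toyota", so exact year/make pairs rank above cross-matches -->
    <str name="pf2">application^10</str>
  </lst>
</requestHandler>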
--
Jan
Please provide us some details. What and how did you index? What
request did you make to Solr?
Erik
On Jul 1, 2010, at 5:56 PM, Moises Muratalla wrote:
I am getting incomplete search results with solr 1.4.0.
Any suggestions on how to fix or debug this?