hello,
I created an index with 1.5M docs. When I post a query without facets it
returns in a moment.
When I post a query with one facet it takes 14s.
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">14263</int>
  <lst name="params">
    <str name="facet">true</str>
    <str name="indent">on</str>
    <str
How many terms are in the wasCreatedBy_fct field? How is that field
and its type configured?
Solr 1.3? Or trunk? Trunk contains massive faceting speed
improvements.
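If you do move to a 1.4/trunk build, I believe (untested here) the new code can be selected per request with the facet.method parameter, which does not exist in 1.3:

```
http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=wasCreatedBy_fct&facet.method=fc
```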
Erik
On Mar 17, 2009, at 4:21 AM, pcurila wrote:
hello,
I created index with 1.5m docs. When I am post
Peter,
If possible, try running a 1.4 snapshot of Solr; the faceting
improvements are quite remarkable.
However, if you can't run unreleased code, it might be an idea to try
reducing the number of unique terms (try indexing surnames only?).
Toby.
On 17 Mar 2009, at 10:01, pcurila wrote:
I am using 1.3
How many terms are in the wasCreatedBy_fct field? How is that field
and its type configured?
The field contains author names, and there are lots of them.
Here is the type configuration:
<fieldType name="facet" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer
Hi,
I am implementing lemmatisation in Solr, meaning that if a user searches for
Mouse, it should display results for both Mouse and Mice. I understand
this is a kind of context-aware search. I thought of using synonyms for this, but
then synonyms.txt will have so many records, and this will keep on
Hi,
I am searching with query strings that contain special characters like
è. For example, if I search for tèst it should return all results
that contain tèst, test, etc. There are other special characters as well.
I have updated the server.xml file of my Tomcat server and included
Hi,
I have a query like this:
content:the AND user_id:5
which means: return all docs of user id 5 which have the word 'the' in
content. Since 'the' is a stop word, this query executes as just user_id:5
in spite of the AND clause, whereas the expected result here is that since there
is no result for
Victor,
I'd recommend looking at the tutorial at http://lucene.apache.org/solr/tutorial.html
and using the list for more specific questions. Also, there's a list
of companies (as well as mine!) that do Solr support at http://wiki.apache.org/solr/Support
that eTrade can contract with to
Have you looked for any open source lemmatizers? I didn't find any in
a quick search, but there probably are some out there.
Also, is there a particular reason you are after lemmatization instead
of stemming? Maybe a light stemmer plus synonyms might suffice?
On Mar 17, 2009, at 6:02 AM,
You will need to create a field that handles the accents in order to
do this. Start by looking at the ISOLatin1AccentFilter.
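A rough sketch of a field type using that filter (the field type name and tokenizer choice here are my guesses, untested):

```xml
<fieldType name="textAccents" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- folds accented characters, e.g. è -> e, at both index and query time -->
    <filter class="solr.ISOLatin1AccentFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Remember to reindex after changing the analyzer.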
-Grant
On Mar 17, 2009, at 7:31 AM, dabboo wrote:
Hi,
I am searching with any query string, which contains special
characters like
è in it. for e.g. If I search
Stemming and synonyms are working fine in the application, but they are
working individually. I guess I will need to add the values in synonyms.txt
to achieve it. Am I right?
Actually, it's a project requirement to implement lemmatisation. I also
looked for lemmatisation but couldn't get
This is the entry in schema.xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100"
  omitNorms="true">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- tokenizer class="solr.HTMLStripWhitespaceTokenizerFactory" / -->
    <!-- in this
Well, by definition, using an analyzer that removes stopwords
*should* do this at query time. This assumes that you used
an analyzer that removed stopwords at index and query time.
The stopwords are not in the index.
You can get the behavior you expect by using an analyzer at
query time that does
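Something like this sketch (field type name made up, untested) keeps the StopFilter at index time only, so the query-time token for 'the' survives and matches nothing:

```xml
<fieldType name="textQueryStops" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- stopwords removed from the index as before -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- no StopFilter: content:the stays in the query and matches no docs -->
  </analyzer>
</fieldType>
```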
Did you reindex after you incorporated the ISOLatin... filter?
On Tue, Mar 17, 2009 at 8:40 AM, dabboo ag...@sapient.com wrote:
This is the entry in schema.xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100"
  omitNorms="true">
  <analyzer type="index">
    <tokenizer
I have the same question in mind. How can I configure the same standard request
handler to handle the spell check for a given query?
I mean, instead of calling
http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=globl for
spell checking, the following query request
should take care of
How can I configure the same standard request handler to handle the spell check
for a given query? I mean, instead of calling
http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=elepents for
spell checking, the following query request
should take care of both querying and spell
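Untested, but I believe hooking the spellcheck component into the standard handler in solrconfig.xml looks roughly like this (assuming a SpellCheckComponent is already registered under the name "spellcheck"):

```xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <!-- spellcheck runs on every request without a separate spellcheck.q call -->
    <str name="spellcheck">true</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

Then a plain q=... request should return both the results and the suggestions.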
Hi all,
I'd like to achieve the following:
When searching for e.g. two words, one of them spelt correctly and
the other one misspelt, I'd like to receive results for the correct
word but would still like to get spelling suggestions for the wrong
word.
Currently when I search for
On 17.03.2009, at 14:39, Shyamsunder Reddy wrote:
I have the same question in mind. How can I configure the same
standard request handler to handle the spell check for given query?
I mean instead of calling http://localhost:8983/solr/spellCheckCompRH?q=*:*&spellcheck.q=globl
for spelling
Hello all,
I have a table TEST in an Oracle DB with the following columns: URI
(varchar), CONTENT (varchar), CREATION_TIME (date).
The primary key both in the DB and Solr is URI.
Here is my data-config.xml:
<dataConfig>
  <dataSource
    driver="oracle.jdbc.driver.OracleDriver"
Hi
I want to commit without optimizing.
Right now I have this: start
commit(optimize=true,waitFlush=false,waitSearcher=true)
but I don't want to optimize; otherwise my replication will transfer
the full index folder every time.
Thanks a lot guys for your help,
ryantxu wrote:
yes. optimize also
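If it helps, the plain commit message posted to the update handler (no optimize, so segments are not merged into one big file that replication would have to copy) would be something like:

```xml
<!-- POST this to http://localhost:8983/solr/update instead of <optimize/> -->
<commit waitFlush="false" waitSearcher="true"/>
```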
I think if you use spellcheck.collate=true, you will still receive the
results for correct word and suggestion for wrong word.
I have a name field (which is first name + last name) configured for spell
check. I have the name entry GUY SHUMAKER. I am trying to find person
names where either 'GUY' or
Yonik Seeley wrote:
Not sure... I just took the stock solr example, and it worked fine.
I inserted o'meara into example/exampledocs/solr.xml:
<field name="features">Advanced o'meara Full-Text Search
Capabilities using Lucene</field>
then indexed everything: ./post.sh *.xml
Then queried in various
Thanks Mark, that really did the job! The speed loss at update time is more
than compensated for at optimize time!
Now I am trying another test... but I'm not sure if Lucene has this
option; I am using Lucene 2.9-dev.
As I am working with a 3G index and always have to optimize (as I said before,
: here is the whole file, if it helps
as i said before, i don't know much about the inner workings of
distributed search, but nothing about your config seems odd to me. it
seems like it should work fine.
a wild shot in the dark: instead of using a requestHandler named
standard and urls
My advanced search option allows users to search three different fields at the
same time.
The fields are first name, last name, and org name. Now I have to add a spell
checking feature for these fields.
When a wrong spelling is entered for each of these words, like first name: jahn,
last name: smath,
Hello,
I am trying to create a basic single-core embedded Solr instance. I
figured out how to set up a single-core instance and got (I believe)
all the files in the right places. However, I am unable to run trivial code
without an exception:
SolrServer solr = new EmbeddedSolrServer(
: I have two cores in different machines which are referring to the same data
directory.
this isn't really considered a supported configuration ... both solr
instances are going to try and own the directory for updating, and
unless you do something special to ensure only one has control you
: I'm trying to think of a way to use both relevancy and date sorting in
: the same search. If documents are recent (say published within the last
: 2 years), I want to use all of the boost functions, BQ parameters, and
: normal Lucene scoring functions, but for documents older than two years,
:
You haven't really given us a lot of information to work with...
what shows up in your logs?
what did you name the context fragment file?
where did you put the context fragment file?
where did you put the multicore directory?
sharing the *exact* directory listings and the *exact* commands you've
This is a feature of the ShowFileRequestHandler -- it doesn't let people
browse files outside of the conf directory.
I suppose this behavior could be made configurable (right now the only
config option is hidden for excluding specific files ... we could have
an option to allow files that
I've recently upgraded to Solr 1.3 using Lucene 2.4. One of the reasons I
upgraded was because of the nicer SearchComponent architecture that let me
add a needed feature to the default request handler. Simply put, I needed to
filter a query based on some additional parameters. So I subclassed
below is my setup,
<Context docBase="/home/zhangyongjiang/applications/solr/solr.war" debug="0"
  crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
    value="/home/zhangyongjiang/applications/solr" override="false" />
</Context>
then under /home/zhangyongjiang/applications/solr, I have
: bq works only with q.alt query and not with q queries. So, in your case you
: would be using qf parameter for field boosting, you will have to give both
: the fields in qf parameter i.e. both title and media.
FWIW: that statement is false. the boost query (bq) is added to the
query
: Is not particularly helpful. I tried adding adding a bq argument to my
: search:
:
: bq=media:DVD^2
:
: (yes, this is an index of films!) but I find when I start adding more
: and more:
:
: bq=media:DVD^2&bq=media:BLU-RAY^1.5
:
: I find the negative results - e.g. films that are DVD but are
FWIW: there has been a lot of discussion in the past around how wildcards should work
in various params that involve field names: search the
archives for glob or globbing and you'll find several threads.
: That makes sense, since hl.fl probably can get away with calculating in the
: writer, and not as
: My original assumption for the DisMax Handler was, that it will just take the
: original query string and pass it to every field in its fieldlist using the
: fields configured analyzer stack. Maybe in the end add some stuff for the
: special options and so ... and then send the query to lucene.
: below is my setup,
:
: <Context docBase="/home/zhangyongjiang/applications/solr/solr.war" debug="0"
:   crossContext="true">
:   <Environment name="solr/home" type="java.lang.String"
:     value="/home/zhangyongjiang/applications/solr" override="false" />
: </Context>
you provided that information before, but you
: I'm using StandardRequestHandler and I wanted to filter results by two fields
: in order to avoid duplicate results (in this case the documents are very
: similar, with differences in fields that are not returned in a query
: response).
...
: I managed to do the filtering in the
: I have an index which we are setting the default operator to AND.
: Am I right in saying that using the dismax handler, the default operator in
: the schema file is effectively ignored? (This is the conclusion I've made
: from testing myself)
correct.
: The issue I have with this, is that if I
: My problem was that the XMLResponseWriter is using the searcher of the
: original request to get the matching documents (in the method writeDocList
: of the class XMLWriter). Since the DocList contains id from the index of the
: second core, there were not valid in the index of the core
: I am using Apache POI parser to parse a Word Doc and extract the text
: content. Then i am passing the text content to SOLR. The Word document has
: many pictures, graphs and tables. But when i am passing the content to SOLR,
: it fails. Here is the exception trace.
:
: 09:31:04,516 ERROR
I'm using dismax with the default operator set to AND, and don't use
Minimum Match (commented out in solrconfig.xml), meaning 100% of the
terms must match. Then in my application logic I use a regex that
checks if the query contains " OR ", and if it does I add mm=1 to the
solr request to
: Can I set the phrase slop value to standard request handler? I want it
: to be configurable in solrconfig.xml file.
if you mean when a user enters a query like...
+fieldA:some phrase +(fieldB:true fieldC:1234)
..you want to be able to control what slop value gets used for some
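For what it's worth, with dismax the slop values can be set as defaults in solrconfig.xml; something like this sketch (the values are arbitrary examples):

```xml
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- qs: slop applied to explicit phrase queries typed by the user -->
    <str name="qs">2</str>
    <!-- ps: slop for the implicit phrase boost built from all query terms -->
    <str name="ps">100</str>
  </lst>
</requestHandler>
```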
On Wed, Mar 18, 2009 at 12:34 AM, Vauthrin, Laurent
laurent.vauth...@disney.com wrote:
Hello,
I have a couple of questions relating to replication in Solr. As far as
I understand it, the replication approach for both 1.3 and 1.4 involves
having the slaves poll the master for updates to the
Created SOLR-1073 in JIRA with the class file:
https://issues.apache.org/jira/browse/SOLR-1073
-- Original Message --
From: Chris Hostetter hossman_luc...@fucit.org
To: solr-user@lucene.apache.org
Subject: Re: CJKAnalyzer and Chinese Text sort
Date: Mon, 16 Mar 2009 21:34:09 -0700