Hi all,
I have a question about field faceting in Solr 3.5. Here is my query:
http://localhost:8081/solr_new/select?tie=0.1&q.alt=*:*&q=bank&qf=name+address&fq=portal_uuid:A4E7890F-A188-4663-89EB-176D94DF6774&defType=dismax&facet=true&facet.field=location_uuid&facet.field=sub_category_uuids
You could add this filter directly in the Solr query. Here is an example
using SolrJ:
SolrQuery solrQuery = new SolrQuery();
solrQuery.set("q", "*:*");
// quote the value so the "/" is not treated as query syntax
solrQuery.addFilterQuery("-myfield:\"N/A\"");
Christian von Wendt-Jensen
On 07/01/2012 1:32 PM, Darren Govoni dar...@ontrenet.com wrote:
Some benchmarks added. Please check the JIRA.
On Fri, Jul 6, 2012 at 11:13 PM, Dmitry Kan dmitry@gmail.com wrote:
Mikhail,
you have my +1 and a jira comment :)
// Dmitry
On Fri, Jul 6, 2012 at 7:41 PM, Mikhail Khludnev
mkhlud...@griddynamics.com
wrote:
Okay, why do you think this idea
Hi Bruno
I'm not sure if that makes sense for a query which does not have a boolean
element to it. What is your use-case?
On 7 July 2012 18:36, Bruno Mannina bmann...@free.fr wrote:
Dear Solr users,
I have a field name fid defined as:
<field name="fid" type="string" indexed="true" stored="true"/>
Hi Bruno,
As described, see http://wiki.apache.org/solr/FieldCollapsing, but also
consider faceting, as this often fits the bill.
On 7 July 2012 22:27, Bruno Mannina bmann...@free.fr wrote:
Dear Solr users,
I have a field named FID for Family-ID:
<field name="fid" type="string" indexed="true" stored="true"/>
Hi,
My docs are patents. Patents have family members and I would like to get
docs by PN (field Patent Number (uniquekey)).
My request will be
?q=pn:EP100A1&mlt=true
with this method I will get all equivalents (family members of EP100A1)
Is it possible to set mlt.count to MAX automatically?
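For reference, the request above with the MLT parameters spelled out might look like this (the mlt.fl field and the count value are illustrative assumptions, not taken from the original message):

```
?q=pn:EP100A1&mlt=true&mlt.fl=fid&mlt.count=100
```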
Hi Lee,
I tried grouping on my FID field and, ouch, got an error 500 + OutOfMemory...
I haven't tested facets yet.
Thanks,
Bruno
On 08/07/2012 11:19, Lee Carroll wrote:
Hi Bruno,
As described, see http://wiki.apache.org/solr/FieldCollapsing, but also
consider faceting, as this often fits the bill.
On 7 July
see http://wiki.apache.org/solr/SolrPerformanceFactors#OutOfMemoryErrors
On 8 July 2012 12:37, Bruno Mannina bmann...@free.fr wrote:
Hi Lee,
I tried grouping on my FID field and, ouch, got an error 500 + OutOfMemory...
I haven't tested facets yet.
Thanks,
Bruno
On 08/07/2012 11:19, Lee Carroll wrote:
Solr faceting only counts documents that satisfy the query. Think of it
as assembling a list of all possible values for a field and then adding
1 for each value found in each document that satisfies the overall
query (including the filter query). So you can get counts of 0; that's
expected. Adding
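The counting model described above can be sketched in plain Java. This is a simulation of the idea only, not Solr's actual implementation; the field and document values are made up for illustration:

```java
import java.util.*;
import java.util.function.Predicate;

public class FacetCountSketch {
    // Simulate Solr field faceting: every indexed value of the field is a
    // candidate bucket; only documents matching the overall query increment
    // a bucket, so values that never occur in matching documents stay at 0.
    public static Map<String, Integer> facet(List<Map<String, String>> docs,
                                             String field,
                                             Predicate<Map<String, String>> query) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map<String, String> doc : docs) {
            counts.putIfAbsent(doc.get(field), 0);  // register every value
        }
        for (Map<String, String> doc : docs) {
            if (query.test(doc)) {                  // only matching docs count
                counts.merge(doc.get(field), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map<String, String>> docs = List.of(
                Map.of("city", "Helsinki", "type", "bank"),
                Map.of("city", "Helsinki", "type", "shop"),
                Map.of("city", "Oslo",     "type", "shop"));
        // Facet on "city" for the query type:bank — Oslo gets a 0 count.
        System.out.println(facet(docs, "city", d -> d.get("type").equals("bank")));
        // prints {Helsinki=1, Oslo=0}
    }
}
```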
I get a JSON parse error (pasted below) when I send an update to a replica
node. I downloaded solr 4 alpha and followed the instructions at
http://wiki.apache.org/solr/SolrCloud/ and setup numShards=1 with 3 total
servers managed by a zookeeper ensemble, the primary at 8983 and the other
two at
I am trying to wrap my head around replication in SolrCloud. I tried the
setup at http://wiki.apache.org/solr/SolrCloud/. I mainly need replication
for high query throughput. The setup at the URL above appears to maintain
just one copy of the index at the primary node (instead of a replicated
My understanding is that the DIH in solr only enters last_indexed_time in
dataimport.properties, but not say last_indexed_id for a primary key 'id'.
How can I efficiently get the max(id) (note that 'id' is an auto-increment
field in the database) ? Maintaining max(id) outside of solr is brittle
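One option is to let the database compute it during the import itself, so nothing has to be maintained outside Solr. A hedged sketch of a db-data-config.xml entity (the table and column names are assumptions for illustration):

```xml
<!-- Hypothetical entity that asks the database for MAX(id) as part of the
     import, instead of tracking it outside Solr. "documents" and "id" are
     illustrative names. -->
<entity name="maxId" query="SELECT MAX(id) AS max_id FROM documents">
  <field column="max_id" name="max_id"/>
</entity>
```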
Is there any more information that folks need to dig into this? I
have been unable, up to this point, to figure out what specifically is
happening, so would appreciate any help.
On Fri, Jul 6, 2012 at 2:13 PM, Jamie Johnson jej2...@gmail.com wrote:
A little more information on this.
I tinkered
Hi,
I want to store the top 5 high-frequency non-stopword words. I use DIH to
import data. Now I have two approaches -
1. Use DIH JavaScript to find top 5 frequency words and put them in a
copy field. The copy field will then stem it and remove stop words based on
appropriate tokenizers.
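Approach 1 above amounts to a term-frequency count with stopwords removed. A minimal plain-Java sketch of that computation (the stopword list here is illustrative, not Solr's, and real tokenization would come from the analyzer chain):

```java
import java.util.*;
import java.util.stream.*;

public class TopWords {
    // Illustrative stopword list; a real setup would use the analyzer's list.
    static final Set<String> STOPWORDS = Set.of("the", "a", "an", "and", "of", "to", "in", "is");

    // Lowercase, split on non-letters, drop stopwords, count, take top n by frequency.
    public static List<String> topWords(String text, int n) {
        Map<String, Long> freq = Arrays.stream(text.toLowerCase().split("[^a-z]+"))
                .filter(w -> !w.isEmpty() && !STOPWORDS.contains(w))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        return freq.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(topWords("the cat and the cat saw a dog a dog a dog", 2));
        // prints [dog, cat] — "the", "and", "a" are dropped as stopwords
    }
}
```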
In theory, with SolrCloud you can add to any replica and the change gets
propagated automatically to all of the other replicas for that shard. In
theory.
The stack trace message suggests that Solr is trying to parse your input as
JSON when in fact your input is XML. I vaguely recall that
Can you show us exactly how you are adding the document?
Eg, what update handler are you using, and what is the document you are adding?
On Jul 8, 2012, at 12:52 PM, avenka wrote:
I get a JSON parse error (pasted below) when I send an update to a replica
node. I downloaded solr 4 alpha and
I tried adding in two ways with the same outcome: (1) using solrj to call
HttpSolrServer.add(docList) using BinaryRequestWriter; (2) using
DataImportHandler to import directly from a database through a
db-data-config.xml file.
The document I'm adding has a long primary key id field and a few
Hi,
Platform: ubuntu 12.04
Package: apache-solr-4.0-2012-07-07_11-55-05-src.tgz
Web: Apache Tomcat/7.0.26
I'm trying to use the LUCENE-2899 patch
(https://issues.apache.org/jira/browse/LUCENE-2899). As an end-user I
believe this is the correct list to post to.
I'm new to Solr, so I started by
Hi All,
I would like to know how to use postCommit in Solr properly. I would like to
grab the indexed documents and do further processing with them. How do I capture
the documents being committed to Solr through the arguments in the
postCommit config? I'm not using SolrJ and have no
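For reference, a postCommit event listener is declared in solrconfig.xml roughly like the sketch below (the script path is an illustrative assumption). Note that the postCommit event only tells the listener that a commit happened; it does not hand over the committed documents. If you need the documents themselves, a custom UpdateRequestProcessor in the update chain is the usual place to intercept them.

```xml
<!-- solrconfig.xml sketch: run an external script after each commit.
     RunExecutableListener is a stock listener; /path/to/script.sh is
     illustrative. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">/path/to/script.sh</str>
    <bool name="wait">false</bool>
  </listener>
</updateHandler>
```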
Please post a trimmed-down version of your schema.xml and a sample document.
On Sun, Jul 8, 2012 at 11:54 AM, Jamie Johnson jej2...@gmail.com wrote:
Is there any more information that folks need to dig into this? I
have been unable, up to this point, to figure out what specifically is
Hi,
I would recommend indexing the Wikipedia XML dump. Check out the
DataImportHandler example of indexing Wikipedia
(http://wiki.apache.org/solr/DataImportHandler#Example%3a_Indexing_wikipedia).
Thanks
Vineet Yadav
On Sun, Jul 8, 2012 at 9:15 AM, kiran kumar kirankumarsm...@gmail.com wrote:
Hi,
In our
Thanks James for your reply.
I am using the spellcheck collation options (except
spellcheck.maxCollationTries).
However, will spellcheck.maxCollationTries consider other parameters in the
query, or just the spellcheck words in q?
Because in my case, if the original query is --