Hi Hoss,
Thanks for your reply. Please find answers to your questions below.
*Well, for starters -- have you considered at least looking into using the java
based Replicationhandler instead of the rsync scripts?*
- There was an attempt to implement Java-based replication, but it was
very slow.
On 6/4/2013 11:48 PM, Aaron Greenspan wrote:
I thought I'd document my process of getting set up with Solr 4.3.0 on a
Linux server in case it's of use to anyone. I'm a moderately experienced
Linux system administrator, so without passing judgment (at least for now),
let me just say that I
Hi,
We use this very same scenario to great effect - 2 instances using the same
dataDir with many cores - 1 is a writer (no caching), the other is a
searcher (lots of caching).
To get the searcher to see the index changes from the writer, you need the
searcher to do an empty commit - i.e. you
Hi,
I am trying to index a heavy dataset, with one particular field that is really
heavy...
However, as soon as I start, I get a memory warning and a rollback (OutOfMemoryError).
So, I have learned that we can use the -Xmx1024m option with the java command to
start Solr and allocate more memory to the heap.
My
Hi ,
I have a Solr index of 80GB with 1 million documents, each approximately 500KB.
I have a machine with 16GB RAM.
I am running MLT queries on 3-5 fields of these documents.
I am getting a Solr out-of-memory problem.
Exception in thread main java.lang.OutOfMemoryError: Java heap
and I asked a similar question just a second ago
On Wed, Jun 5, 2013 at 2:07 PM, Varsha Rani varsha.ya...@orkash.com wrote:
Hi ,
I have a Solr index of 80GB with 1 million documents, each approximately 500KB.
I have a machine with 16GB RAM.
I am running MLT queries on 3-5
Varsha,
Unless I'm mistaken, the ramBufferSizeMB param is used to buffer documents
before writing them to disk.
Can you post the cache config that you have in the solrconfig.xml, what version
are you using?
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On
Hi yriveiro,
I am using Solr version 3.6.
My cache config is below:
<filterCache class="solr.FastLRUCache"
             size="131072"
             initialSize="4096"
             autowarmCount="2048"
             cleanupThread="true"/>
<queryResultCache class="solr.FastLRUCache"
                  size="131072"
Hi,
I have a small question about solr logging.
In resources/log4j.properties, we have:
log4j.rootLogger=INFO, file, CONSOLE
However, what I want is:
log4j.rootLogger=INFO, file
and
log4j.rootLogger=WARN, CONSOLE
(both simultaneously).
Is it possible?
--
Regards,
Raheel Hasan
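One common way to get this split in log4j 1.x is to keep the rootLogger at INFO and raise the console appender's own threshold. This is a sketch; it assumes the appender names `file` and `CONSOLE` match the ones in the bundled log4j.properties:

```properties
# INFO and above go to both appenders...
log4j.rootLogger=INFO, file, CONSOLE
# ...but the console appender drops anything below WARN on its own.
log4j.appender.CONSOLE.Threshold=WARN
```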
Varsha,
What is the size of your JVM heap?
The other question is the documentCache. The documentCache caches document
objects fetched from disk
(http://wiki.apache.org/solr/SolrCaching#documentCache). If each document is
approximately 500KB and you configure a cache size of 131072, you
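To put rough numbers on the documentCache concern above (a back-of-the-envelope sketch; the ~500 KB per document and size=131072 figures are the ones quoted in this thread):

```python
# Worst-case documentCache heap use, assuming every slot fills up.
doc_size_kb = 500       # approximate size of one document (from the thread)
cache_entries = 131072  # configured documentCache size (from the thread)

total_gb = doc_size_kb * cache_entries / (1024 * 1024)  # KB -> GB
print(total_gb)  # 62.5 -- far larger than a 16 GB heap
```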
Am 05.06.2013 11:28, schrieb Raheel Hasan:
Hi,
I have a small question about solr logging.
In resources/log4j.properties, we have
*log4j.rootLogger=INFO, file, CONSOLE*
However, what I want is:
*log4j.rootLogger=INFO, file
*
and
*log4j.rootLogger=WARN, CONSOLE*
(both
OK thanks... it works... :D
Also, I found that we could put both lines and it would also work:
log4j.rootLogger=INFO, file
log4j.rootLogger=WARN, CONSOLE
On Wed, Jun 5, 2013 at 2:42 PM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
Am 05.06.2013 11:28, schrieb Raheel Hasan:
Hi,
Hi,
I am trying to optimize Solr.
The default solrconfig that comes with solr/collection1 includes a lot of libs
I don't really need. Perhaps someone could help me identify their purpose
(I only import via DIH).
Please tell me what's in these:
contrib/extraction/lib
solr-cell-
Hi,
When I start a core in SolrCloud, I'm getting the message below in the log.
I have set up ZooKeeper separately and uploaded the config files.
When I start the Solr instance in cloud mode, the state is down.
INFO: Update state numShards=null message={
  "operation":"state",
  "numShards":null,
  "shard":"shard1",
Hello Solr-Friends,
I have a problem with my current solr configuration. I want to import
two tables into solr. I got it to work for the first table, but the
second table doesn't get imported (no error message, 0 rows skipped).
I have two tables called name and title and i want to load their
davers wrote
I want to elevate certain documents differently depending on a certain fq
parameter in the request. I've read of somebody coding Solr to do this, but
no code was shared. Where would I start looking to implement this feature
myself?
Davers,
I am also looking into this feature.
Hi yriveiro,
When I was using a documentCache size of 131072, I got the exception after
5000-6000 MLT queries.
But once I set the documentCache size to 16384, I got the same problem after
1500-2000 MLT queries.
1. SolrCell (ExtractingRequestHandler) - extract and index content from rich
documents, such as PDF, Office docs, HTML (uses Tika)
2. Clustering - for result clustering.
3. Language identification (two update processors) - analyzes text of fields
to determine language code.
None of those is
Consider the following Solr query:
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&rows=0
The 'tags' field is a multivalue field. I would expect the previous
query to return only tags that begin with the string 'dotan-' such as:
dotan-home
dotan-work
...but not strings which do not begin
3) Use the parameter facet.prefix, e.g, facet.prefix=dotan-. Note: this
particular case will not work if the field you're facetting on is tokenised
(with - being used as a token separator).
4) Use the parameter facet.mincount - looks like you want to set it to 1,
instead of the default which is
Hi Dotan,
I think all you need to do is add:
facet.mincount=1
i.e.
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&rows=0&facet.mincount=1
Note that you can do it per field as well:
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&rows=0&f.tags.facet.mincount=1
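As a sketch of assembling such a request programmatically (the parameter set mirrors the URLs above; the '&' separators were lost in the plain-text archive):

```python
from urllib.parse import urlencode

# Build the facet query with properly separated, URL-encoded parameters.
params = {
    "q": "*:*",
    "fq": "tags:dotan-*",
    "facet": "true",
    "facet.field": "tags",
    "facet.mincount": "1",
    "rows": "0",
}
print("/select?" + urlencode(params))
```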
Hi,
I have a problem where our text corpus on which we need to do search
contains many misspelled words. Same word could also be misspelled in
several different ways. The corpus could also have documents with correct
spellings. However, the search term that we give in the query would always be
correct
On Wed, Jun 5, 2013 at 3:38 PM, Raymond Wiker rwi...@gmail.com wrote:
3) Use the parameter facet.prefix, e.g, facet.prefix=dotan-. Note: this
particular case will not work if the field you're facetting on is tokenised
(with - being used as a token separator).
4) Use the parameter
Did you try reducing the filter and query caches? They are fairly large too,
unless you really need them for your use case.
Do you have that many distinct filter queries hitting Solr, given the size you
have defined for filterCache?
Are you doing any sorting? This will chew up a lot of
On Wed, Jun 5, 2013 at 3:41 PM, Brendan Grainger
brendan.grain...@gmail.com wrote:
Hi Dotan,
I think all you need to do is add:
facet.mincount=1
i.e.
select?q=*:*&fq=tags:dotan-*&facet=true&facet.field=tags&rows=0&facet.mincount=1
Note that you can do it per field as well:
On 6/5/2013 3:46 AM, Raheel Hasan wrote:
OK thanks... it works... :D
Also I found that we could put both of them and it will also work:
log4j.rootLogger=INFO, file
log4j.rootLogger=WARN, CONSOLE
If this completely separates INFO from WARN and ERROR, then you would
want to rethink and
On 6/5/2013 3:08 AM, Raheel Hasan wrote:
Hi,
I am trying to index a heavy dataset with 1 particular field really too
heavy...
However, As I start, I get Memory warning and rollback (OutOfMemoryError).
So, I have learned that we can use -Xmx1024m option with java command to
start the solr
On 6/5/2013 3:07 AM, Varsha Rani wrote:
Hi ,
I am having solr index of 80GB with 1 million documents .Each document of
aprx. 500KB . I have a machine with 16GB ram.
I am running mlt query on 3-5 fields of theses document .
I am getting solr out of memory problem .
This wiki page has
On Wed, Jun 5, 2013 at 1:48 AM, Aaron Greenspan
aar...@thinkcomputer.com wrote:
I say this not because I enjoy starting flame wars or because I have the time
to participate in them--I don't. I realize that there's a long history to
Solr and I am the new kid who doesn't get it. Nonetheless,
OK, thanks for the reply. The field has values of around 60KB each.
Furthermore, I have realized that the issue is with MySQL, as it's not
processing this table when a WHERE is applied.
Secondly, I have set this field to stored=false and now the /select
is working fast again.
Some values in the field are up to 1MB as well.
On Wed, Jun 5, 2013 at 7:27 PM, Raheel Hasan raheelhasan@gmail.comwrote:
ok thanks for the reply The field having values like 60kb each
Furthermore, I have realized that the issue is with MySQL as its not
processing this table when
If we look at the UI of other cloud-based software like Couchbase or Riak, they
are more intuitive than Solr's UI. Of course, the UI is brand new and needs a
lot of improvements. For example, the possibility of selecting an existing
config from ZooKeeper when you are using the wizard to create a
I have the exact same problem as the guy here:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201105.mbox/%3C3A2B3E42FCAA4BF496AE625426C5C6E4@Wurstsemmel%3E
AFAICS he did not get an answer. Is this a known issue? What can I do
other than doing what copyField should do in my
I think the suggestion I have seen is that copyField should be
index-only and - therefore - will not be returned. It is primarily
there to make searching easier by aggregating fields, or to provide an
alternative analyzer pipeline.
Can you make your copyField destination not stored?
Regards,
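A minimal sketch of that suggestion (the field and type names here are illustrative, not from the original schema):

```xml
<!-- Destination is searchable but never returned in results -->
<field name="text_all" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="title" dest="text_all"/>
```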
We have a number of Jira issues that specifically deal with something
called Developer Curb Appeal. I think it's pretty clear that we need
to tackle a bunch of things we could call Newcomer Curb Appeal. I can
work on filing some issues, some of which will address code, some of
which will
Sorry for opening a new thread. As I sent the first message without subscribing
to the mailing list, I couldn't reply to the original thread.
The message stream is attached below.
Actually, the requirement came up from the following scenario: we collect some
XML documents from some external
How would one write a query which should perform set union on the
search terms (term1 OR term2 OR term3), and yet also perform phrase
matching if both terms are found? I tried a few variants of the
following, but in every case I am getting set intersection on the
search terms:
On 6/5/2013 9:03 AM, Dotan Cohen wrote:
How would one write a query which should perform set union on the
search terms (term1 OR term2 OR term3), and yet also perform phrase
matching if both terms are found? I tried a few variants of the
following, but in every case I am getting set
Everything is working great now.
Thanks David
On Wed, Jun 5, 2013 at 12:07 AM, David Smiley (@MITRE.org)
dsmi...@mitre.org wrote:
maxDistErr should be like 0.3 based on earlier parts of this discussion
since
your data is to one of a couple hours of the day, not whole days. If it
was
Try describing your own symptom in your own words - because his issue
related to Solr 1.4. I mean, where exactly are you setting
allowDuplicates=false?? And why do you think it has anything to do with
adding documents to Solr? Solr 1.4 did not have atomic update, so sending
the exact same
term1 OR term2 OR "term1 term2"^2
term1 OR term2 OR "term1 term2"~10^2
The latter would rank documents with the terms nearby higher, and the
adjacent terms highest.
term1 OR term2 OR "term1 term2"~10^2 OR "term1 term2"^20 OR "term2 term1"^20
To further boost adjacent terms.
But the edismax
Guys,
I am going to use Solr 4.3 in my shopping cart project.
I need to support my website in two languages (English and French),
so I would like some guidance on implementing internationalization with
Solr 4.3.
Please advise with some sample configuration to support the French language
with
Shawn:
You're right, I thought I'd seen it as a field option but I think I
was confusing really old solr.
Thanks for catching, having gotten it wrong once I'm sure I'll
remember it better for next time!
Erick
On Tue, Jun 4, 2013 at 1:57 PM, SandeepM skmi...@hotmail.com wrote:
Thanks Eric and
Hi,
Is it possible to configure Solr to suggest the full indexed string for all
searches on a substring of that string?
Thanks,
Prathik
Hi,
I downloaded Solr 4.3 and I am attempting to run and configure a
separate
Solr instance under Jetty. I copied the Solr dist directory contents to a
directory called solrDist under the single core db that I was running. I
then attempted to get the DataImportHandler using the following
Sounds like a bug - we probably don't have a test that updates a link - if you
can make a JIRA issue, I'll be happy to look into it soon.
- Mark
On Jun 4, 2013, at 8:16 AM, Shawn Heisey s...@elyograg.org wrote:
I've got Solr 4.2.1 running SolrCloud. I need to change the config set
On Wed, Jun 5, 2013 at 6:10 PM, Shawn Heisey s...@elyograg.org wrote:
On 6/5/2013 9:03 AM, Dotan Cohen wrote:
How would one write a query which should perform set union on the
search terms (term1 OR term2 OR term3), and yet also perform phrase
matching if both terms are found? I tried a few
Hi Peter,
Thank you, I am glad to read that this use case is not alien.
I'd like to make the second instance (searcher) completely read-only, so I
have disabled all the components that can write.
(being lazy ;)) I'll probably use
http://wiki.apache.org/solr/CollectionDistribution to call the
apache-solr-dataimporthandler-.*\.jar - note that the apache- prefix has
been removed from Solr jar files.
-- Jack Krupansky
-Original Message-
From: O. Olson
Sent: Wednesday, June 05, 2013 12:01 PM
To: solr-user@lucene.apache.org
Subject: No files added to classloader from lib
Hi,
On Wed, Jun 5, 2013 at 6:23 PM, Jack Krupansky j...@basetechnology.com wrote:
term1 OR term2 OR "term1 term2"^2
term1 OR term2 OR "term1 term2"~10^2
The latter would rank documents with the terms nearby higher, and the
adjacent terms highest.
term1 OR term2 OR "term1 term2"~10^2 OR "term1
ngrams?
See:
http://lucene.apache.org/core/4_3_0/analyzers-common/org/apache/lucene/analysis/ngram/NGramFilterFactory.html
-- Jack Krupansky
-Original Message-
From: Prathik Puthran
Sent: Wednesday, June 05, 2013 11:59 AM
To: solr-user@lucene.apache.org
Subject: Configuring lucene to
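A sketch of what such an ngram field type could look like in schema.xml, using the NGramFilterFactory linked above; the gram sizes and names are illustrative:

```xml
<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index 2..15-character grams of each token -->
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```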
So we see the jagged-edge waveform which keeps climbing (GC cycles don't
completely collect memory over time). Our test has a short capture from
real traffic and we are replaying that via solrmeter.
Any idea why the memory climbs over time? The GC should clean up after data
is shipped back.
Is there any other documentation that I should review?
It's in the works! Within a week or two.
-- Jack Krupansky
-Original Message-
From: Dotan Cohen
Sent: Wednesday, June 05, 2013 12:06 PM
To: solr-user@lucene.apache.org
Subject: Re: Phrase matching with set union as opposed to set
Yes, my ID field is the uniqueKey. How can I keep them from overwriting each other?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Create-index-on-few-unrelated-table-in-Solr-tp4068054p4068371.html
Sent from the Solr - User mailing list archive at Nabble.com.
Maybe the problem is the two entity declarations in data-config.xml.
Try changing it to this:
<document>
  <entity name="name" query="SELECT id, name FROM name"/>
  <entity name="title" query="SELECT id AS titleid, title FROM name"/>
</document>
Hehe.
Yes, all my tables' ID field names are different.
For example:
I have 5 tables. These are named 'admin, account, group, checklist':
admin: id (uniqueKey)
account: account_id (uniqueKey)
group: group_id (uniqueKey)
checklist: id (uniqueKey)
Also, I thought the last entity overwrites the other entities.
I'm
ngrams won't work here. If I index all the ngrams of the string, then when I
try to search for some string it would suggest all the ngrams as well.
E.g.:
The dictionary contains the word "Jason Bourne" and you index all the ngrams of
that word.
When I try to search for "Jason", Solr suggests all the
: How can I don't overwrite other entities?
: Please assist me on this example.
I'm confused, you sent this in direct reply to my last message, which
contained the following...
1) a paragraph describing the general approach to solving this type of
problem...
You can use TemplateTransformer
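For reference, a hedged sketch of the TemplateTransformer approach mentioned above (the table and column names follow the thread but are illustrative, not the poster's exact config):

```xml
<!-- Prefix each table's id so rows from different tables cannot collide
     on the uniqueKey. -->
<entity name="title" transformer="TemplateTransformer"
        query="SELECT id, title FROM title">
  <field column="id" template="title-${title.id}"/>
</entity>
```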
Hi,
We have a setup where we have 3 shards in a collection, and each shard in
the collection need to load different sets of data
That is
Shard1- will contain data only for Entity1
Shard2 - will contain data for entity2
shard3- will contain data for entity3
So in this case,. the db-data-config.xml
Please don't create new threads re-asking the same questions -- especially
when the existing thread is only a day old and still actively getting
responses.
It just increases the overall noise of the list, and results in
multiple people wasting their time providing you with the same
select?defType=edismax&q={!q.op=OR}search_field:term1 term2&pf=search_field
Is there any way to perform a fuzzy search with this method? I have
tried appending ~1 to every term in the search like so:
select?defType=edismax&q={!q.op=OR}search_field:term1~1%20term2~1&pf=search_field
However, two
I have not implemented it yet. And I forget the exact webpage I found. But
there was a person on that page discussing the same problem and said it was
easy to implement a solution for it but he did not share his solution. If
you figure it out let me know.
Hoss,
We rely heavily on facet.mincount because once a user has selected a facet,
it doesn't make sense for us to show that facet field to him and let him
filter again with the same facet. Also, when a facet has only one value, it
doesn't make sense to show it to the user, since searching with
OK, I have two fields defined as follows:
<field name="name" type="string" indexed="true" stored="true"
       multiValued="false"/>
<field name="name2" type="string_ci" indexed="true"
       stored="true" multiValued="false"/>
and this copyField directive:
<copyField source="name" dest="name2"/>
I updated the Index
Thanks a lot for your response, Hoss. I thought about using ScriptTransformer
too, but thought of checking if there is any other way to do it.
Btw, for some reason the lat/long values are getting overridden even though
it's a multiValued field. Not sure where I am going wrong!
So here it is for a record how I am solving it right now:
Write-master is started with: -Dmontysolr.warming.enabled=false
-Dmontysolr.write.master=true -Dmontysolr.read.master=http://localhost:5005
Read-master is started with: -Dmontysolr.warming.enabled=true
-Dmontysolr.write.master=false
Okay, I'm sorry. I will not create the same topic in a separate thread next time.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Create-index-on-few-unrelated-table-in-Solr-tp4068054p4068405.html
Sent from the Solr - User mailing list archive at Nabble.com.
Are you using IE? If so, you might want to try using Firefox.
-Original Message-
From: sathish_ix [mailto:skandhasw...@inautix.co.in]
Sent: Wednesday, June 05, 2013 6:16 AM
To: solr-user@lucene.apache.org
Subject: Sole instance state is down in cloud mode
Hi,
When i start a core in
There is also http://wiki.apache.org/solr/SolrRelevancyCookbook with
nice examples.
On 06/05/2013 12:13 PM, Jack Krupansky wrote:
Is there any other documentation that I should review?
It's in the works! Within a week or two.
-- Jack Krupansky
-Original Message- From: Dotan Cohen
That was a very silly mistake. I forgot to add the values to the array before
putting it inside the row. The code below works. Thanks a lot!
On Wed, Jun 5, 2013 at 9:04 PM, Eustache Felenc
eustache.fel...@idilia.com wrote:
There is also http://wiki.apache.org/solr/SolrRelevancyCookbook with nice
examples.
Thank you.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
This may be more suitable on the dev-list, but distributed pivot facets is
a very powerful feature. The Jira issue for this is SOLR-2894 (
https://issues.apache.org/jira/browse/SOLR-2894). I have done some testing
of the last patch for this issue, and it is as Andrew says: Everything but
datetime
Thanks so far.
This change makes Solr work over the title entries too, yay!
Unfortunately they don't get processed (skipped rows). My log says
"missing required field: id" for every entry.
I checked my schema.xml; id is not set as a required field there.
Removing the uniqueKey property
On Jun 5, 2013, at 20:39 , Stavros Delisavas stav...@delisavas.de wrote:
Thanks so far.
This change makes Solr work over the title entries too, yay! Unfortunately
they don't get processed (skipped rows). My log says
"missing required field: id" for every entry.
I checked my schema.xml.
On 6 June 2013 00:09, Stavros Delisavas stav...@delisavas.de wrote:
Thanks so far.
This change makes Solr work over the title entries too, yay! Unfortunately
they don't get processed (skipped rows). My log says
"missing required field: id" for every entry.
I checked my schema.xml. In
Look in the Solr log - the error message should tell you what the multiple
values are. For example,
95484 [qtp2998209-11] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: ERROR: [doc=doc-1] multiple values
encountered for non multiValued field content_s: [def, abc]
Good call Jack. I totally missed that. I am curious how dataimport handler
worked before – if I made a mistake in the specification and it did not get
the jar. Anyway, it works now. Thanks again.
O.O.
apache-solr-dataimporthandler-.*\.jar - note that the apache- prefix has
been removed from
Thanks for the hints.
I am not sure how to solve this issue. I previously made a typo; there
are definitely two different tables.
Here is my real configuration:
http://pastebin.com/JUDzaMk0
For testing purposes I added LIMIT 10 to the SQL-statements because my
tables are very huge and tests
Hi,
I am using the standard edismax parser and my example query is as follows:
{!edismax qf='object_description ' rows=10 start=0 mm=-40% v='object'}
In this case, 'object' happens to be a stopword in the StopWordsFilter in my
datatype 'object_description'. Now, since 'object' is not
Check out this
http://stackoverflow.com/questions/5549880/using-solr-for-indexing-multiple-languages
http://wiki.apache.org/solr/LanguageAnalysis#French
French stop words file (sample):
http://trac.foswiki.org/browser/trunk/SolrPlugin/solr/multicore/conf/stopwords-fr.txt
Solr includes three
On 6/5/2013 10:05 AM, Mark Miller wrote:
Sounds like a bug - we probably don't have a test that updates a link - if you
can make a JIRA issue, I'll be happy to look into it soon.
I will go ahead and create an issue so that a test can be built, but I
have some more info: It works perfectly
Hi,
I've tested a query using solr admin web interface and it works fine.
But when I'm trying to execute the same search using solrj, it doesn't
include Stats information.
I've figured out that it's because my query is encoded.
Original query is like q=eventTimestamp:[2013-06-01T12:00:00.000Z TO
Please excuse my misunderstanding, but I always wonder why this index-time
processing is usually suggested. From my POV this is a case for query-time
processing, i.e. PrefixQuery (aka wildcard query, Jason*).
Ultra-fast term retrieval is also provided by TermsComponent.
On Wed, Jun 5, 2013 at 8:09 PM,
Sounds like the Solr Admin UI is too-aggressively encoding the query part of
the URL for display. Each query parameter value needs to be encoded, not the
entire URL query string as a whole.
-- Jack Krupansky
-Original Message-
From: ethereal
Sent: Wednesday, June 05, 2013 4:11 PM
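The distinction Jack describes can be sketched with Python's urllib (this is an illustration of the encoding rule, not SolrJ's internals): encode each parameter value individually, never the already-assembled query string.

```python
from urllib.parse import urlencode, quote

q = "eventTimestamp:[2013-06-01T12:00:00.000Z TO *]"

# Right: each parameter value is encoded on its own; '=' and '&' survive.
good = "/select?" + urlencode({"q": q, "stats": "true"})

# Wrong: encoding the whole query string mangles the '=' and '&' separators too.
bad = "/select?" + quote("q=" + q + "&stats=true")

print(good)
print(bad)
```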
To add some numbers to adityab's comment.
Each entry in your filter cache will probably consist
of maxDocs/8 bytes plus some overhead. Or about 16G.
This will only grow as you fire queries at Solr, so
it's no surprise you're running out of memory as you
process queries.
Your documentCache is
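The filterCache estimate above can be checked with a quick script (a sketch; the figures are the ones quoted in this thread: ~1 million docs, filterCache size=131072):

```python
# Each filterCache entry is a bitset over all documents: maxDocs/8 bytes.
max_docs = 1_000_000
cache_entries = 131072

bytes_per_entry = max_docs // 8           # one bit per document
total_gb = bytes_per_entry * cache_entries / 1e9
print(round(total_gb, 1))  # 16.4 -- "about 16G", before any overhead
```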
Note that stored=true/false is irrelevant to the raw search time.
What it _is_ relevant to is the time it takes to assemble the doc
for return, if (and only if) you return that field. I claim your search
time would be fast if you went ahead and stored the field,
and specified an fl clause that
: I've tested a query using solr admin web interface and it works fine.
: But when I'm trying to execute the same search using solrj, it doesn't
: include Stats information.
: I've figured out that it's because my query is encoded.
I don't think you are understanding how to use SolrJ and the
My usual admonishment is that Solr isn't a database, and when
you try to use it like one you're just _asking_ for problems. That
said
Consider two options:
1) Use a different core for each table.
2) In schema.xml, remove the id field (required=true _might_ be specified).
Your problem statement is fairly odd. You say
you've defined object as a stopword, but then
you want your query to return documents that
contain object. By definition stopwords are
something that is considered irrelevant for searching
and are ignored.
So why not just take object out of your
I have a location-type field in my schema where I store lat / lon of a
document when this data is available. In around half of my documents this
info is not available and I just don't store anything.
I am trying to find the documents where the location is not set but nothing
is working.
I
A Solr index does not need a unique key, but almost all indexes use one.
http://wiki.apache.org/solr/UniqueKey
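For completeness, the usual schema.xml declaration looks like this (the field name is illustrative):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```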
Try the query below, passing id as id instead of titleid:
<document>
  <entity name="title" query="SELECT id, title FROM name"/>
</document>
A proper dataimport config will look
On 6/5/2013 2:11 PM, ethereal wrote:
Hi,
I've tested a query using solr admin web interface and it works fine.
But when I'm trying to execute the same search using solrj, it doesn't
include Stats information.
I've figured out that it's because my query is encoded.
Original query is like
: filter again with the same facet. Also, when a facet has only one value, it
: doesn't make sense to show it to the user, since searching with that facet
: is just going to give the same result set again. So when facet.missing does
: not work with facet.mincount, it is a bit of a hassle for
select?q=*:* -location_field:* worked for me
Either have your update client explicitly set a boolean field that indicates
whether location is present, or use an update processor to set an explicit
boolean field that means "no location present":
<updateRequestProcessorChain name="location-present">
  <processor
: I updated the Index using SolrJ and got the exact same error message
there aren't a lot of specifics provided in this thread, so this may not
be applicable, but if you mean you are actually using the atomic updates
feature to update an existing document, then the problem is that you still
have
: Furthermore, I have realized that the issue is with MySQL as its not
: processing this table when a where is applied
Thanks for the replies.
I found that -location_field:* returns documents that both have and don't
have the field set.
I should clarify that I am using Solr 3.4, and the location type is set to
solr.LatLonType.
Although I could add a boolean field that is true if the location is set, I'd
rather not have