Is it possible with geofilt and facet.query?
facet.query={!geofilt pt=45.15,-93.85 sfield=store d=5}
On Thu, Oct 13, 2011 at 4:20 PM, roySolr royrutten1...@gmail.com wrote:
I don't want to use just basic facets. When the user doesn't get any results, I want to search in the radius of his
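The geofilt facet.query idea above can be sketched a bit further as a fallback: issue several facet.query clauses at increasing radii alongside the main query, then, if the main result set is empty, re-query with the smallest radius that has a non-zero count. The point, field name (store) and distances below are illustrative assumptions, not values from the thread:

```
facet=true
facet.query={!geofilt pt=45.15,-93.85 sfield=store d=5}
facet.query={!geofilt pt=45.15,-93.85 sfield=store d=10}
facet.query={!geofilt pt=45.15,-93.85 sfield=store d=25}
```

Each facet.query comes back with its own count, so the client can widen the search without a second round trip to discover which radius would match.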
current: bool //for fq which searches only current versions
last_current_at: datetime // for date range queries, or sorting to find what was current for a given date
Sorry if I've missed a requirement.
lee c
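A sketch of how the two fields above might be declared in schema.xml — the field names come from the mail, but the concrete types (boolean, tdate) are assumptions:

```
<field name="current" type="boolean" indexed="true" stored="true"/>
<field name="last_current_at" type="tdate" indexed="true" stored="true"/>
```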
On 13 October 2011 15:01, Mike Sokolov soko...@ifactory.com wrote:
We have the identical
Sorry, missed the permission stuff:
I think that's OK if you index the ACL as part of the document. That is to say, each version has its own ACL. Match users against the version's ACL data as a filter query, and use the last_current_at date as a sort.
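Putting those pieces together, a request might look like the fragment below. The acl field name and the principal tokens (group_a, user_42) are hypothetical placeholders for whatever the version ACL data actually contains:

```
q=*:*
fq=current:true
fq=acl:(group_a OR user_42)
sort=last_current_at desc
```

The two fq clauses are cached independently, so the current-version filter can be reused across users while the ACL filter varies per request.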
On 13 October 2011 22:04, lee carroll
: Deja-Vu...
:
:
http://www.lucidimagination.com/search/document/3551f130b6772799/excluding_docs_from_results_based_on_matched_field
:
: -Hoss
:
: Thanks for the answer, the problem is that the query like this:
:
: q=foo&defType=dismax&qf=title&bq={!dismax qf='title desc' v=$q}
:
: causes
Hi Otis,
I know it is coming from Tomcat and was curious if anyone had the same problem before. As for details, that is the only thing I got in the logs as an error. I can put more details if you tell me what exactly you want to see. I am confused and don't know what else I can put other than
I have the luxury of JMS in my environment, so that may be a simple way to
solve this...
Sent from my iPhone
On Oct 13, 2011, at 4:02 PM, Robert Stewart bstewart...@gmail.com wrote:
Yes that is a good point. Thanks.
I think I will avoid using NAS/SAN and use two masters, one setup as a
We recently updated our Solr and Solr indexing from DIH using Solr 1.4 to our
own Hadoop import using SolrJ and Solr 3.4.
While everything seems to be working, we seem to have one stumper of a problem. Any document that has a string field value with a carriage return (\r) is having that
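One way to sidestep carriage-return trouble on the SolrJ side is to normalize line endings before the value ever goes into a SolrInputDocument. A minimal sketch, assuming the \r characters come in from Windows-style line endings in the source data (the class and method names are hypothetical):

```java
public class FieldNormalizer {

    /**
     * Remove carriage returns from a field value before it is added to a
     * SolrInputDocument, leaving ordinary \n line feeds intact.
     */
    public static String stripCarriageReturns(String value) {
        return value == null ? null : value.replace("\r", "");
    }
}
```

Calling this on each string field in the Hadoop import job keeps the indexed values free of \r without touching genuine newlines.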
Hello,
I'm working on Solr 1.4 with around 10 million documents. Usually it's fine. However, the issue arises when I add a new field to schema.xml: I need to reindex the whole database for that new field. Indexing the whole database with all the properties takes so long to do. It would be
Hi Chhorn,
There is currently no better way - you need to update/re-add the whole document.
But Solr 1.4 is rather old. If you get Solr 3.4 your indexing speed will go up
noticeably!
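Since neither Solr 1.4 nor 3.4 supports partial-document updates, adding a new field means re-sending each document with all of its fields. A sketch of that full re-add through the XML update handler — the URL, id, and field values below are placeholders, not values from the thread:

```
curl 'http://localhost:8983/solr/update?commit=true' \
     -H 'Content-Type: text/xml' \
     --data-binary '<add><doc>
       <field name="id">doc-1</field>
       <field name="title">Existing title</field>
       <field name="new_field">value for the new field</field>
     </doc></add>'
```

Because the update replaces the stored document wholesale, omitting any existing field here would silently drop it from the index.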
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search ::
Thanks for the response but I have seen this page and I had a few
questions.
1. Since I am using Tomcat, I had to move the example directory into the Tomcat directory structure. In the multicore setup, there is no example.xsl. Where do I need to put it? Also, how do I send docs for indexing when