Thanks Marc.
On May 4, 2012, at 8:52 PM, Marc Sturlese wrote:
http://lucene.472066.n3.nabble.com/Multiple-Facet-Dates-td495480.html
Hi.
I would like to be able to do a facet on a date field, but with different
ranges (in a single query).
for example, I would like to show:
#documents by day for the last week -
#documents by week for the last couple of months
#documents by year for the last several years.
is there a way to
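One way people have approached the three-granularity request above (a sketch, assuming a Solr version that honors per-facet local-param overrides on `facet.range`, and a hypothetical `timestamp` field) is to repeat the same field with a `{!key=...}` local param per range:

```python
from urllib.parse import urlencode

def multi_range_facet_params():
    """Build query params faceting one date field at three granularities.

    The field name "timestamp" and the local-param overrides are
    assumptions; adjust to your schema and Solr version.
    """
    params = [
        ("q", "*:*"),
        ("rows", "0"),
        ("facet", "true"),
        # last week, by day
        ("facet.range", "{!key=by_day facet.range.start=NOW/DAY-7DAYS "
                        "facet.range.end=NOW facet.range.gap=+1DAY}timestamp"),
        # last two months, by week
        ("facet.range", "{!key=by_week facet.range.start=NOW/DAY-60DAYS "
                        "facet.range.end=NOW facet.range.gap=+7DAYS}timestamp"),
        # last five years, by year
        ("facet.range", "{!key=by_year facet.range.start=NOW/YEAR-5YEARS "
                        "facet.range.end=NOW facet.range.gap=+1YEAR}timestamp"),
    ]
    return urlencode(params)
```

Each range then comes back in the response under its own key (by_day, by_week, by_year) instead of colliding on the field name.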
Hi.
I want to store a list of documents (say each being 30-60k of text) into a
single SolrDocument. (to speed up post-retrieval querying)
In order to do this, I need to know if lucene calculates the TF/IDF score over
the entire field or does it treat each value in the list as a unique field?
between a multi-valued field and storing all the data in a
single field
as far as relevance calculations are concerned.
so.. it will suck regardless.. I thought we had per-field relevance in the
current trunk. :-(
Best
Erick
On Tue, May 31, 2011 at 11:02 AM, Ian Holsman had...@holsman.net
, May 31, 2011 at 12:16 PM, Ian Holsman had...@holsman.net wrote:
On May 31, 2011, at 12:11 PM, Erick Erickson wrote:
Can you explain the use-case a bit more here? Especially the post-query
processing and how you expect the multiple documents to help here.
we have a collection of related
I just saw this on twitter, and thought you guys would be interested.. I
haven't tried it, but it looks interesting.
http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin
Thanks for the RT Shalin!
On 2/24/10 8:42 AM, Grant Ingersoll wrote:
What would it be?
most of this will be coming in 1.5,
but for me it's
- sharding.. it still seems a bit clunky
secondly.. this one isn't in 1.5.
I'd like to be able to find interesting terms that appear in my result
set that don't appear in the
On 1/5/10 12:46 AM, Shalin Shekhar Mangar wrote:
(sitename:XYZ OR sitename:All Sites) AND (localeid:1237400589415) AND
((assettype:Gallery)) AND (rbcategory:ABC XYZ) AND (startdate:[* TO
2009-12-07T23:59:00Z] AND enddate:[2009-12-07T00:00:00Z TO
*])&rows=9&start=63&sort=date
On 12/18/09 2:46 AM, Siddhant Goel wrote:
Let's say we have a search engine (a simple front end - web app kind of a
thing - responsible for querying Solr and then displaying the results in a
human readable form) based on Solr. If a user searches for something, gets
quite a few search results, and
Brian Klippel wrote:
Nope, chrome treats xml as html. Either view source or use another
browser.
I always thought the XML output should contain an XSLT reference in it by default.
that way I could debug with Safari (and Chrome).
-Original Message-
From: Jason Rutherglen
Asif Rahman wrote:
Hi Grant,
I'll give a real life example of the problem that we are trying to solve.
We index a large number of current news articles on a continuing basis. We
tag these articles with news topics (e.g. Barack Obama, Iran, etc.). We
then use these tags to facet our queries.
hi guys.
I've noticed that one of the new features in Solr 1.4 is the TermsComponent,
which enables autosuggest.
but what puzzles me is how to actually use it in an application.
most autosuggests are case insensitive, so there is no difference if I type
in 'San Francisco' or 'san francisco'.
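A common workaround (a sketch, not the only option): index a lowercased copyField and lowercase the user's input client-side before building the TermsComponent request. The field name `suggest_lc` is a made-up example:

```python
from urllib.parse import urlencode

def terms_params(user_input, field="suggest_lc"):
    """Build TermsComponent params for case-insensitive prefix matching.

    Assumes the index has a lowercased copy of the suggest field
    (here called "suggest_lc", a hypothetical name).
    """
    return urlencode([
        ("terms", "true"),
        ("terms.fl", field),
        ("terms.prefix", user_input.lower()),  # normalize case client-side
        ("terms.limit", "10"),
    ])
```

With this, `terms_params("San Francisco")` and `terms_params("san francisco")` produce identical requests, so the two spellings suggest the same terms.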
hi.
I don't think this is a FAQ, but it's been bugging me for a while.
I want to store key/value pairs in a single field. for example
<field name="tags" type="keyval" indexed="true" stored="true"
multiValued="true"/>
where keyval would be a ID# and the value.
I'm guessing it is as simple as creating
about stuffing 2 fields into the same field irks me, that's all.
I've got them set up as 2 separate MV fields at the moment.
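For the key/value idea above, a minimal sketch of the pack/unpack step the application would do around a single multiValued string field (the `|` separator is an arbitrary choice; pick one that cannot appear in your data):

```python
SEP = "|"  # arbitrary separator; must never occur in ids or values

def pack_tag(tag_id, value):
    """Encode one id/value pair into a single stored token."""
    return f"{tag_id}{SEP}{value}"

def unpack_tag(token):
    """Split a stored token back into (id, value)."""
    tag_id, _, value = token.partition(SEP)
    return tag_id, value
```

Solr just sees opaque strings; the pairing is entirely an application convention, which is the usual trade-off of stuffing two fields into one.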
On Mon, Jan 12, 2009 at 5:36 AM, Ian Holsman li...@holsman.net wrote:
hi.
I don't think this is a FAQ, but it's been bugging me for a while.
I want to store key/value
There was a patch by Sean Timm you should investigate as well.
It limited a query so it would take a maximum of X seconds to execute,
and would just return the rows it had found in that time.
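If I recall correctly, that patch eventually landed as the `timeAllowed` request parameter (milliseconds), which returns whatever rows were found when the budget runs out. A sketch, with the cutoff value as an arbitrary example:

```python
from urllib.parse import urlencode

def time_limited_query(q, max_millis=5000):
    """Build params asking Solr to stop after max_millis milliseconds.

    When the limit trips, Solr returns the (possibly partial) results
    collected so far rather than failing the request.
    """
    return urlencode([("q", q), ("timeAllowed", str(max_millis))])
```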
Feak, Todd wrote:
I see value in this in the form of protecting the client from itself.
For
if that's the case, putting Apache in front of it would be handy.
something like

<Limit POST>
  Order deny,allow
  Deny from all
  Allow from 192.168.0.1
</Limit>

might be helpful.
Sean Timm wrote:
I believe the Solr replication scripts require POSTing a commit to
read in the new index--so at least
the picture.
Getting warmer!
Erik
On Nov 17, 2008, at 4:11 PM, Ian Holsman wrote:
if that's the case, putting Apache in front of it would be handy.
something like

<Limit POST>
  Order deny,allow
  Deny from all
  Allow from 192.168.0.1
</Limit>

might be helpful.
Sean Timm wrote:
I believe the Solr
Erik Hatcher wrote:
I'm pondering the viability of running Solr as effectively a UI
server... what I mean by that is having a public facing browser-based
application hitting a Solr backend directly for JSON, XML, etc data.
I know folks are doing this (I won't name names, in case this thread
Erik Hatcher wrote:
On Nov 16, 2008, at 5:41 PM, Ian Holsman wrote:
First thing I would look at is disabling write access, or writing a
servlet that sits on top of the write handler to filter your data.
We can turn off all the update handlers, but how does that affect
replication? Can
Ryan McKinley wrote:
not sure if it is something we can do better or part of HttpClient...
From:
http://www.nabble.com/CLOSE_WAIT-td19959428.html
it seems to suggest you may want to call:
con.closeIdleConnections(0L);
But if you are creating a new MultiThreadedHttpConnectionManager for
each
Hi guys.
I'm running a little upload project that uploads documents into a solr
index. there is also a 2nd thread that runs a delete-by query and an
optimize every once in a while.
in an effort to reduce the probability of things being held onto I've made
everything local, but it is still
Noble Paul നോബിള് नोब्ळ् wrote:
If you have an immediate need, I must advise you against waiting for
the Solr 1.3 release. The best strategy would be to take a nightly and
start using it. Test it thoroughly and if bugs are found, report them
back. If everything is
The current scripts use rsync to minimize the amount of data actually
being copied.
I've had a brief look and found only 1 implementation which is GPL and
abandoned
http://sourceforge.net/projects/jarsync.
Personally I still think the size of the transfer is important (as for
most use cases
Hi Thijs.
If you are not concerned with an *EXACT* number, there is a paper that was
published in 1990 that discusses this problem.
http://dblab.kaist.ac.kr/Publication/pdf/ACM90_TODS_v15n2.pdf
from the paper (If I understand it correctly)
For 120,000,000 records you can sample 10,112,529
Clay Webster wrote:
There seem to be a few other players in this space too.
Are you from Rackspace?
(http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data)
AOL also has a Hadoop/Solr project going on.
CNET does not have much brewing there. Although
the solution that works for me is to store the field in reverse order,
and have your application reverse the field in the query.
so the field www.example.com would be stored as
moc.elpmaxe.www
so now I can do a search for *.example.com in my application.
Regards
Ian
(hat tip to erik for the
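The reversal trick above is easy to get wrong by hand, so a sketch of the two helpers the application would need:

```python
def reverse_for_index(domain):
    """Reverse a hostname at index time so a trailing-wildcard query
    can stand in for a leading-wildcard one."""
    return domain[::-1]

def reverse_wildcard_query(pattern):
    """Turn a leading-wildcard pattern like '*.example.com' into the
    trailing-wildcard form the reversed field supports."""
    return pattern[::-1]
```

So `*.example.com` becomes the prefix query `moc.elpmaxe.*`, which Lucene handles far more cheaply than a leading wildcard.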
Hi.
I'm in the middle of bringing up a new solr server and am using the
trunk. (where I was using an earlier nightly release of about 2-3 weeks
ago on my old server)
now, when I do a search for 日本 (japan) it used to show the kanji in
the q area, but now it shows gibberish (æ¥æ¬) instead
Thanks.. I'll do that
sunrise1984 wrote:
Maybe the following is useful for you. (It comes from
http://wiki.apache.org/solr/SolrTomcat)
If you are going to query Solr using international characters (>127) using
HTTP-GET, you must configure Tomcat to conform to the URI standard by
accepting
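The SolrTomcat wiki page referenced above boils down to setting URIEncoding on the HTTP connector in server.xml, roughly:

```xml
<!-- server.xml: make Tomcat decode GET parameters as UTF-8 -->
<Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />
```

(Port and protocol here are illustrative; only the URIEncoding attribute matters for the mojibake problem.)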
Hi.
I was wondering if there was an easy way to give solr a list of things
and finding out which have entries.
ie I pass it a list
Bill Clinton
George Bush
Mary Papas
(and possibly 20 others)
to a solr index which contains news articles about presidents. I would
like a response saying
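One way to ask that in a single request (a sketch, assuming a `text` field holding the article body): add one `facet.query` per name, so the counts come back keyed by name and a zero means "no entries":

```python
from urllib.parse import urlencode

def presence_check_params(names, field="text"):
    """Build one Solr request whose facet counts say which names match.

    A count of 0 for a name means no documents mention it. The field
    name "text" is an assumption; adjust to your schema.
    """
    params = [("q", "*:*"), ("rows", "0"), ("facet", "true")]
    for name in names:
        # phrase query per name, counted independently by faceting
        params.append(("facet.query", f'{field}:"{name}"'))
    return urlencode(params)
```

For twenty-odd names this stays one round trip, rather than one query per name.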
Yonik Seeley wrote:
On 10/3/07, Ian Holsman [EMAIL PROTECTED] wrote:
Hi.
I was wondering if there was an easy way to give solr a list of things
and finding out which have entries.
ie I pass it a list
Bill Clinton
George Bush
Mary Papas
(and possibly 20 others)
to a solr index which
Have you guys seen Local Lucene ?
http://www.nsshutdown.com/projects/lucene/whitepaper/locallucene.htm
no need for mysql if you don't want too.
rgrds
Ian
Will Johnson wrote:
With the new/improved value source functions it should be pretty easy to
develop a new best practice. You should be
[moving this thread to solr-user, as it really has nothing to do with
hadoop]
Daniel Clark wrote:
There's info on the website
http://blog.foofactory.fi/2007/02/online-indexing-integrating-nutch-with.html,
but it's not clear.
Sami has a patch in there which used a older version of the solr
Thanks Brian.
I'm sure this will help lots of people.
Brian Whitman wrote:
But we still use a version of Sami's patch that works on both trunk
nutch and trunk solr (solrj.) I sent my changes to sami when we did
it, if you need it let me know...
I put my files up here:
Hi.
I've been playing with Kettle (http://kettle.pentaho.org/ ) as a method
to inject data into Solr (and other things at the same time), and it
looks really promising.
I was wondering if anyone else had some experience using it with Solr
and if they set it up to add a document at a time,
Hi.
For a project I'm working on, I'm getting an RDF-formatted feed.
I was wondering if someone has built an RDF-to-Solr upload function
similar to the CSV and MySQL ones sitting in Jira.
regards
Ian
Walter Underwood wrote:
This is for monitoring -- what happened in the last 30 seconds.
Log file analysis doesn't really do that.
I would respectfully disagree.
Log file analysis of each request can give you that, and a whole lot more.
you could either grab the stats via a regular cron
hi.
so I finally managed to find a bit of time to get a Solr instance
going, and now have some questions about it ;-)
first the application is tagging. ie.. to associate some keywords
with a given item, and to show them on a particular object (you can
see this in action here
I think I could get some python bindings off those as well.
and if people feel there is a need some C/APR ones as well.
On 02/06/2006, at 11:16 AM, Brian Lucas wrote:
Erik,
I'll get the PHP bindings out to see how they suit the needs of
people and
use that feedback for the Rails bindings.