Thanks for the reply
Your thoughts match what I was initially thinking. But, after some more
consideration, I imagined a system that would take all the docs
returned for a given facet and compute an average score based on
their scores from the original search that produced the facets.
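A minimal sketch of that averaging idea in Python. The doc structure and field names here are illustrative assumptions, not Solr's actual response format:

```python
def average_score_per_facet(docs, facet_field):
    """Average the original search scores of the docs under each facet value.

    `docs` is a list of dicts like {"score": 2.0, "category": "dogs"};
    both the structure and the field names are assumptions for illustration.
    """
    totals = {}  # facet value -> (sum of scores, count)
    for doc in docs:
        value = doc.get(facet_field)
        if value is None:
            continue
        total, count = totals.get(value, (0.0, 0))
        totals[value] = (total + doc["score"], count + 1)
    return {value: total / count for value, (total, count) in totals.items()}

docs = [
    {"score": 2.0, "category": "dogs"},
    {"score": 1.0, "category": "dogs"},
    {"score": 3.0, "category": "cats"},
]
print(average_score_per_facet(docs, "category"))  # {'dogs': 1.5, 'cats': 3.0}
```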
Thanks for the feedback. Will read up on upgrading. I actually went
with the trunk, not a nightly.
When you say "test"... Are you suggesting there is a test suite I
should run, or do I just do my own testing?
thanks
gene
On Fri, Apr 17, 2009 at 7:26 PM, Shalin Shekhar Mangar
wrote:
> On Fri,
That's excellent. Thanks for the reply.
gene
On Tue, Sep 23, 2008 at 6:39 AM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
>
> : I haven't heard of or found a way to find the number of times a term
> : is found on a page.
> : Lucene uses it in scoring, I believe, (solr scoring:
> http://tinyurl
I decided to store the word X number of times when indexing the doc:

times = 5
value = times * "dog "  # "dog dog dog dog dog " gets indexed

Of course, times is specific to each doc.
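That repetition trick can be wrapped in a small helper. This is a sketch; the field name and doc structure are assumptions, and index-time or query-time boosts are usually a cleaner way to influence scoring if they fit the use case:

```python
def boost_field(term, times):
    """Repeat `term` so its term frequency in the indexed field equals `times`.

    A crude way to inflate tf-based scoring for a single term; `times`
    would be chosen per document.
    """
    return (term + " ") * times

doc = {"id": "doc-1", "boosted_text": boost_field("dog", 5)}
print(doc["boosted_text"])  # "dog dog dog dog dog "
```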
thanks for the help and advice Otis!!
cheers
gene
On Thu, Sep 18, 2008 at 4:27 AM, Otis Gospodnetic
<[EMAIL PRO
OK thanks Otis. Any gut feeling on the best approach to get this
collapsed data? I hate to ask you to do my homework, but I'm coming
to the
end of my Solr/Lucene knowledge. I don't code Java too well - used
to, but switched to Python a while back.
gene
On Wed, Sep 17, 2008 at 12:47 PM, Otis
I was pretty sure you'd say that. But it means a lot that you took the
time to confirm it. Thanks Otis.
I don't want to give details, but we crawl for our data, and we don't
save it in a DB or on disk. It goes from download to index. Was a
good idea at the time; when we thought our designs were
Thanks for the reply Erik
Sorry for being vague. To be clear, we have 1-2 million records and
roughly 12,000-14,000 groups.
Each record is in one and only one group.
I see it working something like this:
1. Identify all records that would match search terms. (Suppose I
search for 'dog', and get 45
ROTECTED]> wrote:
> On Tue, 19 Aug 2008 10:18:12 +1200
> "Gene Campbell" <[EMAIL PROTECTED]> wrote:
>
>> Is this interpreted as meaning, there are 10 documents that will match
>> with 'car' in the title, and likewise 6 'boat' and 2 'bike&
I have to check that I understand this right.
If I have the following response from a search like this
http://&facet.field=title&facet.limit=-1&facet.mincount=1

    car: 10
    boat: 6
    bike: 2
Is this interpreted as meaning, there are 10 documents that will match
with 'car' in the
terms which are present (frequency >= 1) in your results.
>
> On Mon, Aug 18, 2008 at 10:58 AM, Gene Campbell <[EMAIL PROTECTED]> wrote:
>
>> OK, more testing seems to say that if I do mincount=1, I only get
>> facet field values that are actually in the docu
p;wt=python&indent=on&facet=true&facet.field=title&facet.limit=-1&facet.sort=true&facet.mincount=1
assuming title is a facetable field.
Please correct me if I'm on the wrong track.
cheers
gene
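Since the query above asks for wt=python, Solr returns a Python-literal body that can be parsed with ast.literal_eval. A hedged sketch, run here against a hand-written sample response rather than a live server (the host, field, and counts are illustrative):

```python
import ast

def facet_counts(response_text, field):
    """Parse a Solr wt=python response and return {term: count} for `field`.

    Solr's facet_fields format is a flat [term, count, term, count, ...] list.
    """
    response = ast.literal_eval(response_text)
    flat = response["facet_counts"]["facet_fields"][field]
    return dict(zip(flat[0::2], flat[1::2]))

# A trimmed, hand-written example response (not captured from a real server).
sample = """{
  'responseHeader': {'status': 0},
  'facet_counts': {'facet_fields': {'title': ['car', 10, 'boat', 6, 'bike', 2]}}
}"""
print(facet_counts(sample, "title"))  # {'car': 10, 'boat': 6, 'bike': 2}
```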
On Mon, Aug 18, 2008 at 5:10 PM, Gene Campbell <[EMAIL PROTECTED]&g
I'm still learning how to use facets with Solr correctly. It seems
that you get facet counts computed over all docs in your index.
For example, I tried this on a local index I've built up for testing.
This index has urls uniquely indexed, so no two docs
have the same url value.
http://localhost:8
leFacetParameters
> http://wiki.apache.org/solr/SolrFacetingOverview
>
> It will give you words with their frequencies for the fields you select.
> However, it will give you all the facets (tags) and your front-end must do
> the filtering with the master list.
>
> On Sun, Au
Hello Solrites,
I'm somewhat new to Solr and Lucene. I would like to build a tag
cloud based on a filtered set of words from documents. I have a
master list of approved tags. So, what I need from each document is
the list of words and frequencies such that the words appear in the
master list (
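That filtering could also be done client-side once per-document term frequencies are in hand. A minimal Python sketch; the term/frequency dicts and the tag list are illustrative assumptions (in practice they might come from Solr facet or term-vector output):

```python
def tag_cloud_counts(doc_term_freqs, approved_tags):
    """Sum per-document term frequencies, keeping only approved tags.

    `doc_term_freqs` is a list of {term: frequency} dicts, one per document.
    """
    approved = set(approved_tags)
    cloud = {}
    for freqs in doc_term_freqs:
        for term, count in freqs.items():
            if term in approved:
                cloud[term] = cloud.get(term, 0) + count
    return cloud

docs = [{"solr": 3, "the": 40, "lucene": 2}, {"solr": 1, "python": 5}]
print(tag_cloud_counts(docs, ["solr", "lucene", "python"]))
# {'solr': 4, 'lucene': 2, 'python': 5}
```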