On Feb 21, 2007, at 9:29 PM, Jack L wrote:
Thanks Chris and Eric for the replies. Very helpful.
> no, each instance manages a single schema and a single data index -- but
> that schema can allow for various different types of documents that don't
> need to have anything in common.
Does this mean that as long as I have the schema for all do
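For illustration, one way to read "don't need to have anything in common" is that a single schema can simply declare the union of the fields the different document types use, and each document only supplies the fields that apply to it. A rough sketch, with field names and types that are only examples (the "string", "text" and "sfloat" types are the ones defined in the example schema):

  <!-- one schema covering two unrelated kinds of documents -->
  <fields>
    <field name="id"    type="string" indexed="true" stored="true"/>
    <field name="type"  type="string" indexed="true" stored="true"/>
    <!-- used only by "article" documents -->
    <field name="title" type="text"   indexed="true" stored="true"/>
    <field name="body"  type="text"   indexed="true" stored="true"/>
    <!-- used only by "product" documents -->
    <field name="sku"   type="string" indexed="true" stored="true"/>
    <field name="price" type="sfloat" indexed="true" stored="true"/>
  </fields>
  <uniqueKey>id</uniqueKey>

Documents of one type just omit the other type's fields; nothing forces them to share anything beyond the unique key.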
On Feb 21, 2007, at 4:25 PM, Jack L wrote:
couple of times today at around 158 documents / sec.
This is not bad at all. How about search performance?
How many concurrent queries have people been having?
What does the response time look like?
I'm the only user :) What I've done is a proof-o
Looks like it was actually an error with SOLR-133 not handling CDATA
properly. I fixed it and updated the patch.
At least SOLR-20 isn't to blame!
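For reference, "CDATA" here means a field value wrapped in a CDATA section in the XML update message, roughly like the following (field names are only examples):

  <add>
    <doc>
      <field name="id">doc1</field>
      <field name="body"><![CDATA[text that may contain <markup> & raw entities]]></field>
    </doc>
  </add>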
On 2/21/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
On Feb 21, 2007, at 5:10 PM, Yonik Seeley wrote:
>
> So far so good for me.
> I started with e
On Feb 21, 2007, at 4:37 PM, Jack L wrote:
2. For each index, do I need to copy this directory and start
a solr instance? Is it possible to run one solr instance
for multiple indices?
To go a bit further on this than Hoss mentioned... you can share a common
configuration among multiple Solr instances wi
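As a rough sketch of what that can look like with the bundled Jetty example (directory names and the shared-conf idea are illustrative, not the only way to set it up):

  cp -r example myindex            # second instance, with its own solr home and index
  rm -rf myindex/solr/data         # let it build a fresh index
  # edit myindex/solr/conf/schema.xml for the new data (or symlink conf/ to a
  # shared copy), give the second servlet container a different port in its
  # etc/jetty.xml, then start both:
  (cd example && java -jar start.jar) &
  (cd myindex && java -jar start.jar) &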
: 1. I suppose I can start from copying the whole example
: directory and name it myindex. I understand that I need
: to modify the solr/conf/schema.xml to suit my data. Besides
: that, is there anything else that I must/should change?
: I'll take a look at the stopwords.txt, etc. to see if any
:
On Feb 21, 2007, at 5:10 PM, Yonik Seeley wrote:
So far so good for me.
I started with example/exampledocs/solr.xml and added an additional
field value for "features" of size 500K.
It starts with "this is the first line", then repeats the ASL over and
over, then ends with "this is the last line".
On 2/21/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
> Ouch... sounds serious (assuming you aren't talking about
> highlighting).
> Could you open a JIRA issue and describe or attach a test that can
> reproduce it?
> I'll try to reproduce this myself in the meantime.
So far so good for me.
I st
Ouch... sounds serious (assuming you aren't talking about
highlighting).
Could you open a JIRA issue and describe or attach a test that can
reproduce it?
I'll try to reproduce this myself in the meantime.
Not highlighting, no. I'll try to make a test case. I am using the
SOLR-20 client to
On 2/21/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
I am sending Solr stored fields of sizes in the 10-50K range. My
maxFieldLength is 5, and the field in question is a
solr.TextField. I am finding that fields that have more than a few K
of text come back "clipped": if I try to index the fie
On 2/21/07, Jack L <[EMAIL PROTECTED]> wrote:
> Thanks to the others that clarified. I run my indexers in
> parallel... but a single instance of Solr (which in turn handles
> requests in parallel as well).
Do you think multi-threaded posting is helpful?
I suppose when solr does indexing, it'
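For what it's worth, "multi-threaded posting" here just means several clients sending update requests concurrently to the same Solr instance, e.g. something like the sketch below (the batch directories are made up; post.sh is the script in example/exampledocs):

  cd example/exampledocs
  ./post.sh batch1/*.xml &
  ./post.sh batch2/*.xml &
  ./post.sh batch3/*.xml &
  wait    # all three posters run against the one Solr instance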
I am sending Solr stored fields of sizes in the 10-50K range. My
maxFieldLength is 5, and the field in question is a
solr.TextField. I am finding that fields that have more than a few K
of text come back "clipped": if I try to index the field with 40K of
text, the search result will sho
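As an aside, maxFieldLength (set in solrconfig.xml, roughly as below in the example config) caps how many tokens get indexed per field; it does not truncate the stored value returned in search results, so clipping of stored fields points at something else (as found elsewhere in the thread, a CDATA-handling bug).

  <!-- in solrconfig.xml; 10000 is the example default -->
  <indexDefaults>
    <!-- ... other index settings ... -->
    <maxFieldLength>10000</maxFieldLength>
  </indexDefaults>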
I have played with the "example" directory for a while.
Everything seems to work well. Now I'd like to start my own
index and I have a few questions.
1. I suppose I can start from copying the whole example
directory and name it myindex. I understand that I need
to modify the solr/conf/schema.xml
Thanks to all who replied.
> my number 1000 was per minute, not second!
I can't read! :-p
> couple of times today at around 158 documents / sec.
This is not bad at all. How about search performance?
How many concurrent queries have people been having?
What does the response time look like?
>
Andreas Hochsteger <[EMAIL PROTECTED]> wrote:
The naive approach would be to perform separate Solr queries for each
category, take the top 10 and aggregate the results.
This works, but it's really slow, since there may be up to 40
categories on one page.
I also have the same question. As far as I
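Concretely, the naive approach amounts to one filtered query per category and stitching the responses together, something like (host, field and category names are only examples):

  http://localhost:8983/solr/select?q=your+query&fq=category:cat1&rows=10
  http://localhost:8983/solr/select?q=your+query&fq=category:cat2&rows=10
  ...and so on, one request per category shown on the page.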
Hi,
is it possible to do a faceted search and additionally return the
first 10 results for each facet (or category) in one query?
I have the requirement to produce a page like this:
Category 1:
- Link 1
- Link 2
- ...
- Link 10
Category 2:
- Link 1
- Link 2
- ...
- Link 10
...
Category n: