Re: Sorting performance

2008-10-21 Thread Chris Hostetter

: The problem is that I will have hundreds of users doing queries, and a
: continuous flow of documents coming in.
: So a delay in warming up a cache "could" be acceptable if I do it a few times
: per day. But not on a too regular basis (right now, the first query that loads
: the cache takes 150s).
: 
: However: I'm not sure why it looks not to be a good idea to update the caches

you can refresh the caches automatically after updating; the "newSearcher" 
event is fired whenever a searcher is opened (but before it's used by 
clients) so you can configure warming queries for it -- it doesn't have to 
be done manually (or by the first user to use that reader)
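For reference, a newSearcher warming hook of the kind Hoss describes is configured in solrconfig.xml roughly like this (the queries and sort field below are made-up placeholders -- use whatever best pre-loads your caches):

```xml
<!-- solrconfig.xml: fire warming queries whenever a new searcher is opened -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- placeholder warming queries -->
    <lst><str name="q">solr</str><str name="start">0</str><str name="rows">10</str></lst>
    <lst><str name="q">*:*</str><str name="sort">price asc</str></lst>
  </arr>
</listener>
```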

so you can send your updates anytime you want, and as long as you only 
commit every 5 minutes (or commit on a master as often as you want, but 
only run snappuller/snapinstaller on your slaves every 5 minutes) your 
results will be at most 5 minutes + warming time stale.
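As an illustration, the five-minute slave refresh could be driven by a crontab entry along these lines (the script paths are assumptions -- adjust to wherever your Solr distribution scripts live):

```
# on each slave: pull the latest snapshot and install it every 5 minutes
*/5 * * * * /opt/solr/bin/snappuller && /opt/solr/bin/snapinstaller
```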


-Hoss



Re: Out of Memory Errors

2008-10-21 Thread Mark Miller
How much RAM in the box total? How many sort fields and what types? 
Sorts on each core?


Willie Wong wrote:

Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.


Solr Setup
- 3 cores 
- 3163615 documents each

- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used; however, the search query can be fairly complex, with 
8 or more fields being searched on at once


Environment:
- windows 2003
- 2.8 GHz Xeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few concurrent users, OOMs start occurring and Jetty 
restarts.  Would this just be a case of needing more memory, or are there certain 
configuration settings that need to be set?  We're using an out-of-the-box 
Solr 1.3 beta version. 


A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc


Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.



Regards,

WW

  




Out of Memory Errors

2008-10-21 Thread Willie Wong
Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.

Solr Setup
- 3 cores 
- 3163615 documents each
- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used; however, the search query can be fairly complex, with 
8 or more fields being searched on at once

Environment:
- windows 2003
- 2.8 GHz Xeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few concurrent users, OOMs start occurring and Jetty 
restarts.  Would this just be a case of needing more memory, or are there certain 
configuration settings that need to be set?  We're using an out-of-the-box 
Solr 1.3 beta version. 

A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc
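The cache settings mentioned above live in solrconfig.xml; a trimmed-down configuration might look like the following (the sizes are illustrative placeholders, not recommendations):

```xml
<!-- solrconfig.xml: smaller caches reduce heap pressure at the cost of hit rate -->
<query>
  <filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
  <documentCache    class="solr.LRUCache" size="512" initialSize="512"/>
  <!-- cache full documents only for small result windows -->
  <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
</query>
```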

Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.


Regards,

WW


Understanding prefix query searching

2008-10-21 Thread Rupert Fiasco
So I tried to look on google for an answer to this before I posted
here. Basically I am trying to understand how prefix searching works.

I have a dynamic text field (indexed and stored) "full_name_t"

I have some data in my index, specifically a record with full_name_t =
"Robert P Page"

A search on:

full_name_t:Robert

yields that document, however a search on

full_name_t:Robert*

yields nothing.

Why?

To get around this I am doing something like

(full_name_t:Robert OR full_name_t:Robert*)

But I would like to understand why the wildcard doesn't work; shouldn't
it match anything after the first characters of "Robert"?

Thanks

-Rupert


Ocean realtime search + Solr

2008-10-21 Thread Jon Baer

Hi,

I'm pretty intrigued by the Ocean search stuff and the Lucene patch. I'm 
wondering if it's something that a tweaked Solr w/ a modified Lucene can run 
now?  Has anyone tried merging that patch and running w/ Solr?  I'm 
sure there is more to it than just swapping out the libs, but the real-time 
indexing I'm sure would be possible, no?


Thanks.

- Jon


Re: Using Solr with an existing Lucene index

2008-10-21 Thread Chris Hostetter

: I want to use Solr to provide a web-based read-only view of this index. 
: I require that the index not be locked in any way while Solr is using it 
: (so the Java program can continue updating it) and that Solr is able to 
: see new documents added to the index, if not immediately, at least a 
: couple of times a day.

This should be possible.  It's essentially what happens on a "slave" 
instance of Solr where replication updates the index.  You notify Solr 
with a "commit" to tell it to open the new index after you've made changes 
with your existing application.

: My first attempt to do this resulted in my Java program throwing a 
: CorruptIndex exception. It appears as though Solr has somehow modified 

I'm not sure why this might have happened if you didn't attempt any 
updates with Solr, but one thing to check is whether you are using 
<lockType>single</lockType> in your solrconfig.xml.

That would make Solr use an in-memory lock, and it wouldn't notice that 
your application already had a lock for updating the index if it did try 
to make a change.  You'll want to use <lockType>simple</lockType> (and 
make sure your app doesn't override the location of the lock file -- it 
needs to use the default lock file name in the index directory)
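In Solr 1.3 the lock type is set in the <indexDefaults> section of solrconfig.xml; a sketch of the "simple" setting Hoss recommends:

```xml
<!-- solrconfig.xml: use the filesystem-based "simple" lock so external
     writers and Solr can see each other's locks -->
<indexDefaults>
  <lockType>simple</lockType>
</indexDefaults>
```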



-Hoss



RE: Best way to prevent max warmers error

2008-10-21 Thread sundar shankar
Thanks for the reply Hoss.
As far as our application goes, commits and reads are done to the index during 
normal business hours. However, we observed the max warmers error happening 
during a nightly job whose only operation is 4 parallel threads committing data 
to the index and finally optimizing it. We increased the number of warmers from 2 to 
4 after seeing that solve the error in QA.

I am not sure what could be wrong here. Like I said, we enabled the cold-searcher 
option too and I still seem to be getting this.

To Answer your Questions
1. We have about 30 searchers at the most and about 6 searchers on an average.
2. The data needs to be visible to the searcher as soon as we can, which we 
make sure by committing data to the index as soon as it is added. Deletes and 
external updates to data (from other applications) are handled by the nightly 
cron.
3. The Solr server has 8 Gig ram and 4Gig VM

My Question to you is, 

when you said opening a new searcher and limiting the searcher, I am not sure I 
get how exactly that can be done via solrj! I would appreciate it if you can help me 
out with that.

Regards
Sundar

P.S.: If my understanding of warmers is right, they are the threads that 
load up the index in memory, which is basically what the searcher accesses, isn't 
it?

> Date: Tue, 21 Oct 2008 16:09:11 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Best way to prevent max warmers error
> 
> : Subject: Best way to prevent max warmers error
> 
> Slightly old thread, but i haven't seen any replies...
> 
> :  We have an application with more than 2.5 million docs currently. It is 
> : hosted on a single box with 8 GIG memory. The number of warmers 
> : configured are 4 and Cold-searcher is allowed too. The application is 
> 
> the current example configs suggest 2 ... i can't honestly think of any 
> good reason to have more than that.
> 
> There is a fairly fundamental trade-off question here. The knobs
> you can adjust are:
>   a) how often new searchers are opened
>   b) how much warming you want to do
>   c) how powerful your hardware is
> 
> if you are getting overlapping searchers, you can either do less warming (and 
> make the first users of those searchers pay an extra cost because of the empty 
> caches) or you can open new searchers less frequently so that the warm 
> time doesn't exceed the frequency, or you can throw hardware at the 
> problem and try to speed things up that way.
> 
> deciding between a, b, and c isn't really a technical question so much as a 
> business question.
> 
> FWIW: There may be a "d) make Solr more efficient" option, and by all 
> means if you find that knob, please let the rest of us know where it is so 
> we can all turn it :)
> 
> 
> -Hoss
> 


Re: Data Folder in Multicore | Solconfig entry not working

2008-10-21 Thread Chris Hostetter

: Admin screen, index document is coming up correctly in the response. But
: when I start locating index (.cfs) file into folders . File is not created
: at all. 
: 
: Solr Config xml entry (core 1) :
: 
: <dataDir>${solr.data.dir:./solr/data}</dataDir> 

unless you are defining a solr.data.dir system property, i believe that 
"./solr/data" directory is getting created in whatever the current working 
directory is for your Servlet Container.  if you eliminate that <dataDir> 
directive solr will create a data directory in the directory you have for 
your core.




-Hoss



Re: Synonym Pattern | Need more details

2008-10-21 Thread Chris Hostetter

: Can somebody explain meaning of different patterns?

Vicky: there is a lot of good information about the SynonymFilterFactory 
on the wiki...

http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#SynonymFilter
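The wiki page also describes the synonyms.txt format the factory consumes; a small sample file looks like this (the entries are purely illustrative):

```
# synonyms.txt: comma-separated lists declare equivalent terms,
# "=>" maps the left-hand variants onto the right-hand replacement
GB, gigabyte, gigabytes
sea biscuit, sea biscit => seabiscuit
```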


-Hoss



Re: date math in bf?

2008-10-21 Thread Chris Hostetter

: Is it possible to do date math in a FunctionQuery?  This doesn't work, but I'm
: looking for something like:
: 
: bf=recip((NOW-updated),1,200,10) when using DisMax to get the elapsed time
: between NOW and when the document was updated (where updated is a Date field).

"Date Math" (as implemented by the DateMathParser) only deals with 
rounding or adding intervals to dates -- not computing differences between 
dates.  The key distinction being that Date Math expressions all evaluate 
to real dates; what you are describing would evaluate to a date interval 
(ie: a number).

I imagine what you are describing could be implemented as a new type of 
ValueSource that took in two date fields and a unit (ie: month, day, year, 
etc...) and returned the difference between those dates as a numeric value 
of those units.  An alternate form could take in a date math expression as 
a string in place of one date field so you could compute the difference 
between a field and a fixed value.

...but that doesn't exist yet.


-Hoss



Re: qf boosting - unique count?

2008-10-21 Thread Chris Hostetter

: &q=test&qf=title^1
: 
: Title = "test, taking test, exam"
: 
: Solr boosts 2 points for 2 occurrences of the word "test" in the title. Is
: there a way to configure solr to just give 1 point, regardless the number of
: occurrences?

the "tf" value of the term "test" is what comes into play here ... if you 
want to change how significant tf is in the scoring function, you need 
to provide a custom Similarity and make its tf() function return a flat 
value.  

There is ongoing work in Jira to add a new "omitTf" option to fields to 
give an easy way to eliminate tf completely (since there are internal 
optimizations that can be made in that case), but you'd have to patch Solr 
to try that out.

BTW: the term "boost" isn't quite right in this context.  A boost is a 
number that comes from the client to affect scores ... there is a "query 
boost" in your example, which is "1" because of the "^1" in the qf param, 
but what you are talking about is just the score.




-Hoss



Re: Deploying Solr with winstone servlet

2008-10-21 Thread Chris Hostetter

: I need to deploy the Solr using winstone servlet engine. Please help me
: how to configure it.

I've never heard of winstone until reading your thread, but according to 
the home page...

http://winstone.sourceforge.net/

...you just specify the name of the war on the command line...

 java -jar target/winstone-0.9.10.jar --warfile=solr.war

You'll need to specify the Solr Home Directory somehow, either using the 
current working directory, or using a system property...

 java -Dsolr.solr.home=/some/dir -jar target/winstone-0.9.10.jar 
--warfile=solr.war

..or using JNDI (apparently winstone has some options for that)

Note however that the winstone webpage explicitly says it's not a 
fully compliant servlet container, so it's very possible some things may 
not work properly.

-Hoss



Re: Best way to prevent max warmers error

2008-10-21 Thread Chris Hostetter
: Subject: Best way to prevent max warmers error

Slightly old thread, but i haven't seen any replies...

:  We have an application with more than 2.5 million docs currently. It is 
: hosted on a single box with 8 GIG memory. The number of warmers 
: configured are 4 and Cold-searcher is allowed too. The application is 

the current example configs suggest 2 ... i can't honestly think of any 
good reason to have more than that.

There is a fairly fundamental trade-off question here. The knobs
you can adjust are:
  a) how often new searchers are opened
  b) how much warming you want to do
  c) how powerful your hardware is

if you are getting overlapping searchers, you can either do less warming (and 
make the first users of those searchers pay an extra cost because of the empty 
caches) or you can open new searchers less frequently so that the warm 
time doesn't exceed the frequency, or you can throw hardware at the 
problem and try to speed things up that way.
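The overlapping-searcher limit itself is the maxWarmingSearchers setting in solrconfig.xml; the value 2 mentioned above comes from the example config:

```xml
<!-- solrconfig.xml: abort a commit's new searcher if this many
     searchers are already warming concurrently -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```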

deciding between a, b, and c isn't really a technical question so much as a 
business question.

FWIW: There may be a "d) make Solr more efficient" option, and by all 
means if you find that knob, please let the rest of us know where it is so 
we can all turn it :)


-Hoss



RE: solr1.3 / tomcat55 / MySql but character_set_client && character_set_connection LATIN1

2008-10-21 Thread Feak, Todd
Any chance this is a MySql server configuration issue?

http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html

-Todd

-Original Message-
From: sunnyfr [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2008 1:09 PM
To: solr-user@lucene.apache.org
Subject: Re: solr1.3 / tomcat55 / MySql but character_set_client &&
character_set_connection LATIN1


Any idea,? What can I do?


sunnyfr wrote:
> 
> Hi,
> 
> How can I do to manage that?
>
> | character_set_client     | latin1                                                |
> | character_set_connection | latin1                                                |
> | character_set_database   | utf8                                                  |
> | character_set_filesystem | binary                                                |
> | character_set_results    | latin1                                                |
> | character_set_server     | utf8                                                  |
> | character_set_system     | utf8                                                  |
> | character_sets_dir       | /usr/local/mysql-5.0.51b-sphinx/share/mysql/charsets/ |
> | collation_connection     | latin1_swedish_ci                                     |
> | collation_database       | utf8_general_ci                                       |
> | collation_server         | utf8_general_ci                                       |
> 
> Thanks a lot,
> 
> 

-- 
View this message in context:
http://www.nabble.com/solr1.3---tomcat55---MySql-but-character_set_clien
tcharacter_set_connection---LATIN1-tp20090455p20098329.html
Sent from the Solr - User mailing list archive at Nabble.com.
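The Connector/J page Todd links describes connection parameters for exactly this; a DataImportHandler dataSource URL forcing UTF-8 on the wire would look roughly like the following (host, database, and credentials are placeholders):

```xml
<!-- DIH data-config.xml: ask Connector/J to transcode to UTF-8 on the connection -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost/mydb?useUnicode=true&amp;characterEncoding=UTF-8"
            user="solr" password="pass"/>
```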




Re: solr1.3 / tomcat55 / MySql but character_set_client && character_set_connection LATIN1

2008-10-21 Thread sunnyfr

Any idea,? What can I do?


sunnyfr wrote:
> 
> Hi,
> 
> How can I do to manage that?
>
> | character_set_client     | latin1                                                |
> | character_set_connection | latin1                                                |
> | character_set_database   | utf8                                                  |
> | character_set_filesystem | binary                                                |
> | character_set_results    | latin1                                                |
> | character_set_server     | utf8                                                  |
> | character_set_system     | utf8                                                  |
> | character_sets_dir       | /usr/local/mysql-5.0.51b-sphinx/share/mysql/charsets/ |
> | collation_connection     | latin1_swedish_ci                                     |
> | collation_database       | utf8_general_ci                                       |
> | collation_server         | utf8_general_ci                                       |
> 
> Thanks a lot,
> 
> 

-- 
View this message in context: 
http://www.nabble.com/solr1.3---tomcat55---MySql-but-character_set_clientcharacter_set_connection---LATIN1-tp20090455p20098329.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: error with delta import

2008-10-21 Thread Steven A Rowe
Wow, I really should read more closely before I respond - I see now, Noble, 
that you were talking about DIH's ability to parse escaped '<'s in attribute 
values, rather than about whether '<' was an acceptable character in attribute 
values.

I should repurpose my remarks to note to Shalin, though, that all (conformant) 
XML parsers have to be able to handle escaped '<'s in attribute values, since 
an XML document with a '<' in an attribute value is not well-formed.

Steve

On 10/21/2008 at 1:10 PM, Steven A Rowe wrote:
> On 10/21/2008 at 12:14 AM, Noble Paul നോബിള്‍ नोब्ळ् wrote:
> > On Tue, Oct 21, 2008 at 12:56 AM, Shalin Shekhar Mangar
> <[EMAIL PROTECTED]> wrote:
> > > Your data-config looks fine except for one thing -- you do not need to
> > > escape the '<' character in an XML attribute. It may be throwing off the
> > > parsing code in DataImportHandler.
> > 
> > not really '<' is fine in attribute
> 
> Noble, I think you're wrong - AFAICT from the XML spec., '<' is *not*
> fine in an attribute value - from
> :
> 
>   [10]  AttValue ::= '"' ([^<&"] | Reference)* '"'
>  |   "'" ([^<&'] | Reference)* "'"
> 
> where an attribute  is:
> 
>   [41] Attribute ::= Name Eq AttValue
> 
> Steve


RE: error with delta import

2008-10-21 Thread Steven A Rowe
On 10/21/2008 at 12:14 AM, Noble Paul നോബിള്‍ नोब्ळ् wrote:
> On Tue, Oct 21, 2008 at 12:56 AM, Shalin Shekhar Mangar <[EMAIL PROTECTED]> 
> wrote:
> > Your data-config looks fine except for one thing -- you do not need to
> > escape the '<' character in an XML attribute. It may be throwing off the
> > parsing code in DataImportHandler.
>
> not really '<' is fine in attribute

Noble, I think you're wrong - AFAICT from the XML spec., '<' is *not* fine in 
an attribute value - from :

  [10]  AttValue ::= '"' ([^<&"] | Reference)* '"' 
 |   "'" ([^<&'] | Reference)* "'"

where an attribute  is:

  [41] Attribute ::= Name Eq AttValue

Steve
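Concretely, the production above means a literal '<' in an attribute value must be written as the entity reference &lt; -- for instance, in a DIH data-config query attribute (the element and attribute values here are just for illustration):

```xml
<!-- not well-formed: raw '<' inside an attribute value -->
<!-- <entity name="item" query="select id from item where id < 100"/> -->

<!-- well-formed: '<' escaped as &lt; -->
<entity name="item" query="select id from item where id &lt; 100"/>
```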


Re: To make sure XML is UTF-8

2008-10-21 Thread sunnyfr

Hi Jeffrey,

How did you manage, with your database connection in latin-1, to get your
information properly in utf-8?  And to manage stemming and everything?

Thanks a lot,

Tiong Jeffrey wrote:
> 
> Hi Ajanta,
> 
> thanks! Since I used PHP, I managed to use the PHP decode function to
> change
> it to UTF-8.
> 
> But just a question: even if we change mysql's default char-set to UTF-8,
> and if the input originally is in another format, the mysql engine won't help to
> convert it to UTF-8, right? I think my question is, what is the use of
> defining the char-set in mysql other than for labeling purposes?
> 
> Thanks!
> 
> Jeffrey
> 
> On 6/13/07, Ajanta Phatak <[EMAIL PROTECTED]> wrote:
>>
>> Hi
>>
>> Not sure if you've had a solution for your problem yet, but I had dealt
>> with a similar issue that is mentioned below and hopefully it'll help
>> you too. Of course, this assumes that your original data is in utf-8
>> format.
>>
>> The default charset encoding for mysql is Latin1 and our display format
>> was utf-8 and that was the problem. These are the steps I performed to
>> get the search data in utf-8 format..
>>
>> Changed the my.cnf as so (though we can avoid this by executing commands
>> on every new connection if we don't want the whole db in utf format):
>>
>> Under: [mysqld] added:
>> # setting default charset to utf-8
>> collation_server=utf8_unicode_ci
>> character_set_server=utf8
>> default-character-set=utf8
>>
>> Under: [client]
>> default-character-set=utf8
>>
>> After changing, restarted mysqld, re-created the db, re-inserted all the
>> data again in the db using my data insert code (java program) and
>> re-created the Solr index. The key is to change the settings for both
>> the mysqld and client sections in my.cnf - the mysqld setting is to make
>> sure that mysql doesn't convert it to latin1 while storing the data and
>> the client setting is to ensure that the data is not converted while
>> accessing - going in or coming out from the server.
>>
>> Ajanta.
>>
>>
>> Tiong Jeffrey wrote:
>> > Ya, you are right! After I changed it to UTF-8 the error is still there... I
>> > looked at the log; this is what appears:
>> >
>> > 127.0.0.1 -  -  [10/06/2007:03:52:06 +] "POST /solr/update
>> > HTTP/1.1" 500
>> > 4022
>> >
>> > I tried to search but couldn't understand what error is this, anybody
>> has
>> > any idea on this?
>> >
>> > Thanks!!!
>> >
>> > On 6/10/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>> >>
>> >> : way during indexing is - "FATAL: Connection error (is Solr running
>> at
>> >> : http://localhost/solr/update
>> >> : ?): java.io.IOException: Server returned HTTP Response code: 500 for
>> >> URL:
>> >> : http://local/solr/update";
>> >> : 4.Although the error code doesnt specify is XML utf-8 code error,
>> >> but I
>> >> did
>> >> : a bit research, and look at the XML file that i have, it doesn't
>> >> fulfill
>> >> the
>> >> : utf-8 encoding
>> >>
>> >> I *strongly* encourage you to look at the body of the response and/or
>> >> the
>> >> error log of your Servlet container and find out *exactly* what the
>> >> cause
>> >> of the error is ... you could spend a lot of time working on this and
>> >> discover it's not your real problem.
>> >>
>> >>
>> >>
>> >> -Hoss
>> >>
>> >
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/To-make-sure-XML-is-UTF-8-tp11031646p20093197.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: Problem implementing a BinaryQueryResponseWriter

2008-10-21 Thread Feak, Todd
I do have that in my config. Its existence doesn't seem to affect this
particular issue. I've tried it with and without.

-Todd

-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 20, 2008 4:36 PM
To: solr-user@lucene.apache.org
Subject: Re: Problem implementing a BinaryQueryResponseWriter

do you have handleSelect set to true in solrconfig?

<requestDispatcher handleSelect="true" >
...

if not, it would use a Servlet that is now deprecated



On Oct 20, 2008, at 4:52 PM, Feak, Todd wrote:

> I found out what's going on.
>
> My test queries from existing Solr (not 1.3.0) that I am using have  
> *2*
> "select" in the URL. http://host:port/select/select?q=foo . Not sure
> why, but that's a separate issue. The result is that it is following a
> codepath that bypasses this decision point, and it falls back on
> something that assumes it will *not* be a BinaryQueryResponseWriter,
> even though it does correctly locate and use my new writer.
>
> The solution was to map /select/select to a new handler.
>
> Not sure if this raises another issue or not, but for me it solves the
> problem. Thanks for the help.
>
> -Todd
>
> -Original Message-
> From: Grant Ingersoll [mailto:[EMAIL PROTECTED]
> Sent: Monday, October 20, 2008 1:09 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Problem implementing a BinaryQueryResponseWriter
>
> I'd start by having a look at SolrDispatchFilter and put in a debug
> breakpoint at:
>
>   QueryResponseWriter responseWriter =
> core.getQueryResponseWriter(solrReq);
>
> response.setContentType(responseWriter.getContentType(solrReq,
> solrRsp));
>   if (Method.HEAD != reqMethod) {
> if (responseWriter instanceof
> BinaryQueryResponseWriter) {
>   BinaryQueryResponseWriter binWriter =
> (BinaryQueryResponseWriter) responseWriter;
>   binWriter.write(response.getOutputStream(),
> solrReq, solrRsp);
> } else {
>   PrintWriter out = response.getWriter();
>   responseWriter.write(out, solrReq, solrRsp);
>
> }
>
>
> On Oct 20, 2008, at 3:59 PM, Feak, Todd wrote:
>
>> Yes.
>>
>> I've gotten it to the point where my class is called, but the wrong
>> method on it is called.
>>
>> -Todd
>>
>> -Original Message-
>> From: Shalin Shekhar Mangar [mailto:[EMAIL PROTECTED]
>> Sent: Monday, October 20, 2008 12:19 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Problem implementing a BinaryQueryResponseWriter
>>
>> Hi Todd,
>>
>> Did you add your response writer in solrconfig.xml?
>>
>> <queryResponseWriter name="xml" class="org.apache.solr.request.XMLResponseWriter" default="true"/>
>>
>> On Mon, Oct 20, 2008 at 9:35 PM, Feak, Todd <[EMAIL PROTECTED]>
>> wrote:
>>
>>> I switched from dev group for this specific question, in case other
>>> users have similar issue.
>>>
>>>
>>>
>>> I'm implementing my own BinaryQueryResponseWriter. I've implemented
>> the
>>> interface and successfully plugged it into the Solr configuration.
>>> However, the application always calls the Writer method on the
>> interface
>>> instead of the OutputStream method.
>>>
>>>
>>>
>>> So, how does Solr determine *which* one to call? Is there a setting
>>> somewhere I am missing maybe?
>>>
>>>
>>>
>>> For troubleshooting purposes, I am using 1.3.0 release version. If I
>> try
>>> using the BinaryResponseWriter (javabin) as the wt, I get the
>> exception
>>> indicating that Solr is doing the same thing with that writer as
>>> well.
>>> This leads me to believe I am somehow misconfigured, OR this isn't
>>> supported with 1.3.0 release.
>>>
>>>
>>>
>>> -Todd
>>>
>>>
>>
>>
>> -- 
>> Regards,
>> Shalin Shekhar Mangar.
>
> --
> Grant Ingersoll
> Lucene Boot Camp Training Nov. 3-4, 2008, ApacheCon US New Orleans.
> http://www.lucenebootcamp.com
>
>
> Lucene Helpful Hints:
> http://wiki.apache.org/lucene-java/BasicsOfPerformance
> http://wiki.apache.org/lucene-java/LuceneFAQ
>
>
>
>
>
>
>
>
>
>




Re: Create custom facets after building index

2008-10-21 Thread Vincent Pérès

Hello !

Thank you, it is working. I've done a query, and my facet query is :

"facet_queries":{
"published_year_facet:[1999 TO 2005]":95,
"rating_facet:[3 TO 3.99]":25,
"rating_facet:[1 TO 1.99]":1},

Is it possible to 'group' queries of the same kind (published together, rating
together, etc.)? Or do I have to match every query with a regex?
Thanks !


-- 
View this message in context: 
http://www.nabble.com/Create-custom-facets-after-building-index-tp20086166p20091818.html
Sent from the Solr - User mailing list archive at Nabble.com.



Hierarchical Faceting

2008-10-21 Thread Sachit P. Menon
Hi,

I have gone through the archive in search of Hierarchical Faceting but it was not 
clear what exactly I should do to achieve that.

Suppose, I have 3 categories like politics, science and sports. In the schema, 
I am defining a field type called 'Category'. I don't have a sub category field 
type (and don't want to have one).
Now, Cricket and Football are some categories which can be considered to be 
under sports.
When I search for something and if it is present in the 'sports' category, then 
it should show me the facets of cricket and football too.

My question is:
Do I need to specify cricket and football also as categories, or as sub-categories of 
sports (for which I don't want to make a separate field)?
And if I make these categories only, then how will I achieve the drilling 
down of the data to cricket or football?



Thanks and Regards
Sachit P. Menon| Programmer Analyst| MindTree Ltd. |West Campus, Phase-1, 
Global Village, RVCE Post, Mysore Road, Bangalore-560 059, INDIA |Voice +91 80 
26264000 |Extn  64907|Fax +91 80 26264100 | Mob : +91 
9986747356|www.mindtree.com


Issues with facet

2008-10-21 Thread prerna07

Hi,

On using Facet in solr query I am facing various issues.

Scenario 1:
I have 11 Index with tag : productIndex 

my search query is appended by facet  parameters :
facet=true&facet.field=Index_Type_s&qt=dismaxrequest

The facet node i am getting in solr result is :
 
- 
- 
  11 
  11 
  11 
  
  

According to my understanding I should get only one result, which should be
like the below mentioned node

- 
 11 
  

Scenario 2: 

My index has following fields :
 In Search of the Shape of the Universe,
mathamatics 

My search Query is : 
facet=true&facet.field=productDescription_s&qt=dismaxrequest

The result Solr is giving displaying :


- 
  1 
  1 
  2 
  1 
  1 


I am not able to figure out the facet results. They do not contain any 
result for "Universe", and characters have been removed from "mathamatics" and "shape".

Please help me understanding the issue and let me know if any change in
schema / solrConfig can solve the issue.

Thanks,
Prerna






-- 
View this message in context: 
http://www.nabble.com/Issues-with-facet-tp20090842p20090842.html
Sent from the Solr - User mailing list archive at Nabble.com.



solr1.3 / tomcat55 / MySql but character_set_client && character_set_connection LATIN1

2008-10-21 Thread sunnyfr

Hi,

How can I do to manage that?

| character_set_client     | latin1                                                |
| character_set_connection | latin1                                                |
| character_set_database   | utf8                                                  |
| character_set_filesystem | binary                                                |
| character_set_results    | latin1                                                |
| character_set_server     | utf8                                                  |
| character_set_system     | utf8                                                  |
| character_sets_dir       | /usr/local/mysql-5.0.51b-sphinx/share/mysql/charsets/ |
| collation_connection     | latin1_swedish_ci                                     |
| collation_database       | utf8_general_ci                                       |
| collation_server         | utf8_general_ci                                       |

Thanks a lot,

-- 
View this message in context: 
http://www.nabble.com/solr1.3---tomcat55---MySql-but-character_set_clientcharacter_set_connection---LATIN1-tp20090455p20090455.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: tomcat55/solr1.3 - Indexing data, doesnt take in consideration utf8!

2008-10-21 Thread sunnyfr

It actually comes from the database; mysql's variables are:

| character_set_client     | latin1 |
| character_set_connection | latin1 |

so I don't really know now how to configure my datasource to point to latin1
and not utf8.


sunnyfr wrote:
> 
> Hi Jerome,
> 
> I tried to chat with you but you weren't there or ...?? lol on your
> website.
> 
> Ok, I tried what you did and my file brings me back in gedit :
> 
> 
> 0 name="QTime">0 name="q">ALL start="0">2006-10-10T05:29:32Z name="description_ja">All Japan Women's Pro-wrestling
> 

> WWWA Champion Title Match

> 豐田真奈美 VS 井上京子

> 813343 name="language">JA40 name="spell">Toyota Manami VS Inoue Kyoko name="stat_views">1422
> 
> and just that in open office :
> 
> 0 name="QTime">0 name="q">ALL start="0">2006-10-10T05:29:32Z name="description_ja">All Japan Women's Pro-wrestling
> 
> :( don't know!
> 
> 
> Jérôme Etévé wrote:
>> 
>> Looks like you have a double encoding problem.
>> 
>> It might be because you fetch UTF-8 binary data from mysql (I know
>> that for instance the perl driver has an issue with that) and you then
>> encode it a second time in UTF-8 when you post to solr.
>> 
>> Make sure the strings you're getting from mysql are actually proper
>> unicode strings and not the raw UTF-8 encoded binary form.
>> 
>> You may want to have a look at
>> http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-charsets.html
>> for the proper option to use with your connection.
>> 
>> What you can try, to check you're posting actual UTF-8 data to solr, is
>> to dump your xml post in a file (don't forget to set the input
>> encoding to UTF-8). Then you can check if this file is readable with
>> any UTF-8 aware editor.
>> 
>> Cheers,
>> 
>> Jerome.
>> 
>> 
>> On Tue, Oct 21, 2008 at 10:43 AM, sunnyfr <[EMAIL PROTECTED]> wrote:
>>> 
>>> Hi,
>>> 
>>> I've solr 1.3 and tomcat55.
>>> When I try to index a bit of data and I request ALL, obviously my accents
>>> and UTF8 encoding are not taken into consideration.
>>> 
>>> 2006-12-14T15:28:27Z
>>> Le 1er film de Goro Miyazaki (fils de Hayao) je suis allÃ(c)e ...
>>> 渡邊 å‰ å· vs 三ç"°ä¸‹ç"° 1
>>> 
>>> My database Mysql is well in UTF8; if I request data manually from mysql I
>>> will get accents, even japanese characters, properly.
>>> 
>>> I index my data, my data-config is :
>>> <dataSource driver="com.mysql.jdbc.Driver"
>>>   url="jdbc:mysql://master-spare.videos.com/videos"
>>>   user="solr"
>>>   password="pass"
>>>   batchSize="-1"
>>>   responseBuffering="adaptive"/>
>>> 
>>> My schema config file starts by :
>>> 
>>> I've added in my server.xml (because my localhost points on 8180):
>>> <Connector port="8180" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
>>>   enableLookups="false" redirectPort="8443" acceptCount="100"
>>>   connectionTimeout="2" disableUploadTimeout="true"
>>>   URIEncoding="UTF-8" useBodyEncodingForURI="true" />
>>> 
>>> What can I check?
>>> I'm using a linux server.
>>> If I do dpkg-reconfigure -plow locales
>>> Generating locales...
>>> fr_BE.UTF-8... up-to-date
>>> fr_CA.UTF-8... up-to-date
>>> fr_CH.UTF-8... up-to-date
>>> fr_FR.UTF-8... up-to-date
>>> fr_LU.UTF-8... up-to-date
>>> Generation complete.
>>> 
>>> Would that be a problem? I would say no, but maybe; do I miss a
>>> package???
>>> 
>>> --
>>> View this message in context:
>>> http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20086167.html
>>> Sent from the Solr - User mailing list archive at Nabble.com.
>> 
>> 
>> --
>> Jerome Eteve.
>> 
>> Chat with me live at http://www.eteve.net
>> 
>> [EMAIL PROTECTED]
>> 

-- 
View this message in context: 
http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20090130.html
Sent from the Solr - User mailing list archive at Nabble.com.
>>> >>> 渡邊 å‰ å· vs 三ç"°ä¸‹ç"° 1 >>> >>> >>> My database Mysql is well in UTF8, if I request data manually from mysql >>> I >>> will get accent even japan characters properly >>> >>> I index my data, my data-config is : >>> >> driver="com.mysql.jdbc.Driver" >>> url="jdbc:mysql://master-spare.videos.com/videos" >>> user="solr" >>> password="pass" >>> batchSize="-1" >>> responseBuffering="adaptive"/> >>> >>> My schema config file start by : >>> >>> I've add in my server.xml : because my localhost point on 8180 >>>>> maxThreads="150" minSpareThreads="25" maxSpareThreads="75" >>> enableLookups="false" redirectPort="8443" >>> acceptCount="100" >>> connectionTimeout="2" disableUploadTimeout="true" >>> URIEncoding="UTF-8" useBodyEncodingForURI="true" /> >>> >>> What can I check? >>> I'm using a linux server. >>> If I do dpkg-reconfigure -plow locales >>> Generating locales... >>> fr_BE.UTF-8... up-to-date >>> fr_CA.UTF-8... up-to-date >>> fr_CH.UTF-8... up-to-date >>> fr_FR.UTF-8... up-to-date >>> fr_LU.UTF-8... up-to-date >>> Generation complete. >>> >>> Would that be a problem, I would say no but maybe, do I miss a >>> package??? >>> >>> >>> >>> -- >>> View this message in context: >>> http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20086167.html >>> Sent from the Solr - User mailing list archive at Nabble.com. >>> >>> >> >> >> >> -- >> Jerome Eteve. >> >> Chat with me live at http://www.eteve.net >> >> [EMAIL PROTECTED] >> >> > > -- View this message in context: http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20090130.html Sent from the Solr - User mailing list archive at Nabble.com.

Re: tomcat55/solr1.3 - Indexing data, doesnt take in consideration utf8!

2008-10-21 Thread sunnyfr

Hi Jerome,

I tried to chat with you on your website but you weren't there, or ...?? lol

OK, I tried what you did, and opening my file in gedit gives me:


<response><lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int> ... <str name="q">ALL</str> ... </lst>
<result start="0"> ... 2006-10-10T05:29:32Z ... <str name="description_ja">All Japan Women's Pro-wrestling

WWWA Champion Title Match

豐田真奈美 VS 井上京子</str>

<int name="id">813343</int><str name="language">JA</str> ... 40 ... <str name="spell">Toyota Manami VS Inoue Kyoko</str><int name="stat_views">1422</int>

and just that in open office :

<response><lst name="responseHeader"><int name="status">0</int><int name="QTime">0</int> ... <str name="q">ALL</str> ... </lst>
<result start="0"> ... 2006-10-10T05:29:32Z ... <str name="description_ja">All Japan Women's Pro-wrestling

:( don't know!

-- 
View this message in context: 
http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20088857.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: error with delta import

2008-10-21 Thread Florian Aumeier

hello everybody

thank you all for your help and ideas, it works now.

what are we doing wrong?
Florian



Actually, I am not sure what we did wrong. After we started again from
scratch with the simplified query, it all worked as expected.


Regards
Florian


Re: tomcat55/solr1.3 - Indexing data, doesnt take in consideration utf8!

2008-10-21 Thread Jérôme Etévé
Looks like you have a double encoding problem.

It might be because you fetch UTF-8 binary data from mysql (I know
that for instance the perl driver has an issue with that) and you then
encode it a second time in UTF-8 when you post to solr.

Make sure the string you're getting from mysql are actually proper
unicode strings and not the raw UTF-8 encoded binary form.

You may want to have a look at
http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-charsets.html
for the proper option to use with your connection.

What you can try to check you're posting actual UTF-8 data to solr is
to dump your xml post in a file (don't forget to set the input
encoding to UTF-8 ). Then you can check if this file is readable with
any UTF-8 aware editor.
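The "allÃ(c)e" seen in the quoted post is exactly what this double encoding produces: the UTF-8 bytes of "allée" re-read as Latin-1 ('é' becomes 'Ã' + '©', with '©' rendered as '(c)'). An illustrative check in Python:

```python
# Double encoding in a nutshell: UTF-8 bytes mistakenly re-read as Latin-1.
original = "allée"
mojibake = original.encode("utf-8").decode("latin-1")
print(mojibake)  # 'allÃ©e' -- the 'Ã(c)' pattern seen in the broken output

# The reverse round trip recovers the text, which confirms the diagnosis:
assert mojibake.encode("latin-1").decode("utf-8") == original
```

If the round trip restores the original, the data was encoded twice somewhere between the database driver and the Solr post.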

Cheers,

Jerome.


On Tue, Oct 21, 2008 at 10:43 AM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I've solr 1.3 and tomcat55.
> When I try to index a bit of data and I request ALL, obviously my accent and
> UTF8 encoding is not took in consideration.
> 
> 2006-12-14T15:28:27Z
> 
> Le 1er film de Goro Miyazaki (fils de Hayao)
> je suis allÃ(c)e  ...
> 
> 渡邊 å‰ å·  vs 三ç"°ä¸‹ç"° 1
>
>
> My database Mysql is well in UTF8, if I request data manually from mysql I
> will get accent even japan characters properly
>
> I index my data, my data-config is :
> <dataSource ... driver="com.mysql.jdbc.Driver"
>   url="jdbc:mysql://master-spare.videos.com/videos"
>   user="solr"
>   password="pass"
>   batchSize="-1"
>   responseBuffering="adaptive"/>
>
> My schema config file start by : 
>
> I've added in my server.xml (because my localhost points to 8180):
> <Connector ... maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
>   enableLookups="false" redirectPort="8443" acceptCount="100"
>   connectionTimeout="2" disableUploadTimeout="true"
>   URIEncoding="UTF-8" useBodyEncodingForURI="true" />
>
> What can I check?
> I'm using a linux server.
> If I do dpkg-reconfigure -plow locales
> Generating locales...
>  fr_BE.UTF-8... up-to-date
>  fr_CA.UTF-8... up-to-date
>  fr_CH.UTF-8... up-to-date
>  fr_FR.UTF-8... up-to-date
>  fr_LU.UTF-8... up-to-date
> Generation complete.
>
> Would that be a problem, I would say no but maybe, do I miss a package???
>
>
>
> --
> View this message in context: 
> http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20086167.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>



-- 
Jerome Eteve.

Chat with me live at http://www.eteve.net

[EMAIL PROTECTED]


Re: Create custom facets after building index

2008-10-21 Thread Shalin Shekhar Mangar
You should use facet.query which gives the number of documents matching the
query. For example:

facet.query=rating:[0 TO 0.99]&facet.query=rating:[1 TO 1.99] etc.

http://wiki.apache.org/solr/SimpleFacetParameters
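As an illustration of how such a request is assembled, here is a sketch that repeats facet.query once per bucket (the host and core path are placeholders; the `rating` field comes from the question; Python's urllib is used only to show the URL encoding):

```python
from urllib.parse import urlencode

# One facet.query per bucket; Solr returns a matching-document count for
# each of them under "facet_queries" in the response.
params = [
    ("q", "*:*"),
    ("rows", "0"),                          # counts only, no documents
    ("facet", "true"),
    ("facet.query", "rating:[0 TO 0.99]"),
    ("facet.query", "rating:[1 TO 1.99]"),
    ("facet.query", "rating:[2 TO 2.99]"),
]
print("http://localhost:8983/solr/select?" + urlencode(params))
```

Because the buckets are defined at query time, they can be changed without touching schema.xml or reindexing.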

On Tue, Oct 21, 2008 at 3:12 PM, Vincent Pérès <[EMAIL PROTECTED]>wrote:

>
> Hello,
>
> I would like to create a custom facet for a 'rating_facets'.
> The rates are like that : 1, 1.2, 5, 5.78 etc.
> Is it possible to tell solr to create a 'custom' facet :
> [0 to 0.99] is 0
> [ 1 to 1.99] is 1
> etc. (and get them back into xml results with number of results by value)
> Or I have to specify this way on my schema.xml?
>
> Thank you !
> Vincent
> --
> View this message in context:
> http://www.nabble.com/Create-custom-facets-after-building-index-tp20086166p20086166.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
>


-- 
Regards,
Shalin Shekhar Mangar.


tomcat55/solr1.3 - Indexing data, doesnt take in consideration utf8!

2008-10-21 Thread sunnyfr

Hi,

I've solr 1.3 and tomcat55.
When I try to index a bit of data and query for ALL, my accents and
UTF-8 encoding are clearly not taken into account.

2006-12-14T15:28:27Z

Le 1er film de Goro Miyazaki (fils de Hayao)
je suis allée  ...

渡邊 前川 vs 三田下田 1


My MySQL database is indeed in UTF-8; if I query the data manually from
mysql I get accents and even Japanese characters properly

I index my data, my data-config is :
<dataSource ... driver="com.mysql.jdbc.Driver"
  url="jdbc:mysql://master-spare.videos.com/videos"
  user="solr"
  password="pass"
  batchSize="-1"
  responseBuffering="adaptive"/>

My schema config file starts with :

I've added in my server.xml (because my localhost points to 8180):
<Connector ... maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
  enableLookups="false" redirectPort="8443" acceptCount="100"
  connectionTimeout="2" disableUploadTimeout="true"
  URIEncoding="UTF-8" useBodyEncodingForURI="true" />
What can I check?
I'm using a linux server.
If I do dpkg-reconfigure -plow locales
Generating locales...
  fr_BE.UTF-8... up-to-date
  fr_CA.UTF-8... up-to-date
  fr_CH.UTF-8... up-to-date
  fr_FR.UTF-8... up-to-date
  fr_LU.UTF-8... up-to-date
Generation complete.

Could that be the problem? I would say no, but maybe; am I missing a package???



-- 
View this message in context: 
http://www.nabble.com/tomcat55-solr1.3---Indexing-data%2C-doesnt-take-in-consideration-utf8%21-tp20086167p20086167.html
Sent from the Solr - User mailing list archive at Nabble.com.



Create custom facets after building index

2008-10-21 Thread Vincent Pérès

Hello,

I would like to create a custom facet for a 'rating_facets'. 
The rates are like that : 1, 1.2, 5, 5.78 etc.
Is it possible to tell solr to create a 'custom' facet :
[0 to 0.99] is 0
[ 1 to 1.99] is 1 
etc. (and get them back into xml results with number of results by value)
Or I have to specify this way on my schema.xml?

Thank you !
Vincent
-- 
View this message in context: 
http://www.nabble.com/Create-custom-facets-after-building-index-tp20086166p20086166.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: Japonish language seems to don't work on solr 1.3

2008-10-21 Thread sunnyfr

And even accents don't work :( it's really a problem with my UTF-8
je suis allée le voir et comme d'abitude les paysages sont magnifiques,
mais l'histoire n'est p




Maybe it comes from my data-config ??

<dataSource ... driver="com.mysql.jdbc.Driver"
  url="jdbc:mysql://master-spare.videos.com/videos"
  user="solr" password="pass"
  batchSize="-1" responseBuffering="adaptive"/>

??


sunnyfr wrote:
> 
> Hi Todd,
> 
> I definitely no idea if I request MySql database I've the good characters
> :
> My japon file : select title from video where title like '%画%' and
> video.language='ja' limit 2;
> 
> [EMAIL PROTECTED]:/# mysql -A -pass -u solr --h vip.videos.com dailymotion
>  title
> 恐怖映画、こわくない版
> 映画のミステイク・ムービー
> 
> Any idea?
> Thanks a lot
> 
> 
> 
> 
> Feak, Todd wrote:
>> 
>> That looks like the data in the index is incorrectly encoded. 
>> 
>> If the inserts into your index came in via HTTP GET and your Tomcat
>> wasn't configured for UTF-8 at the time, I could see it going into the
>> index corrupted. But I'm not sure if that's even possible (depends on
>> Update)
>> 
>> Is it hard to re-create your index after that configuration change? If
>> it's a quick thing to do, it may be worth doing again to eliminate as a
>> possibility.
>> 
>> -Todd Feak
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Japan-language-seems-to-don%27t-work-on-solr-1.3-tp20070938p20086141.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: Japonish language seems to don't work on solr 1.3

2008-10-21 Thread sunnyfr

Maybe it comes from my data-config ??

<dataSource ... driver="com.mysql.jdbc.Driver"
  url="jdbc:mysql://master-spare.videos.com/videos"
  user="solr" password="pass"
  batchSize="-1" responseBuffering="adaptive"/>

??


Hi Todd,

I honestly have no idea; if I query the MySQL database I get the correct
characters:
My Japanese test : select title from video where title like '%画%' and
video.language='ja' limit 2;

[EMAIL PROTECTED]:/# mysql -A -pass -u solr --h vip.videos.com dailymotion
 title
恐怖映画、こわくない版
映画のミステイク・ムービー
> That looks like the data in the index is incorrectly encoded. 
> 
> If the inserts into your index came in via HTTP GET and your Tomcat wasn't
> configured for UTF-8 at the time, I could see it going into the index
> corrupted. But I'm not sure if that's even possible (depends on Update)
> 
> Is it hard to re-create your index after that configuration change? If
> it's a quick thing to do, it may be worth doing again to eliminate as a
> possibility.
> 
> -Todd Feak
> 

-- 
View this message in context: 
http://www.nabble.com/Japan-language-seems-to-don%27t-work-on-solr-1.3-tp20070938p20086079.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: Japonish language seems to don't work on solr 1.3

2008-10-21 Thread sunnyfr

Hi Todd,

I honestly have no idea; if I query the MySQL database I get the correct
characters:
My Japanese test : select title from video where title like '%画%' and
video.language='ja' limit 2;

[EMAIL PROTECTED]:/# mysql -A -pass -u solr --h vip.videos.com dailymotion
 title
恐怖映画、こわくない版
映画のミステイク・ムービー

-- 
View this message in context: 
http://www.nabble.com/Japan-language-seems-to-don%27t-work-on-solr-1.3-tp20070938p20085577.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: Sorting performance

2008-10-21 Thread christophe
I'm now wondering whether Solr (Lucene) is a good choice when we have a
huge number of indexed documents and a large number of new documents
that need to be indexed every day.


Maybe I'm wrong, but my feeling is that, given the way the sort caches are
handled (recreated after each commit, not shared between Searchers), the
solution does not scale. And it is not just a memory issue (memory is
cheap), but rather the lack of a way to update an existing cache.


I'm testing whether I can sort on a field that might be faster to cache:
any hints on this? Would it make a difference if I used a field with fewer
distinct values than a timestamp? I'm looking for some details on how
the cache is populated on the first query. Also, for the code insiders
;-), would it be difficult to change this caching mechanism to allow
updating and reusing an existing cache?
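For intuition on why cardinality matters: Lucene's string sort cache is essentially one ordinal per document plus a table of the distinct values, so only the value table grows with field cardinality. A toy Python model of that structure (not Lucene's actual code, just a sketch of the idea):

```python
# Toy model of an ordinal-based sort cache: one small integer per document,
# plus the table of distinct values. Only the value table grows with
# cardinality, which is why a coarse field warms faster than raw timestamps.
def build_sort_cache(field_values):
    distinct = sorted(set(field_values))           # one entry per distinct value
    ord_of = {v: i for i, v in enumerate(distinct)}
    ords = [ord_of[v] for v in field_values]       # one ordinal per document
    return ords, distinct

def sort_docs(doc_ids, ords):
    # Sorting compares cheap integer ordinals, never the string values.
    return sorted(doc_ids, key=lambda d: ords[d])

timestamps = ["2008-10-21T10:00:03Z", "2008-10-21T10:00:01Z", "2008-10-21T10:00:02Z"]
ords, distinct = build_sort_cache(timestamps)
print(sort_docs([0, 1, 2], ords))   # docs ordered by timestamp: [1, 2, 0]
```

Building this structure requires a pass over every document's value, which is why the first sorted query after opening a new searcher is so slow on a large index.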


Thanks for your help
Christophe

christophe wrote:
The problem is that I will have hundreds of users doing queries, and a
continuous flow of documents coming in.
So a delay in warming up a cache "could" be acceptable if I do it a few
times per day, but not too frequently (right now, the first query that
loads the cache takes 150s).


However, I'm not sure why it seems to be a bad idea to update the caches
when updates are committed. Any centralized cache (memcached is a good
one) that is kept up to date by the update/commit process would be great.
Config options could then let the user decide whether the cache is shared
between servers or not. Creating a new cache and then swapping it doubles
the necessary memory.


I also have a related question regarding readers: a new reader is
opened when documents are committed, and the cache is associated with
the reader (if I got it right). Are all user requests served by this
reader? How does that scale if I have many concurrent users?


C.

Norberto Meijome wrote:

On Mon, 20 Oct 2008 16:28:23 +0300
christophe <[EMAIL PROTECTED]> wrote:

 
Hmm, this means I have to wait before indexing new documents and avoid
indexing them as they are created (I have about 50 000 new documents
created each day and I was planning to make them searchable ASAP).



you can always index + optimize out of band in a 'master' / RW server, and
then send the updated index to your slave (the one actually serving the
requests).
This *will NOT* remove the need to refresh your cache, but it will remove
any delay introduced by commit/indexing + optimise.

 
Too bad there is no way to have a centralized cache that can be 
shared AND updated when new documents are created.



hmm not sure it makes sense like that... but maybe along the lines of having
an active cache that is used to serve queries, and new ones being prepared,
and then swapped when ready.
Speaking of which (or not :P), has anyone thought about / done any work on
using memcached for these internal solr caches? I guess it would make sense
for setups with several slaves (or even a master updating memcached
too...)... though for a setup with shards it would be slightly more involved
(although it *could* be used to support several slaves per 'data shard').


All the best,
B
_
{Beto|Norberto|Numard} Meijome

RTFM and STFW before anything bad happens.

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
warned.