Re: Problem indexing on Oracle DB

2008-12-01 Thread Noble Paul നോബിള്‍ नोब्ळ्
I have raised a new issue
https://issues.apache.org/jira/browse/SOLR-891

On Tue, Dec 2, 2008 at 9:54 AM, Noble Paul നോബിള്‍ नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> Hi Joel,
> DIH does not automatically translate a Clob to text.
>
> We can open that as an issue.
> Meanwhile, you can write a transformer of your own to read the Clob and
> convert it to text.
> http://wiki.apache.org/solr/DataImportHandler#head-4756038c418ab3fa389efc822277a7a789d27688
>
>
> On Tue, Dec 2, 2008 at 2:57 AM, Joel Karlsson <[EMAIL PROTECTED]> wrote:
>> Thanks for your reply!
>>
>> I'm already using the DataImportHandler for indexing. Do I still have to
>> convert the Clob myself or are there any built-in functions that I've
>> missed?
>>
>> // Joel
>>
>>
>> 2008/12/1 Yonik Seeley <[EMAIL PROTECTED]>
>>
>>> If you are querying Oracle yourself and using something like SolrJ,
>>> then you must convert the Clob yourself into a String representation.
>>>
>>> Also, did you look at Solr's DataImportHandler?
>>>
>>> -Yonik
>>>
>>> On Mon, Dec 1, 2008 at 3:11 PM, Joel Karlsson <[EMAIL PROTECTED]>
>>> wrote:
>>> > Hello everyone,
>>> >
>>> > I'm trying to index on an Oracle DB, but can't seem to find any built-in
>>> > support for objects of type oracle.sql.Clob. The field I try to put the
>>> > data into is of type text, but after indexing it only contains the Clob
>>> > object's string representation, i.e. something like [EMAIL PROTECTED]
>>> > Anyone who knows how to get Solr to index the content of these objects
>>> > rather than their string representation?
>>> >
>>> > Thanks in advance! // Joel
>>> >
>>>
>>
>
>
>
> --
> --Noble Paul
>



-- 
--Noble Paul


Re: Problem indexing on Oracle DB

2008-12-01 Thread Noble Paul നോബിള്‍ नोब्ळ्
Hi Joel,
DIH does not automatically translate a Clob to text.

We can open that as an issue.
Meanwhile, you can write a transformer of your own to read the Clob and
convert it to text.
http://wiki.apache.org/solr/DataImportHandler#head-4756038c418ab3fa389efc822277a7a789d27688
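
A minimal sketch of such a transformer (the class name, package, and the
"abstract" column name are illustrative; it uses the simple transformRow(Map)
contract described on that wiki page):

    package my.pkg; // hypothetical package

    import java.io.Reader;
    import java.sql.Clob;
    import java.util.Map;

    // Replaces a CLOB column value with its character contents so Solr
    // indexes the text rather than the Clob's toString() representation.
    public class ClobToStringTransformer {
        public Map<String, Object> transformRow(Map<String, Object> row) {
            Object value = row.get("abstract"); // assumed CLOB column name
            if (value instanceof Clob) {
                try (Reader reader = ((Clob) value).getCharacterStream()) {
                    StringBuilder sb = new StringBuilder();
                    char[] buf = new char[4096];
                    for (int n; (n = reader.read(buf)) != -1; ) {
                        sb.append(buf, 0, n);
                    }
                    row.put("abstract", sb.toString());
                } catch (Exception e) {
                    throw new RuntimeException("Failed to read CLOB", e);
                }
            }
            return row;
        }
    }

It would then be wired in on the DIH entity, e.g.
transformer="my.pkg.ClobToStringTransformer".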


On Tue, Dec 2, 2008 at 2:57 AM, Joel Karlsson <[EMAIL PROTECTED]> wrote:
> Thanks for your reply!
>
> I'm already using the DataImportHandler for indexing. Do I still have to
> convert the Clob myself or are there any built-in functions that I've
> missed?
>
> // Joel
>
>
> 2008/12/1 Yonik Seeley <[EMAIL PROTECTED]>
>
>> If you are querying Oracle yourself and using something like SolrJ,
>> then you must convert the Clob yourself into a String representation.
>>
>> Also, did you look at Solr's DataImportHandler?
>>
>> -Yonik
>>
>> On Mon, Dec 1, 2008 at 3:11 PM, Joel Karlsson <[EMAIL PROTECTED]>
>> wrote:
>> > Hello everyone,
>> >
>> > I'm trying to index on an Oracle DB, but can't seem to find any built-in
>> > support for objects of type oracle.sql.Clob. The field I try to put the
>> > data into is of type text, but after indexing it only contains the Clob
>> > object's string representation, i.e. something like [EMAIL PROTECTED]
>> > Anyone who knows how to get Solr to index the content of these objects
>> > rather than their string representation?
>> >
>> > Thanks in advance! // Joel
>> >
>>
>



-- 
--Noble Paul


Re: Function Queries

2008-12-01 Thread outre

Just to confirm: the query works when written as in Yonik's comment. 

Thanks 
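
For reference, a sketch of the full working request (field names from the
original question; moving the store restriction into an fq filter keeps the
returned score equal to the function value):

    q=_val_:"product(price,2)"&fq=store:adidas&fl=store,price,score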

Yonik Seeley wrote:
> 
> On Fri, Nov 28, 2008 at 8:33 PM, outre <[EMAIL PROTECTED]> wrote:
>>
>> Hi,
>>
>> I was wondering if function queries are supported in SOLR1.3?
>>
>> I looked through http://wiki.apache.org/solr/FunctionQuery, and tried to
>> run an example on my Solr setup. It doesn't seem, though, that the _val_
>> hook has any effect on sorting, and the "score" parameter doesn't seem to
>> return computed values.
>>
>> In my index I have fields "price" (double) and "store" (string), and just
>> for the sake of an example all I am doing is multiplying the price by 2.
>> So my query looks like this:
>> q=store:"adidas"&_val_:"product(price,2)"&fl=store,price,score
> 
> _val_ is a magic field name for the lucene/solr query parser.
> So try something with _val_ in the query string:
> q=_val_:"product(price,2)"
> 
> -Yonik
> 
>> Based on the wiki, I'd expect to see the doubled price in the score field,
>> but I seem to be getting Solr document scores. Am I misunderstanding something
>> here?
>>
>> Thanks
>> --
>> View this message in context:
>> http://www.nabble.com/Function-Queries-tp20742850p20742850.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Function-Queries-tp20742850p20785707.html
Sent from the Solr - User mailing list archive at Nabble.com.



RE: maxWarmingSearchers

2008-12-01 Thread dudes dudes

Thanks for your explanation and time :)



> Subject: RE: maxWarmingSearchers
> Date: Mon, 1 Dec 2008 13:57:59 -0800
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> 
> The commit after each one may be hurting you.
> 
> I believe that a new searcher is created after each commit. That searcher 
> then runs through its warm up, which can be costly depending on your warming 
> settings. Even if it's not overly costly, creating another one while the 
> first one is running makes both of them run just a bit slower. Then creating 
> a third exacerbates it, etc. If you are committing faster than it can warm, 
> you will get the pile-up of searchers you are seeing. And the more that pile 
> up, the longer it takes each one to finish up.
> 
> I would suggest trying to group those 4-10 documents into a single update job 
> and doing a single commit. That way only 1 searcher is created per 4 minute 
> window.
> 
> Also (sorry I forgot this earlier) you can see how long your searcher is 
> spending warming up by looking at the stats page under the admin. 
> (/admin/stats.jsp) There is timing information on how long it took for the 
> searcher and caches to warm up.
> 
> -Todd Feak
> 
> -Original Message-
> From: dudes dudes [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 01, 2008 1:46 PM
> To: solr-user@lucene.apache.org
> Subject: RE: maxWarmingSearchers
> 
> 
> 
> 
> > Subject: RE: maxWarmingSearchers
> > Date: Mon, 1 Dec 2008 13:35:53 -0800
> > From: [EMAIL PROTECTED]
> > To: solr-user@lucene.apache.org
> > 
> > OK, sounds reasonable. When you index/update those 4-10 documents, are
> > you doing a single commit? Or are you doing a commit after each one?
> 
> Well, commits after each one...
> 
> > How big is your index? How big are your documents? Ballpark figures are
> > ok.
> 
> More than a couple of MB.
>
> One final piece of information: I only have 2 GB of RAM on that machine
> (Linux on a VMware environment) and I increased the memory of Tomcat to 1 GB.
> 
> thanks
> 
> 
> > 
> > -ToddFeak
> > 
> > -Original Message-
> > From: dudes dudes [mailto:[EMAIL PROTECTED] 
> > Sent: Monday, December 01, 2008 1:24 PM
> > To: solr-user@lucene.apache.org
> > Subject: RE: maxWarmingSearchers
> > 
> > 
> > Hi ToddFeak, 
> > 
> > thanks for your response... 
> > 
> > Solr version is 1.3. Roughly every 4 minutes there is indexing/updating
> > of 4 to 10 documents, from multiple clients to one master server...
> >
> > It is also worth mentioning that I have
> > [listener XML stripped by the list archive]
> > postCommit uncommented under solrconfig ... QueryCache and
> > FilterCache settings are left as default
> > 
> > thanks
> > ak
> > 
> > 
> > 
> > 
> > 
> > 
> > > Subject: RE: maxWarmingSearchers
> > > Date: Mon, 1 Dec 2008 13:13:15 -0800
> > > From: [EMAIL PROTECTED]
> > > To: solr-user@lucene.apache.org
> > > 
> > > Probably going to need a bit more information.
> > > 
> > > Such as: 
> > > What version of Solr and a little info on doc count, index size, etc.
> > > How often are you sending updates to your Master? 
> > > How often are you committing? 
> > > What are your QueryCache and FilterCache settings for autowarm?
> > > Do you have queries set up for newSearcher and firstSearcher?
> > > 
> > > To start looking for your problem, you usually get a pile up of
> > > searchers if you are committing too fast, and/or the warming of new
> > > searchers is taking an extraordinarily long time. If it is happening in
> > > a repeatable fashion, increasing the number of warming searchers probably
> > > won't fix the issue, just delay it.
> > > 
> > > -ToddFeak
> > > 
> > > -Original Message-
> > > From: dudes dudes [mailto:[EMAIL PROTECTED] 
> > > Sent: Monday, December 01, 2008 12:13 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: maxWarmingSearchers
> > > 
> > > 
> > > Hello all, 
> > > 
> > > I'm having this issue and I hope I get some help.. :)
> > > 
> > > The following happens quite often ... even though searching and
> > > indexing are on the safe side...
> > >
> > > SolrException: HTTP code=503, reason=Error opening new searcher.
> > > exceeded limit of maxWarmingSearchers=4, try again later.
> > >
> > > I have increased the value of maxWarmingSearchers to 8 and I still
> > > experience the same problem.
> > >
> > > This issue is happening on the master Solr server. Would changing
> > > maxWarmingSearchers to a higher value help overcome this issue, or
> > > should I consider some other points?
> > >
> > > Another question: from your experience, do you think such an error
> > > could crash the server?
> > > 
> > > 
> > > thanks for your time..
> > > ak 
> > > 
> > > 
> > > 

RE: maxWarmingSearchers

2008-12-01 Thread Feak, Todd
The commit after each one may be hurting you.

I believe that a new searcher is created after each commit. That searcher then 
runs through its warm up, which can be costly depending on your warming 
settings. Even if it's not overly costly, creating another one while the first 
one is running makes both of them run just a bit slower. Then creating a third 
exacerbates it, etc. If you are committing faster than it can warm, you will get 
the pile-up of searchers you are seeing. And the more that pile up, the longer 
it takes each one to finish up.

I would suggest trying to group those 4-10 documents into a single update job 
and doing a single commit. That way only 1 searcher is created per 4 minute 
window.
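
A minimal sketch of that batching with SolrJ (the master URL, field names, and
the source of pending documents are assumptions; add(Collection) and commit()
are the standard SolrJ calls):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public static void indexBatch(List<String> pendingIds) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://master:8983/solr"); // assumed URL
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (String id : pendingIds) { // hypothetical source of updates
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", id); // plus whatever other fields apply
            batch.add(doc);
        }
        server.add(batch); // one update request for the whole batch
        server.commit();   // one commit -> only one new searcher to warm
    }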

Also (sorry I forgot this earlier) you can see how long your searcher is 
spending warming up by looking at the stats page under the admin. 
(/admin/stats.jsp) There is timing information on how long it took for the 
searcher and caches to warm up.

-Todd Feak

-Original Message-
From: dudes dudes [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 01, 2008 1:46 PM
To: solr-user@lucene.apache.org
Subject: RE: maxWarmingSearchers




> Subject: RE: maxWarmingSearchers
> Date: Mon, 1 Dec 2008 13:35:53 -0800
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> 
> OK, sounds reasonable. When you index/update those 4-10 documents, are
> you doing a single commit? Or are you doing a commit after each one?

Well, commits after each one...

> How big is your index? How big are your documents? Ballpark figures are
> ok.

More than a couple of MB.

One final piece of information: I only have 2 GB of RAM on that machine
(Linux on a VMware environment) and I increased the memory of Tomcat to 1 GB.

thanks


> 
> -ToddFeak
> 
> -Original Message-
> From: dudes dudes [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 01, 2008 1:24 PM
> To: solr-user@lucene.apache.org
> Subject: RE: maxWarmingSearchers
> 
> 
> Hi ToddFeak, 
> 
> thanks for your response... 
> 
> Solr version is 1.3. Roughly every 4 minutes there is indexing/updating
> of 4 to 10 documents, from multiple clients to one master server...
>
> It is also worth mentioning that I have
> [listener XML stripped by the list archive]
> postCommit uncommented under solrconfig ... QueryCache and
> FilterCache settings are left as default
> 
> thanks
> ak
> 
> 
> 
> 
> 
> 
> > Subject: RE: maxWarmingSearchers
> > Date: Mon, 1 Dec 2008 13:13:15 -0800
> > From: [EMAIL PROTECTED]
> > To: solr-user@lucene.apache.org
> > 
> > Probably going to need a bit more information.
> > 
> > Such as: 
> > What version of Solr and a little info on doc count, index size, etc.
> > How often are you sending updates to your Master? 
> > How often are you committing? 
> > What are your QueryCache and FilterCache settings for autowarm?
> > Do you have queries set up for newSearcher and firstSearcher?
> > 
> > To start looking for your problem, you usually get a pile up of
> > searchers if you are committing too fast, and/or the warming of new
> > searchers is taking an extraordinarily long time. If it is happening in
> > a repeatable fashion, increasing the number of warming searchers probably
> > won't fix the issue, just delay it.
> > 
> > -ToddFeak
> > 
> > -Original Message-
> > From: dudes dudes [mailto:[EMAIL PROTECTED] 
> > Sent: Monday, December 01, 2008 12:13 PM
> > To: solr-user@lucene.apache.org
> > Subject: maxWarmingSearchers
> > 
> > 
> > Hello all, 
> > 
> > I'm having this issue and I hope I get some help.. :)
> > 
> > The following happens quite often ... even though searching and
> > indexing are on the safe side...
> >
> > SolrException: HTTP code=503, reason=Error opening new searcher.
> > exceeded limit of maxWarmingSearchers=4, try again later.
> >
> > I have increased the value of maxWarmingSearchers to 8 and I still
> > experience the same problem.
> >
> > This issue is happening on the master Solr server. Would changing
> > maxWarmingSearchers to a higher value help overcome this issue, or
> > should I consider some other points?
> >
> > Another question: from your experience, do you think such an error
> > could crash the server?
> > 
> > 
> > thanks for your time..
> > ak 
> > 
> > 
> > 


RE: maxWarmingSearchers

2008-12-01 Thread dudes dudes



> Subject: RE: maxWarmingSearchers
> Date: Mon, 1 Dec 2008 13:35:53 -0800
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> 
> OK, sounds reasonable. When you index/update those 4-10 documents, are
> you doing a single commit? Or are you doing a commit after each one?

Well, commits after each one...

> How big is your index? How big are your documents? Ballpark figures are
> ok.

More than a couple of MB.

One final piece of information: I only have 2 GB of RAM on that machine
(Linux on a VMware environment) and I increased the memory of Tomcat to 1 GB.

thanks


> 
> -ToddFeak
> 
> -Original Message-
> From: dudes dudes [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 01, 2008 1:24 PM
> To: solr-user@lucene.apache.org
> Subject: RE: maxWarmingSearchers
> 
> 
> Hi ToddFeak, 
> 
> thanks for your response... 
> 
> Solr version is 1.3. Roughly every 4 minutes there is indexing/updating
> of 4 to 10 documents, from multiple clients to one master server...
>
> It is also worth mentioning that I have
> [listener XML stripped by the list archive]
> postCommit uncommented under solrconfig ... QueryCache and
> FilterCache settings are left as default
> 
> thanks
> ak
> 
> 
> 
> 
> 
> 
> > Subject: RE: maxWarmingSearchers
> > Date: Mon, 1 Dec 2008 13:13:15 -0800
> > From: [EMAIL PROTECTED]
> > To: solr-user@lucene.apache.org
> > 
> > Probably going to need a bit more information.
> > 
> > Such as: 
> > What version of Solr and a little info on doc count, index size, etc.
> > How often are you sending updates to your Master? 
> > How often are you committing? 
> > What are your QueryCache and FilterCache settings for autowarm?
> > Do you have queries set up for newSearcher and firstSearcher?
> > 
> > To start looking for your problem, you usually get a pile up of
> > searchers if you are committing too fast, and/or the warming of new
> > searchers is taking an extraordinarily long time. If it is happening in
> > a repeatable fashion, increasing the number of warming searchers probably
> > won't fix the issue, just delay it.
> > 
> > -ToddFeak
> > 
> > -Original Message-
> > From: dudes dudes [mailto:[EMAIL PROTECTED] 
> > Sent: Monday, December 01, 2008 12:13 PM
> > To: solr-user@lucene.apache.org
> > Subject: maxWarmingSearchers
> > 
> > 
> > Hello all, 
> > 
> > I'm having this issue and I hope I get some help.. :)
> > 
> > The following happens quite often ... even though searching and
> > indexing are on the safe side...
> >
> > SolrException: HTTP code=503, reason=Error opening new searcher.
> > exceeded limit of maxWarmingSearchers=4, try again later.
> >
> > I have increased the value of maxWarmingSearchers to 8 and I still
> > experience the same problem.
> >
> > This issue is happening on the master Solr server. Would changing
> > maxWarmingSearchers to a higher value help overcome this issue, or
> > should I consider some other points?
> >
> > Another question: from your experience, do you think such an error
> > could crash the server?
> > 
> > 
> > thanks for your time..
> > ak 
> > 
> > 
> > 

RE: maxWarmingSearchers

2008-12-01 Thread Feak, Todd
OK, sounds reasonable. When you index/update those 4-10 documents, are
you doing a single commit? Or are you doing a commit after each one?

How big is your index? How big are your documents? Ballpark figures are
ok.

-ToddFeak

-Original Message-
From: dudes dudes [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 01, 2008 1:24 PM
To: solr-user@lucene.apache.org
Subject: RE: maxWarmingSearchers


Hi ToddFeak, 

thanks for your response... 

Solr version is 1.3. Roughly every 4 minutes there is indexing/updating
of 4 to 10 documents, from multiple clients to one master server...

It is also worth mentioning that I have
[listener XML stripped by the list archive]
postCommit uncommented under solrconfig ... QueryCache and
FilterCache settings are left as default

thanks
ak






> Subject: RE: maxWarmingSearchers
> Date: Mon, 1 Dec 2008 13:13:15 -0800
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> 
> Probably going to need a bit more information.
> 
> Such as: 
> What version of Solr and a little info on doc count, index size, etc.
> How often are you sending updates to your Master? 
> How often are you committing? 
> What are your QueryCache and FilterCache settings for autowarm?
> Do you have queries set up for newSearcher and firstSearcher?
> 
> To start looking for your problem, you usually get a pile up of
> searchers if you are committing too fast, and/or the warming of new
> searchers is taking an extraordinarily long time. If it is happening in
> a repeatable fashion, increasing the number of warming searchers probably
> won't fix the issue, just delay it.
> 
> -ToddFeak
> 
> -Original Message-
> From: dudes dudes [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 01, 2008 12:13 PM
> To: solr-user@lucene.apache.org
> Subject: maxWarmingSearchers
> 
> 
> Hello all, 
> 
> I'm having this issue and I hope I get some help.. :)
> 
> The following happens quite often ... even though searching and
> indexing are on the safe side...
>
> SolrException: HTTP code=503, reason=Error opening new searcher.
> exceeded limit of maxWarmingSearchers=4, try again later.
>
> I have increased the value of maxWarmingSearchers to 8 and I still
> experience the same problem.
>
> This issue is happening on the master Solr server. Would changing
> maxWarmingSearchers to a higher value help overcome this issue, or
> should I consider some other points?
>
> Another question: from your experience, do you think such an error
> could crash the server?
> 
> 
> thanks for your time..
> ak 
> 
> 
> 


Re: Problem indexing on Oracle DB

2008-12-01 Thread Joel Karlsson
Thanks for your reply!

I'm already using the DataImportHandler for indexing. Do I still have to
convert the Clob myself or are there any built-in functions that I've
missed?

// Joel


2008/12/1 Yonik Seeley <[EMAIL PROTECTED]>

> If you are querying Oracle yourself and using something like SolrJ,
> then you must convert the Clob yourself into a String representation.
>
> Also, did you look at Solr's DataImportHandler?
>
> -Yonik
>
> On Mon, Dec 1, 2008 at 3:11 PM, Joel Karlsson <[EMAIL PROTECTED]>
> wrote:
> > Hello everyone,
> >
> > I'm trying to index on an Oracle DB, but can't seem to find any built-in
> > support for objects of type oracle.sql.Clob. The field I try to put the
> > data into is of type text, but after indexing it only contains the Clob
> > object's string representation, i.e. something like [EMAIL PROTECTED]
> > Anyone who knows how to get Solr to index the content of these objects
> > rather than their string representation?
> >
> > Thanks in advance! // Joel
> >
>


RE: maxWarmingSearchers

2008-12-01 Thread dudes dudes

Hi ToddFeak, 

thanks for your response... 

Solr version is 1.3. Roughly every 4 minutes there is indexing/updating
of 4 to 10 documents, from multiple clients to one master server...

It is also worth mentioning that I have
[listener XML stripped by the list archive]
postCommit uncommented under solrconfig ... QueryCache and FilterCache
settings are left as default

thanks
ak






> Subject: RE: maxWarmingSearchers
> Date: Mon, 1 Dec 2008 13:13:15 -0800
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> 
> Probably going to need a bit more information.
> 
> Such as: 
> What version of Solr and a little info on doc count, index size, etc.
> How often are you sending updates to your Master? 
> How often are you committing? 
> What are your QueryCache and FilterCache settings for autowarm?
> Do you have queries set up for newSearcher and firstSearcher?
> 
> To start looking for your problem, you usually get a pile up of
> searchers if you are committing too fast, and/or the warming of new
> searchers is taking an extraordinarily long time. If it is happening in a
> repeatable fashion, increasing the number of warming searchers probably
> won't fix the issue, just delay it.
> 
> -ToddFeak
> 
> -Original Message-
> From: dudes dudes [mailto:[EMAIL PROTECTED] 
> Sent: Monday, December 01, 2008 12:13 PM
> To: solr-user@lucene.apache.org
> Subject: maxWarmingSearchers
> 
> 
> Hello all, 
> 
> I'm having this issue and I hope I get some help.. :)
> 
> The following happens quite often ... even though searching and
> indexing are on the safe side...
>
> SolrException: HTTP code=503, reason=Error opening new searcher.
> exceeded limit of maxWarmingSearchers=4, try again later.
>
> I have increased the value of maxWarmingSearchers to 8 and I still
> experience the same problem.
>
> This issue is happening on the master Solr server. Would changing
> maxWarmingSearchers to a higher value help overcome this issue, or
> should I consider some other points?
>
> Another question: from your experience, do you think such an error
> could crash the server?
> 
> 
> thanks for your time..
> ak 
> 
> 
> 

RE: maxWarmingSearchers

2008-12-01 Thread Feak, Todd
Probably going to need a bit more information.

Such as: 
What version of Solr and a little info on doc count, index size, etc.
How often are you sending updates to your Master? 
How often are you committing? 
What are your QueryCache and FilterCache settings for autowarm?
Do you have queries set up for newSearcher and firstSearcher?
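
(For reference, a minimal sketch of how such warming queries are configured
in solrconfig.xml; the query values are placeholders:

    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">your most common query</str>
          <str name="start">0</str><str name="rows">10</str>
        </lst>
      </arr>
    </listener>
)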

To start looking for your problem, you usually get a pile up of
searchers if you are committing too fast, and/or the warming of new
searchers is taking an extraordinarily long time. If it is happening in a
repeatable fashion, increasing the number of warming searchers probably
won't fix the issue, just delay it.

-ToddFeak

-Original Message-
From: dudes dudes [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 01, 2008 12:13 PM
To: solr-user@lucene.apache.org
Subject: maxWarmingSearchers


Hello all, 

I'm having this issue and I hope I get some help.. :)

The following happens quite often ... even though searching and
indexing are on the safe side...

SolrException: HTTP code=503, reason=Error opening new searcher.
exceeded limit of maxWarmingSearchers=4, try again later.

I have increased the value of maxWarmingSearchers to 8 and I still
experience the same problem.

This issue is happening on the master Solr server. Would changing
maxWarmingSearchers to a higher value help overcome this issue, or
should I consider some other points?

Another question: from your experience, do you think such an error
could crash the server?


thanks for your time..
ak 





Re: Problem indexing on Oracle DB

2008-12-01 Thread Yonik Seeley
If you are querying Oracle yourself and using something like SolrJ,
then you must convert the Clob yourself into a String representation.

Also, did you look at Solr's DataImportHandler?

-Yonik

On Mon, Dec 1, 2008 at 3:11 PM, Joel Karlsson <[EMAIL PROTECTED]> wrote:
> Hello everyone,
>
> I'm trying to index on an Oracle DB, but can't seem to find any built-in
> support for objects of type oracle.sql.Clob. The field I try to put the data
> into is of type text, but after indexing it only contains the Clob object's
> string representation, i.e. something like [EMAIL PROTECTED] Anyone who
> knows how to get Solr to index the content of these objects rather than
> their string representation?
>
> Thanks in advance! // Joel
>


maxWarmingSearchers

2008-12-01 Thread dudes dudes

Hello all, 

I'm having this issue and I hope I get some help.. :)

The following happens quite often ... even though searching and indexing are
on the safe side...

SolrException: HTTP code=503, reason=Error opening new searcher. exceeded
limit of maxWarmingSearchers=4, try again later.

I have increased the value of maxWarmingSearchers to 8 and I still experience
the same problem.
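
(For reference, that limit is set in solrconfig.xml; a minimal sketch:

    <maxWarmingSearchers>4</maxWarmingSearchers>
)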

This issue is happening on the master Solr server. Would changing
maxWarmingSearchers to a higher value help overcome this issue, or should I
consider some other points?

Another question: from your experience, do you think such an error could
crash the server?


thanks for your time..
ak 




Problem indexing on Oracle DB

2008-12-01 Thread Joel Karlsson
Hello everyone,

I'm trying to index on an Oracle DB, but can't seem to find any built-in
support for objects of type oracle.sql.Clob. The field I try to put the data
into is of type text, but after indexing it only contains the Clob object's
string representation, i.e. something like [EMAIL PROTECTED] Anyone who
knows how to get Solr to index the content of these objects rather than
their string representation?

Thanks in advance! // Joel


Re: admin/luke and EmbeddedSolrServer

2008-12-01 Thread Ryan McKinley

sure:


LukeRequest luke = new LukeRequest();
luke.setShowSchema( false );
LukeResponse rsp = luke.process( server );
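
For completeness, a sketch of the surrounding setup (package names are from
SolrJ; the solr home path, core name, and the CoreContainer bootstrap shown
are assumptions):

    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.client.solrj.request.LukeRequest;
    import org.apache.solr.client.solrj.response.LukeResponse;
    import org.apache.solr.core.CoreContainer;

    // Stand up an embedded (in-process, no HTTP) server, then ask it for
    // the same index information the /admin/luke handler returns.
    System.setProperty("solr.solr.home", "/path/to/solr/home"); // assumed
    CoreContainer container = new CoreContainer.Initializer().initialize();
    EmbeddedSolrServer server = new EmbeddedSolrServer(container, ""); // default core
    LukeRequest luke = new LukeRequest(); // targets /admin/luke internally
    luke.setShowSchema(false);            // index stats only, skip the schema
    LukeResponse rsp = luke.process(server);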



On Dec 1, 2008, at 11:42 AM, Matt Mitchell wrote:

Is it possible to send a request to admin/luke using the  
EmbeddedSolrServer?




Re: Dealing with field values as key/value pairs

2008-12-01 Thread Noble Paul നോബിള്‍ नोब्ळ्
In the end Lucene stores stuff as strings.

Even if you do store your data as a map FieldType, Solr may not be able
to treat it like a map, so it is fine to put the map in as one single string.
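
A minimal sketch of the client-side parsing for that approach (field and key
names are illustrative; splitting on the first ':' keeps any colons in the
label intact):

    // Each value of the multivalued field is stored as "key:label".
    String raw = "key1:this is value1";    // value read from the Solr document
    int sep = raw.indexOf(':');            // split on the FIRST colon only
    String key = raw.substring(0, sep);    // -> "key1"
    String label = raw.substring(sep + 1); // -> "this is value1"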

On Mon, Dec 1, 2008 at 10:07 PM, Stephane Bailliez <[EMAIL PROTECTED]> wrote:
> Hi all,
>
>
> I'm looking for ideas about how best to deal with a situation where I need
> to store key/value pairs in the index for consumption in the client.
>
>
> A typical example would be a document with multiple genres where, for
> simplicity reasons, I'd like to send both the 'id' and the 'human readable
> label' (might not be the best example since one would immediately say 'what
> about localization', but in that case assume it's an entity such as a company
> name or a person name).
>
> So say I have
>
> field1 = { 'key1':'this is value1', 'key2':'this is value2' }
>
>
> I was thinking the easiest (not the prettiest) solution would be to store it
> as effectively a string 'key:this is the value' and then have the client
> deal with this 'format' and parse it on the ':' separator.
>
> Another alternative I was considering was to use a custom field that
> effectively exposes the field value as a key/value map for the writer, but
> I'm not so sure it can really be done; I haven't investigated that one deeply.
>
> Any feedback would be welcome; the solution might even be simpler and cleaner
> than what I'm mentioning above, but my brain has been mushy for the last
> couple of weeks.
>
> -- stephane
>
>



-- 
--Noble Paul


Help with Solr configuration for LuSql performance comparison

2008-12-01 Thread Glen Newton
Hello,

I am putting together some performance comparisons of LuSql[1] and
Solr's Data Import Request Handler[2], JdbcDataSource[3]. I want to
make sure I am comparing apples with apples, so would appreciate the
community helping me to make sure I am doing so.

First, LuSql by default uses Lucene's StandardAnalyzer[4]. The Javadocs
indicate it uses StandardTokenizer[5], StandardFilter[6],
LowerCaseFilter[7], and StopFilter[8]. I have created a fieldType in
my Solr configuration's schema.xml that I hope is the equivalent to
this:

[fieldType XML stripped by the list archive]
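
A plausible reconstruction of that fieldType from the description above (the
name attribute is an assumption; the four factories mirror StandardAnalyzer):

    <fieldType name="text_std" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StandardFilterFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.StopFilterFactory"/>
      </analyzer>
    </fieldType>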
Is this equivalent?

Queries
The two queries I am using in the evaluation run against the MySQL
database of 6.4m journal article metadata records[9] I've used in
previous Lucene indexing and searching[10].

Here is the LuSql command line for the first query:
 java ca.nrc.cisti.lusql.core.LuSqlMain -q "select  Publisher.name as
pub, Journal.title as jo, Article.rawUrl as textpath, Journal.issn,
Volume.number as vol,Volume.coverYear as year, Issue.number as iss,
Article.id as id,Article.title as ti, Article.abstract,
Article.startPage as startPage,Article.endPage as endPage from
Publisher, Journal, Volume, Issue, Article where Publisher.id =
Journal.publisherId and Journal.id = Volume.journalId and Volume.id =
Issue.volumeId and Issue.id = Article.issueId" -c
"jdbc:mysql://blue01/dartejos?user=USER&password=PASS" -n 50
-v -l testsolr0

Here is the corresponding Solr data-config.xml file:

[data-config.xml stripped by the list archive]
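A plausible reconstruction from the command line above (the driver class and
entity name are assumptions; the query is the one passed to LuSql):

    <dataConfig>
      <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://blue01/dartejos" user="USER" password="PASS"/>
      <document>
        <entity name="article" query="select Publisher.name as pub,
            Journal.title as jo, Article.rawUrl as textpath, Journal.issn,
            Volume.number as vol, Volume.coverYear as year, Issue.number as iss,
            Article.id as id, Article.title as ti, Article.abstract,
            Article.startPage as startPage, Article.endPage as endPage
            from Publisher, Journal, Volume, Issue, Article
            where Publisher.id = Journal.publisherId
              and Journal.id = Volume.journalId
              and Volume.id = Issue.volumeId
              and Issue.id = Article.issueId"/>
      </document>
    </dataConfig>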

Here is the LuSql command line for the second query:

 java ca.nrc.cisti.lusql.core.LuSqlMain -q "select  Publisher.name as
pub, Journal.title as jo, Article.rawUrl as textpath, Journal.issn,
Volume.number as vol,Volume.coverYear as year, Issue.number as iss,
Article.id as id,Article.title as ti, Article.abstract,
Article.startPage as startPage,Article.endPage as endPage from
Publisher, Journal, Volume, Issue, Article where Publisher.id =
Journal.publisherId and Journal.id = Volume.journalId and Volume.id =
Issue.volumeId and Issue.id = Article.issueId" -c
"jdbc:mysql://blue01/dartejos?user=USER&password=PASS" -n 50
-v -l testsolr1 -Q "id|select Keyword.string as keyword from
ArticleKeywordJoin, Keyword where ArticleKeywordJoin.keywordId=@  and
ArticleKeywordJoin.articleId =  Keyword.id"  -Q "id|select
concat(lastName,\', \', firstName) as fullAuthor   from
ArticleAuthorJoin, Author where ArticleAuthorJoin.articleId = @   and
ArticleAuthorJoin.authorId = Author.id"  -Q "id|select
referencedArticleId as citedId   from Reference where
Reference.referencingArticleId = @"

Here is the corresponding Solr data-config.xml file:

[data-config.xml stripped by the list archive]
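A plausible reconstruction of the additional part: each LuSql -Q clause maps to
a DIH sub-entity, with LuSql's "@" placeholder becoming ${article.id} (entity
names are assumptions; the SQL is kept as given on the command line):

    <entity name="article" query="...same outer query as above...">
      <entity name="keyword" query="select Keyword.string as keyword
          from ArticleKeywordJoin, Keyword
          where ArticleKeywordJoin.keywordId = '${article.id}'
            and ArticleKeywordJoin.articleId = Keyword.id"/>
      <entity name="author" query="select concat(lastName, ', ', firstName)
          as fullAuthor from ArticleAuthorJoin, Author
          where ArticleAuthorJoin.articleId = '${article.id}'
            and ArticleAuthorJoin.authorId = Author.id"/>
      <entity name="reference" query="select referencedArticleId as citedId
          from Reference where Reference.referencingArticleId = '${article.id}'"/>
    </entity>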
Does this configuration look right, i.e. does it represent an
equivalent or close-enough configuration for a valid and useful
performance comparison of LuSql and the Solr Data Import Request Handler
JdbcDataSource?

Note: I am using Solr 1.3, however I have replaced the Lucene 2.9 jars
in webapps/solr/WEB-INF/lib with Lucene 1.4 jars, which I am also
using with LuSql.

The tests will involve measuring the time to complete indexing of the
above two queries with varying heap sizes.

thanks,

Glen

[1]http://lab.cisti-icist.nrc-cnrc.gc.ca/cistilabswiki/index.php/LuSql
[2]http://wiki.apache.org/solr/DataImportHandler
[3]http://wiki.apache.org/solr/DataImportHandler#head-210d4735264367dd07c61ec67dacb8581c57eb17
[4]http://lucene.apache.org/java/2_3_1/api/org/apache/lucene/analysis/standard/StandardAnalyzer.html
[5]http://lucene.apache.org/java/2_3_1/api/org/apache/lucene/analysis/standard/StandardTokenizer.html
[6]http://lucene.apache.org/java/2_3_1/api/org/apache/lucene/analysis/standard/StandardFilter.html
[7]http://lucene.apache.org/java/2_3_1/api/org/apache/lucene/analysis/LowerCaseFilter.html
[8]http://lucene.apache.org/java/2_3_1/api/org/apache/lucene/analysis/StopFilter.html
[9]http://zzzoot.blogspot.com/2008/04/lucene-indexing-performance-benchmarks.html
[10]http://zzzoot.blogspot.com/search?q=lucene

--


Dealing with field values as key/value pairs

2008-12-01 Thread Stephane Bailliez

Hi all,


I'm looking for ideas about how best to deal with a situation where I 
need to store key/value pairs in the index for consumption in the client.



A typical example would be a document with multiple genres where, for 
simplicity reasons, I'd like to send both the 'id' and the 'human 
readable label' (might not be the best example since one would 
immediately say 'what about localization', but in that case assume it's 
an entity such as a company name or a person name).


So say I have

field1 = { 'key1':'this is value1', 'key2':'this is value2' }


I was thinking the easiest (not the prettiest) solution would be to 
store it as effectively a string 'key:this is the value' and then have 
the client deal with this 'format' and parse it on the ':' separator.


Another alternative I was considering was to use a custom field that 
effectively exposes the field value as a key/value map for the writer, 
but I'm not so sure it can really be done; I haven't investigated that 
one deeply.


Any feedback would be welcome; the solution might even be simpler and 
cleaner than what I'm mentioning above, but my brain has been mushy for 
the last couple of weeks.


-- stephane



admin/luke and EmbeddedSolrServer

2008-12-01 Thread Matt Mitchell
Is it possible to send a request to admin/luke using the EmbeddedSolrServer?


Re: broken socket in Jetty causing invalid XML ?

2008-12-01 Thread Anoop Bhatti
I increased this param to 10MB and still got the same exception.  I
doubt my HTTP requests are exceeding 10MB.  I can rerun everything
again and log the sizes of the requests, just to be 100% sure, but
this will take some time.  The stacktrace appears when
the Lucene index is around 30 GB.  Could the size of the indexes be
the cause of the problem?  Any other ideas?


Thanks again,

Anoop Bhatti
--
Committed to open source technology.




On Mon, Nov 24, 2008 at 11:44 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> I thought the Jetty maxFormContentSize was only for form data (not for
> the POST body).
> Does increasing this param help?
>
> -Yonik
>
>
> On Mon, Nov 24, 2008 at 2:45 PM, Anoop Bhatti <[EMAIL PROTECTED]> wrote:
>> Hello Solr Community,
>>
>> I'm getting the stacktrace below when adding docs using the
>> CommonsHttpSolrServer.add(Collection<SolrInputDocument> docs)
>> method.  The server doesn't seem to be able to recover from this error.
>> We are adding a collection with 1,000 SolrInputDocument's at a time.
>> I'm using Solr 1.3.0 and Java 1.6.0_07.
>>
>> It seems that this problem occurs in Jetty when the TCP connection is
>> broken while the stream (from the add(...) method) is being read.  The
>> XML read from the broken stream is not valid.  Is this a correct
>> diagnosis?
>>
>> Could this stacktrace be occurring when the max POST size has been
>> exceeded?  I'm referring to the example/etc/jetty.xml file, which has
>> the setting:
>>   [jetty.xml XML stripped by the list archive; the property is]
>>   org.mortbay.jetty.Request.maxFormContentSize
>>   100
>>
>> Right now maxFormContentSize is set to the default 1 MB on my server.
>>
>> Also, in some cases I have two clients, could Jetty be blocking one
>> client and causing it to finally timeout?
>>
>> This stacktrace doesn't happen right away; it occurs once the Lucene
>> indexes are about 30 GB.
>> Could the periodic merging of segments be the culprit?
>>
>> I was also thinking that the problem could be with writing back the
>> response (the UpdateRequest).
>>
>> Here's the gist of my Java client code:
>>
>> CommonsHttpSolrServer solrServer = new CommonsHttpSolrServer(solrServerURL);
>> solrServer.setConnectionTimeout(100);
>> solrServer.setDefaultMaxConnectionsPerHost(100);
>> solrServer.setMaxTotalConnections(100);
>> solrServer.add(solrDocs); // the collection of docs
>> solrServer.commit();
>>
>> And here's the stacktrace:
>>
>> Nov 20, 2008 5:25:33 PM org.apache.solr.core.SolrCore execute
>> INFO: [] webapp=/solr path=/update params={wt=javabin&version=2.2} status=0 QTime=469
>> Nov 20, 2008 5:25:37 PM org.apache.solr.common.SolrException log
>> SEVERE: com.ctc.wstx.exc.WstxIOException: null
>>     at com.ctc.wstx.sr.StreamScanner.throwFromIOE(StreamScanner.java:708)
>>     at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1086)
>>     at org.apache.solr.handler.XmlUpdateRequestHandler.readDoc(XmlUpdateRequestHandler.java:321)
>>     at org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:195)
>>     at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
>>     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>>     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
>>     at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
>>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
>>     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
>>     at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
>>     at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>>     at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>>     at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
>>     at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
>>     at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
>>     at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>>     at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
>>     at org.mortbay.jetty.Server.handle(Server.java:285)
>>     at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
>>     at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:835)
>>     at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:641)
>>     at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:202)
>>     at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
>>     at org.mortbay.jetty.bio.SocketConnector$Connection.ru

Re: NIO not working yet

2008-12-01 Thread Yonik Seeley
On Sun, Nov 30, 2008 at 10:43 PM, Jon Baer <[EMAIL PROTECTED]> wrote:
> Sorry missed that (and probably dumb question), does that -D flag work for
> setting as a RAMDirectory as well?

Nope... that's implemented only in FSDirectory to specify a specific
subclass implementation (which RAMDirectory is not).

> - Jon
>
> On Nov 30, 2008, at 8:42 PM, Yonik Seeley wrote:
>
>> OK, the development version of Solr should now be fixed (i.e. NIO
>> should be the default for non-Windows platforms).  The next nightly
>> build (Dec-01-2008) should have the changes.
>>
>> -Yonik
>>
>> On Wed, Nov 12, 2008 at 2:59 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
>>>
>>> NIO support in the latest Solr development versions does not work yet
>>> (I previously advised that some people with possible lock contention
>>> problems try it out).  We'll let you know when it's fixed, but in the
>>> meantime you can always set the system property
>>> "org.apache.lucene.FSDirectory.class" to
>>> "org.apache.lucene.store.NIOFSDirectory" to try it out.
>>>
>>> for example:
>>>
>>> java
>>> -Dorg.apache.lucene.FSDirectory.class=org.apache.lucene.store.NIOFSDirectory
>>> -jar start.jar
>>>
>>> -Yonik
>
>


Re: Upgrade from 1.2 to 1.3 gives 3x slowdown + script!

2008-12-01 Thread Fergus McMenemie
Hello Grant,
>
>Haven't forgotten about you, but I've been traveling and then into  
>some US Holidays here.
Happy Thanksgiving!

>
>To confirm I am understanding, you are seeing a slowdown between 1.3-dev
>from April and one from September, right?
Yep.

Here are the MD5 hashes:-
fergus: md5 *.war
MD5 (solr-bc.war) = 8d4f95628d6978c959d63d304788bc25
MD5 (solr-nightly.war) = 10281455a66b0035ee1f805496d880da

This is the META-INF/MANIFEST.MF from a recent nightly build. (slow)
  Manifest-Version: 1.0
  Ant-Version: Apache Ant 1.7.0
  Created-By: 1.5.0_06-b05 (Sun Microsystems Inc.)
  Extension-Name: org.apache.solr
  Specification-Title: Apache Solr Search Server
  Specification-Version: 1.3.0.2008.11.13.08.16.12
  Specification-Vendor: The Apache Software Foundation
  Implementation-Title: org.apache.solr
  Implementation-Version: nightly exported - yonik - 2008-11-13 08:16:12
  Implementation-Vendor: The Apache Software Foundation
  X-Compile-Source-JDK: 1.5
  X-Compile-Target-JDK: 1.5

This is the war file we were given on the course
  Manifest-Version: 1.0
  Ant-Version: Apache Ant 1.7.0
  Created-By: 1.5.0_13-121 ("Apple Computer, Inc.")
  Extension-Name: org.apache.solr
  Specification-Title: Apache Solr Search Server
  Specification-Version: 1.2.2008.04.04.08.09.14
  Specification-Vendor: The Apache Software Foundation
  Implementation-Title: org.apache.solr
  Implementation-Version: 1.3-dev exported - erik - 2008-04-04 08:09:14
  Implementation-Vendor: The Apache Software Foundation
  X-Compile-Source-JDK: 1.5
  X-Compile-Target-JDK: 1.5

I have copied both war files to a web site

http://www.twig.me.uk/solr/solr-bc.war (solr 1.3 dev == bootcamp)

http://www.twig.me.uk/solr/solr-nightly.war (nightly)


Regards Fergus.

>Can you produce an MD5 hash of the WAR file or something, such that I  
>can know I have the exact bits.  Better yet, perhaps you can put those  
>files up somewhere where they can be downloaded.
>
>Thanks,
>Grant
>
>On Nov 26, 2008, at 10:54 AM, Fergus McMenemie wrote:
>
>> Hello Grant,
>>
>> Not much good with Java profilers (yet!) so I thought I
>> would send a script!
>>
>> Details... details! Having decided to produce a script to
>> replicate the 1.2 vis 1.3 speed problem. The required rigor
>> revealed a lot more.
>>
>> 1) The faster version I have previously referred to as 1.2,
>>   was actually a "1.3-dev" I had downloaded as part of the
>>   solr bootcamp class at ApacheCon Europe 2008. The ID
>>   string in the CHANGES.txt document is:-
>>   $Id: CHANGES.txt 643465 2008-04-01 16:10:19Z gsingers $
>>
>> 2) I did actually download and speed test a version of 1.2
>>   from the internet. It's CHANGES.txt id is:-
>>   $Id: CHANGES.txt 543263 2007-05-31 21:19:02Z yonik $
>>   Speed wise it was about the same as 1.3 at 64min. It also
>>   had lots of char set issues and is ignored from now on.
>>
>> 3) The version I was planning to use, till I found this,
>>   speed issue was the "latest" official version:-
>>   $Id: CHANGES.txt 694377 2008-09-11 17:40:11Z klaas $
>>   I also verified the behavior with a nightly build.
>>   $Id: CHANGES.txt 712457 2008-11-09 01:24:11Z koji $
>>
>> Anyway, the following script indexes the content in 22min
>> for the 1.3-dev version and takes 68min for the newer releases
>> of 1.3. I took the conf directory from the 1.3dev (bootcamp)
>> release and used it replace the conf directory from the
>> official 1.3 release. The 3x slow down was still there; it is
>> not a configuration issue!
>> =========================================================
>>
>>
>>
>>
>>
>>
>> #! /bin/bash
>>
>> # This script assumes a /usr/local/tomcat link to whatever version
>> # of tomcat you have installed. I have "apache-tomcat-5.5.20" Also
>> # /usr/local/tomcat/conf/Catalina/localhost contains no solr.xml.
>> # All the following was done as root.
>>
>>
>> # I have a directory /usr/local/ts which contains four versions of solr.
>> # The "official" 1.2 along with two 1.3 releases and a version of 1.2 or
>> # a 1.3 beta I got while attending a solr bootcamp. I indexed the same
>> # content using the different versions of solr as follows:
>> cd /usr/local/ts
>> if [ "" ]
>> then
>>   echo "Starting from a-fresh"
>>   sleep 5 # allow time for me to interrupt!
>>   cp -Rp apache-solr-bc/example/solr  ./solrbc  #bc = bootcamp
>>   cp -Rp apache-solr-nightly/example/solr ./solrnightly
>>   cp -Rp apache-solr-1.3.0/example/solr   ./solr13
>>
>>   # the gaz is regularly updated and its name keeps changing :-) The page
>>   # http://earth-info.nga.mil/gns/html/namefiles.htm has a link to the
>>   # latest version.
>>   curl "http://earth-info.nga.mil/gns/html/geonames_dd_dms_date_20081118.zip" > geonames.zip
>>   unzip -q geonames.zip
>>   # delete corrupt blips!
>>   perl -i -n -e 'print unless
>>   ($. > 2128495 and $. < 2128505) or
>>   ($. > 5944254 and $. < 5944260)
>>   ;' geonames_dd_dms_date_20081118.txt
>>   #following was used to detect bad short rec

Re: Very bad performance

2008-12-01 Thread Cedric Houis

Hi Yonik.

You have really done a great job. =)

Here are my benchmark results: 

With the current class : 
- 10 users : 860 queries / Average time 0.511 sec 
- 50 users :  1335 queries / Average time 8.264 sec 
- 100 users : 1358 queries / Average time 18.703 sec

With the new class
FullText + Facets : 
- 10 users : 940 queries / Average time 0.109 sec 
- 50 users :  4605 queries / Average time 0.206 sec 
- 100 users : 6090 queries / Average time 1.605 sec

Thanks again,

  Cédric



Yonik Seeley wrote:
> 
> On Mon, Nov 24, 2008 at 9:19 AM, Cedric Houis <[EMAIL PROTECTED]>
> wrote:
>> I've made the test with the latest nightly build of Solr. Performance is
>> similar.
> 
> Yep, see
> http://www.nabble.com/NIO-not-working-yet-to20468152.html#a20468152
> 
>> You've said that someone will work to improve faceted search. Could you
>> tell me where I can track those evolutions?
> 
> Coming soon... see
> https://issues.apache.org/jira/browse/SOLR-475
> 
> -Yonik
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Very-bad-performance-tp20366783p20768334.html
Sent from the Solr - User mailing list archive at Nabble.com.