What is the difference?

2011-07-21 Thread cnyee
Hi,

I have two queries:

(1) q = (change management)
(2) q = (change management) AND domain_ids:(0^1.3 OR 1)

The purpose of (2) is to boost the records with domain_ids=0.
In my database all records have domain_ids = 0 or 1, so domain_ids:(0 OR 1)
will always return the full database.

Now my question is: query (2) returns 5000+ results, but query (1) returns
only 700+ results.

Can somebody enlighten me on the reason behind such a vast
difference in the number of results?

Many thanks in advance.

Yee



Re: XInclude Multiple Elements

2011-07-21 Thread Michael Sokolov
The various XInclude specs were never really fully implemented by XML 
parsers.  IMO it's really best for including whole XML files. If I 
remember right, the situation is that the xpointer() scheme (the most 
flexible) was never implemented.  There are two other schemes for 
addressing content within a document.  One of them (the sort-of 
"default" scheme - see 
http://www.w3.org/TR/2003/REC-xptr-framework-20030325/) relies on 
identifiers in the document, but to get those identifiers "identified" 
you have to run the document through a DTD or schema validation step as 
part of parsing.  I never did get that to work, but if you're diligent 
it seems possible.  The other scheme (the element() scheme) lets you 
address child nodes by simple paths - like XPath, but with a much more 
limited syntax (see http://www.w3.org/TR/2003/REC-xptr-element-20030325/).
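For reference, a minimal sketch of both pieces (hedged: the file name and the
pointer are hypothetical, but DocumentBuilderFactory.setXIncludeAware is the
standard JAXP switch, and element(/1/2) is element() scheme syntax for the
second child element of the root):

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class XIncludeDemo {
        public static void main(String[] args) throws Exception {
            // Enable XInclude processing in the JAXP parser; Solr's config
            // loader relies on the same mechanism.
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            dbf.setXIncludeAware(true);
            // A config containing e.g.
            //   <xi:include href="common.xml" xpointer="element(/1/2)"
            //               xmlns:xi="http://www.w3.org/2001/XInclude"/>
            // would pull in the second child element of common.xml's root.
            Document doc = dbf.newDocumentBuilder().parse(new File("solrconfig.xml"));
            System.out.println(doc.getDocumentElement().getNodeName());
        }
    }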


-Mike

I did attempt the xpointer="xpointer(//requestHandler)" syntax, and
received this error: 2011-07-13 18:49:06,640 [main] WARN
org.apache.solr.core.Config - XML parse warning in
"solrres:/solrconfig.xml", line 3, column 133: SchemeUnsupported: The
XPointer scheme 'xpointer' is not supported. This matches what the
wiki page indicates, and the Xerces FAQ confirms: Xerces does not
support the xpointer() scheme.  I was not able to find any
indication that there are any Java libraries available that do support
the xpointer() scheme.  If anyone knows of one, and how to configure
Solr to use it, that would almost certainly fix my problem.

I do understand that Solr isn't doing anything special with XInclude.
But from my attempts to understand the state of XInclude on the JVM, I
am unable to identify a useful technique for taking advantage of it to
share configuration in Solr.  My hope was that someone who had used it
successfully could indicate either something I missed about how to
make it work, or a useful pattern for working within the limitations
of the available functionality.

Stephen Duncan Jr
www.stephenduncanjr.com




Re: fieldCache problem OOM exception

2011-07-21 Thread Santiago Bazerque
Hello Erick,

I have a 1.7MM-document, 3.6GB index. I also have an unusually large number
of dynamic fields, which I use for sorting. My FieldCache currently has about
13,000 entries, even though my index only serves 1-3 queries per second. Each
query sorts by two dynamic fields, and facets on 3-4 fields that are fixed.
These latter fields are always in the field cache; what I find suspicious are
the other ~13,000 entries sitting there.

I am using a 32GB heap, and I am seeing periodic OOM errors (I didn't spot
a regular pattern as Bernd did, but I haven't increased RAM as methodically
as he has).

If you need any more info, I'll be glad to post it to the list.

Best,
Santiago
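
For anyone who wants to see what is actually sitting in the cache, here is a
minimal sketch (assuming the Lucene 2.9/3.x FieldCache API, run inside the
same JVM as Solr, e.g. from a custom request handler):

    import org.apache.lucene.search.FieldCache;
    import org.apache.lucene.search.FieldCache.CacheEntry;

    public class DumpFieldCache {
        // Print every (reader, field, type) entry currently held in the
        // process-wide FieldCache; useful for spotting unexpected entries.
        public static void dump() {
            CacheEntry[] entries = FieldCache.DEFAULT.getCacheEntries();
            System.out.println(entries.length + " FieldCache entries");
            for (CacheEntry e : entries) {
                System.out.println(e.getFieldName() + " -> " + e);
            }
        }
    }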

On Fri, Jun 17, 2011 at 9:13 AM, Erick Erickson wrote:

> Sorry, it was late last night when I typed that...
>
> Basically, if you sort and facet on #all# the fields you mentioned, it
> should populate
> the cache in one go. If the problem is that you just have too many unique
> terms
> for all those operations, then it should go bOOM.
>
> But, frankly, that's unlikely; I'm just suggesting that to be sure the
> easy case isn't the problem. Take a memory snapshot at that point just to
> see; it should be a high-water mark.
>
> The fact that you increase the heap and can then run for longer is
> extremely
> suspicious, and really smells like a memory issue, so we'd like to pursue
> it.
>
> I'd be really interested if anyone else is seeing anything similar,
> these are the
> scary ones...
>
> Best
> Erick
>
> On Fri, Jun 17, 2011 at 3:09 AM, Bernd Fehling
>  wrote:
> > Hi Erick,
> > I will take some memory snapshots during the next week,
> > but how can it be that I get OOMs with one query?
> >
> > - I started with 6g for JVM --> 1 day until OOM.
> > - increased to 8g --> 2 days until OOM
> > - increased to 10g --> 3.5 days until OOM
> > - increased to 16g --> 5 days until OOM
> > - currently 20g --> about 7 days until OOM
> >
> > Starting the system takes about 3.5g and goes up to about 4g after a
> while.
> >
> > The only dirty workaround so far is to restart the whole system after 5
> > days.
> > Not really nice.
> >
> > The problem seems to be the fieldCache, which is under the hood of Jetty.
> > Do you know of any sizing features for the fieldCache to limit its memory
> > consumption?
> >
> > Regards
> > Bernd
> >
> > On 17.06.2011 03:37, Erick Erickson wrote:
> >>
> >> Well, if my theory is right, you should be able to generate OOMs at will
> >> by
> >> sorting and faceting on all your fields in one query.
> >>
> >> But Lucene's cache should be garbage collected; can you take some memory
> >> snapshots during the week? It should hit a point and stay steady there.
> >>
> >> How much memory are you giving your JVM? It looks like a lot given your
> >> memory snapshot.
> >>
> >> Best
> >> Erick
> >>
> >> On Thu, Jun 16, 2011 at 3:01 AM, Bernd Fehling
> >>   wrote:
> >>>
> >>> Hi Erick,
> >>>
> >>> yes I'm sorting and faceting.
> >>>
> >>> 1) Fields for sorting:
> >>>   sort=f_dccreator_sort, sort=f_dctitle, sort=f_dcyear
> >>>   The parameter "facet.sort=" is empty, only using parameter "sort=".
> >>>
> >>> 2) Fields for faceting:
> >>>   f_dcperson, f_dcsubject, f_dcyear, f_dccollection, f_dclang,
> >>> f_dctypenorm,
> >>> f_dccontenttype
> >>>   Other faceting parameters:
> >>>
> >>>
> >>>
> ...&facet=true&facet.mincount=1&facet.limit=100&facet.sort=&facet.prefix=&...
> >>>
> >>> 3) The LukeRequestHandler takes too long for my huge index so this is
> >>> from
> >>>   the standalone luke (compiled for solr3.2):
> >>>   f_dccreator_sort = 10.029.196
> >>>   f_dctitle        = 21.514.939
> >>>   f_dcyear         =      1.471
> >>>   f_dcperson       = 14.138.165
> >>>   f_dcsubject      =  8.012.319
> >>>   f_dccollection   =      1.863
> >>>   f_dclang         =        299
> >>>   f_dctypenorm     =         14
> >>>   f_dccontenttype  =        497
> >>>
> >>> numDocs:      28.940.964
> >>> numTerms:    686.813.235
> >>> optimized:    true
> >>> hasDeletions: false
> >>>
> >>> What can you read/calculate from these values?
> >>>
> >>> Is my index too big for Lucene/Solr?
> >>>
> >>> What I don't understand is why the fieldCache is not garbage collected
> >>> and therefore reduced in size from time to time.
> >>>
> >>> Regards
> >>> Bernd
> >>>
> >>> On 15.06.2011 17:50, Erick Erickson wrote:
> 
>  The first question I have is whether you're sorting and/or
>  faceting on many unique string values? I'm guessing
>  that sometime you are. So, some questions to help
>  pin it down:
>  1>what fields are you sorting on?
>  2>what fields are you faceting on?
>  3>how many unique terms in each (see the solr admin page).
> 
>  Best
>  Erick
> 
>  On Wed, Jun 15, 2011 at 8:22 AM, Bernd Fehling
>  wrote:
> >
> > Dear list,
> >
> > after getting OOM exception after one week of operation with
> > solr 3.2 I used MemoryAnalyzer for the heapdumpfile.
> > It looks like the field

Re: Determine which field term was found?

2011-07-21 Thread Jonathan Rochkind

I've had this problem too, although I've never come up with a good solution.

I've wondered: is there any clever way to use the highlighter to 
accomplish tasks like this, or is that more trouble than the help it'll 
get you?


Jonathan

On 7/21/2011 5:27 PM, Yonik Seeley wrote:

On Thu, Jul 21, 2011 at 4:47 PM, Olson, Ron  wrote:

Is there an easy way to find out which field matched a term in an OR query using Solr? I have a 
document with names in two multi-valued fields and I am searching for "Smith", using the 
query "A_NAMES:smith OR B_NAMES:smith". I figure I could loop through both result arrays, 
but that seems weird to me to have to search again for the value in a result.

That's pretty much the way lucene currently works - you don't know
what fields match a query.
If the query is simple, looping over the returned stored fields is
probably your best bet.

There are a couple other tricks you could use (although they are not
necessarily better):
1) with grouping by query (a trunk feature) you can essentially return
both queries with one request:
   q=*:*&group=true&group.query=A_NAMES:smith&group.query=B_NAMES:smith
   and optionally add a "group.query=A_NAMES:smith OR B_NAMES:smith" if
you need the combined list
2) use pseudo-fields (also trunk) in conjunction with the termfreq
function (the number of times a term appears in a field).  This
obviously only works with term queries.
   fl=*,count1:termfreq(A_NAMES,'smith'),count2:termfreq(B_NAMES,'smith')
   You can use parameter substitution to pull out the actual term and
simplify the query:
   fl=*,count1:termfreq(A_NAMES,$term),count2:termfreq(B_NAMES,$term)&term=smith


-Yonik
http://www.lucidimagination.com



Re: previous and next rows of current record

2011-07-21 Thread Jonathan Rochkind

I think maybe I know what you mean.

You have a result set generated by a query. You have an item detail page 
in your web app -- on that item detail page, you want to give 
next/previous buttons for "current search results".


If that's it, read on (although news isn't good), if that's not it, 
ignore me.


There is no good way to do it, although it's not really so much a Solr 
problem.  As far as Solr is concerned, if you know the query, and you 
know the current row i into the results, then just ask Solr for 
rows=1&start=$(i-1) to get the previous record, or start=$(i+1) to get the 
next. (You can't send $(i-1) or $(i+1) to Solr literally; that's just 
shorthand -- your app would have to calculate them and send the literals.)


The problem is architecting a web app so when you are on an item detail 
page, the app knows what the "current" Solr query was, and what the "i" 
index into it was.


The app I work on wants to provide this feature too, but I am so unhappy 
with what it currently does (it is both ugly AND does not actually work 
right in several very common cases) that I am definitely not going to 
offer it as an example.  But if you are willing to have your web app send 
the "current" search and the index in the URL to the item detail page, 
that would certainly make it easier.


It's not so much a Solr problem -- the answer in Solr is pretty clear: 
keep track of which index into your results you are on, and then just ask 
for the one before or after.  But there's no great way to make a web app 
that actually does that without horrid URLs.  There's nothing built into 
Solr to help you. Solr is pretty much sessionless/stateless; it's got no 
idea what the 'current' search for your particular session is.
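
As a concrete illustration of the Solr side, a minimal SolrJ sketch (the
query string q and row index i come from your app; the names are
hypothetical):

    import org.apache.solr.client.solrj.SolrQuery;

    public class Neighbors {
        // q: the original query string; i: 0-based row index of the current
        // record; offset: -1 for the previous record, +1 for the next.
        public static SolrQuery neighbor(String q, int i, int offset) {
            SolrQuery query = new SolrQuery(q);
            query.setStart(i + offset); // guard against i == 0 in real code
            query.setRows(1);
            return query;
        }
    }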




On 7/21/2011 2:38 PM, Bob Sandiford wrote:

But - what is it that makes '9' the next id after '5'?  Why not '6'?  Or 
'91238412'?  Or '4'?

i.e. you still haven't answered the question about what 'next' and 'previous' 
really mean...

But - if you already know that '9' is the next page, why not just do another 
query with id '9' to get the next record?

Bob Sandiford | Lead Software Engineer | SirsiDynix
P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
www.sirsidynix.com



-Original Message-
From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
Sent: Thursday, July 21, 2011 2:33 PM
To: solr-user@lucene.apache.org
Subject: Re: previous and next rows of current record

Hi

in my case there is no id sequence. Ids are generated sequentially across
all categories, but when we filter by category the ids become
non-consecutive. If I am on the detail page for id 5 and the next id is 9,
then on that same page my requirement is to fetch id 9 as the next record.

On Thursday, July 21, 2011, Bob Sandiford
  wrote:

Well, it sort of depends on what you mean by the 'previous' and the
'next' record.

Do you have some type of sequencing built into your concept of your
solr / lucene indexes?  Do you have sequential id's?

i.e. What's the use case, and what's the data available to support
your use case?

Bob Sandiford | Lead Software Engineer | SirsiDynix
P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
www.sirsidynix.com


-Original Message-
From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
Sent: Thursday, July 21, 2011 2:18 PM
To: solr-user@lucene.apache.org
Subject: Re: previous and next rows of current record

Pls help..

On Thursday, July 21, 2011, Jonty Rhods wrote:

Hi,

Is there any special query in Solr to get the previous and next record
of the current record? I am getting a single record's detail using its id
from the Solr server. I need to get next and previous on the detail page.

regards, Jonty









RE: Determine which field term was found?

2011-07-21 Thread Olson, Ron
Hmm, okay, well, if that's the way it works, then I'll loop through the arrays, 
as the query is pretty much as described.

Related to what you said about how Lucene works, do you think this 
functionality is worth opening an enhancement request for, or is it such 
a tiny corner case as to not be worth it?

Thanks a lot for the help!

Ron

-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Thursday, July 21, 2011 4:27 PM
To: solr-user@lucene.apache.org
Subject: Re: Determine which field term was found?

On Thu, Jul 21, 2011 at 4:47 PM, Olson, Ron  wrote:
> Is there an easy way to find out which field matched a term in an OR query 
> using Solr? I have a document with names in two multi-valued fields and I am 
> searching for "Smith", using the query "A_NAMES:smith OR B_NAMES:smith". I 
> figure I could loop through both result arrays, but that seems weird to me to 
> have to search again for the value in a result.

That's pretty much the way lucene currently works - you don't know
what fields match a query.
If the query is simple, looping over the returned stored fields is
probably your best bet.

There are a couple other tricks you could use (although they are not
necessarily better):
1) with grouping by query (a trunk feature) you can essentially return
both queries with one request:
  q=*:*&group=true&group.query=A_NAMES:smith&group.query=B_NAMES:smith
  and optionally add a "group.query=A_NAMES:smith OR B_NAMES:smith" if
you need the combined list
2) use pseudo-fields (also trunk) in conjunction with the termfreq
function (the number of times a term appears in a field).  This
obviously only works with term queries.
  fl=*,count1:termfreq(A_NAMES,'smith'),count2:termfreq(B_NAMES,'smith')
  You can use parameter substitution to pull out the actual term and
simplify the query:
  fl=*,count1:termfreq(A_NAMES,$term),count2:termfreq(B_NAMES,$term)&term=smith


-Yonik
http://www.lucidimagination.com




Disabling Coord on Solr queries

2011-07-21 Thread Ran Peled
I am looking for the simplest way to disable coord in Solr queries.  I have
found that Lucene allows this via a BooleanQuery constructed with
disableCoord=true:

    public BooleanQuery(boolean disableCoord)

Is there any way to activate this functionality directly from a Solr query?

Thanks,
Ran


Re: Determine which field term was found?

2011-07-21 Thread Yonik Seeley
On Thu, Jul 21, 2011 at 4:47 PM, Olson, Ron  wrote:
> Is there an easy way to find out which field matched a term in an OR query 
> using Solr? I have a document with names in two multi-valued fields and I am 
> searching for "Smith", using the query "A_NAMES:smith OR B_NAMES:smith". I 
> figure I could loop through both result arrays, but that seems weird to me to 
> have to search again for the value in a result.

That's pretty much the way lucene currently works - you don't know
what fields match a query.
If the query is simple, looping over the returned stored fields is
probably your best bet.

There are a couple other tricks you could use (although they are not
necessarily better):
1) with grouping by query (a trunk feature) you can essentially return
both queries with one request:
  q=*:*&group=true&group.query=A_NAMES:smith&group.query=B_NAMES:smith
  and optionally add a "group.query=A_NAMES:smith OR B_NAMES:smith" if
you need the combined list
2) use pseudo-fields (also trunk) in conjunction with the termfreq
function (the number of times a term appears in a field).  This
obviously only works with term queries.
  fl=*,count1:termfreq(A_NAMES,'smith'),count2:termfreq(B_NAMES,'smith')
  You can use parameter substitution to pull out the actual term and
simplify the query:
  fl=*,count1:termfreq(A_NAMES,$term),count2:termfreq(B_NAMES,$term)&term=smith
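
For completeness, reading those pseudo-fields from SolrJ might look like this
(a sketch against a trunk build; the core URL is hypothetical):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class WhichFieldMatched {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrQuery q = new SolrQuery("A_NAMES:smith OR B_NAMES:smith");
            q.set("fl", "*,count1:termfreq(A_NAMES,'smith'),count2:termfreq(B_NAMES,'smith')");
            QueryResponse rsp = server.query(q);
            for (SolrDocument doc : rsp.getResults()) {
                // a field matched if its term frequency is > 0
                Number inA = (Number) doc.getFieldValue("count1");
                Number inB = (Number) doc.getFieldValue("count2");
                System.out.println(doc.getFieldValue("id") + " A:" + inA + " B:" + inB);
            }
        }
    }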


-Yonik
http://www.lucidimagination.com


Determine which field term was found?

2011-07-21 Thread Olson, Ron
Hi all-

Is there an easy way to find out which field matched a term in an OR query 
using Solr? I have a document with names in two multi-valued fields and I am 
searching for "Smith", using the query "A_NAMES:smith OR B_NAMES:smith". I 
figure I could loop through both result arrays, but that seems weird to me to 
have to search again for the value in a result.

Thanks for any info,

Ron



Ask again: set queryNorm for boost score to 1?

2011-07-21 Thread Elaine Li
I am asking the question again; hope someone knows the answer. Basically I
just don't want the boost score (generated from the formula below) to be
normalized. Can I do it without hacking the source code?
Elaine

On Wed, Jul 20, 2011 at 3:07 PM, Elaine Li  wrote:

> Hi Folks,
>
> My boost function is bf=div(product(num_clicks,0.3),sum(num_clicks,25)).
> I would like to add its score directly to the final score instead
> of letting it be normalized by the queryNorm value.
> Is there any way to do it?
>
> Thanks.
>
> Elaine
>


Using Scriptransformer to send a HTTP Request

2011-07-21 Thread sabman
I am using Solr to index RSS feeds; I use DataImportHandler to parse
the URLs and then index them. I have also implemented a web service that
takes a URL, creates a thumbnail image, and stores it in a local
directory.

So here is what I want to do: after a URL is parsed, I want to send an HTTP
request to the web service with that URL. ScriptTransformer seemed the way
to go, and here is how my data-config.xml file looks:

[data-config.xml elided by the list archive]

As you can see from the data-config file, I am currently testing whether
this would work by hard-coding a dummy URL.

url.openConnection().connect(); should make the HTTP request, but the image
is not generated.
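
For comparison, a minimal plain-Java sketch of the same call (the
thumbnail-service URL is hypothetical). Note that with HttpURLConnection,
connect() only opens the connection; actually reading the response -- e.g.
via getResponseCode() -- is what reliably forces the request through:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ThumbnailPing {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/thumbnailer?src=http://example.com/feed-item");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode(); // forces the request to be sent
            System.out.println("thumbnail service answered: " + status);
            conn.disconnect();
        }
    }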

I see no compile errors. I tried the example script that prints out a
message:

    var v = new java.lang.Runnable() {
        run: function() {
            print('PRINTING');
        }
    };
    v.run();

And it worked. 

I even played around with the function names to force it to throw some
compile errors, and it did throw errors, which shows that it is able to
create the objects of class type URL and URLConnection.

Any suggestions?



dih fetching but not adding records to index

2011-07-21 Thread abhayd
Hi,
I'm trying to load data into a Solr index from an XML file using DIH.

My promotions.xml file:
--
[promotions.xml elided by the list archive; only the id values "3" and "4"
survive]
-
schema.xml has:

[schema.xml snippet elided by the list archive]

and the DIH config file is as follows:
-
[DIH config elided by the list archive]

After a full index load I get the message:

Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
Requests: 0, Fetched: 1, Skipped: 0, Processed: 0

And nothing is added to the Solr index. Any idea what's happening? I don't
see any error messages either.




RE: previous and next rows of current record

2011-07-21 Thread Bob Sandiford
But - what is it that makes '9' the next id after '5'?  Why not '6'?  Or 
'91238412'?  Or '4'?

i.e. you still haven't answered the question about what 'next' and 'previous' 
really mean...

But - if you already know that '9' is the next page, why not just do another 
query with id '9' to get the next record?

Bob Sandiford | Lead Software Engineer | SirsiDynix
P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
www.sirsidynix.com


> -Original Message-
> From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
> Sent: Thursday, July 21, 2011 2:33 PM
> To: solr-user@lucene.apache.org
> Subject: Re: previous and next rows of current record
> 
> Hi
> 
> in my case there is no id sequence. Ids are generated sequentially across
> all categories, but when we filter by category the ids become
> non-consecutive. If I am on the detail page for id 5 and the next id is 9,
> then on that same page my requirement is to fetch id 9 as the next record.
> 
> On Thursday, July 21, 2011, Bob Sandiford
>  wrote:
> > Well, it sort of depends on what you mean by the 'previous' and the
> 'next' record.
> >
> > Do you have some type of sequencing built into your concept of your
> solr / lucene indexes?  Do you have sequential id's?
> >
> > i.e. What's the use case, and what's the data available to support
> your use case?
> >
> > Bob Sandiford | Lead Software Engineer | SirsiDynix
> > P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
> > www.sirsidynix.com
> >
> >> -Original Message-
> >> From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
> >> Sent: Thursday, July 21, 2011 2:18 PM
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: previous and next rows of current record
> >>
> >> Pls help..
> >>
> >> On Thursday, July 21, 2011, Jonty Rhods 
> wrote:
> >> > Hi,
> >> >
> >> > Is there any special query in Solr to get the previous and next
> >> > record of the current record? I am getting a single record's detail
> >> > using its id from the Solr server. I need to get next and previous
> >> > on the detail page.
> >> >
> >> > regards, Jonty
> >> >
> >
> >
> >




Re: previous and next rows of current record

2011-07-21 Thread Jonty Rhods
Hi

in my case there is no id sequence. Ids are generated sequentially across
all categories, but when we filter by category the ids become
non-consecutive. If I am on the detail page for id 5 and the next id is 9,
then on that same page my requirement is to fetch id 9 as the next record.

On Thursday, July 21, 2011, Bob Sandiford  wrote:
> Well, it sort of depends on what you mean by the 'previous' and the 'next' 
> record.
>
> Do you have some type of sequencing built into your concept of your solr / 
> lucene indexes?  Do you have sequential id's?
>
> i.e. What's the use case, and what's the data available to support your use 
> case?
>
> Bob Sandiford | Lead Software Engineer | SirsiDynix
> P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
> www.sirsidynix.com
>
>> -Original Message-
>> From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
>> Sent: Thursday, July 21, 2011 2:18 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: previous and next rows of current record
>>
>> Pls help..
>>
>> On Thursday, July 21, 2011, Jonty Rhods  wrote:
>> > Hi,
>> >
>> > Is there any special query in solr to get the previous and next
>> record of current record.I am getting single record detail using id
>> from solr server. I need to get  next and previous on detail page.
>> >
>> > regardsJonty
>> >
>
>
>


Re: commit time and lock

2011-07-21 Thread Jonty Rhods
Actually I'm worried about the response time. I'm committing around 500
docs every 5 minutes. As I understand it (correct me if I'm wrong), at
commit time the Solr server stops responding. My concern is how to
minimize the response time so users don't have to wait -- or what other
logic my case would require. Please suggest.

regards
jonty
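
One option worth testing (a sketch, assuming a SolrJ version where
UpdateRequest.setCommitWithin is available, i.e. 3.x or later): let the
server fold commits together instead of blocking on an explicit commit()
after every batch.

    import java.util.Collection;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.request.UpdateRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchIndexer {
        // Ask Solr to commit within 5 minutes of receiving the docs rather
        // than committing synchronously after every batch.
        public static void addBatch(SolrServer server,
                Collection<SolrInputDocument> docs) throws Exception {
            UpdateRequest req = new UpdateRequest();
            req.add(docs);
            req.setCommitWithin(5 * 60 * 1000);
            req.process(server);
        }
    }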

On Tuesday, June 21, 2011, Erick Erickson  wrote:
> What is it you want help with? You haven't told us what the
> problem you're trying to solve is. Are you asking how to
> speed up indexing? What have you tried? Have you
> looked at: http://wiki.apache.org/solr/FAQ#Performance?
>
> Best
> Erick
>
> On Tue, Jun 21, 2011 at 2:16 AM, Jonty Rhods  wrote:
>> I am using solrj to index the data. I have around 5000 docs indexed. As the
>> server stops giving responses during commit, due to the lock, I was
>> measuring the commit time:
>>
>> long starttemp = System.currentTimeMillis();
>> server.add(docs);
>> server.commit();
>> System.out.println("total time in commit = " + (System.currentTimeMillis() -
>> starttemp)/1000);
>>
>> It takes around 9 seconds to commit the 5000 docs with 15 fields. However, I
>> am not sure about the lock time of the index -- whether the lock starts
>> at server.add(docs) or only at server.commit().
>>
>> If I change the above to the following:
>>
>> server.add(docs);
>> long starttemp = System.currentTimeMillis();
>> server.commit();
>> System.out.println("total time in commit = " + (System.currentTimeMillis() -
>> starttemp)/1000);
>>
>> then the commit time becomes less than 1 second. I am not sure which one is
>> right.
>>
>> please help.
>>
>> regards
>> Jonty
>>
>


RE: previous and next rows of current record

2011-07-21 Thread Bob Sandiford
Well, it sort of depends on what you mean by the 'previous' and the 'next' 
record.

Do you have some type of sequencing built into your concept of your solr / 
lucene indexes?  Do you have sequential id's?

i.e. What's the use case, and what's the data available to support your use 
case?

Bob Sandiford | Lead Software Engineer | SirsiDynix
P: 800.288.8020 X6943 | bob.sandif...@sirsidynix.com
www.sirsidynix.com

> -Original Message-
> From: Jonty Rhods [mailto:jonty.rh...@gmail.com]
> Sent: Thursday, July 21, 2011 2:18 PM
> To: solr-user@lucene.apache.org
> Subject: Re: previous and next rows of current record
> 
> Pls help..
> 
> On Thursday, July 21, 2011, Jonty Rhods  wrote:
> > Hi,
> >
> > Is there any special query in Solr to get the previous and next
> > record of the current record? I am getting a single record's detail using
> > its id from the Solr server. I need to get next and previous on the
> > detail page.
> >
> > regards, Jonty
> >




Re: random record from solr server

2011-07-21 Thread Jonty Rhods
Thanks..

On Monday, July 18, 2011, Ahmet Arslan  wrote:
>> How can I get random 100 record from last two days record
>> from solr server.
>>
>> I am using solr 3.1
>
> Hello, add this random field definition to your schema.xml:
>
>   <fieldType name="random" class="solr.RandomSortField" indexed="true" />
>   <dynamicField name="random_*" type="random" />
>
> Generate some seed value (e.g. 125) at query time,
>
> and issue a query something like this:
>
> q=add_date:[NOW-2DAYS TO *]&sort=random_125 asc&start=0&rows=100
>
> If you use a different seed value each time, you will get a different
> random 100 records on each request. I assume you have a date field that
> stores the add date, or similar.
>
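
The same query from SolrJ might look like this (a sketch; field names as in
Ahmet's example above):

    import java.util.Random;
    import org.apache.solr.client.solrj.SolrQuery;

    public class RandomSample {
        // Build a query for 100 random documents added in the last two days.
        public static SolrQuery randomHundred() {
            int seed = new Random().nextInt(10000); // fresh seed, fresh order
            SolrQuery q = new SolrQuery("add_date:[NOW-2DAYS TO *]");
            q.addSortField("random_" + seed, SolrQuery.ORDER.asc);
            q.setStart(0);
            q.setRows(100);
            return q;
        }
    }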


Re: previous and next rows of current record

2011-07-21 Thread Jonty Rhods
Pls help..

On Thursday, July 21, 2011, Jonty Rhods  wrote:
> Hi,
>
> Is there any special query in Solr to get the previous and next record of
> the current record? I am getting a single record's detail using its id from
> the Solr server. I need to get next and previous on the detail page.
>
> regards, Jonty
>


Re: any detailed tutorials on plugin development?

2011-07-21 Thread Dmitry Kan
One option would be checking out the source code and reading the example
request handler implementation, which you could use as a reference for
your own.

On Wed, Jul 20, 2011 at 4:59 AM, deniz  wrote:

> gosh sorry for my typo in msg first... i just realized it now... well
> anyway...
>
> I would like to find a detailed tutorial about how to implement an analyzer
> or a request handler plugin, but I have found nothing in the Solr wiki
> documentation...
>
> -
> Zeki ama calismiyor... Calissa yapar... ("Smart, but he doesn't work...
> He'd manage if he did...")
>



-- 
Regards,

Dmitry Kan


Re: Updating fields in an existing document

2011-07-21 Thread Benson Margulies
A follow-up: the wiki has a whole discussion of the 'update' XML
message, but solrj has nothing like it. Does that really exist? Is
there a reason to use it? If I just 'add' the document a second time,
will it replace the old one?
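
(For reference, the replace-by-re-add behavior from SolrJ looks like the
sketch below; the field names are made up:)

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class Reindex {
        // Re-adding a document with the same uniqueKey replaces the old one
        // wholesale, so every field you want to keep must be supplied again.
        public static void replace(SolrServer server) throws Exception {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "42");                  // same uniqueKey as before
            doc.addField("facet_field", "newValue");   // the changed value
            doc.addField("title", "unchanged title");  // must be re-sent anyway
            server.add(doc);
            server.commit();
        }
    }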

On Wed, Jul 20, 2011 at 7:08 PM, Jonathan Rochkind  wrote:
> Nope, you're not missing anything; there's no way to alter a document in an 
> index except by reindexing the whole document. Solr's architecture would make 
> it difficult (although never say impossible) to do otherwise. But you're 
> right, it would be convenient for people other than you.
>
> Reindexing a single document ought not to be slow, although if you have many 
> of them at once it could be, or if you end up needing to commit to an index 
> very frequently it can indeed cause problems.
> 
> From: Benson Margulies [bimargul...@gmail.com]
> Sent: Wednesday, July 20, 2011 6:05 PM
> To: solr-user
> Subject: Updating fields in an existing document
>
> We find ourselves in the following quandary:
>
> At initial index time, we store a value in a field, and we use it for
> facetting. So it, seemingly, has to be there as a field.
>
> However, from time to time, something happens that causes us to want
> to change this value. As far as we know, this requires us to
> completely re-index the document, which is slow.
>
> It struck me that we can't be the only people to go down this road, so
> I write to inquire if we are missing something.
>


Re: Geospatial queries in Solr

2011-07-21 Thread Jamie Johnson
Yes this is just running the sample.  The query is as follows

http://localhost:8080/solr/select?debugQuery=true&q=point%3A%22IsWithin%28POLYGON%28%28-74.527709960938+40.350830078125%2C-74.4892578125+39.730102539062%2C-75.263793945313+39.653198242187%2C-74.527709960938+40.350830078125%29%29%29%22&rows=10&fl=id%2Cname%2Csource%2Cscore

which returns Philadelphia, which is outside of my polygon but inside
the bounding box that contains my polygon.

On Thu, Jul 21, 2011 at 1:05 PM, Smiley, David W.  wrote:
> Is this happening reproducibly from the demo app? Please try and reproduce it 
> there; if you can't then the problem is somewhere in your setup, I figure.  
> If you can reproduce it in the demo then please send me a direct email with 
> the polygon shape, and your expectation of a particular point that should fit.
>
> ~ David
>
> On Jul 21, 2011, at 12:40 PM, Jamie Johnson wrote:
>
>> I think I've missed something.  From what I'm seeing it appears that a
>> bounding box is being built from my polygon and any points in that
>> bounding box are returned.  This makes sense from the debug which says
>> the query is
>> +(+point__x:[-75.267333984375 TO -74.569702148438]
>> +point__y:[39.512329101563 TO 40.523071289063])
>> +DistanceValueSource(org.apache.lucene.spatial.base.distance.EuclidianDistanceCalculator@c7d9406)
>>
>> Given that does point support doing what I am trying to do or should I
>> be using another field type?
>>
>> I understand that you're busy so no rush on this.
>>
>> On Thu, Jul 21, 2011 at 11:48 AM, Jamie Johnson  wrote:
>>> Thanks for the reply.
>>>
>>> My setup has a point in the field and a shape as the query.  Given
>>> this it sounds as if I can get more precise results by changing the
>>> distErrPct on a query parameter.  I'll give this a whirl.  Again thank
>>> you.
>>>
>>>
>>> On Thu, Jul 21, 2011 at 11:13 AM, Smiley, David W.  
>>> wrote:
 If you are talking about indexed shapes, then there is an attribute on the 
 field type definition in your schema called "distErrPct".  Reasonable 
 values are between .01 and .20, in my opinion.  The default is .025, but 
 try setting it to .01.  For points, use the "maxDetailKm" parameter, which 
 is the kilometer detail level.  By default, that parameter is .001 -- 1 
 meter.

 If you are talking about your query shape, then this same parameter can be 
 supplied as a request parameter.  Again, the default is .025. The 
 RecursiveGridFieldType can handle infinite query side precision, so you 
 can supply 0 and still get reasonable performance. However if your 
 indexing side is a certain precision, then there's little point in using 
 more precision on the query side since in-effect it's as accurate as your 
 index side.

 If you're wondering more about the meaning of distErrPct, see this snippet 
 from SpatialArgs.java:
  /**
   * The fraction of the distance from the center of the query shape to its 
 nearest edge that is considered acceptable
   * error. The algorithm for computing the distance to the nearest edge is 
 actually a little different. It normalizes
  * the shape to a square given its bounding box area:
   * sqrt(shape.bbox.area)/2
   * And the error distance is beyond the shape such that the shape is a 
 minimum shape.
   */
  public Double getDistPrecision() {

 ~ David

 On Jul 20, 2011, at 5:44 PM, Jamie Johnson wrote:

> Thanks David.  When trying to execute queries on a complex irregular
> polygon (say the shape of NJ) I'm getting results which are actually
> outside of that polygon. Is there a setting which controls this
> resolution?
>
> On Wed, Jul 20, 2011 at 2:53 PM, Smiley, David W.  
> wrote:
>> The notion of a "system property" is a java concept; google it and 
>> you'll learn more.
>>
>> BTW, despite my responsiveness in helping right now; I'm pretty busy 
>> this week so this won't necessarily last long.
>> ~ David
>>
>> On Jul 20, 2011, at 2:43 PM, Jamie Johnson wrote:
>>
>>> Where do you set that?
>>>
>>> On Wed, Jul 20, 2011 at 2:37 PM, Smiley, David W.  
>>> wrote:
 You can set the system property SpatialContextProvider to 
 com.googlecode.lucene.spatial.base.context.JtsSpatialContext

 ~ David

 On Jul 20, 2011, at 2:02 PM, Jamie Johnson wrote:

> So I've pulled the latest and can run the example, I've tried to move
> my config over and am having a bit of an issue when executing queries,
> specifically I get this:
>
> Unable to read: POLYGON((...
>
> looking at the code it's using the simple spatial context; how do I
> specify JtsSpatialContext?
>
> On Wed, Jul 20, 2011 at 12:13 PM, Jamie Johnson  
> wrote:
>> Thanks for the update David, I'll giv

Re: Geospatial queries in Solr

2011-07-21 Thread Smiley, David W.
Is this happening reproducibly from the demo app? Please try and reproduce it 
there; if you can't then the problem is somewhere in your setup, I figure.  If 
you can reproduce it in the demo then please send me a direct email with the 
polygon shape, and your expectation of a particular point that should fit.

~ David

On Jul 21, 2011, at 12:40 PM, Jamie Johnson wrote:

> I think I've missed something.  From what I'm seeing it appears that a
> bounding box is being built from my polygon and any points in that
> bounding box are returned.  This makes sense from the debug which says
> the query is
> +(+point__x:[-75.267333984375 TO -74.569702148438]
> +point__y:[39.512329101563 TO 40.523071289063])
> +DistanceValueSource(org.apache.lucene.spatial.base.distance.EuclidianDistanceCalculator@c7d9406)
> 
> Given that does point support doing what I am trying to do or should I
> be using another field type?
> 
> I understand that you're busy so no rush on this.
> 
> On Thu, Jul 21, 2011 at 11:48 AM, Jamie Johnson  wrote:
>> Thanks for the reply.
>> 
>> My setup has a point in the field and a shape as the query.  Given
>> this it sounds as if I can get more precise results by changing the
>> distErrPct on a query parameter.  I'll give this a whirl.  Again thank
>> you.
>> 
>> 
>> On Thu, Jul 21, 2011 at 11:13 AM, Smiley, David W.  wrote:
>>> If you are talking about indexed shapes, then there is an attribute on the 
>>> field type definition in your schema called "distErrPct".  Reasonable 
>>> values are between .01 and .20, in my opinion.  The default is .025, but 
>>> try setting it to .01.  For points, use the "maxDetailKm" parameter, which 
>>> is the kilometer detail level.  By default, that parameter is .001 -- 1 
>>> meter.
>>> 
>>> If you are talking about your query shape, then this same parameter can be 
>>> supplied as a request parameter.  Again, the default is .025. The 
>>> RecursiveGridFieldType can handle infinite query side precision, so you can 
>>> supply 0 and still get reasonable performance. However if your indexing 
>>> side is a certain precision, then there's little point in using more 
>>> precision on the query side since in-effect it's as accurate as your index 
>>> side.
>>> 
>>> If you're wondering more about the meaning of distErrPct, see this snippet 
>>> from SpatialArgs.java:
>>>  /**
>>>   * The fraction of the distance from the center of the query shape to its 
>>> nearest edge that is considered acceptable
>>>   * error. The algorithm for computing the distance to the nearest edge is 
>>> actually a little different. It normalizes
>>>   * the shape to a square given its bounding box area:
>>>   * sqrt(shape.bbox.area)/2
>>>   * And the error distance is beyond the shape such that the shape is a 
>>> minimum shape.
>>>   */
>>>  public Double getDistPrecision() {
>>> 
>>> ~ David
>>> 
>>> On Jul 20, 2011, at 5:44 PM, Jamie Johnson wrote:
>>> 
 Thanks David.  When trying to execute queries on a complex irregular
 polygon (say the shape of NJ) I'm getting results which are actually
 outside of that polygon. Is there a setting which controls this
 resolution?
 
 On Wed, Jul 20, 2011 at 2:53 PM, Smiley, David W.  
 wrote:
> The notion of a "system property" is a java concept; google it and you'll 
> learn more.
> 
> BTW, despite my responsiveness in helping right now; I'm pretty busy this 
> week so this won't necessarily last long.
> ~ David
> 
> On Jul 20, 2011, at 2:43 PM, Jamie Johnson wrote:
> 
>> Where do you set that?
>> 
>> On Wed, Jul 20, 2011 at 2:37 PM, Smiley, David W.  
>> wrote:
>>> You can set the system property SpatialContextProvider to 
>>> com.googlecode.lucene.spatial.base.context.JtsSpatialContext
>>> 
>>> ~ David
>>> 
>>> On Jul 20, 2011, at 2:02 PM, Jamie Johnson wrote:
>>> 
 So I've pulled the latest and can run the example, I've tried to move
 my config over and am having a bit of an issue when executing queries,
 specifically I get this:
 
 Unable to read: POLYGON((...
 
looking at the code it's using the simple spatial context; how do I
 specify JtsSpatialContext?
 
 On Wed, Jul 20, 2011 at 12:13 PM, Jamie Johnson  
 wrote:
> Thanks for the update David, I'll give that a try now.
> 
> On Wed, Jul 20, 2011 at 10:58 AM, Smiley, David W. 
>  wrote:
>> Ryan just updated LSP for Lucene/Solr trunk compatibility so you 
>> should do a "mvn clean install" and you'll be back in business.
>> 
>> On Jul 20, 2011, at 10:37 AM, Jamie Johnson wrote:
>> 
>>> Thanks for responding so quickly, I don't mind waiting a bit.  I'll
>>> hang out until the updates have been  made.  Thanks again.
>>> 
>>> On Tue, Jul 19, 2011 at 3:51 PM, Smiley, David W. 
>>>  wrote:
>

Re: Geospatial queries in Solr

2011-07-21 Thread Jamie Johnson
I think I've missed something.  From what I'm seeing it appears that a
bounding box is being built from my polygon and any points in that
bounding box are returned.  This makes sense from the debug which says
the query is
+(+point__x:[-75.267333984375 TO -74.569702148438]
+point__y:[39.512329101563 TO 40.523071289063])
+DistanceValueSource(org.apache.lucene.spatial.base.distance.EuclidianDistanceCalculator@c7d9406)

Given that does point support doing what I am trying to do or should I
be using another field type?

I understand that you're busy so no rush on this.

On Thu, Jul 21, 2011 at 11:48 AM, Jamie Johnson  wrote:
> Thanks for the reply.
>
> My setup has a point in the field and a shape as the query.  Given
> this it sounds as if I can get more precise results by changing the
> distErrPct on a query parameter.  I'll give this a whirl.  Again thank
> you.
>
>
> On Thu, Jul 21, 2011 at 11:13 AM, Smiley, David W.  wrote:
>> If you are talking about indexed shapes, then there is an attribute on the 
>> field type definition in your schema called "distErrPct".  Reasonable values 
>> are between .01 and .20, in my opinion.  The default is .025, but try 
>> setting it to .01.  For points, use the "maxDetailKm" parameter, which is 
>> the kilometer detail level.  By default, that parameter is .001 -- 1 meter.
>>
>> If you are talking about your query shape, then this same parameter can be 
>> supplied as a request parameter.  Again, the default is .025. The 
>> RecursiveGridFieldType can handle infinite query side precision, so you can 
>> supply 0 and still get reasonable performance. However if your indexing side 
>> is a certain precision, then there's little point in using more precision on 
>> the query side since in-effect it's as accurate as your index side.
>>
>> If you're wondering more about the meaning of distErrPct, see this snippet 
>> from SpatialArgs.java:
>>  /**
>>   * The fraction of the distance from the center of the query shape to its 
>> nearest edge that is considered acceptable
>>   * error. The algorithm for computing the distance to the nearest edge is 
>> actually a little different. It normalizes
>>   * the shape to a square given its bounding box area:
>>   * sqrt(shape.bbox.area)/2
>>   * And the error distance is beyond the shape such that the shape is a 
>> minimum shape.
>>   */
>>  public Double getDistPrecision() {
>>
>> ~ David
>>
>> On Jul 20, 2011, at 5:44 PM, Jamie Johnson wrote:
>>
>>> Thanks David.  When trying to execute queries on a complex irregular
>>> polygon (say the shape of NJ) I'm getting results which are actually
>>> outside of that polygon. Is there a setting which controls this
>>> resolution?
>>>
>>> On Wed, Jul 20, 2011 at 2:53 PM, Smiley, David W.  wrote:
 The notion of a "system property" is a java concept; google it and you'll 
 learn more.

 BTW, despite my responsiveness in helping right now; I'm pretty busy this 
 week so this won't necessarily last long.
 ~ David

 On Jul 20, 2011, at 2:43 PM, Jamie Johnson wrote:

> Where do you set that?
>
> On Wed, Jul 20, 2011 at 2:37 PM, Smiley, David W.  
> wrote:
>> You can set the system property SpatialContextProvider to 
>> com.googlecode.lucene.spatial.base.context.JtsSpatialContext
>>
>> ~ David
>>
>> On Jul 20, 2011, at 2:02 PM, Jamie Johnson wrote:
>>
>>> So I've pulled the latest and can run the example, I've tried to move
>>> my config over and am having a bit of an issue when executing queries,
>>> specifically I get this:
>>>
>>> Unable to read: POLYGON((...
>>>
>>> looking at the code it's using the simple spatial context; how do I
>>> specify JtsSpatialContext?
>>>
>>> On Wed, Jul 20, 2011 at 12:13 PM, Jamie Johnson  
>>> wrote:
 Thanks for the update David, I'll give that a try now.

 On Wed, Jul 20, 2011 at 10:58 AM, Smiley, David W.  
 wrote:
> Ryan just updated LSP for Lucene/Solr trunk compatibility so you 
> should do a "mvn clean install" and you'll be back in business.
>
> On Jul 20, 2011, at 10:37 AM, Jamie Johnson wrote:
>
>> Thanks for responding so quickly, I don't mind waiting a bit.  I'll
>> hang out until the updates have been  made.  Thanks again.
>>
>> On Tue, Jul 19, 2011 at 3:51 PM, Smiley, David W. 
>>  wrote:
>>> Hi Jamie.
>>> I work on LSP; it can index polygons and query for them. Although 
>>> the capability is there, we have more testing & benchmarking to do, 
>>> and then we need to put together a tutorial to explain how to use 
>>> it at the Solr layer.  I recently cleaned up the READMEs a bit.  
>>> Try downloading the trunk codebase, and follow the README.  It 
>>> points to another README which shows off a demo webapp.  At the 
>>> conclusion of this, you'll need to ex

Solr Spell checker PROBLEMS

2011-07-21 Thread magoelem
Hi,

I am using the Wordnet dictionary for spelling suggestions, but I do not
get any suggestions. Can anyone help?

My schema is configured as follows (schema.xml):

[schema.xml snippet elided by the list archive]

My spellcheck component is configured as follows (solrconfig.xml):

[solrconfig.xml snippet elided by the list archive; the surviving values
are: textSpell, solr.FileBasedSpellChecker, file, dictionary.txt, UTF-8,
./spellcheckerFile, true, true, default, true, true, 2, spellcheck]

With the above configuration, I do not get any spelling suggestions.
Can anyone please help?

Here are the results I get when I hit this:

http://localhost:8080/solr/spell/?q=hell ultrashar&spellcheck=true&spellcheck.collate=true&spellcheck.build=true

[response elided by the list archive; only the value "false" survives]



Re: Index Solr Logs

2011-07-21 Thread O. Klein
http://karussell.wordpress.com/2010/10/27/feeding-solr-with-its-own-logs/

might be useful if you don't want to use Loggly



Re: Geospatial queries in Solr

2011-07-21 Thread Jamie Johnson
Thanks for the reply.

My setup has a point in the field and a shape as the query.  Given
this it sounds as if I can get more precise results by changing the
distErrPct on a query parameter.  I'll give this a whirl.  Again thank
you.


On Thu, Jul 21, 2011 at 11:13 AM, Smiley, David W.  wrote:
> If you are talking about indexed shapes, then there is an attribute on the 
> field type definition in your schema called "distErrPct".  Reasonable values 
> are between .01 and .20, in my opinion.  The default is .025, but try setting 
> it to .01.  For points, use the "maxDetailKm" parameter, which is the 
> kilometer detail level.  By default, that parameter is .001 -- 1 meter.
>
> If you are talking about your query shape, then this same parameter can be 
> supplied as a request parameter.  Again, the default is .025. The 
> RecursiveGridFieldType can handle infinite query side precision, so you can 
> supply 0 and still get reasonable performance. However if your indexing side 
> is a certain precision, then there's little point in using more precision on 
> the query side since in-effect it's as accurate as your index side.
>
> If you're wondering more about the meaning of distErrPct, see this snippet 
> from SpatialArgs.java:
>  /**
>   * The fraction of the distance from the center of the query shape to its 
> nearest edge that is considered acceptable
>   * error. The algorithm for computing the distance to the nearest edge is 
> actually a little different. It normalizes
>   * the shape to a square given its bounding box area:
>   * sqrt(shape.bbox.area)/2
>   * And the error distance is beyond the shape such that the shape is a 
> minimum shape.
>   */
>  public Double getDistPrecision() {
>
> ~ David
>
> On Jul 20, 2011, at 5:44 PM, Jamie Johnson wrote:
>
>> Thanks David.  When trying to execute queries on a complex irregular
>> polygon (say the shape of NJ) I'm getting results which are actually
>> outside of that polygon. Is there a setting which controls this
>> resolution?
>>
>> On Wed, Jul 20, 2011 at 2:53 PM, Smiley, David W.  wrote:
>>> The notion of a "system property" is a java concept; google it and you'll 
>>> learn more.
>>>
>>> BTW, despite my responsiveness in helping right now; I'm pretty busy this 
>>> week so this won't necessarily last long.
>>> ~ David
>>>
>>> On Jul 20, 2011, at 2:43 PM, Jamie Johnson wrote:
>>>
 Where do you set that?

 On Wed, Jul 20, 2011 at 2:37 PM, Smiley, David W.  
 wrote:
> You can set the system property SpatialContextProvider to 
> com.googlecode.lucene.spatial.base.context.JtsSpatialContext
>
> ~ David
>
> On Jul 20, 2011, at 2:02 PM, Jamie Johnson wrote:
>
>> So I've pulled the latest and can run the example, I've tried to move
>> my config over and am having a bit of an issue when executing queries,
>> specifically I get this:
>>
>> Unable to read: POLYGON((...
>>
>> looking at the code it's using the simple spatial context; how do I
>> specify JtsSpatialContext?
>>
>> On Wed, Jul 20, 2011 at 12:13 PM, Jamie Johnson  
>> wrote:
>>> Thanks for the update David, I'll give that a try now.
>>>
>>> On Wed, Jul 20, 2011 at 10:58 AM, Smiley, David W.  
>>> wrote:
 Ryan just updated LSP for Lucene/Solr trunk compatibility so you 
 should do a "mvn clean install" and you'll be back in business.

 On Jul 20, 2011, at 10:37 AM, Jamie Johnson wrote:

> Thanks for responding so quickly, I don't mind waiting a bit.  I'll
> hang out until the updates have been  made.  Thanks again.
>
> On Tue, Jul 19, 2011 at 3:51 PM, Smiley, David W.  
> wrote:
>> Hi Jamie.
>> I work on LSP; it can index polygons and query for them. Although 
>> the capability is there, we have more testing & benchmarking to do, 
>> and then we need to put together a tutorial to explain how to use it 
>> at the Solr layer.  I recently cleaned up the READMEs a bit.  Try 
>> downloading the trunk codebase, and follow the README.  It points to 
>> another README which shows off a demo webapp.  At the conclusion of 
>> this, you'll need to examine the tests and webapp a bit to figure 
>> out how to apply it in your app.  We don't yet have a tutorial as 
>> the framework has been in flux  although it has stabilized a good 
>> deal.
>>
>> Oh... by the way, this works off of Lucene/Solr trunk.  Within the 
>> past week there was a major change to trunk and LSP won't compile 
>> until we make updates.  Either Ryan McKinley or I will get to that 
>> by the end of the week.  So unless you have access to 2-week old 
>> maven artifacts of Lucene/Solr, you're stuck right now.
>>
>> ~ David Smiley
>> Author: http://www.packtpub.com/solr-1-4-enterprise-search-server/
>>

Re: Geospatial queries in Solr

2011-07-21 Thread Smiley, David W.
If you are talking about indexed shapes, then there is an attribute on the 
field type definition in your schema called "distErrPct".  Reasonable values 
are between .01 and .20, in my opinion.  The default is .025, but try setting 
it to .01.  For points, use the "maxDetailKm" parameter, which is the kilometer 
detail level.  By default, that parameter is .001 -- 1 meter.

If you are talking about your query shape, then this same parameter can be 
supplied as a request parameter.  Again, the default is .025. The 
RecursiveGridFieldType can handle infinite query side precision, so you can 
supply 0 and still get reasonable performance. However, if your indexing side 
is a certain precision, then there's little point in using more precision on 
the query side, since in effect it's as accurate as your index side.
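
For instance (an illustrative query; the field name "geo" and the polygon are
placeholders):

    ...&q=geo:"IsWithin(POLYGON((...)))"&distErrPct=0&...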

If you're wondering more about the meaning of distErrPct, see this snippet from 
SpatialArgs.java:
  /**
   * The fraction of the distance from the center of the query shape to its
   * nearest edge that is considered acceptable error. The algorithm for
   * computing the distance to the nearest edge is actually a little
   * different. It normalizes the shape to a square given its bounding box
   * area: sqrt(shape.bbox.area)/2. And the error distance is beyond the
   * shape such that the shape is a minimum shape.
   */
  public Double getDistPrecision() {

~ David

On Jul 20, 2011, at 5:44 PM, Jamie Johnson wrote:

> Thanks David.  When trying to execute queries on a complex irregular
> polygon (say the shape of NJ) I'm getting results which are actually
> outside of that polygon. Is there a setting which controls this
> resolution?
> 
> On Wed, Jul 20, 2011 at 2:53 PM, Smiley, David W.  wrote:
>> The notion of a "system property" is a java concept; google it and you'll 
>> learn more.
>> 
>> BTW, despite my responsiveness in helping right now; I'm pretty busy this 
>> week so this won't necessarily last long.
>> ~ David
>> 
>> On Jul 20, 2011, at 2:43 PM, Jamie Johnson wrote:
>> 
>>> Where do you set that?
>>> 
>>> On Wed, Jul 20, 2011 at 2:37 PM, Smiley, David W.  wrote:
 You can set the system property SpatialContextProvider to 
 com.googlecode.lucene.spatial.base.context.JtsSpatialContext
 
 ~ David
 
 On Jul 20, 2011, at 2:02 PM, Jamie Johnson wrote:
 
> So I've pulled the latest and can run the example, I've tried to move
> my config over and am having a bit of an issue when executing queries,
> specifically I get this:
> 
> Unable to read: POLYGON((...
> 
> looking at the code it's using the simple spatial context; how do I
> specify JtsSpatialContext?
> 
> On Wed, Jul 20, 2011 at 12:13 PM, Jamie Johnson  wrote:
>> Thanks for the update David, I'll give that a try now.
>> 
>> On Wed, Jul 20, 2011 at 10:58 AM, Smiley, David W.  
>> wrote:
>>> Ryan just updated LSP for Lucene/Solr trunk compatibility so you should 
>>> do a "mvn clean install" and you'll be back in business.
>>> 
>>> On Jul 20, 2011, at 10:37 AM, Jamie Johnson wrote:
>>> 
 Thanks for responding so quickly, I don't mind waiting a bit.  I'll
 hang out until the updates have been  made.  Thanks again.
 
 On Tue, Jul 19, 2011 at 3:51 PM, Smiley, David W.  
 wrote:
> Hi Jamie.
> I work on LSP; it can index polygons and query for them. Although the 
> capability is there, we have more testing & benchmarking to do, and 
> then we need to put together a tutorial to explain how to use it at 
> the Solr layer.  I recently cleaned up the READMEs a bit.  Try 
> downloading the trunk codebase, and follow the README.  It points to 
> another README which shows off a demo webapp.  At the conclusion of 
> this, you'll need to examine the tests and webapp a bit to figure out 
> how to apply it in your app.  We don't yet have a tutorial as the 
> framework has been in flux  although it has stabilized a good deal.
> 
> Oh... by the way, this works off of Lucene/Solr trunk.  Within the 
> past week there was a major change to trunk and LSP won't compile 
> until we make updates.  Either Ryan McKinley or I will get to that by 
> the end of the week.  So unless you have access to 2-week old maven 
> artifacts of Lucene/Solr, you're stuck right now.
> 
> ~ David Smiley
> Author: http://www.packtpub.com/solr-1-4-enterprise-search-server/
> 
> On Jul 19, 2011, at 3:03 PM, Jamie Johnson wrote:
> 
>> I have looked at the code being shared on the
>> lucene-spatial-playground and was wondering if anyone could provide
>> some details as to its state.  Specifically I'm looking to add
>> geospatial support to my application based on a user provided 
>> polygon,
>> is this currently possible using t

Display term frequency / phrase freqency for documents

2011-07-21 Thread Matthew Haynes
Hi there,

I'd like to expose the term frequency / phrase frequency to the end user in
my application. For example, I would like to be able to say "Your search
term appears X times in this document".

I can see these figures exposed via debugQuery=on, where I get output like
this ...

  tf(phraseFreq=1.0)

Or this ...

 tf(termFreq(transcript:ratifi)=4

Is there any way to expose these figures in XML nodes, though? I could parse
them from the debug output, but that feels very hacky!

Any help is much appreciated,

Many thanks

Matt
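
One avenue that may fit here, assuming the transcript field can be reindexed
with termVectors="true": the TermVectorComponent returns per-document term
frequencies as regular response data instead of debug output. A sketch of
the solrconfig.xml wiring and a query:

  <searchComponent name="tvComponent"
      class="org.apache.solr.handler.component.TermVectorComponent"/>

  <requestHandler name="tvrh"
      class="org.apache.solr.handler.component.SearchHandler">
    <lst name="defaults">
      <bool name="tv">true</bool>
    </lst>
    <arr name="last-components">
      <str>tvComponent</str>
    </arr>
  </requestHandler>

  http://localhost:8983/solr/select?q=transcript:ratification&qt=tvrh&tv.tf=true&tv.fl=transcript

Note this covers single-term frequencies; it won't report phrase frequencies.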


Re: Java replication takes slaves down

2011-07-21 Thread Jonathan Rochkind
How often do you replicate? Could it be a too-frequent-commit problem? 
(a replication is a commit to the slave).
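
If poll frequency turns out to be the issue, it is configured on the slave
side in solrconfig.xml; a minimal sketch (master URL and interval are
illustrative):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://master-host:8983/solr/replication</str>
      <str name="pollInterval">00:10:00</str>
    </lst>
  </requestHandler>

pollInterval is HH:MM:SS; a longer interval (and fewer commits on the
master) means the slaves pull and warm a new searcher less often.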


On 7/21/2011 4:39 AM, Alexander Valet | edelight wrote:

Hi everybody,

we are using Solr 1.4.1 as our search backend and are replicating (Java based) 
from one master to four slaves.
When our index data grew in size (optimized, around 4.5 GB) we recently
started having huge trouble spreading a new index to the slaves. They run
at 100% CPU and are not able to serve requests anymore. We have to kill the
Java process to start them again...

Does anybody have a similar experience? Any hints or ideas on how to set up 
proper replication?


Thanks,
Alex






Re: use case: structured DB records with a bunch of related files

2011-07-21 Thread Erick Erickson
I suspect you'll have to use Tika to parse the attachments, and as you do,
add the info that'll allow you to display the link to the meta-data that
Tika generates. I'm in a bit of a rush, but one approach would be to use
SolrJ to do your indexing and database querying; from SolrJ you can ask
Tika to parse each attachment, assemble a Solr document for the attachment
from the Tika output, add whatever meta-data you want for a link back to
the DB record, and index the doc.
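
A rough sketch of that flow (SolrJ 1.4-era API; the URL, field names and id
scheme are illustrative, not a fixed schema):

  import java.io.File;
  import java.io.FileInputStream;
  import java.io.InputStream;

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;
  import org.apache.tika.metadata.Metadata;
  import org.apache.tika.parser.AutoDetectParser;
  import org.apache.tika.parser.ParseContext;
  import org.apache.tika.sax.BodyContentHandler;

  public class AttachmentIndexer {
    public static void main(String[] args) throws Exception {
      SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");

      // extract the attachment body with Tika
      File f = new File("/path/to/attachment.pdf");
      BodyContentHandler text = new BodyContentHandler(-1); // -1 = no write limit
      Metadata meta = new Metadata();
      InputStream in = new FileInputStream(f);
      try {
        new AutoDetectParser().parse(in, text, meta, new ParseContext());
      } finally {
        in.close();
      }

      // one Solr document per attachment: extracted text plus a pointer
      // back to the DB record it belongs to
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "attachment-123");   // unique key
      doc.addField("db_record_id", 123);      // link back to the DB record
      doc.addField("filename", f.getName());
      doc.addField("text", text.toString());  // Tika-extracted content
      solr.add(doc);
      solr.commit();
    }
  }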

Hope this helps
Erick

On Tue, Jul 19, 2011 at 12:50 PM, Travis Low  wrote:
> Greetings.  I have a bunch of highly structured DB records, and I'm pretty
> clear on how to index those.  However, each of those records may have any
> number of related documents (Word, Excel, PDF, PPT, etc.).  All of this
> information will change over time.
>
> Can someone point me to a use case or some good reading to get me started on
> configuring Solr to index the DB records and files in such a way as to
> relate the two types of information?  By "relate", I mean that if there's a
> hit in a related file, then I need to show the user a link to the DB record
> as well as a link to the file.
>
> Thanks in advance.
>
> cheers,
>
> Travis
>
> --
> Travis Low, Director of Development
> Centurion Research Solutions, LLC
> 14048 ParkEast Circle • Suite 100 • Chantilly, VA 20151
> 703-956-6276 • 703-378-4474 (fax)
> http://www.centurionresearch.com
>
>


Re: Schema design/data import

2011-07-21 Thread Stefan Matheis

Hey Travis,

after reading your mail and thinking about it a bit, I'm not sure I would
go with Nutch. Nutch is [from my understanding] more of a crawler, meant
to crawl external / unknown sites.


But, if I got this correctly, you have complete knowledge of your data and
could tell Solr exactly what to index - and how the things are related to
each other.


So I guess the more relevant question would be: how would you / people use
Solr to search your records? Will they search the content of the attached
documents/files? Or is it more a structured search with filters? And what
are they expecting to see in the end? Something like DocumentCloud
(https://www.documentcloud.org/public/search/palin)?


If I've got something wrong, please tell me :)

Regards
Stefan

On 20.07.2011 19:28, Travis Low wrote:

Greetings.  I am struggling to design a schema and a data import/update
strategy for some semi-complicated data.  I would appreciate any input.

What we have is a bunch of database records that may or may not have files
attached.  Sometimes no files, sometimes 50.

The requirement is to index the database records AND the documents, and the
search results would be just links to the database records.

I'd love to crawl the site with Nutch and be done with it, but we have a
complicated search form with various codes and attributes for the database
records, so we need a detailed schema that will loosely correspond to boxes
on the search form.  I don't think we could easily do that if we just crawl
the site.  But with a detailed schema, I'm having trouble understanding how
we could import and index from the database, and also index the related
files, and have the same schema being populated, especially with the number
of related documents being variable (maybe index them all to one field?).

We have a lot of flexibility on how we can build this, so I'm open to any
suggestions or pointers for further reading.  I've spent a fair amount of
time on the wiki but I didn't see anything that seemed directly relevant.

An additional difficulty, that I am willing to overlook for the first cut,
is that some of these files are zipped, and some of the zip files may
contain other zip files, to maybe 3 or 4 levels deep.

Help, please?

cheers,

Travis
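
On the "maybe index them all to one field" idea above: a multiValued field
is the usual way to hold a variable number of attachment bodies on one
record. A minimal sketch for schema.xml (field name and type are
illustrative):

  <field name="attachment_text" type="text" indexed="true" stored="false"
         multiValued="true"/>

Each parsed attachment then becomes one more value of attachment_text on
the record's document, so a hit in any attachment surfaces the DB record
itself.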





Frange Function Query

2011-07-21 Thread Rohit Gupta
Hi,

I have the following query, in which I am using the !frange function twice,
and the query is not returning any results. However, if I use a single
!frange function then the results come back for the same query.

Is it not possible to execute two franges in a single query?

q="woolmark"&fq={!frange l=33787806 u=33787918}id&fq={!frange 
l=40817415}id&fq=createdOnGMTDate:[2011-07-01T14%3A30%3A00Z+TO+2011-07-21T14%3A30%3A00Z]


Regards,
Rohit
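
Worth noting when reading this in the archive: filter queries are
intersected, and the two ranges above cannot both hold (the first caps id
at 33787918, the second requires id >= 40817415), so together they match
nothing. Two franges in one query are fine when the ranges overlap, e.g.:

  q=woolmark&fq={!frange l=33787806 u=40817415}id&fq={!frange l=33787900}id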

Re: Java replication takes slaves down

2011-07-21 Thread Andrea Gazzarini
We are using a similar architecture but with two slaves; the index is around 
9GB and we don't have such a problem...

Each slave is running on a separate machine, so we have three nodes in total 
(1 indexer + 2 searchers)... initially everything was on a single node and it 
was working without any problem.

Don't know... try a tool like jstack in order to see what the JVM is 
doing...
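
For example (the pid is illustrative):

  jps -v                          # find the pid of the slave's JVM
  jstack -l 12345 > threads.txt   # dump all thread stacks with lock info

A few dumps taken seconds apart during the 100% CPU phase usually show
where the threads are spinning.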

Regards
Andrea

--Original Message--
From: Alexander Valet | edelight
To: solr-user@lucene.apache.org
ReplyTo: solr-user@lucene.apache.org
Subject: Java replication takes slaves down
Sent: Jul 21, 2011 10:39

Hi everybody,

we are using Solr 1.4.1 as our search backend and are replicating (Java based) 
from one master to four slaves.
When our index data grew in size (optimized, around 4.5 GB) we recently
started having huge trouble spreading a new index to the slaves. They run
at 100% CPU and are not able to serve requests anymore. We have to kill the
Java process to start them again...

Does anybody have a similar experience? Any hints or ideas on how to set up 
proper replication?


Thanks,
Alex
 






previous and next rows of current record

2011-07-21 Thread Jonty Rhods
Hi,

Is there any special query in Solr to get the previous and next records of
the current record?
I am fetching a single record's detail by id from the Solr server, and I
need to show next and previous links on the detail page.

regards
Jonty


Java replication takes slaves down

2011-07-21 Thread Alexander Valet | edelight
Hi everybody,

we are using Solr 1.4.1 as our search backend and are replicating (Java based) 
from one master to four slaves.
When our index data grew in size (optimized, around 4.5 GB) we recently
started having huge trouble spreading a new index to the slaves. They run
at 100% CPU and are not able to serve requests anymore. We have to kill the
Java process to start them again...

Does anybody have a similar experience? Any hints or ideas on how to set up 
proper replication?


Thanks,
Alex
 




Re: How can i find a document by a special id?

2011-07-21 Thread Per Newgro
Sorry for being that stupid. I've modified the wrong schema.

So the "solr.WordDelimiterFilterFactory" works as expected and solved my 
problem. I've added the line


to my schema and the test is green.

Thanks all for helping me
Per


 Original message 
> Date: Thu, 21 Jul 2011 09:53:23 +0200
> From: "Per Newgro" 
> To: solr-user@lucene.apache.org
> Subject: Re: How can i find a document by a special id?

> The problem is that I didn't store the mediacode in a field, because the
> code is used frequently for getting the customer source.
> 
> So far I've found the "solr.WordDelimiterFilterFactory" which is (from
> Wiki) the way to go. The problem seems to be that I'm searching a "longer"
> string than I've indexed. I only index the numeric id (12345).
> But the query string is BR12345. I don't get any results. Can I fine-tune
> the WDFF somehow?
> 
> Using the admin/analysis.jsp I see:
> 
> Index Analyzer
> org.apache.solr.analysis.StandardTokenizerFactory
> {luceneMatchVersion=LUCENE_33}
> position  1
> term text BR12345
> startOffset   0
> endOffset 7
> type  
> org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt,
> ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
> position  1
> term text BR12345
> startOffset   0
> endOffset 7
> type  
> org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1,
> generateNumberParts=1, catenateWords=1, luceneMatchVersion=LUCENE_33,
> generateWordParts=1, catenateAll=0, catenateNumbers=1}
> position  1   2
> term text BR  12345
> startOffset   0   2
> endOffset 2   7
> type
> org.apache.solr.analysis.LowerCaseFilterFactory
> {luceneMatchVersion=LUCENE_33}
> position  1   2
> term text br  12345
> startOffset   0   2
> endOffset 2   7
> type
> Query Analyzer
> org.apache.solr.analysis.StandardTokenizerFactory
> {luceneMatchVersion=LUCENE_33}
> position  1
> term text BR12345
> startOffset   0
> endOffset 7
> type  
> org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt,
> ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
> position  1
> term text BR12345
> startOffset   0
> endOffset 7
> type  
> org.apache.solr.analysis.SynonymFilterFactory {synonyms=synonyms.txt,
> expand=true, ignoreCase=true, luceneMatchVersion=LUCENE_33}
> position  1
> term text BR12345
> startOffset   0
> endOffset 7
> type  
> org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1,
> generateNumberParts=1, catenateWords=0, luceneMatchVersion=LUCENE_33,
> generateWordParts=1, catenateAll=0, catenateNumbers=0}
> position  1   2
> term text BR  12345
> startOffset   0   2
> endOffset 2   7
> type
> org.apache.solr.analysis.LowerCaseFilterFactory
> {luceneMatchVersion=LUCENE_33}
> position  1   2
> term text br  12345
> startOffset   0   2
> endOffset 2   7
> type
> 
> My field type is here
> 
> schema.xml
> <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true"
>             words="stopwords.txt" enablePositionIncrements="true" />
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
>             ignoreCase="true" expand="false"/>
>     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
>             generateNumberParts="1" catenateWords="1" catenateNumbers="1"
>             catenateAll="0" splitOnCaseChange="1"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true"
>             words="stopwords.txt" enablePositionIncrements="true" />
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
>             ignoreCase="true" expand="true"/>
>     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
>             generateNumberParts="1" catenateWords="0" catenateNumbers="1"
>             catenateAll="1" splitOnCaseChange="1"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> 
> 
> Thanks
> Per
> 
>  Original message 
> > Date: Wed, 20 Jul 2011 17:03:40 -0400
> > From: Bill Bell 
> > To: "solr-user@lucene.apache.org" 
> > CC: "solr-user@lucene.apache.org" 
> > Subject: Re: How can i find a document by a special id?
> 
> > Why not just search the 2 fields?
> > 
> > q=*:*&fq=mediacode:AB OR id:123456
> > 
> > You could take the user input and replace it:
> > 
> > q=*:*&fq=mediacode:$input OR id:$input
> > 
> > Of course you can also use dismax and wrap with an OR.
> > 
> > Bill Bell
> > Sent from mobile
> > 
> > 
> > On Jul 20, 2011, at 3:38 PM, Chris Hostetter 
> > wrote:
> > 
> > > 
> > > : On 20.07.2011 19:23, Kyle Lee wrote:
> > > : > Is the mediacode always alphabetic, and is the ID always numeric?
> > > : > 
> > > : No, sadly not. We expose our products on "too" many medias :-).
> > > 
> > > If I'm understanding you correctly, you're saying even the prefix "AB" is
> > > not special, that there could be any number of prefixes identifying
> > > different "mediacodes"? and the product ids aren't all numeric?
> > > 
> > > your question seems absurd.
> > > 
> > > I can only assume that I am horribly misunderstanding your situation.
> > > (which is very easy to do when you only have a single contrived piece of
> > > example data to go on)

Re: How can i find a document by a special id?

2011-07-21 Thread Per Newgro
The problem is that I didn't store the mediacode in a field, because the code 
is used frequently for getting the customer source.

So far I've found the "solr.WordDelimiterFilterFactory" which is (from Wiki) 
the way to go. The problem seems to be that I'm searching a "longer" string 
than I've indexed. I only index the numeric id (12345).
But the query string is BR12345. I don't get any results. Can I fine-tune
the WDFF somehow?

Using the admin/analysis.jsp I see:

Index Analyzer
org.apache.solr.analysis.StandardTokenizerFactory {luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type
org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt, 
ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type
org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1, 
generateNumberParts=1, catenateWords=1, luceneMatchVersion=LUCENE_33, 
generateWordParts=1, catenateAll=0, catenateNumbers=1}
position1   2
term text   BR  12345
startOffset 0   2
endOffset   2   7
type  
org.apache.solr.analysis.LowerCaseFilterFactory {luceneMatchVersion=LUCENE_33}
position1   2
term text   br  12345
startOffset 0   2
endOffset   2   7
type  
Query Analyzer
org.apache.solr.analysis.StandardTokenizerFactory {luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type
org.apache.solr.analysis.StopFilterFactory {words=stopwords.txt, 
ignoreCase=true, enablePositionIncrements=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type
org.apache.solr.analysis.SynonymFilterFactory {synonyms=synonyms.txt, 
expand=true, ignoreCase=true, luceneMatchVersion=LUCENE_33}
position1
term text   BR12345
startOffset 0
endOffset   7
type
org.apache.solr.analysis.WordDelimiterFilterFactory {splitOnCaseChange=1, 
generateNumberParts=1, catenateWords=0, luceneMatchVersion=LUCENE_33, 
generateWordParts=1, catenateAll=0, catenateNumbers=0}
position1   2
term text   BR  12345
startOffset 0   2
endOffset   2   7
type  
org.apache.solr.analysis.LowerCaseFilterFactory {luceneMatchVersion=LUCENE_33}
position1   2
term text   br  12345
startOffset 0   2
endOffset   2   7
type  

My field type is here

schema.xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="false"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="1" catenateNumbers="1"
            catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="0" catenateNumbers="1"
            catenateAll="1" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>


Thanks
Per

 Original message 
> Date: Wed, 20 Jul 2011 17:03:40 -0400
> From: Bill Bell 
> To: "solr-user@lucene.apache.org" 
> CC: "solr-user@lucene.apache.org" 
> Subject: Re: How can i find a document by a special id?

> Why not just search the 2 fields?
> 
> q=*:*&fq=mediacode:AB OR id:123456
> 
> You could take the user input and replace it:
> 
> q=*:*&fq=mediacode:$input OR id:$input
> 
> Of course you can also use dismax and wrap with an OR.
> 
> Bill Bell
> Sent from mobile
> 
> 
> On Jul 20, 2011, at 3:38 PM, Chris Hostetter 
> wrote:
> 
> > 
> > : On 20.07.2011 19:23, Kyle Lee wrote:
> > : > Is the mediacode always alphabetic, and is the ID always numeric?
> > : > 
> > : No, sadly not. We expose our products on "too" many medias :-).
> > 
> > If I'm understanding you correctly, you're saying even the prefix "AB" is
> > not special, that there could be any number of prefixes identifying
> > different "mediacodes"? and the product ids aren't all numeric?
> > 
> > your question seems absurd.
> > 
> > I can only assume that I am horribly misunderstanding your situation.
> > (which is very easy to do when you only have a single contrived piece of
> > example data to go on)
> > 
> > As a general rule, it's not a good idea to think about Solr in the same
> > way as a relational database. But perhaps if you imagine for a moment that
> > your Solr index *was* a (read only) relational database, with each
> > solr field corresponding to a column in your DB, and then you described in
> > pseudo-code/SQL how you would go about doing the types of id lookups you
> > want to do, it might give us a better idea of your situation so we can
> > suggest an approach for dealing with it.
> > 
> > 
> > -Hoss



filter query parameter not working as expected

2011-07-21 Thread elisabeth benoit
Hello,

There is something I don't quite get with the fq parameter.

I have this query

select?&q=VINCI Park&fq=WAY_ANALYZED:de l hotel de ville AND
(TOWN_ANALYZED:paris OR DEPARTEMENT_ANALYZED:paris)&rows=200&fl=*,score

and I get two answers: one having WAY_ANALYZED = 48 r de l'hôtel de ville,
which is OK,

and the other, called Vinci Park, but having WAY_ANALYZED = 143 r lecourbe.

Is there something I didn't understand about the fq parameter?

I'm using Solr 3.2.

Thanks,
Elisabeth Benoit
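
A note for archive readers: with the standard query parser the
WAY_ANALYZED: prefix binds only to the term that immediately follows it,
so in the fq above only "de" is searched in WAY_ANALYZED; "l", "hotel",
"de" and "ville" go to the default search field. Grouping or quoting
scopes the whole phrase to the field, e.g.:

  fq=WAY_ANALYZED:(de l hotel de ville) AND (TOWN_ANALYZED:paris OR DEPARTEMENT_ANALYZED:paris)

which would explain the second, seemingly non-matching result.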


Getting a weird Class Not Found Exception: SolrParams

2011-07-21 Thread Sowmya V.B.
Hi All

I have been getting this weird error since yesterday evening, whose cause I
am not able to figure out.
I made a web interface to read and display Solr results: a servlet that
calls the Solr servlet.

I give the query to Solr, using:
MultiMapSolrParams solrparamsmini =
SolrRequestParsers.parseQueryString(queryrequest.toString());
where queryrequest contains all the ingredients of a Solr query.

Eg:   StringBuffer queryrequest = new StringBuffer();
queryrequest.append("&q=" + query);

queryrequest.append("&start=0&rows=30&hl=true&hl.fl=text&hl.frag=500&defType=dismax");

queryrequest.append("&bq="+Field1+":["+frompercent+"%20TO%20"+topercent+"]");

It compiles and builds without errors, but I get this error
"java.lang.ClassNotFoundException:
org.apache.solr.common.params.SolrParams", when I run the app.
But I don't use the SolrParams class anywhere in my code!

Here is the stack trace:
INFO: Server startup in 1953 ms
Jul 21, 2011 8:52:20 AM org.apache.catalina.core.ApplicationContext log
INFO: Marking servlet solrsearch as unavailable
Jul 21, 2011 8:52:20 AM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Allocate exception for servlet solrsearch
java.lang.ClassNotFoundException: org.apache.solr.common.params.SolrParams
at
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1676)
at
org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1521)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
at java.lang.Class.getConstructor0(Class.java:2699)
at java.lang.Class.newInstance0(Class.java:326)
at java.lang.Class.newInstance(Class.java:308)
at
org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:119)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1062)
at
org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:813)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:135)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:395)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:250)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:166)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)


Anyone had this kind of issue before?
-- 
Sowmya V.B.

Losing optimism is blasphemy!
http://vbsowmya.wordpress.com
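
For anyone hitting this later: MultiMapSolrParams extends SolrParams, so
any code touching MultiMapSolrParams needs SolrParams loadable as well. A
trace like the above usually means the jar that contains
org.apache.solr.common.params (solr-solrj or solr-common, depending on the
Solr version) is missing from the webapp's WEB-INF/lib.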